A Reflection on the AI Safety Summit in the UK 

By Ifeanyi Chukwu, Research Software Technician

The past few years have witnessed a tremendous surge in the influence of artificial intelligence (AI), driven by the widespread availability of high-performance computing hardware that empowers the development of robust AI models and tools. It therefore comes as no surprise that AI was named the Collins Word of the Year 2023.

This designation reflects the remarkable strides made in the development and proliferation of AI, which has gained a prominent position in the world's technological landscape. AI, often described as the intelligence of machines and computer systems, is an incredible technological advancement that aims to enable machines to emulate human-like thinking and decision-making. The field of AI is evolving rapidly, with profound implications for various aspects of our lives, and it has been compared to the transformative effects of electricity and the internet. 

As this impressive technology evolves, the tremendous benefits of AI come with tremendous risks, particularly if it falls under the control of malicious actors.

For instance, the same AI that is revolutionising the way new medicines are created could also be used for bioterrorism, to create poisons with the intent of causing harm. Hence the AI Safety Summit in the UK is timely.

As the first global summit of its kind, it set out to build a shared, global understanding of the risks of this fast-evolving technology and of how to collectively regulate it.

Key Themes from the AI Safety Summit

The summit, hosted by the UK at Bletchley Park, brought together experts, innovators, and policymakers to explore the challenges and opportunities presented by AI. Attendees included tech heavyweights such as Microsoft, OpenAI, Google DeepMind, Meta and Elon Musk, as well as political figures including US Vice-President Kamala Harris and European Commission President Ursula von der Leyen.

Several noteworthy points emerged from the summit:

AI safety, both within individual countries and across the globe, remains an ongoing discussion and will require collaboration across nations and industries. The next summit will be held in South Korea in six months’ time.

Balancing AI Promise and Peril: The summit saw a discussion on how AI is heralded for its potential to revolutionise various fields, from healthcare to work processes. Nevertheless, it was also pointed out that it poses the risk of being used for harmful purposes, such as bioterrorism or cyberattacks, highlighting the need for comprehensive safety measures. 

Loss of Control over AI: Another central focus of the summit was the risk of AI taking control from humans, for example by creating unstoppable computer viruses or being exploited for malicious purposes. Part of the discussion covered how to avoid these scenarios.

The discussions also emphasised the need for regulatory frameworks that address the global impact of AI technology while ensuring responsible innovation. History has shown that, in the absence of government regulations, companies may prioritise profit over the well-being and safety of citizens. 

Therefore, striking a balance between fostering innovation and safeguarding the public is essential. 

Integration of AI into Society: While AI brings many practical benefits, it also comes with potential risks, such as the proliferation of bias, the disruption of elections and the exacerbation of global inequalities.

For instance, as the UK plans to roll out AI tools to assist teachers with repetitive tasks, such as providing a personalised curriculum to students via virtual tutors, what we don’t want is for AI models to reinforce the biases that already exist against those from disadvantaged backgrounds.

Some Unanswered Questions

As impressive as a global strategy for AI safety may sound, it leaves some questions unanswered. For instance, the rapid advancement of AI has relied significantly on open-source code, giving everyone broad access to AI algorithms and enabling them to contribute to their development and use. This open-source paradigm has fostered collaboration and innovation from all actors in the field and has made AI more ubiquitous.

However, the summit is likely to result in increased AI regulation intended to prevent AI models from falling into the wrong hands, which could spell the end of the open-source model. An unintended consequence is that the end of the open-source era may reduce collaboration and limit innovation.

Open source democratises access to AI, allowing everyone to use and contribute to the development of tools such as TensorFlow, Keras and PyTorch, which are used for building machine learning models. If this open access ends, these tools will be left in the hands of a select few.
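To make this concrete, the minimal sketch below shows the kind of thing open access enables: with nothing more than a freely available library (PyTorch, one of the tools named above), anyone can define and train a small neural network. The data here is randomly generated purely for illustration, and the network sizes are arbitrary.

```python
# A minimal sketch of what open-source AI tooling makes possible:
# anyone can define and train a small model with freely available
# libraries. The data below is synthetic, purely for illustration.
import torch
import torch.nn as nn

# A tiny feed-forward classifier: 4 input features, 2 output classes.
model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)

# Synthetic stand-in data: 100 samples, 4 features each.
inputs = torch.randn(100, 4)
labels = torch.randint(0, 2, (100,))

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

# A few training steps: forward pass, loss, backward pass, update.
for step in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```

None of this requires privileged access: the same open-source libraries power research at the largest labs and a student’s first experiment alike, which is precisely what heavy-handed regulation could put at risk.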

 AI safety at the University of Leeds 

Organisations and institutes in the field of AI are increasingly recognising the importance of aligning their AI goals with the broader safety agenda. At a high level, the University of Leeds has announced the first iteration of its Guidance to Staff on the Use of Artificial Intelligence, but, as with the safety summit, this has many moving parts and will be a continuous work in progress as AI develops and safety tools are created.

Conclusion 

As AI continues to evolve and play an increasingly central role in our lives, the question of safety becomes paramount. The AI Safety Summit in the UK has served as a platform for addressing these concerns, from the ethical use in various domains, to the need for responsible regulation. While it is essential to strike a balance between fostering innovation and ensuring public safety, the world must collectively work towards harnessing the power of AI for the greater good, without losing control of these transformative technologies or allowing malicious actors to exploit them. The future of AI holds great promise, but it also demands careful consideration and ethical responsibility to navigate the potential pitfalls. 

Ifeanyi is a research software technician in the Data Analytics Team at LIDA. The team manages the institute's in-house trusted research environment, LASER, and provides an end-to-end service for research teams.

With a background in Applied Artificial Intelligence, Ifeanyi is passionate about developments in AI and how best to use it for positive societal outcomes.