The UK government has expressed a desire to take an independent stance on artificial intelligence (AI) regulation, planning to adopt a different approach from its major Western counterparts, such as the EU and the US. Feryal Clark, the UK Minister for AI and Digital Government, emphasized in an interview with CNBC that the UK must "do its own thing" and ensure that necessary regulations on AI model safety are implemented early on.
Clark mentioned that the UK government has established good relationships with several AI companies, such as OpenAI and Google DeepMind, which voluntarily open their models to the government for safety testing. She stated, "Safety must be integrated from the early stages of model development, so we will work with the industry to develop relevant safety measures."
This view is supported by UK Prime Minister Keir Starmer, who pointed out that post-Brexit, the UK has greater regulatory freedom. While different regulatory models exist around the world, including those of the EU and the US, Starmer noted that the UK can choose whichever approach best aligns with its own interests.
So far, the UK has not formally introduced legislation specifically targeting AI, instead relying on various regulatory bodies to manage it under existing rules. This contrasts sharply with the EU, which has rolled out the comprehensive AI Act aimed at creating unified rules for the technology. The US, meanwhile, has no federal AI regulation, only a patchwork of state and local regulatory frameworks.
Although the UK government committed in 2022 to regulating "frontier" AI models, specific details on safety legislation have yet to be announced, with formal rules to be proposed only after consultation with the industry. Chris Mooney, a partner at law firm Marriott Harrison, believes the UK's "wait-and-see" stance on AI regulation has created uncertainty, leaving businesses dissatisfied and uneasy.
On copyright, the UK government is also reviewing the existing framework to assess whether exceptions should be made for AI developers who use works by artists and media publications to train models. Sachin Dev Duggal, CEO of AI startup Builder.ai, expressed concern over the government's action plan, arguing that advancing it without clear rules is "borderline reckless."
Even so, some industry insiders believe the UK could adopt a more flexible regulatory approach. Russ Shaw, founder of Tech London Advocates, said the UK is striving to find a "third way" on AI safety and regulation: crafting regulatory provisions tailored to specific sectors, such as finance and healthcare.
Key Points:
🌍 The UK aims to be independent from the EU and US in AI regulation, establishing rules that serve its own interests.
🤝 The UK government has built good relationships with major AI companies and is committed to ensuring safety at the early stages of model development.
📝 While no formal AI regulatory laws have been introduced, the government plans to engage in extensive consultations with the industry to formulate relevant rules.