The Evolution of AI Regulation: A Global Perspective

Developing AI is a genuinely perplexing task for humans, and for AI, the perplexing task is to understand us. There have nonetheless been several tumultuous developments in this field that governments can no longer ignore. Some governments are waking up to this alarm, while others keep snoozing it. Hence we should look at the governments that are awake and study how they tackle AI. One such government I will discuss is the United Kingdom, which has already made major budget announcements.

The following announcements from the UK illustrate this:
AI announcements from the UK

  • 100 million pounds to help realize new AI innovations and support regulators' technical capabilities.
  • 10 million pounds to jumpstart regulators' AI capabilities - for research, the everyday ability to address AI risks in their domains, and practical tools to build a foundation of AI expertise.
  • Committing 1.5 billion pounds to build a supercomputer in the public sector.
  • 80 million pounds to boost AI research through 9 new research hubs across the UK.
  • The UK hosts the third-largest number of AI unicorns and start-ups in the world.

These budget allocations are made to regulate AI and for research and development purposes. In this article, I will discuss the UK's pro-innovation, context-based approach; the Artificial Intelligence (Regulation) Bill, a private member's bill proposed by Lord Holmes of Richmond (Conservative); and recent AI summits around the world. For this article, we first need to get familiar with the types of AI.

There are broadly three types:

  • Highly capable general-purpose AI: Foundation models that can perform a wide variety of tasks and match or exceed the capabilities present in today's most advanced models. Generally, such models will span from novice through to expert capabilities with some even showing superhuman performance across a range of tasks.
     
  • Highly capable narrow AI: Foundation models that can perform a narrow set of tasks, normally within a specific field such as biology, with capabilities that match or exceed those present in today's most advanced models. Generally, such models will demonstrate superhuman abilities on these narrow tasks or domains.
     
  • Agentic AI or AI agents: An emerging subset of AI technologies that can competently complete tasks over long timeframes and with multiple steps. These systems can use tools such as coding environments, the internet, and narrow AI models to complete tasks.

UK's Philosophy for AI: PRO-INNOVATION APPROACH

The UK White Paper presented in 2023 targeted existing AI technologies. Its framework revolved around five principles applied on a context-based approach: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The framework leans on sector-specific measures because, obviously, the level of risk is determined by where and how AI is used.

The pro-innovation approach is the UK's overall philosophy, which aims to create an environment where AI can flourish, recognizing its potential to revolutionize public services and drive economic growth. It is an approach to AI regulation that seeks to balance the benefits and risks of these technologies, while supporting and encouraging the development and adoption of AI.

So much for the established technologies, but what about the newly emerging ones? One of them is highly capable general-purpose AI, or frontier AI (as the UK defines it). The government feels that developers in this category currently face the least clear legal responsibilities. The risk may extend considerably when a highly capable system is a general-purpose one that can be used in a wide range of applications. Hence, the UK government has set out nine safety processes describing how a company can ensure and maintain the safety of AI technologies.

Alongside these measures, the government has also set up the AI Safety Institute (a non-statutory body). This institute will act to fill the knowledge gap between the government and AI systems, especially the most powerful ones. In addition, it will research new techniques for understanding and mitigating AI risk, and conduct fundamental research on how to keep people safe in the face of fast and unpredictable progress in AI. The UK White Paper gave broader attention to highly capable general-purpose AI because, as the name suggests, such AI is highly capable and will pose more risk, and therefore (according to the UK government) deserves more attention.

The government's approach to implementing the five principles (the context-based approach) stated above was a non-statutory one. The reason given is that non-statutory implementation offers critical adaptability in a field that is developing quite rapidly.

There is, of course, a statutory dimension to consider. Scholars believe that more resources are needed to enforce any statutory duty effectively. A statutory footing is, without debate, the need of the hour to watch over AI capabilities and the companies that hold the power to develop those capabilities.

What I Think Might Be An Issue

The core of any AI regulation framework truly depends on transparency of data between companies and the government. So what if a company refuses to share its precious data and resources? Why would a company go open source on that? Some, however, have agreed to. And even if companies do share their data, how can a government ensure its protection, so that the data does not end up jeopardized in the wrong hands or with a black-hat hacker? The most visible risks to us from AI are the creation of fake content, the generation of fake imagery (both sexual and non-sexual), and AI hallucinations. These issues generally arise with LLMs (Large Language Models). What safety can any government grant its citizens (us)?

Artificial Intelligence (Regulation) Bill – AI AUTHORITY IN UK

This bill would establish an AI Authority to ensure that existing regulations are taken into account and to undertake a gap analysis of regulatory bodies' responsibilities in respect of AI: essentially a watchdog. The authority would also perform one of the most important tasks for any government, namely monitoring the economy, which may be affected by upcoming AI advancements (assuming that AI and the economy are inextricably linked). The bill would also establish regulatory sandboxes for AI, mirroring parts of the UK White Paper on AI.

The bill treats 'AI' as an umbrella term covering several technologies and terminologies: narrow AI, artificial general intelligence (AGI, also known as 'strong AI'), machine learning, and deep learning. It also identifies twelve broad challenges of AI. The bill further cites a statement from the consultancy firm PwC that the UK's GDP will be 10.3% higher in 2030 because of AI, and highlights the fact that legislative action is inevitable for any country.

AI Summits

  1. AI Safety Summit - November 1-2, 2023.
    This was the first ever AI safety summit, which produced the signed 'Bletchley Park Declaration'. Twenty-eight countries participated, India among them, along with China, the USA, and the European Union. The concerns were once again the same: identifying the risks from AI and tackling them through a science-based approach in collaboration with other countries. The declaration focuses on countering frontier AI so that it does not threaten human freedom or democracy, and does not promote racism. From the Indian perspective, we are following a risk-based, user-harm approach.
    The summit's emphasis was on international cooperation.
     
  2. AI Seoul Summit - May 21-22, 2024
    This summit was duly held in the light of the AI Safety Summit of 2023; a select number of global industry leaders were invited to give updates on the work they were obligated to do under the Bletchley Park Declaration. Three critical priorities were addressed at this summit: safety, innovation, and inclusivity. Sixteen AI tech companies signed commitments to develop AI safely. One major outcome was that the leaders agreed to establish the first international network of AI Safety Institutes, reflecting their efforts towards developing the science of AI safety. This will further include sharing information about models, their limitations, capabilities, and risks, as well as monitoring specific harms.

14 companies have signed the Seoul AI Business Pledge. The following is the list of companies that agreed to join the Pledge:

  • Adobe
  • Anthropic
  • Cohere
  • Google
  • IBM
  • Kakao
  • KT
  • LG AI Research
  • Microsoft
  • NAVER
  • OpenAI
  • Salesforce
  • Samsung Electronics
  • SK Telecom
  • Cisco Systems (Joined 13 August 2024)

AI Action Summit – Feb 10-11, 2025

The summit was organized in Paris, France, and co-chaired by India. Countries including China, Brazil, France, and Australia signed a joint statement on 'inclusive and sustainable artificial intelligence for people and the planet'. Fifty-seven countries took part in the summit, where discussions covered:

  • Public-interest AI
  • Future of work
  • Innovation and culture
  • Trust in AI
Conclusion
Now we have an idea of how governments are shaping themselves for the AI generation. What it will really come down to, I think, is the effective implementation of all these regulations, so that visible results follow. The future now belongs to AI developers and lawmakers, while the rest of us can merely exist, and perhaps protest if something feels wrong. But the bottom line is that we simply exist and remain subservient.
