
Towards Regulation of AI

The term Artificial Intelligence (AI) was first coined by John McCarthy,1 who defined AI as the science and art of making intelligent machines, particularly computer programs.2 In layman's terms, AI is the science and engineering of making machines more intelligent and capable of performing more complex tasks. At its core, it is about extracting patterns to make sense of big data. AI benefits various areas such as governance, decision-making, and development.

It is changing our lives through innovations like self-driving vehicles3 and voice assistants such as Siri and Alexa.4 AI experiments have produced favorable outcomes on some occasions and disastrous ones on others. For instance, the first fatal accident involving AI occurred when an Uber self-driving car struck and killed a pedestrian in Arizona.5

Moreover, there is always a high risk of AI technology being misused in egregious ways, such as privacy violations. Therefore, the government must provide smart regulation of AI that encompasses flexible, innovative, and imaginative new forms of social control.6
  1. Why AI Has to Be Regulated[1]
    AI is one component of the broader concept of intelligent machines. The rise of smart machines presents both a challenge and an opportunity for the foreseeable future. AI is not a single technology but a broad discipline: it is the fuel that powers algorithms. Regulation comes into play when those algorithms are embodied in specific applications. While there is considerable investment in AI technology, investment in AI safety is lacking.[2] As such, there should be a government committee to oversee anything related to AI, just as there are authorities for managing food and drug safety,[3] automotive safety,[4] and aircraft safety.[5]

     Such a committee is vital because the government should oversee things that pose a risk to the public, and AI unequivocally has the potential to endanger the public; it should therefore be regulated like other dangerous industries. The main obstacle to creating a regulatory agency is that government procedures take time. The government and other corporate organizations usually act only after some disaster takes place, followed by public protests.[6]

    On the other hand, the technology industry's approach is to work within the boundaries of AI ethics and principles. For instance, Google has pledged not to use AI algorithms for weapons and other technologies that cause "overall harm," but can we expect every organization to practice similar restraint?[7] No, we cannot. Researchers and application developers are often pressured to finish projects under tight deadlines, and the danger of rushed development is that even a relatively small mistake can have disastrous consequences.[8] Thus, it is necessary to incorporate AI ethics into law so the government can better regulate AI.

    Tech giant executives like Tesla Chief Executive Officer Elon Musk and Google Chief Executive Officer Sundar Pichai have already called for AI regulation.14 Musk has said that, as a generation, we are already far behind in regulating AI because the technology has advanced tremendously and continues to expand every day.15 Pichai has said that AI is too important not to regulate and calls for a "sensible approach."16 Microsoft President Brad Smith is another distinguished figure who supports regulating AI.17
  2. Why Arguments Against AI Regulation are Unpersuasive
    Many researchers believe that while AI used in the public domain should be regulated, the development of new applications in the private sector should remain unregulated. These researchers believe science should not be regulated or restricted because doing so will hamper innovations that can be life-changing.18 For example, AI is currently used to detect earthquakes,19 and in the future it may help doctors better predict patients' risks of cancer.20 However, this argument is inherently flawed because it assumes that AI regulation would amount to a complete prohibition of AI.

    In reality, AI regulations will allow the continued use of the technology, but in a more secure, transparent, and responsible manner. More importantly, this argument also assumes that AI innovation is always beneficial, and any regulation would stifle innovation. However, there have been many cases in which the application of AI technology has been extremely dangerous and controversial, such as law enforcement's use of Clearview AI's facial recognition products[9] and Cambridge Analytica's use of personal data to target voters.[10]

    The second argument asserted by researchers is that limitations on technological advancement will inevitably be ineffective because people do not diligently follow regulations. For instance, despite the medical community's agreement not to experiment on human embryos, a Chinese scientist, He Jiankui, ignored Chinese regulations on genome editing and created gene-edited babies.[11] Although some people might choose to ignore AI regulations, that does not mean AI should not be regulated at all. If this logic were generally accepted, then no law would ever be passed simply because some individuals will not abide by it. That is plainly not the case: society today is governed by laws.

    Another argument against AI regulation is that regulators and policymakers are unfit to draft regulations because they are uneducated about AI.[12] This argument fails to recognize that AI regulation does not depend on regulators' technical knowledge; if it did, no human could oversee advanced technology whose risks extend beyond the scope of government expertise.[13] Moreover, governments regulating new and unfamiliar technology is not a new concept. The U.S. government successfully passed regulations for railways[14] and automobiles,[15] both examples of technologies that were once revolutionary.

    Furthermore, AI regulation needs to address three main questions: who can use AI, on whom will AI be used, and for what purpose?[16] These three areas can be regulated by the government without any technical expertise. For example, the U.S. government did not need to understand the underlying technology of steam engines to regulate railway fares.[17] AI, too, can be regulated with existing resources rather than being left completely unregulated.
  3. Considerations to Make Prior to Regulating
    When drafting regulations for AI, the first thing to keep in mind is that the pace of regulation should match the rate at which the technology is evolving. Second, before moving forward with new laws and regulations, the government must balance the risks and benefits of AI. Human minds are naturally drawn to the risks first, but AI is a technology that lets us understand things far beyond the comprehension of our own minds: it surpasses our natural intelligence.
    For example, DeepMind's AlphaGo is not constrained by the limits of human knowledge because it combines its own neural network with a powerful search algorithm to play board games.[18] Google now employs this technology in its data cooling systems, which are forty percent more energy efficient than traditional methods of data cooling.[19] Furthermore, with AI's energy efficiency, we can cut carbon emissions.[20] In the medical field, we can use algorithms to make better clinical judgments and save the healthcare industry billions of dollars.[21] In agriculture, we can better utilize each acre of land by detecting plant diseases, controlling pests, and automating equipment.[22] Everywhere we look, if we apply this technology, we will see incredible gains.

    If we focus only on the negative consequences of AI, the resulting regulations will stifle innovation. For example, flying today is much safer than it was decades ago.[23] How did we get there? Every accident that happened was thoroughly investigated. Rather than abandoning planes altogether, we asked what went wrong and how. AI can be compared to the rise of statistics in the 19th century, which allowed us to understand things in an entirely new way. For example, we used the Apollo computers and statistics to take people to the moon and back.[24] We are doing something similar with AI, but on a whole new level.
  4. The Emerging State of AI Laws
    It is not the technology itself that needs to be regulated, but the intended use of the technology. The solution is not to control and regulate the entire sphere of AI, but its actual applications, be it self-driving cars, medical systems, or recreation. But herein lies the problem: there are potential outcomes and applications of AI technology that are still unknown. To solve this, we need to work in stages.

    The first stage is to follow a comprehensive set of principles, and several organizations have already published ethical principles for AI.[25] An excellent example is the Asilomar AI Principles, a set of twenty-three principles released in 2017.[26] The second stage is to understand the different levels of AI technology in need of regulation. The first level is data and ensuring it is not misused, the second level is the need for built-in privacy frameworks, and the final level is understanding where the technology has gone wrong.
The European Union is the most vigorous in proposing new AI rules and regulations.[27] With the advent of autonomous vehicles, many European countries including Belgium, Estonia, Germany, Finland, and Hungary enacted laws that allow for the testing of autonomous vehicles on their roads.[28]

In contrast, in the United States, the White House has maintained a "light touch" regulatory approach to AI.[29] Recently, the White House released "Guidance for the Regulation of Artificial Intelligence Applications."[30] These guidelines follow an approach in which sectoral regulators formulate regulations within their separate jurisdictions.[31] This approach allows the federal government to regulate some aspects of autonomous vehicles and state authorities to control others.

Data is another aspect of AI that needs regulatory attention. Laws concerning data are significant for AI since those laws can impact the use and growth of AI systems. In 2018, the European Union introduced the General Data Protection Regulation (GDPR),[32] which requires its member states to maintain a reasonably prohibitive regulatory approach for data privacy and usage.[33] The Catholic Church has also expressed the need for stricter ethical and moral standards on the development of AI.[34] The Rome Call for AI Ethics set forth six basic principles for AI: transparency, responsibility, impartiality, inclusion, reliability, and security.[35]

Countries such as the United States, Brazil, and the United Kingdom have already enacted data privacy laws.[36] Singapore, Australia, and Germany are actively considering such regulations and are having advanced discussions on this topic.[37] Also, many countries are concerned about the potential use of AI to power autonomous weapon systems. For example, Belgium has passed legislation to thwart the use or development of lethal autonomous weapons systems.[38]

AI is set to transform society through innovations across all spheres of human endeavor. However, there must be regulations to control AI and to hold its creators accountable when mishaps occur. By ensuring that AI is developed responsibly, we can not only make future generations believe in the power of technology, but more importantly, improve society through the power of AI technology.

  1. [].
  2. Wyatt Berlinic, Why AI Safety Is Important, WYABER (July 7, 2019), [].
  3. What We Do, U.S. FOOD & DRUG ADMIN. (Mar. 28, 2018), [].
  5. Safety: The Foundation of Everything We Do, FED. AVIATION ADMIN. (Nov. 6, 2019, 3:01 PM), [].
  6. See Jillian D'Onfro, Google Scraps Its AI Ethics Board Less than Two Weeks After Launch in the Wake of Employee Protest, FORBES (Apr. 4, 2019, 7:52 PM), [].
  7. Mark Bergen, Google Renounces AI for Weapons; Will Still Work with Military, BLOOMBERG (June 7, 2018, 3:40 PM), [].
  8. See generally Geoff White, Use of Facial Recognition Tech 'Dangerously Irresponsible', BBC NEWS
  9. Kashmir Hill, The Secretive Company that Might End Privacy as We Know It, N.Y. TIMES (Feb. 10, 2020), [].
  10. Nicholas Confessore, Cambridge Analytica and Facebook: The Scandal and the Fallout So Far, N.Y. TIMES (Apr. 4, 2018), [].
  11. Sharon Begley, Amid Uproar, Chinese Scientist Defends Creating Gene-Edited Babies, STAT (Nov. 28, 2018), [].
  12. Tristan Greene, US Government Is Clueless About AI and Shouldn't Be Allowed to Regulate It, NEXT WEB (Oct. 24, 2017), [].
  13. Michael Spencer, Artificial Intelligence Regulation May Be Impossible, FORBES (Mar. 2, 2019, 9:34 PM), [].
  14. See generally The Interstate Commerce Act Is Passed, U.S. SENATE, [].
  15. Federal Legislation Makes Airbags Mandatory, HISTORY (July 28, 2019), [].
  16. We Can't Regulate AI, AI MYTHS, [].
  17. See generally The Interstate Commerce Act Is Passed, supra note 14.
  18. David Silver & Demis Hassabis, AlphaGo Zero: Starting from Scratch, DEEPMIND (Oct. 18, 2017), [].
  19. Will Knight, Google Just Gave Control Over Data Center Cooling to an AI, MIT TECH. REV. (Aug. 17, 2018), [].
  20. James Vincent, Here's How AI Can Help Fight Climate Change According to the Field's Top Thinkers, VERGE (June 25, 2019, 8:02 AM), [].
  21. See Bernard Marr, How Is AI Used in Healthcare - 5 Powerful Real-World Examples that Show the Latest Advances, FORBES (July 27, 2018, 12:41 AM) [].
  22. The Future of AI in Agriculture: Intel-Powered AI Helps Optimize Crop Yields, INTEL, [].
  23. Mark Ellwood, What Flying Was Like 30 Years Ago, CONDÉ NAST TRAVELER (Aug. 28, 2017), [].
  24. See Charles Fishman, The Amazing Handmade Tech that Powered Apollo 11's Moon Voyage, HISTORY (July 17, 2019), [].
  25. See Thilo Hagendorff, The Ethics of AI Ethics: An Evaluation of Guidelines, 30 MINDS & MACHS. 99 (2020).
  26. TechTarget Contributor, Asilomar AI Principles, TECHTARGET (Feb. 2019), [].
  27. See Shaping Europe's Digital Future: Artificial Intelligence, EUROPEAN COMM'N (Jan. 8, 2021), [].
  28. Kathleen Walch, AI Laws Are Coming, FORBES (Feb. 20, 2020, 11:00 PM), [].
  29. Brandi Vincent, White House Proposes 'Light Touch Regulatory Approach' for Artificial Intelligence, NEXTGOV (Jan. 7, 2020), [].
  30. Clyde Wayne Crews Jr., How the White House "Guidance for Regulation of Artificial Intelligence" Invites Overregulation, FORBES (Apr. 15, 2020, 11:35 AM), [].
  31. Lee Tiedrich, AI Update: White House Issues 10 Principles for Artificial Intelligence Regulation, INSIDE TECH MEDIA (Jan. 14, 2020), [].
  32. Data Protection in the EU, EUROPEAN COMM'N, [].
  33. See generally Juliana De Groot, What Is the General Data Protection Regulation? Understanding & Complying with GDPR Requirements in 2019, DIGIT. GUARDIAN (Sept. 30, 2020), [] (describing GDPR regulations).
  34. Taylor Lyles, The Catholic Church Proposes AI Regulations that 'Protect People', VERGE (Feb. 28, 2020, 4:06 PM), [].
  35. Lance Eliot, Pope Francis Offers 'Rome Call for AI Ethics' to Step-Up AI Wakefulness, Which Is a Wake-Up Call for AI Self-Driving Cars too, FORBES (Mar. 10, 2020, 11:31 AM), [].
  36. A Practical Guide to Data Privacy Laws by Country, I-SIGHT SOFTWARE (Nov. 5, 2018), [].
  37. Walch, supra note 28.
  38. Mary Wareham, The Killer Robots Ban Is Coming. What Will Belgium Do?, HUM. RTS. WATCH (May 30, 2018, 12:00 AM), [].
