The term artificial intelligence (AI) was first coined by John McCarthy,1 who
defined AI as the science and engineering of making intelligent machines,
especially intelligent computer programs.2 In layman's terms, AI is about making
machines more intelligent and capable of performing increasingly complex tasks;
it finds patterns that make sense of big data. AI benefits areas as varied as
governance, decision making, and development.
It is changing our
lives with innovations like self-driving vehicles3 and voice assistants
like Siri and Alexa.4 AI experiments have produced favorable outcomes on some
occasions and disastrous outcomes on others. For instance, the first
fatal accident involving AI occurred when an Uber self-driving car struck and
killed a pedestrian in Arizona.5
Moreover, there is always a high risk of AI technology being
misused in egregious ways, such as privacy violations. Therefore, the government
must provide smart regulation for AI, encompassing flexible, innovative,
imaginative, and generally new forms of social control.6
- Why AI Has to Be Regulated[1]
AI is one component of the more general concept of intelligent machines. The
rise of smart machines is both a challenge and an opportunity for the
foreseeable future. AI is a broad concept, not a single technology: it is
powered by algorithms, and regulation comes into play when those algorithms
become manifested in a specific application. While there is much investment in
AI technology, investment in AI safety is lacking.[2] As such, there should be a
government committee to oversee anything related to AI, just as there are
authorities for managing food and drug safety,[3] automotive safety,[4] and
aircraft safety.[5]
Such a committee is vital because the government should
oversee things that pose a risk to the public, and AI unequivocally has the
potential to endanger the public; it should therefore be regulated like other
dangerous industries. The main obstacle to creating a regulatory agency is that
government procedures take time. The government and other corporate
organizations usually take action only after a disaster occurs and public
protests follow.[6]
On the other hand, the technology industry's approach is to work within the
boundary of AI ethics and principles. For instance, Google has pledged not to
use AI algorithms for weapons and other technologies that cause "overall harm,"
but can we expect that every organization will practice similar
restraint?[7] No, we cannot. Researchers and application developers are often
pressured to finish projects under tight time constraints. The danger of rushed
development is that even a relatively small mistake can have
disastrous consequences.[8] Thus, it is necessary to incorporate AI ethics into
laws so the government can better regulate AI.
Tech giant executives like Tesla Chief Executive Officer Elon Musk and Google
Chief Executive Officer Sundar Pichai have already raised concerns regarding
AI regulation.14 Musk has warned that, as a generation, we are already far
behind in passing regulations for AI because the technology has already advanced
tremendously and continues to expand every day.15 Pichai has said that AI is too
important not to regulate and has called for a "sensible
approach."16 Microsoft President Brad Smith is another distinguished figure
who supports regulating AI.17
- Why Arguments Against AI Regulation Are Unpersuasive
Many researchers believe that while AI used in the public domain should be
regulated, the development of new applications in the private sector should
remain unregulated. These researchers believe science should not be regulated or
restricted because it will hamper innovations that can be life-changing.18 For
example, AI is currently used to detect earthquakes,19 and in the future, may
help doctors better predict patients' risks of cancer.20 However, this argument
is inherently flawed because it assumes that AI regulation
will result in a complete restriction of AI.
In reality, AI regulations will
allow the continued use of the technology, but in a more secure, transparent,
and responsible manner. More importantly, this argument also assumes that AI
innovation is always beneficial, and any regulation would stifle innovation.
However, there have been many cases in which the application of AI technology
has been extremely dangerous and controversial, such as law enforcement's use of Clearview AI's facial recognition products[9] and Cambridge Analytica's use of
personal data to target voters.[10]
The second argument asserted by researchers is that putting limitations on
technological advancements will inevitably be ineffective because people do not
diligently follow regulations. For instance, despite the medical community
agreeing not to experiment on human embryos, a Chinese scientist, He Jiankui,
ignored Chinese regulations on genome editing and created gene-edited
babies.[11] Although some people might choose to ignore AI regulations, that
does not mean AI should not be regulated at all. If this logic were generally
accepted, no law would ever be passed simply because some individuals would
refuse to abide by it. That is plainly not the case, as today's law-governed
society demonstrates.
Another argument against AI regulation is that the regulators and policymakers
are unfit to draft regulations because they are uneducated about AI.[12] This
argument fails to recognize that AI regulation does not depend on regulators'
technical mastery; if it did, no advanced technology whose risks extend beyond
the scope of existing government expertise could ever be regulated.[13]
Moreover, governments regulating new and unfamiliar technology is nothing new.
The U.S. government successfully passed regulations for automobiles[14] and
railways,[15] both technologies that were once
revolutionary.
Furthermore, AI regulation need only address three main questions: who can
use AI, on whom will AI be used, and for what purpose?[16] These three areas can
be regulated by the government without the need for any deep technical
expertise. For example, the U.S. Government did not need to understand the
underlying technology of steam engines to regulate the fare prices of
railways.[17] Similarly, AI can be regulated with existing resources rather than
left completely unregulated.
- Considerations to Make Prior to Regulating
When drafting regulations for AI, the first thing to keep in mind is that the
speed of regulation should match the rate at which the technology is evolving.
Second, before moving forward with new laws and regulations, the government must
balance the risks and benefits of AI. Human minds are naturally critical and
tend to notice risks first, but AI allows us to understand things far beyond
our own comprehension: it surpasses our natural intelligence.
For example, DeepMind's AlphaGo is not constrained by the limits of human
knowledge because it combines its own neural network with a powerful search
algorithm to play board games.[18] Google now employs this technology in its
data center cooling systems, which are forty percent more energy efficient than
traditional cooling methods.[19] Furthermore, with AI's energy efficiency,
we can cut carbon emissions.[20] In the medical field, we can use
algorithms to make better clinical judgments and save the healthcare industry
billions of dollars.[21] In agriculture, we can better utilize each acre of land
by detecting plant diseases, controlling pests, and automating
equipment.[22] Everywhere we look, applying this technology yields
incredible gains.
If we focus only on the negative consequences of AI, the resulting regulations
will stifle innovation. For example, flying today is much safer than it was
decades ago.[23] How did we get there? Every single accident was
thoroughly investigated. Rather than abandoning the use of planes completely, we
asked how and why things went wrong. AI can be compared to the
use of statistics in the 19th century, which allowed us to understand things in
an entirely new way. For example, we used the Apollo guidance computers and
statistics to take people to the moon and back.[24] We are doing something
similar with AI, but on a whole new level.
- The Emerging State of AI Laws
It is not the technology itself that needs to be regulated, but the intended use
of the technology. The solution is not to control and regulate the entire sphere
of AI, but its actual applications, be it self-driving cars, medical systems,
or recreation. But herein lies the problem: there are potential outcomes and
applications of AI technology that are still unknown. To solve this, we need to
work in stages.
The first stage is to follow a comprehensive set of principles,
and several organizations have already provided ethical principles for
AI.[25] An excellent example is the Asilomar AI Principles, a set of
twenty-three principles released in 2017.[26] The second stage is to understand
the different levels of AI technology in need of regulation: the first level is
data and ensuring it is not misused, the second is the need for built-in
privacy frameworks, and the final level is understanding where the technology
has gone wrong.
The European Union is the most vigorous in proposing new AI rules and
regulations.[27] With the advent of autonomous vehicles, many European countries
including Belgium, Estonia, Germany, Finland, and Hungary enacted laws that
allow for the testing of autonomous vehicles on their roads.[28]
In contrast, in the United States, the White House has maintained a "light
touch" regulatory policy approach when it comes to AI.[29] Recently the White
House released "Guidance for the Regulation of Artificial Intelligence
Applications."[30] These guidelines follow an approach in which sectoral
regulators formulate regulations within their separate
jurisdictions.[31] This approach allows the federal government to regulate some
aspects of autonomous vehicles while state authorities control others.
Data is another aspect of AI that needs regulatory attention. Laws concerning
data are significant for AI since those laws can impact the use and growth of AI
systems. In 2018, the European Union's General Data Protection
Regulation (GDPR) took effect,[32] imposing a relatively restrictive regulatory
approach to data privacy and usage across its member states.[33] The
Catholic Church has also expressed the need for stricter
ethical and moral standards on the development of AI.[34] The Rome Call for AI
Ethics set forth six basic principles for AI: transparency, responsibility,
impartiality, inclusion, reliability, and security.[35]
Countries such as the United States, Brazil, and the United Kingdom have already
enacted data privacy laws.[36] Singapore, Australia, and Germany are actively
considering such regulations and are having advanced discussions on this
topic.[37] Also, many countries are concerned about the potential use of AI to
power autonomous weapon systems. For example, Belgium has passed legislation to
thwart the use or development of lethal autonomous weapons systems.[38]
- Conclusion
AI is set to transform society through innovations across all spheres of human
endeavor. However, there must be regulations to control AI and to hold its
creators accountable when mishaps occur. By ensuring that AI is developed
responsibly, we can not only make future generations believe in the power of
technology, but more importantly, improve society through the power of AI
technology.
- Endnotes
- Smart Regulation, EUROFOUND: EUROPEAN INDUSTRIAL RELATIONS DICTIONARY,
https://www.eurofound.europa.eu/observatories/eurwork/industrial-relations-dictionary/smart-regulation
[https://perma.cc/4E2D-MG68].
- Wyatt Berlinic, Why AI Safety Is Important, WYABER (July 7, 2019),
https://wyaber.com/why-ai-safety-is-important/ [https://perma.cc/UH7N-3EKL].
- What We Do, U.S. FOOD & DRUG ADMIN. (Mar. 28, 2018), https://www.fda.gov/aboutfda/what-we-do [https://perma.cc/H9LY-54FV].
- NAT'L HIGHWAY TRAFFIC SAFETY ADMIN., https://www.nhtsa.gov/ [https://perma.cc/M2M6-UX9S].
- Safety: The Foundation of Everything We Do, FED. AVIATION ADMIN. (Nov.
6, 2019, 3:01 PM), https://www.faa.gov/about/safety_efficiency/
[https://perma.cc/7MGQ-SMN4].
- See Jillian D'Onfro, Google Scraps Its AI Ethics Board Less than Two Weeks
After Launch in the Wake of Employee Protest, FORBES (Apr. 4, 2019, 7:52 PM),
https://www.forbes.com/sites/jilliandonfro/2019/04/04/google-cancels-its-ai-ethics-board-less-than-two-weeks-after-launch-in-the-wakeof-employee-protest/?sh=64d50f056e28
[https://perma.cc/F2EJ-MYRP].
- Mark Bergen, Google Renounces AI for Weapons; Will Still Work with
Military, BLOOMBERG (June 7, 2018, 3:40 PM), https://www.bloomberg.com/news/articles/2018-06-07/google-renounces-ai-for-weapons-but-will-still-sell-to-military
[https://perma.cc/A8TU-RVBN].
- See generally Geoff White, Use of Facial Recognition Tech 'Dangerously
Irresponsible', BBC NEWS
- Kashmir Hill, The Secretive Company that Might End Privacy as We Know
It, N.Y. TIMES (Feb. 10, 2020), https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
[https://perma.cc/2ZQG-JM2Y].
- Nicholas Confessore, Cambridge Analytica and Facebook: The Scandal and
the Fallout So Far, N.Y. TIMES (Apr. 4, 2018),
https://www.nytimes.com/2018/04/04/us/politics/cambridgeanalytica-scandal-fallout.html
[https://perma.cc/8MFY-YTNB].
- Sharon Begley, Amid Uproar, Chinese Scientist Defends Creating
Gene-Edited Babies, STAT (Nov. 28, 2018), https://www.statnews.com/2018/11/28/chinese-scientist-defends-creating-gene-edited-babies/
[https://perma.cc/KE3U-YEN4].
- Tristan Greene, US Government Is Clueless About AI and Shouldn't Be
Allowed to Regulate It, NEXT WEB (Oct. 24, 2017), https://thenextweb.com/artificial-intelligence/2017/10/24/us-government-is-clueless-about-ai-and-shouldnt-be-allowed-to-regulate-it/
[https://perma.cc/9SSM-5UWY].
- Michael Spencer, Artificial Intelligence Regulation May Be Impossible,
FORBES (Mar. 2, 2019, 9:34 PM), https://www.forbes.com/sites/cognitiveworld/2019/03/02/artificial-intelligence-reg ulation-will-be-impossible/?sh=6670ff2e11ed
[https://perma.cc/MJ83-JWV7].
- See generally The Interstate Commerce Act Is Passed, U.S. SENATE,
https://www.senate.gov/artandhistory/history/minute/Interstate_Commerce_Act_Is_Passed.htm
[https://perma.cc/R4PV-RK7V].
- Federal Legislation Makes Airbags Mandatory, HISTORY (July 28, 2019),
https://www.history.com/this-day-in-history/federal-legislation-makes-airbags-mandatory
[https://perma.cc/2QY3-NTMW].
- We Can't Regulate AI, AI MYTHS, https://www.aimyths.org/we-cant-regulate-ai https://perma.cc/5Y65-RHKT
- See generally The Interstate Commerce Act Is Passed, supra note 26.
- David Silver & Demis Hassabis, AlphaGo Zero: Starting from Scratch,
DEEPMIND (Oct. 18, 2017),
https://deepmind.com/blog/article/alphago-zero-starting-scratch
[https://perma.cc/ZRP2-SZMU].
- Will Knight, Google Just Gave Control Over Data Center Cooling to an AI,
MIT TECH. REV. (Aug. 17, 2018), https://www.technologyreview.com/2018/08/17/140987/google-just-gave-control-over-data-center-cooling-to-an-ai/
[https://perma.cc/3G7R-RRX2].
- James Vincent, Here's How AI Can Help Fight Climate Change According to
the Field's Top Thinkers, VERGE (June 25, 2019, 8:02 AM), https://www.theverge.com/2019/6/25/18744034/ai-artificial-intelligence-ml-climate-change-fight-tackle
[https://perma.cc/B93A-JE9Z].
- See Bernard Marr, How Is AI Used in Healthcare - 5 Powerful Real-World
Examples that Show the Latest Advances, FORBES (July 27, 2018, 12:41 AM)
https://www.forbes.com/sites/bernardmarr/2018/07/27/how-is-ai-used-in-healthcare-5-powerful-real-world-examples-that-show-the-latest-advances/?sh=ac7c3f05dfbe
[https://perma.cc/R4PV-RK7V].
- The Future of AI in Agriculture: Intel-Powered AI Helps Optimize Crop
Yields, INTEL, https://www.intel.in/content/www/in/en/big-data/article/agriculture-harvests-big-data.html
[https://perma.cc/CZ8P-52LV].
- Mark Ellwood, What Flying Was Like 30 Years Ago, CONDÉ NAST TRAVELER (Aug. 28, 2017),
https://www.cntraveler.com/story/what-flying-was-like-30-years-ago
[https://perma.cc/4KMP-URTH].
- See Charles Fishman, The Amazing Handmade Tech that Powered Apollo 11's
Moon Voyage, HISTORY (July 17, 2019), https://www.history.com/news/moon-landing-technology-inventions-computers-heat-shield-rovers
[https://perma.cc/KQA8-RATP].
- See Thilo Hagendorff, The Ethics of AI Ethics: An Evaluation of
Guidelines, 30 MINDS & MACHS. 99 (2020).
- TechTarget Contributor, Asilomar AI Principles, TECHTARGET (Feb. 2019),
https://whatis.techtarget.com/definition/Asilomar-AI-Principles
[https://perma.cc/3CVV-SLHT].
- See Shaping Europe's Digital Future: Artificial Intelligence, EUROPEAN COMM'N (Jan. 8, 2021),
https://ec.europa.eu/digital-single-market/en/artificial-intelligence
[https://perma.cc/JUE8-6LF8].
- Kathleen Walch, AI Laws Are Coming, FORBES (Feb. 20, 2020, 11:00 PM),
https://www.forbes.com/sites/cognitiveworld/2020/02/20/ai-laws-are-coming/?sh=38f5ecfa2b48
[https://perma.cc/GW9Q-DGRP].
- Brandi Vincent, White House Proposes 'Light Touch Regulatory Approach'
for Artificial Intelligence, NEXTGOV (Jan. 7, 2020),
https://www.nextgov.com/emerging-tech/2020/01/white-house-proposes-light-touch-regulatory-approach-artificial-intelligence/162276/
[https://perma.cc/9YPK-7EFJ].
- Clyde Wayne Crews Jr., How the White House "Guidance for Regulation of
Artificial Intelligence"
Invites Overregulation, FORBES (Apr. 15, 2020, 11:35 AM), https://www.forbes.com/sites/waynecrews/2020/04/15/how-the-white-house-guidance-for-regulation-of-artificial-intelligence-invites-overregulation/#31fedaf53a2c
[https://perma.cc/BQZ9-V83Q].
- Lee Tiedrich, AI Update: White House Issues 10 Principles for Artificial
Intelligence Regulation, INSIDE TECH MEDIA (Jan. 14, 2020),
https://www.insidetechmedia.com/2020/01/14/ai-update-white-house-issues-10-principles-for-artificial-intelligence-regulation/
[https://perma.cc/V6BK-GFJA].
- Data Protection in the EU, EUROPEAN COMM'N,
https://ec.europa.eu/info/law/law-topic/data-protection/data-protection-eu_en
[https://perma.cc/2G5Y-K8TN].
- See generally Juliana De Groot, What Is the General Data Protection
Regulation? Understanding & Complying with GDPR Requirements in 2019,
DIGIT. GUARDIAN (Sept. 30, 2020),
https://digitalguardian.com/blog/what-gdpr-general-data-protection-regulation-understanding-and-complying-gdpr-data-protection
[https://perma.cc/3JEF-TERJ] (describing GDPR regulations).
- Taylor Lyles, The Catholic Church Proposes AI Regulations that 'Protect
People', VERGE (Feb. 28, 2020, 4:06 PM), https://www.theverge.com/2020/2/28/21157667/catholic-church-ai-regulations-protect-people-ibm-microsoft-sign
[https://perma.cc/8RAH-CTR6].
- Lance Eliot, Pope Francis Offers 'Rome Call for AI Ethics' to Step-Up AI
Wakefulness, Which Is a
Wake-Up Call for AI Self-Driving Cars too, FORBES (Mar. 10, 2020, 11:31 AM),
https://www.forbes.com/sites/lanceeliot/2020/03/10/pope-francis-offers-rome-call-for-ai-ethics-to-step-up-ai-wokefulness-which-is-a-wake-up-call-for-ai-self-driving-cars-too/?sh=2d15ec567bae
[https://perma.cc/LXE7-ABXY].
- A Practical Guide to Data Privacy Laws by Country, I-SIGHT SOFTWARE
(Nov. 5, 2018),
https://i-sight.com/resources/a-practical-guide-to-data-privacy-laws-by-country/
[https://perma.cc/4ZCU-F63K].
- Walch, supra note 40.
- Mary Wareham, The Killer Robots Ban Is Coming. What Will Belgium Do?,
HUM. RTS. WATCH (May 30, 2018, 12:00 AM),
https://www.hrw.org/news/2018/05/30/killer-robots-ban-coming-what-will-belgium-do
[https://perma.cc/U7EG-24VR].