Artificial intelligence, robotics and algorithms are increasingly being used in
a wide range of industries, from healthcare to finance. While there are many
benefits to using these technologies, there are also some legal and ethical
concerns that need to be considered.
Some of the legal concerns that have been raised include issues around data
privacy, liability and safety. For example, if a robot were to injure someone,
who would be held responsible? There are also ethical concerns about how these
technologies will be used. For example, if algorithms are used to make decisions
about things like loan approvals or insurance premiums, there could be a risk of
discrimination against certain groups of people. It is important that these
concerns are considered carefully before artificial intelligence, robotics and
algorithms are deployed more widely. Otherwise, there could be some negative
consequences that outweigh the benefits of using these technologies.
The legal and ethical aspects of each
When it comes to artificial intelligence, robotics, and algorithms, there are a
number of legal and ethical aspects to consider. For example, when developing
artificial intelligence, it is important to ensure that the technology does not
discriminate against certain groups of people or violate their privacy rights.
With robotics, there is the potential for robots to replace human workers in a
variety of industries, which could have a negative impact on employment levels.
And with algorithms, there is always the potential for bias and errors, which
can result in negative consequences for those who are affected by them.
These are just some of the many legal and ethical considerations that need to be
taken into account when developing and using these technologies. It is
imperative that developers do their best to create technologies that are fair
and ethical, and that users be aware of the potential risks and disadvantages
associated with each.
Regulation of artificial intelligence, robotics and algorithms:
When it comes to artificial intelligence, robotics and algorithms, there are a
number of legal and ethical considerations that need to be taken into account.
Here, we take a look at how these technologies are currently regulated and what
the future may hold in this regard.
Currently, there is no specific legislation in place governing the use of
artificial intelligence[1], robotics or algorithms. However, these technologies
are subject to the same laws and regulations as any other technology. This means
that they must comply with general laws such as those relating to data
protection, intellectual property and privacy.
There are also a number of ethical considerations that need to be taken into
account when using these technologies. For example, when designing algorithms,
care must be taken to ensure that they do not discriminate against certain
groups of people. Additionally, when using robots or other forms of artificial
intelligence, it is important to consider the potential impact on people's jobs
and livelihoods.
Looking to the future, it is likely that we will see more specific legislation
surrounding the use of these technologies. This could include regulations
relating to their design and implementation, as well as rules governing their
use in specific industries or sectors. As these technologies become more
prevalent in society, it is crucial that we have a regulatory framework that
keeps pace with them.
The future of artificial intelligence, robotics and algorithms
There is no doubt that artificial intelligence (AI), robotics and algorithms are
revolutionising the way we live and work. But as these technologies become
increasingly sophisticated, it is important to consider the legal and ethical
implications of their use.
When it comes to AI, there are concerns about everything from data privacy[2] to
the potential for robots to take over human jobs. With robotics, there are
issues around safety, particularly when it comes to autonomous vehicles. And
with algorithms[3], there is the risk of biased decision-making if they are not
properly designed[4] and monitored.
It is clear that we need to be thoughtful about how we use these technologies,
and that includes taking into account the legal and ethical implications of
their use.
The adoption and penetration of artificial intelligence in our lives today needs
no further elaboration or illustration. While the technology is still considered
by many to be in its infancy, its presence has been so profound that we do not
realise our reliance on it unless it is specifically pointed out. From Siri and
Alexa to Amazon and Netflix[5], there is hardly any sector that has remained
untouched by artificial intelligence.
Thus, the adoption of artificial intelligence is not the challenge; its
'regulation' is the slippery slope. This leads to questions such as whether we
need to regulate artificial intelligence at all and, if so, whether we need a
separate regulatory framework or whether the existing laws are enough to
regulate artificial intelligence technology.
Artificial intelligence goes beyond ordinary computer programs and technological
functions by incorporating the intrinsically human abilities to apply knowledge
and skills and to learn and improve over time. This makes such systems
human-like. Since humans have rights and obligations, should these human-like
entities have them too?
But at this point in time, there have been no regulations or adjudications by
the Courts acknowledging the legal status of artificial intelligence. Defining
the legal status of AI machines would be the first cogent step in the framing of
laws governing artificial intelligence and might even help with the application
of existing laws.
A pertinent step towards a structured framework was taken by the Ministry of
Commerce and Industry, which set up an 18-member task force[6] in 2017 to
highlight and address the concerns and challenges in the adoption of artificial
intelligence and facilitate the growth of such technology
in India. The Task Force came up with a report in March 2018 in which they
provided recommendations for the steps to be taken in the formulation of a
policy[7].
The Report identified ten sectors which have the greatest potential to benefit
from the adoption of artificial intelligence and also cater to the development
of artificial intelligence-based technologies. The report also highlighted the
major challenges that large-scale implementation of artificial intelligence
might face, namely:
- Encouraging data collection, archiving, and availability with adequate safeguards, possibly via data marketplaces/exchanges;
- Ensuring data security, protection, privacy, and ethical use via regulatory and technological frameworks;
- Digitization of systems and processes with IoT systems[8] whilst providing adequate protection from cyber-attacks[9];
- Deployment of autonomous products and mitigation of impact on employment and safety.
The Task Force also suggested setting up an "Inter-Ministerial National
Artificial Intelligence Mission"[10] for a period of five years, with funding of
around INR 1200 crores, to act as a nodal agency to coordinate all AI-related
activities in India.
Core Legal Issues:
When we look at the adoption of artificial intelligence from a legal and
regulatory point of view, the main issue we need to consider is whether the
existing laws are sufficient to address the legal issues that might arise, or
whether we need a new set of laws to regulate artificial intelligence
technologies. Whilst certain aspects, such as intellectual property rights and
the use of data to develop artificial intelligence, might be covered under the
existing laws, there are some legal issues that might need a new set of
regulations to oversee artificial intelligence technology.
Liability of Artificial Intelligence
The current legal regime does not have a framework where a robot or an
artificial intelligence program might be held liable or accountable in case a
third party suffers any damage due to any act or omission by the program. For
instance, consider a situation where a self-driving car controlled by an
artificial intelligence program gets into an accident. How will liability be
apportioned in such a scenario?
The more complex the artificial intelligence program, the harder it will be to
apply simple rules of liability to it. The issue of apportionment of liability
will also arise when the cause of harm cannot be traced back to any human
element, or where any act or omission by the artificial intelligence technology
which has caused damage could have been avoided by human intervention.
Another instance where the current legal regime may not be able to help is where
an artificial intelligence enters into a contractual obligation after
negotiating the terms and conditions of the contract, and the contract is
subsequently breached.
In United States v Athlone Indus Inc[11], the court held that since robots and
artificial intelligence programs are not natural or legal persons, they cannot
be held liable even if devastating damage is caused. This traditional rule may
need reconsideration with the adoption of highly intelligent technology.
The pertinent legal question is what rules, regulations and laws will govern
these situations, and who is to decide them, given that artificial intelligence
entities are not considered subjects of law.
Personhood of Artificial Intelligence Entities
Personhood of an entity[12] is an extremely important factor in assigning rights
and obligations. Personhood can be either natural or legal. Attribution of
personhood matters because it helps identify who would ultimately bear the
consequences of an act or omission. For artificial intelligence entities to have
any rights or obligations, they should be assigned personhood to avoid legal
loopholes. "Electronic personhood"[13]
could be attributed to such entities in situations where they interact
independently with third parties and take autonomous decisions.
Protection of Privacy and Data
For the development of better artificial intelligence technologies, the free
flow of data is crucial as it is the main fuel on which these technologies run.
Thus, artificial intelligence technologies must be developed in such a way that
they comply with existing laws on privacy, confidentiality and anonymity and
with any other data protection frameworks in place. There must be regulations
that ensure there is no misuse of personal data and no security breaches.
There should be mechanisms that enable users to stop the processing of their
personal data and to invoke the right to be forgotten. It further remains to be
seen whether the current data protection and security obligations should be
imposed on AI and other similar automated decision-making entities to preserve
the individual's right to privacy, which was declared a fundamental right by the
Hon'ble Supreme Court in KS Puttaswamy & Anr. v Union of India and Ors[14].
This also calls for an all-inclusive data privacy regime that would apply to
both the private and public sectors and would govern the protection of data,
including data used in developing artificial intelligence. Similarly,
surveillance laws would need revisiting for circumstances involving the use of
fingerprints or facial recognition through artificial intelligence and machine
learning technologies.
At this point there are many loose ends to be tied up, such as the rights and
responsibilities of the person who controls the data used to develop artificial
intelligence, and the rights of the data subjects whose data is being used to
develop such technologies. The double-edged-sword situation between the
development of artificial intelligence and access to data for further purposes
also needs to be deliberated upon.
Artificial intelligence (AI) refers to the simulation of human intelligence by
software-coded heuristics. Nowadays this code is prevalent in everything from
cloud-based enterprise applications to consumer apps and even embedded
firmware.
The year 2022 brought AI into the mainstream through widespread familiarity with
applications of the Generative Pre-trained Transformer (GPT). The most popular
application is OpenAI's ChatGPT[15]. The widespread fascination with ChatGPT
made it synonymous with AI in the minds of most consumers. However, it
represents only a small portion of the ways that AI technology is being used
today.
The ideal characteristic of artificial intelligence is its ability to
rationalize and take actions that have the best chance of achieving a specific
goal. A subset of artificial intelligence is machine learning (ML), which refers
to the concept that computer programs can automatically learn from and adapt to
new data without being assisted by humans. Deep learning techniques enable this
automatic learning through the absorption of huge amounts of unstructured data
such as text, images, or video.
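As a loose illustration of this idea, the sketch below fits a classifier to a handful of labelled toy examples using the scikit-learn library; the data, the single feature, and the fault-detection framing are invented for illustration only and stand in for whatever real-world signal a system might learn from.

```python
# A minimal sketch of supervised machine learning: the model infers a rule from
# labelled examples instead of being explicitly programmed with one.
# Requires scikit-learn; all data here is toy data for illustration.
from sklearn.linear_model import LogisticRegression

# Toy training data: a single sensor reading -> whether a fault occurred (1) or not (0).
X_train = [[0.2], [0.5], [1.1], [1.8], [2.4], [3.0]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)           # "learning" step: parameters adapt to the data

print(model.predict([[0.4], [2.7]]))  # the learned rule is applied to unseen inputs
```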
- Artificial intelligence (AI) refers to the simulation or approximation of human intelligence in machines.
- The goals of artificial intelligence include computer-enhanced learning, reasoning, and perception.
- AI is being used today across different industries from finance to healthcare.
- Weak AI tends to be simple and single-task oriented, while strong AI carries out tasks that are more complex and human-like.
- Some critics fear that the extensive use of advanced AI can have a negative effect on society.
Artificial intelligence is based on the principle that human intelligence can be
defined in such a way that a machine can mimic it and execute tasks, from the
simplest to the most complex. The goals of artificial
intelligence include mimicking human cognitive activity. Researchers and
developers in the field are making surprisingly rapid strides in mimicking
activities such as learning, reasoning, and perception, to the extent that these
can be concretely defined. Some believe that innovators may soon be able to
develop systems that exceed the capacity of humans to learn or reason out any
subject. But others remain skeptical because all cognitive activity is laced
with value judgments that are subject to human experience.
As technology advances, previous benchmarks that defined artificial intelligence
become outdated. For example, machines that calculate basic functions or
recognize text through optical character recognition are no longer considered to
embody artificial intelligence, since this function is now taken for granted as
an inherent computer function.
AI is continuously evolving to benefit many different industries. Machines are
wired using a cross-disciplinary approach based on mathematics, computer
science, linguistics, psychology, and more.
Algorithms often play a very important part in the structure of artificial
intelligence, where simple algorithms are used in simple applications, while
more complex ones help frame strong artificial intelligence.
Applications of Artificial Intelligence
The applications for artificial intelligence are endless. The technology can be
applied to many different sectors and industries. AI is being tested and used in
the healthcare industry for suggesting drug dosages, identifying treatments, and
for aiding in surgical procedures in the operating room.
Other examples of machines with artificial intelligence include computers that
play chess and self-driving cars. Each of these machines must weigh the
consequences of any action they take, as each action will impact the end result.
In chess, the end result is winning the game. For self-driving cars, the
computer system must account for all external data and compute it to act in a
way that prevents a collision.
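As a rough illustration of how a game-playing program can weigh the consequences of each move, the sketch below runs a minimax search over a tiny invented game tree; real chess engines add deep search, pruning, and evaluation heuristics, so this only shows the principle of choosing the action with the best worst-case outcome.

```python
# A minimal sketch of "weighing consequences": minimax search over a toy game tree.
# Each node is either a dict of move -> subtree, or a numeric outcome scored from
# the maximising player's point of view. The tree and scores are invented.
GAME_TREE = {
    "a": {"a1": 3, "a2": 5},
    "b": {"b1": 2, "b2": 9},
}

def minimax(node, maximising):
    if not isinstance(node, dict):      # leaf: the outcome of this line of play
        return node
    values = [minimax(child, not maximising) for child in node.values()]
    return max(values) if maximising else min(values)

def best_move(tree):
    # Evaluate each candidate move by the value of the position it leads to,
    # assuming the opponent then plays to minimise our outcome.
    return max(tree, key=lambda move: minimax(tree[move], maximising=False))

print(best_move(GAME_TREE))  # -> "a": its worst case (3) beats b's worst case (2)
```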
Artificial intelligence also has applications in the financial industry, where
it is used to detect and flag activity in banking and finance, such as unusual
debit card usage and large account deposits, all of which help a bank's fraud
department. AI applications are also being used to help streamline and make
trading easier. This is done by making supply, demand, and pricing of securities
easier to estimate.
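A very simplified illustration of flagging unusual debit card usage is sketched below: a transaction is flagged when it deviates sharply from a customer's historical spending. The history, amounts, and threshold are invented, and production fraud systems rely on far richer features and learned models.

```python
# A minimal sketch of anomaly flagging: a transaction is flagged when it lies far
# from the customer's typical spending pattern. Purely illustrative.
from statistics import mean, stdev

def flag_unusual(history, new_amount, threshold=3.0):
    """Return True if new_amount is more than `threshold` standard deviations
    above the customer's historical mean spend."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return (new_amount - mu) / sigma > threshold

past_spend = [24.0, 31.5, 18.0, 40.0, 27.5, 22.0]
print(flag_unusual(past_spend, 35.0))    # False: within the normal range
print(flag_unusual(past_spend, 900.0))   # True: candidate for fraud review
```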
Types of Artificial Intelligence
Artificial intelligence can be divided into two different categories: weak and
strong.
Weak artificial intelligence embodies a system designed to carry out one
particular job. Weak AI systems include video games such as the chess example
from above and personal assistants such as Amazon's Alexa and Apple's Siri. You
ask the assistant a question, and it answers it for you.
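The single-task nature of weak AI can be illustrated with a toy sketch like the one below, in which a hypothetical keyword-matching assistant answers only the few questions it was built for; real assistants such as Alexa or Siri are vastly more capable, so this is only a caricature of the "one job" idea.

```python
# A minimal sketch of a narrow, single-task ("weak AI") assistant: a keyword
# lookup that answers only the handful of questions it was built for.
# The question/answer pairs are purely illustrative.
ANSWERS = {
    "weather": "I can't see outside, but you could check a forecast service.",
    "time": "Check your device clock; I only handle this one task.",
}

def assistant(question):
    # Match the question against the few intents this assistant knows about.
    for keyword, answer in ANSWERS.items():
        if keyword in question.lower():
            return answer
    return "Sorry, that is outside the single task I was designed for."

print(assistant("What's the weather like today?"))
```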
Strong artificial intelligence systems carry out tasks considered to be
human-like. These tend to be more complex and complicated systems, programmed to
handle situations in which they may be required
to problem solve without having a person intervene. These kinds of systems can
be found in applications like self-driving cars or in hospital operating rooms.
Special Considerations
Since its beginning, artificial intelligence has come under scrutiny from
scientists and the public alike. One common theme is the idea that machines will
become so highly developed that humans will not be able to keep up and they will
take off on their own, redesigning themselves at an exponential rate.
Another is that machines can hack into people's privacy and even be weaponized.
Other arguments debate the ethics of artificial intelligence and whether
intelligent systems such as robots should be treated with the same rights as
humans.
Self-driving cars have been fairly controversial, as they tend to be designed
for the lowest possible risk and the fewest casualties. If presented with a
scenario of colliding with one person or another, these cars would calculate the
option that would cause the least amount of damage.
Another contentious issue many people have with artificial intelligence is how
it may affect human employment. With many industries looking to automate certain
jobs through the use of intelligent machinery, there is a concern that people
would be pushed out of the workforce. Self-driving cars may remove the need for
taxis and car-share programs, while manufacturers may easily replace human labor
with machines, making people's skills obsolete.
The first artificial intelligence is thought to be a checkers-playing computer
built by Oxford University (UK) computer scientists in 1951.
Artificial intelligence can be categorized into one of four types:
Reactive AI uses algorithms to optimize outputs based on a set of inputs.
Chess-playing AIs, for example, are reactive systems that optimize the best
strategy to win the game. Reactive AI tends to be fairly static, unable to learn
or adapt to novel situations. Thus, it will produce the same output given
identical inputs.
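A minimal sketch of this reactive behaviour is shown below: the output depends only on the current input, with no memory of past interactions, so identical inputs always produce identical outputs. The thermostat rules are purely illustrative.

```python
# A minimal sketch of a "reactive" system: a fixed mapping from current input to
# output, with no memory or learning. The rules are illustrative only.
def reactive_thermostat(temperature_c):
    if temperature_c < 18:
        return "heat on"
    if temperature_c > 24:
        return "cooling on"
    return "idle"

print(reactive_thermostat(15))  # "heat on"
print(reactive_thermostat(15))  # same input, same output, every time
```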
Limited memory AI can adapt to past experience or update itself based on new
observations or data. Often, the amount of updating is limited (hence the name),
and the length of memory is relatively short. Autonomous vehicles, for example,
can "read the road" and adapt to novel situations, even "learning" from past
experience.
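The idea of limited memory can be sketched as below: an estimate is updated from a short, fixed-size window of recent observations, with older data falling away. The speed-averaging example is invented and far simpler than any real autonomous-vehicle system.

```python
# A minimal sketch of "limited memory": the estimate adapts to recent observations
# held in a short, fixed-size window rather than the full history. Illustrative only.
from collections import deque

class LimitedMemoryEstimator:
    def __init__(self, window=5):
        self.recent = deque(maxlen=window)   # older observations fall out automatically

    def observe(self, value):
        self.recent.append(value)

    def estimate(self):
        return sum(self.recent) / len(self.recent)

speed = LimitedMemoryEstimator(window=3)
for reading in [48, 50, 52, 90]:             # the newest reading shifts the estimate
    speed.observe(reading)
print(speed.estimate())                       # average of the 3 most recent: 64.0
```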
Theory-of-mind AI is fully adaptive and has an extensive ability to learn and
retain past experiences. These types of AI include advanced chatbots that could
pass the Turing Test, fooling a person into believing the AI was a human being.
While advanced and impressive, these AI are not self-aware.
Self-aware AI, as the name suggests, would be sentient and aware of its own
existence. This remains in the realm of science fiction, and some experts
believe that an AI will never become conscious or "alive".
How Is AI Used Today?
AI is used extensively across a range of applications today, with varying levels
of sophistication. Recommendation algorithms that suggest what you might like
next are popular AI implementations, as are chatbots that appear on websites or
in the form of smart speakers (e.g., Alexa or Siri). AI is used to make
predictions in terms of weather and financial forecasting, to streamline
production processes, and to cut down on various forms of redundant cognitive
labor (e.g., tax accounting or editing). AI is also used to play games, operate
autonomous vehicles, process language, and more.
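A toy sketch of a recommendation algorithm is given below: it suggests items that users with overlapping histories also chose. The watch histories, item names, and scoring rule are all invented; real recommenders use learned models and many more signals.

```python
# A minimal sketch of a recommendation algorithm: suggest items that users with
# similar histories also chose. Data and names are illustrative only.
from collections import Counter

HISTORIES = {
    "u1": {"drama_a", "thriller_b", "comedy_c"},
    "u2": {"drama_a", "thriller_b", "documentary_d"},
    "u3": {"comedy_c", "documentary_d"},
}

def recommend(user, histories, top_n=2):
    seen = histories[user]
    scores = Counter()
    for other, items in histories.items():
        if other == user:
            continue
        overlap = len(seen & items)          # crude similarity: shared items
        for item in items - seen:
            scores[item] += overlap          # weight suggestions by similarity
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("u1", HISTORIES))  # -> ['documentary_d']
```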
OpenAI released its ChatGPT tool late in 2022. It rapidly gained popularity,
with millions of users added each month in 2023. ChatGPT is considered a weak
AI, but it is not strictly reactive and can respond creatively to a wide
variety of topics.
Artificial Intelligence for Health Care:
In healthcare settings, AI is used to assist in diagnostics. AI is very good at
identifying small anomalies in scans and can better triangulate diagnoses from a
patient's symptoms and vitals. AI is also used to classify patients, maintain
and track medical records, and deal with health insurance claims. Future
innovations are thought to include AI-assisted robotic surgery, virtual nurses
or doctors, and collaborative clinical judgment.
Conclusion:
It is not enough to simply create a new technology and put it out into the
world; we must also consider the legal and ethical implications of doing so.
This is especially true for artificial intelligence, robotics, and algorithms,
which have the potential to greatly impact our society. In this article, we have
explored some of the legal and ethical issues surrounding these technologies.
In this evolving world of technology with the capabilities of autonomous
decision making, it is inevitable that the implementation of such technology
will have legal implications. There is a need for a legal definition of
artificial intelligence entities in judicial terms to ensure regulatory
transparency.
While addressing the legal issues, it is important that there is a balance
between the protection of rights of individuals and the need to ensure
consistent technological growth. Proper regulations would also ensure that broad
ethical standards are adhered to. Established legal principles would not only
help in the development of the sector but would also ensure that proper
safeguards are in place.
End-Notes:
- Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.
- Data privacy generally means the ability of a person to determine for themselves when, how, and to what extent personal information about them is shared with or communicated to others. This personal information can be one's name, location, contact information, or online or real-world behavior.
- An algorithm is a procedure used for solving a problem or performing a computation. Algorithms act as an exact list of instructions that conduct specified actions step by step in either hardware- or software-based routines. Algorithms are widely used throughout all areas of IT.
- There are numerous examples of algorithms leading to inequality. One such example is the US state of Indiana, where an algorithm unfairly categorized incomplete paperwork for welfare applications as 'failure to cooperate'. This resulted in millions of people being denied access to cash benefits, healthcare, and food stamps for three years. This also led to the death of cancer patient Omega Young, who was unable to pay for her treatment. The problem is that biased data fed into systems by biased people leads to biased algorithms, which in turn leads to biased outcomes and inequality. When we repeat past practices, not only do algorithms automate the status quo and perpetrate bias and injustice, but they also amplify the biases and injustices of our society.
- Commerce and Industry Minister Smt. Nirmala Sitharaman has constituted a Task Force on Artificial Intelligence (AI) for India's Economic Transformation. The Minister said with rapid development in the fields of information technology and hardware, the world is about to witness a fourth industrial revolution. She said driven by the power of big data, high computing capacity, artificial intelligence, and analytics, Industry 4.0 aims to digitize the manufacturing sector. Smt. Sitharaman said the panel will comprise experts, academics, researchers, and industry leaders and will explore possibilities to leverage AI for development across various fields.
- The Task Force on Artificial Intelligence was established by the Ministry of Commerce and Industry to leverage AI for economic benefits and provide policy recommendations on the deployment of AI for India.
- The Internet of Things (IoT) describes the network of physical objects-"things"-that are embedded with sensors, software, and other technologies for the purpose of connecting and exchanging data with other devices and systems over the internet.
- A cyberattack is a malicious and deliberate attempt by an individual or organization to breach the information system of another individual or organization.
- Case comment on United States v. Athlone Indus., Inc. The case discusses the advancement of technology, especially with regard to AI, and is a leading case used to answer questions related to AI. It helps explain the role of AI in the health sector and how robots are used in surgery, and it raises the question of liability, holding that the person behind the use of AI would be liable.
- AI has not been granted the status of legal personhood and so it does not receive recognition for its work; rather, the person for whom it performed the task gets the reward and recognition for it.
- The term is used to describe the potential legal status of the most sophisticated autonomous robots so that they may have "specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise."
- Puttaswamy & Anr. v Union of India and Ors (2017) 10 SCC 1, AIR 2017 SC 4161. The nine Judge Bench in this case unanimously reaffirmed the right to privacy as a fundamental right under the Constitution of India. The Court held that the right to privacy was integral to freedoms guaranteed across fundamental rights and was an intrinsic aspect of dignity, autonomy, and liberty.
- ChatGPT is a form of generative AI-a tool that lets users enter prompts to receive humanlike images, text, or videos that are created by AI. ChatGPT is similar to the automated chat services found on customer service websites, as people can ask it questions or request clarification on ChatGPT's replies.