
Decoding Personhood: Evaluating The Viability Of Granting AI The Status Of Person

The application of AI has become pervasive and deeply entrenched in the contemporary world. AI has become so exigent that it finds application in every field: from a basic search engine to an automated driving vehicle, every function is a result of AI. AI leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind[1]. It is an intelligence designed to carry out functions that usually require human intelligence, but more efficiently and faster. Capabilities approaching human potential pose a great question to contemporary legal institutions: with potential akin to that of humans, AI poses a risk of liability of a similar nature.

Since it is merely a set of programs and mechanical parts, it is plausible that it may have deficiencies and malfunctions. The question of liability for such occurrences remains unanswered. No institution or legislation has made the question obsolete by providing a concise answer as to whether liability falls upon the machine, the developer, or the user.

In 2017, a robot named Sophia was granted citizenship of Saudi Arabia[2]. Providing citizenship to an AI was a great step towards recognition and acceptance of AI into our legal realm. Saudi Arabia is not the only country that has recognised AI and granted it human-like privileges. In Japan, a robot named Shibuya Mirai was given a residence permit[3]. In February 2023, the world's first AI-powered 'robot lawyer' was to defend a US citizen against a speeding ticket[4].

The robot was designed to assist the defendant in court through earphones. The question is whether these AI systems are equipped enough to claim or retain this legal recognition. Such steps signal acceptance and adoption of AI, but they also create an unresolved legal void.

On one hand, the gap of liability remains unanswered; on the other, there are states and institutions that have considered giving human-like rights and recognition to AI. Scepticism thus creeps in as to the status of AI in our legal system. The question is whether AI has been given the status of "personality"; if so, why is there a gap in the recognition of liability, and if not, where does the system fail to determine its status, and why?

This paper delves into the arena of jurisprudence to determine whether AI can be assigned the status of "personality." It takes into account the principles of various jurists and their doctrines to ascertain whether the act of recognising AI as a "personality" can be validated by means of reasoning.

Whether AI Is A Juristic Or Legal Personality?

Before we decide whether AI can be designated a legal personality, we need to analyse the concept of a "legal person" as recognised by law.

Salmond defines a 'person' as "any being whom the law regards as capable of rights or duties. Any being that is so capable is a person, whether a human being or not, and nothing that is not so capable is a person even though he be a man."[5] Gray defines a 'person' as an "entity to which rights and duties may be attributed." Thus, the most important aspect of the status of 'person' is the capability of bearing rights and duties. Rights and duties are possessed by both natural and juristic persons.

The virtue of being human is not the only criterion for assigning rights or duties; thus, a legal person is of two kinds, namely a natural person and a juristic person. A natural person is a human being, and is assigned legal status by virtue of being human. A juristic person, on the other hand, may be defined as an institution upon which the law confers legal status and which, in the eyes of the law, possesses rights, liabilities, and duties like a natural person.

The status of 'person' under law is determined on four basic grounds: the capability to have rights and duties, the ability to sue and be sued, the capability of ownership or possession of property, and the ability to enter into contracts. To decide whether the status of 'person' can be assigned to AI, we first need to analyse its ability on these four grounds.

Rights And Duties:

AI is a program created by humans, and speaking in layman's language, by virtue of being inanimate it cannot be assigned legal rights. It may have rights in certain cases, but those cannot properly be called legal rights. The rights, if provided to AI (under current conditions), cannot be termed legal rights for two main reasons:
  1. They are not enforceable, except through humans.
  2. They do not provide remedies to the AI itself.
There is no legislation in the contemporary world that properly recognises the rights and duties of an AI. It could be argued that legislation regulating AI exists, but it would be inappropriate to describe it as a source of rights or duties. Such laws impose neither a right nor a duty on the AI itself; they govern its usage and thus bind the user, the developer, or the owner. They neither confer rights on, nor impose duties upon, the AI system.

It may also be pointed out that a corporation is not a living entity and still has rights, and thus that the statement "AI does not possess legal rights by virtue of being inanimate" is invalid. The answer is that a corporation is a body consisting of animate members, i.e., humans, whereas AI has no such components.

In the case of Sophia, citizenship was granted, but the legislation failed to answer whether that citizenship brought with it the rights and responsibilities of a natural citizen. Moreover, citizenship of any state is subject to the fulfilment of certain requirements[6]. Citizenship of Saudi Arabia can be obtained by birth, marriage, or naturalization. Since Sophia is an AI system, it cannot obtain citizenship by birth or by marriage, because an institution like marriage is beyond the needs and limits of AI systems.

Since the only possible way for AI to acquire citizenship is through naturalization, the question arises whether it fulfilled the requirements for acquiring citizenship through naturalization. Citizenship confers on a person the right to vote, fundamental rights, rights under the lex loci, and so on.

Consider the idea of AI being allowed to cast a vote; it creates two chaotic scenarios. If AI is allowed to vote, its decision-making can be altered, which makes the whole idea of providing AI with human-like rights obsolete: there would be no point assigning it human-like rights and recognition if it does not function on its own and is a mere tool in the hands of its developer or user. The second issue arises if it is not provided voting rights.

What would be the point of providing citizenship without rights equivalent to those of citizens? Partial treatment of AI, granting human-like recognition without human-like rights, amounts to an abuse of power on the part of institutions.

In the case of Shibuya Mirai, the conversational robot was provided with a residence permit. Being software, it has no physical existence; it neither meets the requirements of a residence permit nor applied for one.

In 1979, a robot at a Ford automobile plant killed a worker, misidentifying him as goods. The company paid compensation of USD 10,000,000 to the family of the victim[7]. Had a human caused the death, the liability would have been different and more serious. Duty, in a stricter sense, means liability; hence, duty in the case of AI is very different from that of humans.

The above examples show how the concept of rights and duties is vague for AI and very different from that for humans. At most, the rights and duties of AI can be described as a situation similar to that of humans and pets. Like those of animals, the rights and duties of AI systems are exercised through humans. AI can be protected and restricted by human will, as animals are. In case of breach of any right or any harm done to AI, the rights can be exercised through humans, i.e., the user, the owner, or the developer. This view makes clear that the concept of rights and duties for AI is not akin to that for humans and thus does not fit perfectly.

Capacity To Hold Property:

Property, loosely speaking, is associated with a person's tangible assets. In a proper sense, it includes the tangible and intangible aspects of a person's assets, rights, and interests. A right over property can be exercised in two ways:
  1. Ownership and
  2. Possession.
We will briefly analyse these concepts in relation to AI.

Possession is the most basic relation between a man and things.[8] It is the relation between the holder and the thing held. To analyse whether AI can hold any such relation, we must first understand the essentials of legal possession. The two essentials are (1) corpus possessionis and (2) animus possidendi.

Corpus possessionis relates to the physical relation to the object. Given that AI can be both a software program and a complex robotic structure, the criterion of physical relation can be fulfilled in certain cases. Animus possidendi relates to the mental element of possession: it denotes the 'will' to exercise control over the thing. Since the program is incapable of having a free will, it is safe to say that it does not fulfil the second requirement of possessing property.

Though the exceptions are few, there are instances of valid possession where the essentials are unfulfilled. In N.N. Majumdar v. State[9], the question of animus was brought before the High Court of Calcutta. The Court held that the mere existence of corpus without animus is insufficient to constitute possession. In the usual course of possessory rights, it can be concluded that AI is incapable of exercising possession over an object.

Ownership may manifest itself in two ways:
  1. Corporeal ownership, i.e., ownership of objects.
  2. Incorporeal ownership, i.e., ownership of rights.
Incorporeal ownership relates to intangible matter and thus does not include rights of possession. It may, however, include the right to exclude others from the use of the thing owned. Incorporeal ownership includes patent rights, copyright, trademark, goodwill, etc. It is arguable whether AI can hold incorporeal rights: with rapid technological advances, data and art are being generated using AI, and the question of their ownership is unanswered.

The traditional work of AI involved collecting, processing, and analysing large amounts of data to provide better results, and from that view it could be said that AI needs no such rights. With increased functions and applications, however, there are AI systems that generate works such as literature, pictures, videos, etc. Getty Images filed lawsuits against Stability AI in the United Kingdom and the United States of America for the unauthorised use of 12 million images from its website without permission.

Despite the express terms of use on the Getty Images website, the images and associated metadata were used to train Stable Diffusion. Stability AI claims that training on copyrighted material serves a transformative purpose and that this weighs heavily in favour of fair use.[10]

Similar lawsuits have been filed by artists against AI systems like Midjourney and DeviantArt. It is necessary to draw a distinction between human-created and AI-created art and innovation to preserve the relevance of intellectual property created or invented by humans. Another problem with such technology is that, if it is not recognised as such, data can be generated artificially through simple commands and then used or misused by humans claiming it as their own, defeating the objective of intellectual property rights.

Corporeal ownership means the ownership of tangible objects. In a strict sense, the capability to hold property refers to one's ability to hold, transfer, and dispose of an estate. Just as an animal is incapable of holding property, AI is incapable of holding such ownership rights. The inability of AI to possess free will is one of the reasons for its inability to hold property. In the same way that an animal cannot hold, use, transfer, or dispose of property, owing to a lack of competence in the mental element, an AI is also incapable of holding property.

It can be said that AI performs functions that require human intelligence and hence could meet the requirement of mental capacity, but the question here is not of human intelligence but rather of the ability to have free will and to consent. However complex, AI is still a product of software programs and mechanical parts. It may be designed to match or exceed human intelligence, but emotion, will, and consent are still beyond its abilities. It works on the instructions and regulations fed to it, and its responses can be controlled and even manipulated. Hence, AI systems, however advanced, are incapable of holding property.

Sue And Be Sued:

The ability to sue and be sued encompasses principles of liability and obligation. It also brings with it concepts of justice, intent, negligence, and punishment. With the introduction of AI into every field, it is no surprise that it is used in the legal sphere too.

An AI robot, DoNotPay, was designed to help customers defend their cases in court. It claimed to help and assist customers with legal matters, and the company claimed it wanted to help customers deal with their legal matters effectively and more cost-efficiently. The robot was to dictate statements and guide the user through earphones. It helped in a few cases, and in others it made things worse: it not only made users confess guilt but also provided citations for cases that did not exist. On March 3, 2023, a Chicago-based law firm filed a class action lawsuit against DoNotPay.

The robot was sued for practising law without a proper licence and without competence.[11] The Board said, "AI is a high school student, and we're sending it to law school."[12] The case was filed against the owner of the company, and he was threatened with prison time if he continued using the robot in this manner.

It is evident that AI lacks the capacity to sue or be sued, as the liability in such cases falls upon the user, the developer, and the owner; in this case, the developer and owner being the company itself. Even if one literally sued an AI, it would make no sense and be of no avail. It would not be advisable to make the AI itself liable. In civil cases the liabilities are often remedial and may include fines or damages. These fines or damages have a punitive effect on humans and thus serve a twofold purpose: first, they compensate the victim for the damage; second, they impose a loss on the wrongdoer, teaching it not to repeat such an action.

In the case of AI, it would be useless to expect such measures to make a difference. Since AI cannot hold an estate, the first purpose fails. It is to be noted that the owner of the AI may hold an estate, but that is not equivalent to the AI itself holding the estate or property. The second purpose cannot be served because AI has no mind of its own. It can be fed information and directions, but that works in a totally different manner: the punitive measure would make no difference to the functioning of the AI.

In 1979, Robert Williams, an employee at a Ford automobile plant, was killed by a robot. The robot slammed into Robert, misidentifying him as an object, and the company paid compensation of USD 10,000,000 to the family of the victim. The same scenario in the case of a human would be very different: if the robot were a person, it would be charged with culpable homicide. Punishments imposed on humans in such cases are of a higher degree than mere monetary liability. There are mainly five theories of punishment; we will analyse their effect on AI in comparison with humans.

Firstly, the retributive theory states the principle of "an eye for an eye, and a tooth for a tooth." It is no longer applied to humans, keeping in mind the human rights of living beings.

Secondly, the deterrent theory aims at inducing fear in the minds of people so that they do not attempt such wrongs. A human sentenced to jail for an act may not do it again for fear of the same punishment, but an AI will continue to do so if so programmed and will fear no punishment. AI is incapable of processing or feeling emotions; years inside a prison would mean nothing to an AI system. An AI system may be kept in a room for years, and when it functions again, its outcome would be unchanged.

Thirdly, the preventive theory aims at eliminating the wrongdoer from the environment permanently, through the death penalty or life imprisonment. AI can be eliminated, but that makes no sense, as it has no effect on the system itself; it merely wastes effort, innovation, and money.

Fourthly, the reformative theory aims at reforming the wrongdoer into a better individual. There is no way of reforming an AI system except through reprogramming. Fifthly, the expiatory theory is inapplicable in modern society.

It is evident that AI lacks locus standi and thus holds the capacity neither to sue nor to be sued. Suing an AI system is like suing the ground you fell over: to no avail.

Ability To Contract:

A contract is an agreement enforceable by law. According to Section 10 of the Indian Contract Act, 1872, "All agreements are contracts if they are made by the free consent of parties competent to contract, for a lawful consideration and with a lawful object, and are not hereby expressly declared to be void." The term "competent parties" is addressed in Section 11, which reads: "Every person is competent to contract who is of the age of majority according to the law to which he is subject, and who is of sound mind, and is not disqualified from contracting by any law to which he is subject."

AI falls short of this definition on various grounds. Firstly, AI has no free consent: the concepts of consent and will do not, in essence, exist in the case of AI, since every act, decision, and step of AI is controlled and designed through programs. Secondly, the term 'person' used in Section 11 of the Indian Contract Act makes AI incompetent per se. Thirdly, there is no criterion to determine the age of AI. Fourthly, the term 'sound mind' refers to the ability to make decisions; AI owns no decision made by it and hence lacks a 'sound mind.'

Irrespective of the state, the essentials of a contract are the same. AI, lacking mental capacity, is incompetent to contract.

Legal And Ethical Consequences Of Assigning The Status Of Person To AI

The 'purpose theory,' or Zweckvermögen theory, proposed by the German jurist Brinz and endorsed in Europe by E.I. Bekker and Demelius, holds that only humans can be persons and have rights. It states that an entity may be treated and termed a person for certain 'specific' purposes, and that assigning such a status is done for the sake of convenience in legal matters.

Assigning AI the status of 'person' does not serve the purpose of convenience; it adds to the complexity. Since the AI system is incapable of reimbursement or restitution to the victim, the burden lies on the owner or the user, and the purpose stands unfulfilled. The concepts of deterrence and punishment achieve none of the desired results or impacts, rendering it unfruitful to assign such a role to AI.

It might even create loopholes for developers, hackers, or owners to escape their obligations and consequences. It may pave the way for the growth of cyber-crimes and help humans exploit AI without meeting the consequences they deserve. It does nothing more than widen the liability gap in the contemporary legal structure. In 2017, the European Parliament proposed that AI-based robots should be attributed 'electronic personality' or personhood, which was opposed by more than 150 experts from the fields of robotics, AI, and law. These experts drafted an open letter rejecting such legislation, signed by professionals from over 14 countries.[13] The proposal was later rejected by the European Parliament.

Conclusion
It is inadvisable to assign the attribute of 'person' or personhood to AI. The right way to approach this issue could be to give AI the status of an 'electronic character.' The purpose of such an attribution might seem far-fetched and vague, but it does not change the current allocation of responsibility and obligation; it assigns a status to the relation between humans and AI. It would mean that current liability and obligations lie on the user, owner, or developer, while creating a window for changes in such policies if there is a breakthrough in the field of AI.

The relation between AI and humans can be compared to that between animals and humans. Just as the protection of animals is a right vested more in the owner than in the animal itself, the protection of AI is vested more in its developer than in the system itself. The regulations and limitations are made by virtue of human choice and morals. The status of 'electronic character' would create a bracket around such AI systems, which could, depending upon the capacity and satisfaction of the legislation of various countries, be treated as minors under the guidance of guardians.

It is easy to argue that minors and AI systems are nothing alike, as minors are still humans. The point of such an assignment is not to compare the two; it is an analogy for the relation that AI and humans could have, based on the development and status of such innovations. It is not obligatory to impose such a minor-guardian relation; rather, it is an easier path where applicable to such cases. Also, the words 'capacity' and 'satisfaction' are subjective and flexible.

Just as definitions of jurisprudence and law could not be definite, the concept of 'electronic character' is subjective, and it is more an idea for future application, should the world ever experience a breakthrough, than an urgent policy. The idea is not to change the current legal structure regarding AI but to prepare it for upcoming problems. If there is ever a possibility of AI attaining 'free will' and 'mental capacity,' the minor-guardian relation can be dealt with as a consequence of a minor turning major.

It does not necessarily propose such an application; depending upon the capacity of such systems and the satisfaction of the then-applicable structure, it may or may not be adopted. The concepts of 'law' and 'jurisprudence' could never be definite, because they leave space for the development of society and thus for change in their application. The purpose of this paper is to keep a window open for a change that might not be very far off.

End-Notes:
  1. Jenna Arcand, Executive Spotlight: What AI Means For The Future Of Work, Work It Daily, July 05, 2023 https://www.workitdaily.com/
  2. "Sophia the robot gets a Saudi Arabian citizenship: First-ever robot citizen," The Economic Times, last modified October 30, 2017, https://economictimes.indiatimes.com/news/science/sophia-the-robot-gets-a-saudi-arabian-citizenship/first-ever-robot-citizen/slideshow/61355634.cms
  3. Anthony Cuthbertson, Tokyo: Artificial Intelligence 'Boy' Shibuya Mirai Becomes World's First AI Bot to Be Granted Residency, Newsweek, June 11, 2017 https://www.newsweek.com/tokyo-residency-artificial-intelligence-boy-shibuya-mirai-702382
  4. Digvijay, AI-powered Robot Lawyer 'World's First' To Represent Human Client In Court, India Times, January 08, 2023 https://www.indiatimes.com/trending/social-relevance/worlds-first-ai-powered-robot-lawyer-to-defend-human-in-court-589696.html
  5. P.J. Fitzgerald, Salmond on Jurisprudence (12th ed.), p. 299
  6. Viony Kresna Sumantri, Legal Responsibility on Errors of the Artificial Intelligence-based Robot, Lentera Hukum, Volume 6 Issue 2 (2019), pp. 337-352, at p. 5.
  7. David Kravets, Jan. 25, 1979: Robot Kills Human, Wired, January 25, 2010 https://www.wired.com/2010/01/0125robot-kills-worker/
  8. Salmond on Jurisprudence (12th ed.), p. 265
  9. N.N. Majumdar v. State, AIR 1951 Cal 140.
  10. Tiana Loving, Current AI Copyright Cases, Copyright Alliance, March 30, 2023 https://copyrightalliance.org/current-ai-copyright-cases-part-1/
  11. Bharat Sharma, Law Firm Sues 'World's First Robot Lawyer' For Not Having A Degree, India Times, March 13, 2023 https://www.indiatimes.com/technology/news/law-firm-sues-worlds-first-robot-lawyer-595686.html
  12. Putting ChatGPT through law school, CBS News, last updated on January 26, 2023, https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/
  13. George Dvorsky, Experts Sign Open Letter Slamming Europe's Proposal to Recognize Robots as Legal Persons, Gizmodo, April 13, 2018 https://gizmodo.com/experts-sign-open-letter-slamming-europe-s-proposal-to-1825240003
