
Artificial Intelligence and Crime: Charting the Future of Criminal Accountability

Artificial intelligence refers to the capacity of machines to reason, make judgments, and integrate processes in ways that parallel, yet differ from, human intellect. It was developed through interactive systems and information technology. The development of information technologies has led us to the realization that artificial intelligence entities, or agents, exist and can behave independently with little to no human intervention.

Modern technological innovations are starting to supplement or even replace human endeavours through artificial intelligence-based entities such as self-driving automobiles, machine translation systems, robots, and medical diagnosis software. The increasing presence and expanding application areas of artificial intelligence have led to a parallel rise in harm resulting from its interactions with humans and other systems. This intensification underscores the lack of a dedicated legal process for the independent acts of artificial intelligence and the ensuing harm.

This circumstance raises the questions of whether criminal culpability applies to artificial intelligence and whether such an application could supplement existing criminal law doctrines and general principles. The legal status of artificial intelligence technologies, the identification of parties responsible for crimes committed by these technologies, and the evaluation of these factors in the criminal justice process represent significant gaps in criminal law. This article aims to assess the criminal responsibility of artificial intelligence entities, advancing our understanding of artificial intelligence's place in criminal law and offering solutions to open issues in the field.

Introduction
Humanity has long dreamed of artificial intelligence, both in science fiction and philosophy. Due to the exponential growth of technology in recent decades, it is already a reality. Human reliance on artificial intelligence technology has grown significantly in the modern era. There is scarcely any aspect of daily life it has not influenced, from drones to automated cars, computer science to medical science, and artificially intelligent assistants on phones to artificially intelligent solicitors. AI has saved a great deal of time and energy while improving and simplifying human lives.

There is no authoritative definition of Artificial Intelligence, but in common parlance, "Artificial Intelligence (AI) is a branch of computer science focused on creating systems or machines that can perform tasks typically requiring human intelligence. These tasks include learning from data, reasoning, problem-solving, understanding natural language, and recognizing patterns. AI systems are designed to mimic cognitive functions such as perception, decision-making, and learning." The word "artificial" is defined by the Turkish Language Association as something that is not real or natural but is created or manufactured by human hands and mimics natural instances (Turkish Language Association, 2023). The term "intelligence" describes the unique human capacities for reasoning, inferring, perceiving the world, comprehending, and reaching conclusions (Turkish Language Association, 2023).

Karaduman defines AI as "the ability of a controlled machine to perform tasks related to higher cognitive stages such as thinking, understanding, generalizing, and experiencing the past, typically attributed to human qualities".

According to some authors, artificial intelligence (AI) is a machine's capacity to carry out difficult tasks normally performed by humans, such as comprehension, reasoning, learning, and decision-making.

However, as with every technology, AI has both advantages and disadvantages. Consider autonomous vehicles: on one hand, they have expanded mobility for groups such as the elderly and people with disabilities; on the other, the technology has caused several fatal accidents. This raises legal concerns about the culpability of AI entities under criminal law for such offences. In March 2018, Elaine Herzberg, a homeless woman, died in a collision with an Uber test vehicle in Arizona, USA, the first recorded death in a traffic accident caused by a self-driving car.

Many countries employ autonomous weapons in their military forces. These dangerous weapons can select their targets on their own, analyse numerous modes of operation in a fraction of a second, and kill humans without any human intervention. No country's criminal justice system currently has laws that can punish them. Meanwhile, AI is becoming integrated into our day-to-day lives at a breathtaking rate. In 1981, an employee at a Kawasaki motorbike manufacturing plant in Japan was killed by an industrial robot working in the same factory. That was over four decades ago; since then, rapid advances in technology have raised AI's cognitive power to the next level. If AI is not regulated well, it can be a threat to society. The question is: who should be held accountable for the crimes committed by AI?

What are AI-generated crimes?

AI-generated crimes occur when machines commit crimes without human involvement. In some AI-related crimes, no natural person can be held accountable. Machines have been utilised for a variety of purposes since antiquity, at times causing harm to humans, animals, property, and the environment.

The majority of such incidents occurred owing to operator negligence or poor machine design. This may result in criminal culpability, but only for the machine's users, operators, owners, or supervisors, not for the machine itself. For example, if someone murders a person using a knife, the person wielding the knife, not the knife itself, is charged with the crime: a knife is simply a tool. However, AI differs from ordinary machinery or instruments in certain important characteristics that make the direct application of criminal law to it more appealing.

AI, in particular, can demonstrate high levels of autonomy, meaning it can make decisions without human intervention. AI can quickly receive input from multiple sources, establish targets, analyse results, and alter its behaviour to improve its success rate. If a person commits an offence, that person is held criminally liable; but if an AI commits the same crime without human intervention, no one is. In a civilised community, allowing crimes to go unpunished is extremely dangerous.
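
To make this autonomy concrete, the following minimal sketch, written in Python purely for illustration, shows the sense-decide-act-learn loop just described. Every name in it (Agent, decide, learn, the action labels) is a hypothetical stand-in rather than any real system's API; the point is only that, after deployment, no human selects the action taken at any step.

```python
import random

# Illustrative sketch of an autonomous agent loop; all names are hypothetical.
class Agent:
    def __init__(self, actions):
        self.actions = actions
        # Learned estimate of how well each action has worked so far.
        self.scores = {a: 0.0 for a in actions}

    def decide(self, observation):
        # Occasionally explore; otherwise pick the best-scoring action.
        if random.random() < 0.1:
            return random.choice(self.actions)
        return max(self.scores, key=self.scores.get)

    def learn(self, action, reward):
        # Shift behaviour toward whatever improved the measured outcome.
        self.scores[action] += 0.1 * (reward - self.scores[action])

agent = Agent(["brake", "swerve", "accelerate"])
for step in range(1000):
    observation = None           # stand-in for input from multiple sensors
    action = agent.decide(observation)
    reward = random.random()     # stand-in for the measured result
    agent.learn(action, reward)  # behaviour changes with no human input
```

Which action such a loop converges on is determined by its reward signal rather than by any instruction a prosecutor could point to, and that is precisely what strains the ordinary tool analogy.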

Legal Concerns Related to Offences Committed by AI
AI technology raises several legal concerns. First, who is criminally accountable if an AI entity (e.g., a robot) causes harm to people or property: the producer or programmer (who may be a third party working for the producer), the user (owner or buyer), or is it an act of God? Second, which elements of a crime must be proven in a case involving an artificial intelligence entity? Third, if an AI entity such as a robot is found guilty, what punishment should be imposed on it? There is an abundance of such legal issues yet to be settled.

There is limited legal precedent regarding criminal culpability for AI entities, particularly in India, and this article therefore explores the topic. The goal is to identify general concepts that can guide future policy on the subject while remaining flexible and adaptable to quickly evolving technologies, and thereby to address the legal issue of criminal culpability for artificial intelligence entities.

General Elements of Criminal Liability
To prove criminal culpability for an offence, two elements must be met: the physical (actus reus) and the mental (mens rea). 'Actus reus' refers to an unlawful act or omission, whereas 'mens rea' refers to the guilty mind, including purpose, intention, and knowledge. Negligence and strict liability are exceptions to this general principle. If an entity, whether a person, a corporation, or an artificial intelligence, meets these two criteria, it may face criminal charges. For instance, if a child younger than seven killed someone with a loaded gun, believing it to be a toy, the child would not be prosecuted because mens rea is absent. What, then, is the mens rea of AI? Can AI possess 'mens rea', or criminal intent? In response, three models for the criminal responsibility of AI in different contexts are examined here, as proposed by the Israeli professor Gabriel Hallevy.

In order to impose criminal liability on any kind of entity, it must be proven that the above two elements existed. When it has been proven that a person committed the criminal act knowingly or with criminal intent, that person is held criminally liable for that offense. The relevant question concerning the criminal liability of AI entities is: How can these entities fulfill the two requirements of criminal liability?
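
The logical structure of this test can be stated compactly. The following is a minimal sketch in Python, with field names invented purely for illustration (they come from no statute or case law); it captures the general rule, actus reus plus mens rea, together with the negligence and strict liability exceptions noted above.

```python
from dataclasses import dataclass

# Toy formalisation of the two-element test; all names are illustrative.
@dataclass
class Conduct:
    actus_reus: bool                # an unlawful act or omission occurred
    mens_rea: bool                  # purpose, intention, or knowledge present
    negligent: bool = False         # fell below the reasonable-person standard
    strict_liability: bool = False  # offence dispenses with the mental element

def criminally_liable(c: Conduct) -> bool:
    if not c.actus_reus:
        return False    # no liability without the physical element
    if c.strict_liability:
        return True     # exception: no mental element required
    # General rule, with negligence as the residual fault standard.
    return c.mens_rea or c.negligent
```

On this framing, the question for AI entities is whether anything in their operation can ever satisfy the mens rea condition; the three models below answer it in three different ways.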

Gabriel Hallevy proposes the imposition of criminal liability on AI entities using three possible models of liability: the Perpetration-by-Another liability model; the Natural-Probable-Consequence liability model; and the Direct liability model.
  1. The Perpetration-by-Another Liability Model
    This first model does not consider the AI entity as possessing any human attributes; the AI entity is regarded as an innocent agent. From that legal viewpoint, a machine is a machine and is never human. However, one cannot ignore an AI entity's capabilities, as mentioned above. Under this model, these capabilities are insufficient to deem the AI entity the perpetrator of an offense; they resemble the parallel capabilities of a mentally limited person, such as a child, or of a person who is mentally incompetent or who lacks a criminal state of mind.

    A fictional illustration of the first scenario: a programmer creates a robot's software and deliberately places the robot in front of his enemy's house so that, at night, it may set fire to the house. Even though the robot carried out the wrongdoing, the programmer is held accountable.

    An imaginary example of the second scenario: a user purchases a robot and orders it to attack any third party. Here, the robot merely obeys its master without exercising its own knowledge or judgment.

    In the first case, only the programmer would be liable; in the second, only the end user, because the robot is a mere innocent intermediary. Legally speaking, when an innocent agent, such as a child, a mentally incompetent person, or someone lacking a criminal state of mind, is caused to commit an offence, the party orchestrating the crime (the perpetrator-via-another) is held criminally accountable as a perpetrator in the first degree and is responsible for the actions of the innocent actor, while the intermediary is regarded as a mere instrument, albeit a clever one. The perpetrator's conduct and mental state are used to evaluate his culpability.

    The Perpetration-by-Another liability model considers the action committed by the AI entity as if it had been the programmer's or the user's action. The legal basis for that is the instrumental usage of the AI entity as an innocent agent. No mental attribute required for the imposition of criminal liability is attributed to the AI entity.
     
  2. The Natural-Probable-Consequence Liability Model
    The second model is predicated on the producer's/programmer's or end user's capacity to foresee the possibility of offences being committed. Although they did not intend any offence, the producer and user in this model are closely involved with the AI entity. Criminal culpability may arise from two factors in such a scenario: (1) the producer's carelessness or negligence in creating the AI entity, or (2) the offence being a natural and probable consequence of the act the user commanded.

    One example of such a scenario: an AI robot or software program designed to function as an automatic pilot. As part of the mission of flying the plane, the AI entity is programmed to protect the mission itself. During the flight, the human pilot activates the automatic pilot (the AI entity) and the program is initialized. At some point after activation, the human pilot sees an approaching storm and tries to abort the mission and return to base. The AI entity deems the human pilot's action a threat to the mission and takes action to eliminate that threat: it might cut off the air supply to the pilot or activate the ejection seat. As a result, the human pilot is killed by the AI entity's actions. Obviously, the programmer had not intended to kill anyone, least of all the human pilot, yet the pilot was killed as a result of the AI entity's actions, and those actions were carried out exactly according to the program. (A toy sketch of this failure mode appears after the discussion of the three models.)

    Another example is AI software designed to detect threats from the internet and protect a computer system from them. A few days after activation, the software concludes that the best way to detect such threats is to enter websites it defines as dangerous and destroy any software recognised as a threat. In doing so, it commits a computer offence, although the programmer never intended it to.

    In these situations, neither the programmers nor the users intended to use the AI entity to commit the offence; they neither knew the offence would be committed nor planned it. The second model can produce an appropriate legal response in such instances. It is predicated on programmers' or users' capacity to foresee the potential commission of offences.

    According to the second model, a person might be held accountable for an offense if that offense is a natural and probable consequence of that person's conduct. Natural-probable-consequence liability has been widely accepted in accomplice liability statutes and recodifications.

    Natural-probable-consequence liability seems legally suitable for situations in which an AI entity committed an offense while the programmer or user had no knowledge of it, had not intended it, and had not participated in it. The natural-probable-consequence liability model requires the programmer or user to be in a mental state of negligence, no more. Programmers or users are not required to know about any forthcoming commission of an offense as a result of their activity, but are required to know that such an offense is a natural, probable consequence of their actions.

    A negligent person, in a criminal context, is a person who has no knowledge of the offense, although a reasonable person would have known of it, since the specific offense is a natural, probable consequence of that person's conduct. The natural-probable-consequence liability model would permit liability to be predicated upon negligence, even when the specific offense ordinarily requires a different state of mind.
     
  3. The Direct Liability Model
    The third model does not assume any dependence of the AI entity on a specific programmer or user; it focuses on the AI entity itself. AI entities may eventually operate entirely on their own, relying not only on pre-programmed algorithms but learning from their experiences and observations in order to perform tasks.

    As noted earlier, the external element (actus reus) and the internal element (mens rea) of an offence determine criminal responsibility. Anyone who fulfils both elements of a given offence faces criminal liability for that offence; no further requirements are needed to establish criminal responsibility.

In order to impose criminal liability on any kind of entity, the existence of these elements in the specific entity must be proven. When it has been proven that a person committed the offense in question with knowledge or intent, that person is held criminally liable for that offense. The relevant question regarding the criminal liability of AI entities is: How can these entities fulfill the requirements of criminal liability? Do AI entities differ from humans in this context?

A person is considered criminally liable if they meet the requirements of both the internal and external elements of a particular offence. Why, then, should an AI be immune from criminal prosecution if it satisfies every element of an offence? Admittedly, certain groups in society are exempt from criminal prosecution even when both the external and internal elements are established: these include young children and the mentally ill.

The mentally ill are presumed to lack the fault element of the specific offense due to their mental illness (doli incapax): they lack the cognitive capacity to distinguish right from wrong and the ability to restrain impulsive behavior.

When operating correctly, an AI algorithm analyses the factual data it receives through its receptors using all of its capabilities. An intriguing legal question, however, is whether the defence of insanity may be invoked in the event that an AI program malfunctions and its analytical powers are corrupted.

There is no reason to prevent an AI entity from being held criminally liable for an offence if it has established all of the elements of that offence, both internal and external. If programmers and/or users are subject to criminal culpability through any other legal pathway, the criminal liability of the AI entity does not supersede theirs; criminal liability is added, not divided. In addition to the criminal accountability of the human programmer or user, the AI entity itself is subject to criminal liability.

It might be summed up that the criminal liability of an AI entity according to the direct liability model is not different from the relevant criminal liability of a human. In some cases, some adjustments are necessary, but substantively, it is the very same criminal liability, which is based upon the same elements and examined in the same ways.
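
Returning to the autopilot example under the second model, the sketch below shows, with hypothetical names throughout and no relation to any real avionics software, how a "protect the mission" rule coded too literally classifies the pilot's lawful abort exactly as it classifies hostile interference. It is meant only to illustrate how an unintended offence can be a natural, probable consequence of the program as written.

```python
# Hypothetical toy rule: anything that would end the mission is a "threat".
MISSION_ENDING_EVENTS = {"enemy_fire", "system_sabotage", "abort_command"}

def classify(event: str) -> str:
    return "threat" if event in MISSION_ENDING_EVENTS else "benign"

def respond(event: str) -> str:
    if classify(event) == "threat":
        # The harm the programmer never intended lives in this branch.
        return "neutralise"
    return "continue_mission"

# The pilot's decision to turn back is, to this rule, indistinguishable
# from sabotage:
print(respond("abort_command"))   # -> neutralise
print(respond("turbulence"))      # -> continue_mission
```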

Conclusion of the three proposed models:
When an AI entity plays the part of an innocent agent in the commission of a certain crime, and the programmer is the only one who directed that crime, the Perpetration-by-Another model (the first liability model) is the most suitable legal model. In the same scenario, when the programmer is itself an AI entity (when one AI entity programs another AI entity to commit a specific offence), the direct liability model (the third liability model) is most appropriately applied to the criminal liability of the programming AI entity. In that case, the third liability model is used in addition to, not in place of, the first liability model. Therefore, given these circumstances, the AI entity that did the programming bears direct liability, while the AI entity that was used remains a mere innocent agent.
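
The division of labour among the three models can be summarised as decision logic. The sketch below is a deliberate simplification of Hallevy's scheme, with predicates invented for illustration rather than drawn from any statute.

```python
def applicable_model(directed_by_another: bool,
                     foreseeable_by_programmer_or_user: bool,
                     ai_fulfils_both_elements: bool) -> str:
    """Pick the liability model that fits the facts (simplified sketch)."""
    if directed_by_another:
        # The AI acted as an innocent agent of whoever gave the orders.
        return "Model 1: Perpetration-by-Another"
    if foreseeable_by_programmer_or_user:
        # No intent, but the offence was a natural, probable consequence.
        return "Model 2: Natural-Probable-Consequence"
    if ai_fulfils_both_elements:
        # The AI itself satisfies actus reus and mens rea.
        return "Model 3: Direct liability"
    return "No criminal liability established"
```

As the text above stresses, the models cumulate rather than exclude one another: applying the third model to a programming AI does not displace the first model as applied to the AI it used.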

Punishment considerations
Punishing an AI initially appears absurd, but it is not. First, let us define punishment.
According to H.L.A. Hart, punishment consists of five elements. First, it must involve pain or other consequences normally considered unpleasant. Second, it must be for a violation of a legal rule. Third, it must be imposed on an actual or alleged offender for his offence. Fourth, it must be intentionally administered by people other than the offender. Fifth, it must be imposed and administered by an authority constituted by the legal system against which the offence is committed.

For a moment, assume that AI can be tried, prosecuted, convicted, and held criminally responsible. The next question is how to penalise it. How may it be punished with imprisonment, the death sentence, or a fine? Where an AI, unlike an AI robot, lacks a physical body, it is difficult to determine whom to arrest and imprison. The AI may not have financial resources or a bank account from which to pay a fine. Similar concerns arose when the criminal responsibility of corporations was recognised: punishing a company for crimes required certain adaptations, just as punishing an AI does. The principal punishments under criminal law include capital punishment, imprisonment, community service, fines, and victim compensation. Minor adaptations can make these punishments applicable to AI without undermining their intended purpose.

Applying Theories of Punishment to AI
Punishment can be used as a method of reducing the incidence of criminal behaviour: by punishing offenders severely and setting an example for potential criminals to abstain from crime, by incapacitating offenders and preventing them from committing further offences, or by reforming them into better individuals. These theories of punishment can be applied to AI.

Deterrence Theory and AI
It is argued that punishing AI will not dissuade other AIs from committing crimes because AI is undeterrable, and that punishment therefore fails as a deterrent. Basic AI may indeed be unmoved by fines or regulations, but future responsive AI that gathers data from several sources and learns independently from past experience may be influenced by examples of punishment. The general deterrent effect of punishment is that it inhibits others from committing similar crimes by making an example of offenders. Even where punishment does not directly deter AI, it can deter other potential offenders from using AI to commit crimes.

Retributive Theory and AI
To be retributive is to "pay back". This theory is rooted in retribution: when the offender is punished, the victim feels vindicated and refrains from bypassing the legal system to punish the offender unlawfully. When AI is penalised, the victims of crimes involving AI will feel that justice has been served, and public trust in the legal system will grow. The public will be reassured that even where crimes are committed by artificial intelligence, the state maintains a zero-tolerance approach. This will establish a general atmosphere of security and safety.

Preventive Theory and AI
According to this theory, the criminal should be rendered incapable of committing the same crime again. Punishing AI by prohibiting the use of the offending system, or by destroying it, can best achieve the goal of the preventive theory.

Reformative Theory and AI
AI lacks consciousness and emotions and so cannot be reformed in the human sense; anthropomorphising AI for the purposes of reformative theory appears premature in today's context. Future AI endowed with something like emotions might learn from punishment and avoid committing crimes, but that is unlikely to happen today.

AI cannot be punished in the real sense
According to Hart, "the punishment must involve pain, suffering and unpleasant experience". AI is not capable of feeling emotions or experiencing pain, even when destroyed or reprogrammed. As a result, penalising AI does not achieve punishment's intended purpose. However, one could argue that not all criminals find their penalty painful: some individuals land themselves in jail for publicity stunts, while others commit petty crimes to obtain food and shelter.

Expanding the scope of existing criminal law
This is the simplest alternative method of addressing AI-related harm. In the existing criminal justice system, people are held accountable for crimes committed by means of a machine or computer; the machine is regarded as a tool, not the offender. For example, if a hacker breaches important government data, the human hacker, not the software used in the attack, is held criminally responsible and punished. Every country has penal laws and cyber-security regulations in place to hold persons who misuse machines or computer programs accountable for their actions. Some of these existing laws can be expanded to define new offences related to AI-generated crimes.

A complex situation arises when a person has used AI to commit a minor crime, for example, stealing data from the computer of a police station, but the AI erroneously or autonomously damages property and causes the deaths of people it considered obstacles to accomplishing its task. Can the hacker be held liable for those deaths, which he did not foresee and for which he had no mens rea? For such cases, criminal law already contains provisions such as the concept of constructive liability; these concepts need only be expanded to bring AI-generated crimes within their ambit.

In the current situation, when AI-generated crimes are still few, the best solution may be to define new crimes related to AI, in the way cyber-crimes were defined when computer-related crimes increased. A new AI Crimes Act could be enacted to criminalise the negligent or mala fide intentional use of AI by different stakeholders. Individuals involved with AI at different stages, such as developers, users, owners, supervisors, and trainers, could then be punished for the irresponsible behaviour of an AI.


Mandatory Licensing and Registration
Before deploying an AI, it may be made mandatory to register a designated responsible person who can be held criminally liable for any improper action by the AI. This person may be a natural person or an artificial person, such as a firm or an NGO. Registration, or obtaining a licence prior to developing or utilising AI, should be compulsory. This, too, is a difficult task: before issuing a licence, the licensing body must have officers who are AI experts capable of assessing an AI's criminal capability. Hiring such highly technical personnel is difficult, particularly in developing countries, and the expense of training staff and developing the licensing procedure may be high compared with the benefits it can achieve. Ultimately, this remedy also amounts to punishing a person associated with the AI rather than the AI directly.
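
The record-keeping half of this proposal is straightforward to picture. The sketch below, with field names invented for illustration and no basis in any enacted statute, binds each deployed system to a registered responsible party, the person to whom liability would attach.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIRegistration:
    system_id: str          # unique identifier of the deployed AI
    responsible_party: str  # natural or artificial person (firm, NGO, ...)
    licence_number: str     # issued after expert assessment
    purpose: str            # declared field of use

registry: dict[str, AIRegistration] = {}

def register(entry: AIRegistration) -> None:
    registry[entry.system_id] = entry

def accountable_party(system_id: str) -> str:
    # The person to whom criminal or civil liability would attach.
    entry = registry.get(system_id)
    if entry is None:
        raise LookupError(f"unregistered AI system: {system_id}")
    return entry.responsible_party

register(AIRegistration("drone-042", "Acme Robotics Pvt. Ltd.",
                        "LIC-2025-0042", "aerial survey"))
print(accountable_party("drone-042"))  # -> Acme Robotics Pvt. Ltd.
```

As the paragraph above notes, the hard part is not the registry itself but staffing the expert assessment that precedes each licence.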

Conclusion
Artificial intelligence (AI) should be treated as a person and subjected to penal laws, similar to how corporations are recognised as legal entities with criminal culpability. As AI becomes more complicated, advanced, and autonomous by the day, holding any particular person criminally accountable for AI-generated crimes will become increasingly unfeasible, so punishing AI is the best long-term option. For now, however, AI is still in its early stages: general AI and super AI do not exist, few AIs are directly involved in crimes, and AI remains largely under human control.

Rather than punishing AI directly, it is therefore better to seek alternative solutions: criminal and civil sanctions on the humans around it can be increased. If AI creators, owners, users, or supervisors fail to discharge their responsibilities properly, they may face serious penalties. This appears to be a better and more straightforward option than punishing AI directly and its stakeholders only indirectly. Failure to monitor, operate, create, or develop an AI responsibly can also result in severe civil liability in tort.

Scientists are still developing advanced AI capable of making moral decisions independently, so it is not yet time to address its criminal culpability directly. There is, however, an urgent need for regulation and binding international legislation in the domains of self-driving cars, autonomous weapons, the dark net, and the like. It is always wise to be prepared for the future; otherwise, AI may quickly outpace human control.

Written By: Gaurav Raj, Lloyd Law College
