The internet, now a modern-day necessity, has further democratized access to data and information. Yet it has also introduced new digital threats into our lives. One such concern is the rise of deepfakes: hyper-realistic, AI-generated audio or video material that manipulates reality, often with malicious intent. It is not wrong to describe deepfake technology as a digital chameleon, with which anyone, sitting anywhere in the world, can alter anybody's reality. The question that echoes is whether India is ready to combat this rising digital menace.
Deepfakes are the offspring of artificial intelligence. Specifically, deepfake photos and videos are generated through a combination of technologies such as generative adversarial networks (GANs) and machine learning (ML).[i] These algorithms learn from vast datasets to create eerily realistic synthetic content. Whether swapping faces, altering voices, or fabricating entire scenarios, deepfakes blur the line between reality and fiction.
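The adversarial principle behind GANs can be illustrated with a deliberately toy, hypothetical example: a one-dimensional "generator" (a line, g(z) = a·z + b) learns to mimic a simple Gaussian data distribution, while a logistic "discriminator" tries to tell real samples from generated ones. This is only a minimal sketch of the idea; real deepfake systems use deep neural networks over images and audio, and every parameter below is an illustrative assumption.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp to avoid math.exp overflow on extreme inputs.
    x = max(-30.0, min(30.0, x))
    return 1.0 / (1.0 + math.exp(-x))

# Real data ~ N(4, 0.5); noise z ~ N(0, 1).
a, b = 1.0, 0.0   # generator g(z) = a*z + b, starts far from the data
w, c = 0.0, 0.0   # discriminator d(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(3000):
    x_real = random.gauss(4.0, 0.5)   # sample from the real distribution
    z = random.gauss(0.0, 1.0)        # generator's noise input
    x_fake = a * z + b                # generator's synthetic sample

    # Discriminator ascent: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator ascent: push d(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * x_fake + c)
    grad = (1 - d_fake) * w           # gradient of log d(fake) w.r.t. x_fake
    a += lr * grad * z
    b += lr * grad

fakes = [a * random.gauss(0.0, 1.0) + b for _ in range(1000)]
mean_fake = sum(fakes) / len(fakes)
print(mean_fake)  # drifts from 0 toward the real mean (~4) during training
```

The two alternating gradient steps are the essence of adversarial training: the discriminator improves at spotting fakes, which in turn forces the generator's output distribution toward the real one, exactly the pressure that makes deepfakes "eerily realistic."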
Simply put, these deceptive digital creations, which blend real and fabricated elements, have become a potent weapon in the age of information. Their potential impact on privacy, reputation, and security cannot be overstated. Such synthetic media can convincingly manipulate faces, voices, and even entire personas.
While it is undeniable that deepfake technology has the potential to improve our lives in many ways, its misuse has detrimental effects that extend to the perpetration of criminal activity. As a result, the problem has attracted growing attention. Counterfeit pornographic videos created with deepfake facial-manipulation techniques can significantly damage individuals' reputations or be used to seek revenge against them.
In the context of facial recognition, using deepfake technology to circumvent verification mechanisms would empower criminals to act under other individuals' identities. Businesses suffer when highly distorted information about competitors is fabricated and disseminated using deepfake technology, along with related consequences. In copyright, the creation of heavily manipulated images or videos might lead to disputes over infringement and fair use.
In matters of justice, deepfake creations might make it difficult for courts to verify the authenticity of evidence such as audio and video recordings.[ii] In the last couple of years, there has been an upsurge of cases in India where artificial intelligence (AI) has been used to create deepfake content by appropriating, without authorization, the personality rights and personal attributes of individuals of all classes: their voice, name, image, likeness, and video. Such content has been used for various purposes, including creating GIFs, emojis, or ringtones for commercial use, and even for sexually explicit material. These instances have raised concerns about the rights of individuals, including their personality rights and right to privacy.
In September 2023, the Delhi High Court delivered an important ruling in Anil Kapoor v. Simply Life India & Ors,[iii] where the court issued an ex-parte order restraining the misuse of Indian actor Anil Kapoor's name, voice, speech pattern, appearance, gestures, signature, and other attributes. Kapoor also raised concerns about the misuse of deepfake technology to exploit netizens and defame his image. Such misuse not only violates individuals' rights but also has a negative effect on society.
According to a survey by the Amsterdam-based cybersecurity firm Deeptrace, a startling 96% of deepfakes were pornographic, and 99% of those featured women.[iv] Deepfake pornographic content overwhelmingly targets women, further entrenching gender inequality: women account for 90% of the victims of crimes such as revenge pornography, non-consensual pornography, and other forms of harassment carried out with this technology. Nor are deepfakes restricted to photos and videos; AI systems can also clone people's voices to carry out financial fraud. Approximately 47% of Indians have encountered, or know someone who has encountered, an AI voice scam. According to McAfee's research on AI voice scams, over 83% of Indian victims reported monetary losses, with 48% losing more than INR 50,000.
As these artificial intelligence techniques grow increasingly concerning, the question emerges whether the Indian Penal Code (IPC) of 1860 adequately addresses the criminal consequences of deepfakes.
Although no Indian law specifically covers deepfakes or AI-related crimes, provisions in several statutes may offer both criminal and civil remedies. For example, Sections 509 (words, gestures, or acts intended to insult a woman's modesty), 499 (criminal defamation), and 153A and 153B (spreading hate on communal lines) of the IPC, 1860 can be invoked to prosecute deepfake-related offences.
According to reports, the Delhi Police Special Cell has filed a formal complaint against unidentified individuals under Sections 465 (forgery) and 469 (forgery to harm a person's reputation) in connection with the Mandanna case. That case involved a video, viral on social media, that featured actor Rashmika Mandanna's likeness and showed a woman entering a lift wearing a revealing bodysuit. Using deepfake technology, the actor's face had been overlaid on the body of British-Indian influencer Zara Patel without the consent of either party.
Although the IPC does not mention deepfakes specifically, several of its provisions may be interpreted and applied to them:
Forgery (Sections 469 and 471): Forgery of documents or electronic records with intent to deceive or cause injury may extend to deepfakes used to produce fake documents or records, since deepfakes, like document forgeries, entail deceit and manipulation. Section 471 prohibits using a forged document knowing it to be forged; this provision may apply where someone knowingly employs a deepfake to deceive others.
Cheating (Sections 420 and 468): Deepfakes may be used to aid deception, fraud, or cheating. Where a deepfake is used to deceive someone, Sections 420 (cheating) and 468 (forgery for the purpose of cheating) offer legal recourse.
Defamation (Sections 499–501): These provisions could be extended to the production and distribution of deepfake material that damages a person's reputation. Disseminating false information through deepfakes may therefore be punishable under these sections.
Public Mischief (Section 505): Creating deepfakes that incite violence or public disturbance may be punishable under this section.
Apart from this, deepfake offences involving the capturing, publishing, or transmission of a person's image in the media, thereby infringing their privacy, fall under Section 66E of the Information Technology Act, 2000 (IT Act); the offence is punishable with imprisonment of up to three years or a fine of ₹2 lakh. Similarly, Section 66D of the IT Act punishes those who maliciously use computer resources or communication devices to cheat by impersonating another person, with imprisonment of up to three years and/or a fine of up to ₹1 lakh.
Furthermore, publishing or transmitting pornographic or sexually explicit deepfakes can attract action under Sections 67, 67A, and 67B of the IT Act. The IT Rules also require social media platforms to remove "artificially morphed images" of people as soon as they are notified, and forbid hosting "any content that impersonates another person." If platforms fail to take down such content, they risk losing the 'safe harbour' protection that shields social media firms from legal liability for content published by users on their platforms.
In addition, if a copyrighted video or image has been used to produce a deepfake, the Copyright Act of 1957 may apply: Section 51 prohibits the unauthorized use of any work over which another person holds exclusive rights.
The provisions of the IPC, however, have serious limitations. The IPC contains no definition or description of deepfake-related crimes; it simply does not address this modern-day offence explicitly. Moreover, proving the intent and attribution of the specific individual behind a circulating deepfake can be exceptionally challenging. The existing sections under which deepfake-related crimes are charged are open to subjective interpretation, potentially leading to inconsistent application of the law. The IPC rests largely on mens rea, and establishing mens rea in deepfake cases, particularly those involving artistic expression or parody, is difficult: as hard as distinguishing free speech from hate speech.
Deepfakes and false information pose a significant concern in the country. Existing rules are inadequate because they were never drafted with these emerging technologies in mind. Current policies address only internet takedowns, through censorship or criminal prosecution; they reflect no thorough grasp of how generative AI technology works or of the full range of harm it can create. The burden of filing a complaint under these statutes falls solely on the victim, and the state and police remain uncertain about how to charge these offenders. As a result, many people's experiences with local police stations have been less than acceptable, whether in terms of investigation or of the culprit suffering any punishment. These limitations not only highlight the legal lacunae in addressing deepfakes solely through the IPC but also reflect the practical situation on the ground.
Under the definition provided by the Digital Personal Data Protection Act, 2023, pictures and photos that can be used to identify a specific individual are sensitive personal data. Deepfakes therefore constitute a breach of personal information and an infringement of an individual's right to privacy. Publicly accessible data might not be governed by the law, but will social media corporations still be held accountable if the data posted on their platforms can be used to spread false information?
So, to hold deepfake-related offenders punishable under the IPC, 1860, one must first answer:
Where does the 'guilty mind' lie?
References:
- V. Rana (2023). 'Deepfakes And Breach Of Personal Data – A Bigger Picture' [Online]. Live Law. Available at: https://www.livelaw.in/law-firms/law-firm-articles-/deepfakes-personal-data-artificial-intelligence-machine-learning-ministry-of-electronics-and-information-technology-information-technology-act-242916?infinitescroll=1.
- A. G. Getman and L. Yilan (2023). 'The Deepfake Technology: Threats or Opportunities for Customs', Administrative Consulting, no. 4, pp. 30–36. Available at: https://cyberleninka.ru/article/n/the-deepfake-technology-threats-or-opportunities-for-customs.
- Anil Kapoor v. Simply Life India & Ors (2023), Delhi High Court.
- A. Bhaumik (2023). 'Regulating deepfakes and generative AI in India | Explained' [Online]. The Hindu. Available at: https://www.thehindu.com/news/national/regulating-deepfakes-generative-ai-in-india-explained/article67591640.ece.
- (2024). 'Deepfakes in India: Mixed response to advisory; government to notify tighter IT rules in a week' [Online]. The Hindu. Available at: https://www.thehindu.com/sci-tech/technology/deepfakes-in-india-mixed-response-to-advisory-government-notify-tighter-it-rules-in-a-week/article67747422.ece.