Artificially generated media, commonly known as deepfakes, have emerged as a major concern because of their potential to manipulate video, audio, and images using advanced technologies such as Artificial Intelligence (AI). These hyper-realistic falsifications can damage reputations, fabricate evidence, and erode trust in democratic institutions. The problem has now even infiltrated political messaging, posing a serious threat to the electoral process.
Deepfakes present two distinct 'threat vectors'. First, they can be used to create false videos of individuals saying things they never actually said. Second, they can be used to discredit genuine footage by claiming it is fake.
This creates a dangerous situation, especially with the 2024 US presidential election looming, as experts and officials increasingly warn about the destructive power of AI deepfakes. These fears centre on the potential impact of deepfakes on the public's perception of truth and the potential destabilization of the electorate.
Australia appears to be ahead of other countries in terms of regulating
deepfakes, which may be due in part to the experience of Noelle Martin, a woman
from Perth who was targeted at the age of 17 with a doctored image of herself in
a pornographic context. Outraged by this violation, Martin pursued a career in
law and has since dedicated herself to combatting this type of abuse and
advocating for stricter regulations.
Numerous deepfake videos have circulated on the internet featuring famous actors such as Rashmika Mandanna, Nora Fatehi, Katrina Kaif and Kajol, as well as cricketer Sachin Tendulkar. The government has stressed that social media intermediaries must identify and remove false or impersonating content, including deepfakes, or face legal consequences.
Actor Akshay Kumar recently found himself at the centre of such a scandal when a fabricated video of him promoting a gaming application surfaced online, even though he had never engaged in any such promotion. The episode underlines the need for stricter regulations and consequences for those who create and share deepfakes.
In March 2022, a deepfake video of Ukrainian President Volodymyr Zelenskyy urging his citizens to surrender their weapons went viral after cybercriminals hacked a Ukrainian television channel. This is a prime example of the harm and chaos that the spread of deepfake content can cause.
One of the most concerning incidents involving deepfakes was the publication of
sexually explicit AI-generated images of Taylor Swift on X. These images
received over 45 million views, 24,000 reposts, and hundreds of thousands of
likes and bookmarks within 17 hours before being taken down. By then, however, the
images had already spread across multiple accounts and social media platforms,
causing significant damage.
This incident, along with others like it, highlights how difficult it is to prevent and limit the spread of deepfake pornography and AI-generated images of real people. Stricter regulations and measures are clearly needed to protect individuals from the harmful effects of deepfakes and to hold those responsible accountable for their actions.
Examples of Deepfakes:
Deepfakes are a type of synthetic media produced using deep learning methods to manipulate existing images or videos, typically replacing one person's likeness with another's. To demonstrate the capabilities of deepfakes, here are several examples:
- Face Swapping of Celebrities: A common application of deepfakes is to digitally replace the face of a well-known individual onto the body of another person in a video. This allows for videos to be created of a famous actor delivering a speech they never actually gave.
- Political Deception: Deepfakes can also be utilized to produce convincing videos of politicians saying or doing things that they never did. This can range from creating a video of a politician confessing to a crime to making controversial statements, causing confusion or spreading false information.
- Fabricated News: Deepfakes have been used to generate realistic-looking news clips or interviews that never actually took place. These videos can deceive viewers and lead to misinformation and doubts about the credibility of the media.
- Revenge Porn: Sadly, deepfake technology has been exploited to create explicit videos or images of individuals without their consent, commonly known as 'revenge porn.' These manipulated media can be used to harass, blackmail, or humiliate the victims.
- Dubbing and Localization: On a less harmful note, deepfake technology can also be employed for purposes such as dubbing movies or TV shows into different languages. By replacing the actors' mouth movements with accurate lip syncing, deepfakes can produce more natural-looking dubbed content.
Combating the Challenges Posed by Deepfakes:
In order to effectively address the challenges presented by deepfakes, it is
necessary to take a comprehensive approach that involves technology, education,
and policy. To this end, there are several key strategies that can be employed
to mitigate the risks associated with deepfakes:
- Developing Detection Tools: Significant investment should be made in research and development of advanced algorithms and technologies capable of detecting deepfakes. This can include both automated tools and manual inspection techniques.
- Promoting Media Literacy: Educating the public about the existence and potential dangers of deepfakes is crucial. Providing individuals with critical thinking skills and media literacy can help them recognize and verify the authenticity of digital content.
- Encouraging Transparency: Advocating for platforms and content creators to disclose when digital content has been manipulated or altered can help build trust and enable users to make informed decisions about the media they consume.
- Implementing Watermarking and Authentication: The use of digital watermarking and cryptographic techniques can be explored to authenticate original content and detect alterations.
- Strengthening Laws and Regulations: Enacting laws and regulations that address the creation, distribution, and use of deepfakes, particularly in areas such as privacy, defamation, and election integrity, can serve as deterrents and provide recourse for victims of deepfake-related harm.
- Collaborating with Tech Companies: Working with technology companies to develop and implement policies and tools to mitigate the spread of deepfakes on their platforms is crucial. This may include content moderation, flagging systems, and user education initiatives.
- Investing in Research: Allocating resources for ongoing research into deepfake technologies and their potential societal impacts can inform policy decisions and the development of countermeasures.
- Fostering International Cooperation: Collaboration among governments, tech companies, researchers, and civil society organizations at the international level is necessary to address the global nature of the deepfake threat.
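The watermarking-and-authentication strategy above can be illustrated with a minimal sketch. Assuming a publisher tags media bytes with a secret key using an HMAC (a standard cryptographic technique from Python's standard library; the function names and key here are illustrative, not from any deepfake-specific tool), any later alteration of the content invalidates the tag:

```python
import hmac
import hashlib

def sign_media(media_bytes: bytes, secret_key: bytes) -> str:
    """Produce an authentication tag for the original media."""
    return hmac.new(secret_key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, secret_key: bytes, tag: str) -> bool:
    """Return True only if the media is byte-for-byte unaltered."""
    expected = hmac.new(secret_key, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

key = b"publisher-secret"           # hypothetical publisher key
original = b"\x89PNG...frame data"  # stand-in for real image/video bytes
tag = sign_media(original, key)

assert verify_media(original, key, tag)       # authentic content passes
tampered = original.replace(b"frame", b"faked")
assert not verify_media(tampered, key, tag)   # any alteration is detected
```

Such a scheme only proves integrity relative to a signed original; it cannot by itself flag a deepfake that was never signed, which is why authentication complements, rather than replaces, the detection tools described above.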
Overall, a multi-faceted approach that involves a combination of technology,
education, and policy is essential in combatting the challenges posed by
deepfakes. By implementing these strategies, we can work towards mitigating the
risks and protecting individuals from the harmful effects of deepfakes.
Laws on Deepfakes:
India currently does not have specific legislation in place to combat deepfakes
and other AI-related crimes. However, various provisions under existing laws can
offer both civil and criminal remedies.
The Information Technology Act, 2000:
The government has reiterated that any failure to act in accordance with the relevant provisions of the Information Technology Act, 2000 and Rule 7 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (hereinafter the "IT Rules") would render organisations liable to lose the protection available under Section 79(1) of the IT Act. Section 79(1) exempts online intermediaries from liability for any third-party information, data, or communication link made available or hosted by them. Rule 7 of the IT Rules empowers aggrieved individuals to take platforms to court under the provisions of the Indian Penal Code.
Section 66E of the IT Act prescribes punishment for violating an individual's privacy by publishing or transmitting an image of that person's private area without his or her consent: imprisonment of up to three years and a fine of up to INR 2 lakh.
Sections 67, 67A and 67B of the IT Act prohibit and prescribe punishments for publishing or transmitting, in electronic form, obscene material, material containing sexually explicit acts, and material depicting children in sexually explicit acts, respectively.
In cases of impersonation in electronic form, including artificially morphed images of an individual, social media companies have been advised to act within 24 hours of receiving a complaint about such content. In addition, Section 66D of the IT Act prescribes imprisonment of up to three years and a fine of up to one lakh rupees for anyone who cheats by personation using a communication device or computer resource.
Provisions of the Indian Penal Code, 1860 (IPC) can also be invoked for cybercrimes related to deepfakes, such as Sections 509 (insulting the modesty of a woman), 499 (criminal defamation), and 153A and 153B (spreading hatred on communal lines). In the 'Mandanna case', the Delhi Police Special Cell reportedly registered an FIR against unknown persons under Sections 465 (forgery) and 469 (forgery to harm the reputation of a party).
Additionally, the Copyright Act, 1957 can be invoked if any copyrighted image or video was used to create the deepfake content.
Conclusion:
Synthetic media, known as deepfakes, utilize advanced artificial intelligence
techniques, such as deep learning and neural networks, to replace the likeness
of a person in an existing image or video with that of another individual. These
methods allow for the creation of highly realistic visuals that can be
indistinguishable from authentic ones.
While deepfakes have potential uses in
entertainment and other creative realms, they also give rise to significant
ethical concerns, including the spread of misinformation, invasion of privacy,
and potential for misuse, such as the creation of false news or damage to one's
reputation. Ongoing efforts are being made to develop detection methods and
regulations to tackle the challenges presented by deepfakes.
Written By: Md.Imran Wahab, IPS, IGP, Provisioning, West Bengal
Email: [email protected], Ph no: 9836576565