Deepfake technology, driven by artificial intelligence, can produce highly
realistic digital forgeries, manipulating images, videos, and audio to
impersonate individuals. Deepfakes have legitimate applications in entertainment
and education, but they are increasingly used for malicious purposes such as
identity theft, misinformation, and non-consensual exploitation. They therefore
pose significant threats to privacy, trust, and security, because distinguishing
authentic content from manipulated content is becoming ever harder. The ethical
and legal issues must be addressed to prevent the misuse of deepfakes and the
harm they cause to individuals.
Introduction
Have you ever wondered whether your face or voice might be used without your
permission? Imagine a world where someone can create a video that looks exactly
like you, saying things you never said. That is the alarming reality of deepfake
technology. In India, deepfakes are not yet common in politics, but concern
about their misuse is growing. So, how did we get into this situation, and what
can we do to protect ourselves against this digital threat? Let's head out into
the wild west of deepfakes and explore why the world needs swift governmental
action and what role legislation plays in keeping us safe.
What Is Deepfake Technology?
- Deepfakes rely on "deep learning," an approach inspired by the way our brains process and understand vast amounts of information.
- Most deepfakes are built on a generative adversarial network (GAN), an architecture that pits two neural networks against each other: the Generator and the Discriminator.
- The Generator produces the fake content, while the Discriminator tries to distinguish real data from the artificially generated samples.
- The two networks play this game repeatedly; each round of practice makes both better, until the Generator's fakes are almost indistinguishable from reality.
- The end goal is content that cannot be told apart from the real thing, producing a world where what is real becomes uncertain and what is synthetic can no longer be identified.
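The Generator–Discriminator game described above can be illustrated with a deliberately tiny toy in Python. This is not a real GAN (there are no neural networks and no backpropagation); the one-parameter "generator", the distance-based "discriminator", and the target value 4.0 are all invented for illustration. It only shows the adversarial feedback loop in miniature: the discriminator studies real data, and the generator keeps adjusting itself toward whatever the discriminator currently accepts as real.

```python
import random

# Toy "real" data: samples clustered around 4.0 (an arbitrary target).
REAL_MEAN = 4.0

def real_sample():
    return random.gauss(REAL_MEAN, 0.5)

class Discriminator:
    """Scores how 'real' a sample looks: closer to its belief => higher score."""
    def __init__(self):
        self.estimate = 0.0  # its current belief about the real data's center

    def score(self, x):
        return -abs(x - self.estimate)  # 0 is the best possible score

    def learn(self, real_x, lr=0.05):
        # Nudge its belief toward actual real samples (a moving average).
        self.estimate += lr * (real_x - self.estimate)

class Generator:
    """Produces fakes from a single parameter; shifts it to raise its score."""
    def __init__(self):
        self.param = 0.0

    def fake_sample(self):
        return self.param + random.gauss(0.0, 0.5)

    def learn(self, disc, lr=0.1):
        # Try a small shift each way; keep the one the discriminator likes more.
        up, down = self.param + lr, self.param - lr
        self.param = up if disc.score(up) > disc.score(down) else down

random.seed(0)
G, D = Generator(), Discriminator()
for _ in range(2000):
    D.learn(real_sample())  # the Discriminator studies real data
    G.learn(D)              # the Generator adapts to fool it

# After training, the Generator's fakes cluster near the real data (mean ~4.0).
```

In an actual GAN, both players are deep networks trained jointly by gradient descent, and the discriminator also trains on the generator's fakes; but the core dynamic is the same escalating contest shown here.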
Laws Against Deepfake Technology
The Centre has told the big social media companies to tread with caution in detecting and stopping the spread of fake information and deepfake videos.
Moreover, it has asked them to remove any content that violates the IT Rules, 2021 in the shortest time possible. The principal law governing deepfakes is the IT Act, 2000, specifically Sections 66D and 66E, together with its Rules. However, in my opinion, such laws may not be able to deal with this sort of problem. Let's examine them one by one.
Section 66D of the IT Act of 2000
Section 66D of the IT Act criminalizes cheating by personation using a computer resource, that is, impersonating another person to gain some undue advantage. An offender can be imprisoned for up to three years and fined up to ₹1 lakh. Thus, misuse of deepfakes for impersonation will not go unpunished by law.
Section 66E of the IT Act of 2000
It applies to deepfake cases that involve capturing, publishing, or transmitting images of a person in mass media in violation of their privacy. The offence is punishable by imprisonment for up to three years or a fine of up to ₹2 lakh.
Rules For Intermediaries
Rule 3(1)(b)(vii) of IT Intermediary Rules
Under Rule 3(1)(b)(vii) of the IT Intermediary Rules, intermediaries have been directed to exercise due diligence, including explicit provisions in their privacy policies or user agreements that prohibit hosting impersonation content, and to engage directly against the deepfake threat.
Intermediaries were also warned that non-compliance with the IT Act and Rules may attract Rule 7 of the IT Rules, 2021.
Non-compliance could also jeopardize the safe-harbour protection that Section 79(1) of the Information Technology Act, 2000 is intended to provide to intermediaries.
Rule 3(2)(b) of IT Intermediary Rules
Rule 3(2)(b) mandates that intermediaries act within 24 hours of receiving a complaint from an aggrieved party.
This applies specifically to impersonation content, including artificially morphed photos, and underlines the urgency of taking down deepfakes.
Offences Committed Using Deepfake Technology
Deepfake technology can be misused to commit a vast range of crimes. The technology itself is not a danger, but it can be used to harm individuals and even society. The following are some of the crimes that can be committed with deepfakes:
- Identity theft and virtual forgery - Using deepfakes to steal identities or create photorealistic versions of real people is not a prank but a serious crime. Such high-tech wizardry can damage reputations, spread canards, and play havoc with public opinion. Section 66-C of the IT Act, 2000 directly confronts these cyber crimes, and even older provisions such as Sections 420 and 468 of the Penal Code, 1860 can be dragged in against miscreants in the virtual world.
- Misinformation against governments - Deepfakes that spread false information against the government are a giant problem: they create confusion, erode public trust, and can even influence political outcomes. Section 66-F of the IT Act and the Intermediary Guidelines provide ready-made grounds to prosecute these digital crimes, and in graver cases, older provisions such as Sections 121 and 124-A of the Penal Code apply to those who wage, or attempt to wage, war against the government.
- Hate speech and defamatory content - Deepfakes that promote hate speech or defame individuals are a serious threat to both the person targeted and the online environment. Such crimes can be prosecuted under the Intermediary Guidelines along with Sections 153-A, 153-B, and 499 of the IPC.
- Election interference - Deepfakes circulating false information about political candidates can disrupt elections. Sections 66-D and 66-F of the IT Act provide a remedy to prosecute these crimes, supplemented by provisions of the Representation of the People Act and a voluntary code of ethics from IAMAI.
- Violation of privacy/obscenity and pornography - Deepfakes that fabricate images or videos can ruin reputations and spread false information, injuring both the individual and society. The potential for misuse, such as non-consensual pornography or political propaganda, is alarming. Sections 66-E, 67, and 67-A of the IT Act can be used to prosecute these offences, and provisions of the Penal Code and POCSO can also be brought into action to safeguard women and children from such misuse of deepfake technology.
What Can We Do to Protect Ourselves Against This Digital Threat?
- Be careful about what you share online. Keep personal images and videos out of the reach of deepfake tools by using strict privacy settings.
- Use strong, unique passwords across your accounts so that hackers cannot easily break in and get hold of your visuals.
- Use a good antivirus program to protect your computer from malware that could be exploited to harvest material for deepfakes.
- Be on the lookout for videos or images that seem too good to be true. If something seems off, it may well be a deepfake; think twice before sharing or believing it.
- Put watermarks across your visuals to deter others from stealing them. It is not foolproof, but it makes the job that much harder for anyone who wants to pass off your work as their own.
- Keep your metadata current and correct. These hidden pieces of data can include the copyright owner, date, and place of creation, and will help authenticate ownership if anyone disputes it.
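Beyond visible watermarks and metadata, one low-tech way to later prove that a circulating copy of your photo or video has been altered is to record a cryptographic fingerprint of the original file. The sketch below uses only the Python standard library; the file name `my_photo.jpg` and its contents are placeholders for a real personal photo.

```python
import hashlib
import time
from pathlib import Path

def fingerprint(path: Path) -> dict:
    """Record a SHA-256 hash plus basic metadata for an original media file."""
    return {
        "file": path.name,
        "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
        "recorded_at": time.strftime("%Y-%m-%d"),
    }

def is_unmodified(path: Path, record: dict) -> bool:
    """True only if the file's bytes still match the recorded fingerprint."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == record["sha256"]

# Demo: a throwaway file stands in for a personal photo.
photo = Path("my_photo.jpg")
photo.write_bytes(b"original image bytes")
record = fingerprint(photo)

ok_before = is_unmodified(photo, record)    # untouched file still matches
photo.write_bytes(b"tampered image bytes")  # simulate a manipulation
ok_after = is_unmodified(photo, record)     # bytes changed, hash no longer matches
photo.unlink()                              # clean up the demo file

print(ok_before, ok_after)  # prints "True False"
```

A hash cannot stop a deepfake from being made, but a dated record of the original's fingerprint gives you evidence that a doctored version differs from what you actually published.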
Suggestions
Deepfakes have become a concern for the entire world. Regulating them
successfully will require international cooperation and collaboration. To deal
effectively with violations of people's private information, the government may
lean toward a censorship approach: stopping malicious deepfakes from spreading,
perhaps by asking websites to take them down. It could also take a punitive
approach, making those who spread false information, whether individuals or
organizations, pay for the damage caused. Alternatively, under the
intermediary-regulation approach, intermediaries would be made responsible for
quickly pulling down such content, with failure attracting legal consequences
under Sections 69-A and 79 of the IT Act.
Conclusion
In conclusion, privacy must be at the centre of any discussion of deepfakes. The
safety of individuals online must keep pace with the risk. The current IT Act of
2000 is not equipped to handle cyber crimes involving artificial intelligence,
machine learning, and deepfakes. The law needs to be reshaped to explicitly
address deepfake-related issues. Stricter measures, combined with penalties for
malicious use, would strengthen the defence against exploitation of an
individual's image. Updating the present legal framework is imperative to combat
the changing landscape of digital deception and to ensure a safer online
environment for all.