Pretty Pixels, Pricey Privacy: The Ghibli AI Trend Unmasked
India today stands at a crossroads, with demand for AI regulation growing at an
alarming rate. Recent trends and developments indicate the popularity AI is
gaining as it becomes freely accessible to the public at large. Upon its public
release, ChatGPT sparked extensive discourse, with one of the most significant
concerns being its potential impact on employment. Many have questioned whether
AI, capable of performing numerous tasks through simple prompts, could truly
match and replace human expertise and creativity, thereby disrupting
traditional job markets.
Amidst these persistent concerns, a recent trend that has gained significant
traction is the "Ghibli trend." While it appears fascinating, it also raises
important concerns regarding data privacy and AI-generated content. The trend
involves users uploading their photos to various AI platforms, one of the most
used being ChatGPT. The process is simple: users provide an image along with a
prompt such as "turn this picture into Ghibli art," after which the AI applies
the artistic effect and transforms the picture. This style of digital art
produces images with a gentle, hand-drawn appearance, enhanced by vivid colors,
deep natural shades, and warm illumination, delivering a surreal result.
Despite its visual appeal, however, the AI often fails to deliver accurate
artwork, and in many cases the output has proven imperfect. There have been
multiple instances of significant errors, such as adding animals, generating
extra limbs (an unexpected third hand, for instance), or incorporating
additional human figures without any context. These inconsistencies highlight
the limitations of AI in achieving flawless artistic rendering.
While some may dismiss this as a harmless social media trend, the real issue
lies in the widespread lack of awareness regarding data privacy. Not only are
everyday users engaging with the trend without hesitation, but even
high-profile influencers and celebrities are actively participating, further
normalizing the practice. Prominent figures such as Amitabh Bachchan, Katrina
Kaif, Vicky Kaushal, Bipasha Basu, and Rakul Preet Singh, as well as cricketer
Sachin Tendulkar, have shared their Ghibli-style images. Even notable
politicians, including Prime Minister Narendra Modi, Congress MP Shashi Tharoor,
and Maharashtra Chief Minister Devendra Fadnavis, joined in on the trend.
The voluminous data being processed has reportedly overheated ChatGPT's servers
and hardware, degrading performance and limiting users to three Ghibli-style
transformations per day. The trend underscores the growing concerns surrounding
AI's handling of personal data. While the artistic results may seem captivating,
it is crucial to recognize the broader implications of continuously feeding
personal information into AI-driven platforms without understanding the risks
this exposes us to.
Artificial Intelligence (AI) has proven to be an invaluable tool across various
sectors, including research, healthcare, and entertainment, and as AI continues
to evolve, its output generation has undergone continuous refinement. However,
this rapid advancement raises critical concerns regarding data privacy,
security, and regulatory compliance, which remain unaddressed; the challenge
lies in the lack of transparency regarding how AI platforms collect, store, and
utilize personal data. Several fundamental questions demand urgent legal
scrutiny: Where is our data stored, and for how long? On which servers is it
retained? For what purposes is it processed, and with whom is it shared? Does
AI-based data processing comply with jurisdictional and statutory privacy laws?
The Jurisdictional Dilemma and Data Protection Laws
The ongoing legal battle before the Delhi High Court in Asian News International
(ANI) vs. OpenAI has emerged as India's first case addressing copyright concerns
in artificial intelligence. The dispute arose when ANI discovered that ChatGPT
was allegedly generating responses containing content from its proprietary news
articles published online. During the proceedings, Senior Advocate Amit Sibal,
representing OpenAI, argued that the company "cannot be accused of copyright
violations in India."
He further contended that since the training of OpenAI's large language model (LLM)
occurs outside India and its training data is not stored within the country, the
Indian Copyright Act does not extend to it. This case raises crucial questions
about the jurisdictional reach of India's copyright laws, the regulation of
AI-generated content, and the protection of intellectual property in the digital
age. The outcome of this litigation could set a significant precedent for AI
governance and copyright enforcement in India.
Currently, India lacks dedicated AI-specific regulations, leaving significant
gaps in oversight and compliance. The Digital Personal Data Protection Act,
2023 provides general data protection guidelines, mandating that organizations
processing personal data adhere to stringent requirements concerning consent,
data retention, and cross-border data transfers. It does not, however,
explicitly regulate AI-based data processing, raising concerns about AI's
future coexistence with humans and the risks of privacy breaches.
As of now, ChatGPT and similar AI platforms do not provide an India-specific
privacy policy, nor do they clarify the jurisdictional limitations concerning
data storage and processing outside India's geographical territory. Furthermore,
the DPDP Act includes provisions allowing data to be shared with government
entities and judicial bodies without consent under specific circumstances as
outlined in Section 3(c) of the Act.
This raises concerns about data security, accountability, and the scope of
governmental access. Given this regulatory vacuum, AI compliance is often
presumed to align with existing laws such as the Information Technology Act,
2000, and the DPDP Act, 2023, yet the specifics of AI governance remain
undiscussed and undefined.
A striking example of the intersection between AI, data privacy, and public
participation is the Ghibli trend. The primary use of this AI-generated content
is for social media posts, stories, and reels. However, this raises significant
concerns about data privacy and ownership. Under Section 3(c) of the DPDP Act, a
Data Fiduciary (in this case, an AI platform such as ChatGPT) cannot be held
liable when a Data Principal (the user) willingly makes their data public
through vlogs, posts, or similar content. Consequently, users who share such
content are effectively waiving their rights over their personal data, making
it available for AI training, facial recognition, and potential third-party
usage, and the Data Fiduciary cannot be held responsible for its misuse.
One of the most alarming issues emerging from AI's unchecked expansion is the
misuse of deepfake technology. Recently, several actors and public figures fell
victim to the unauthorized circulation of deepfake videos portraying them
engaging in explicit activities, commonly referred to as "deepfake
pornography." Its impact is particularly disturbing, as it threatens an
individual's dignity, privacy, and professional reputation.
Although there has been widespread outrage over such incidents, the brief
attention span characteristic of digital audiences means these concerns are
swiftly displaced by emerging trends such as the Ghibli trend, which continues
to exploit AI-driven visual alterations without adequate safeguards.
Legal frameworks such as the Information Technology (Intermediary Guidelines and
Digital Media Ethics Code) Rules, 2021, under the IT Act, 2000, empower
authorities to act against the dissemination of obscene or non-consensual
AI-generated content.
However, enforcement remains a challenge due to the ease with which such content
spreads across social media platforms and encrypted messaging services.
Furthermore, India currently lacks explicit legislation to penalize deepfake
creation and distribution, leaving victims with limited legal recourse.
Potential Legal Ramifications
In an age where privacy is the new currency, the Ghibli trend may fade, but the
digital footprint we leave behind can last forever. Given the increasing
reliance on AI-driven tools, it is imperative for policymakers to establish
clear regulatory frameworks ensuring that AI platforms comply with national and
international privacy laws.
The following significant issues warrant immediate legal intervention:
Cross-Border Data Transfers – AI platforms must be mandated to disclose whether
user data is stored on offshore servers and the data protection measures
applicable in such jurisdictions.
User Consent & Transparency – AI platforms should implement explicit opt-in
policies that allow users to control how their data is processed and shared.
Jurisdictional Compliance – In the absence of India-specific privacy policies,
AI platforms must be held accountable for adhering to the DPDP Act and other
applicable laws.
Grievance Redressal Mechanisms – Clear provisions for user recourse in cases of
data breaches, unauthorized data sharing, or privacy violations must be
established.
AI Governance and Ethical Oversight – Independent regulatory bodies should
oversee the ethical deployment of AI to prevent mass surveillance and unlawful
data exploitation.
Conclusion
The unchecked expansion of AI and its growing integration into daily digital
interactions necessitate a robust legal framework to address privacy risks and
jurisdictional complexities. While AI presents unparalleled opportunities for
innovation, its compliance with privacy laws must be non-negotiable. Regulatory
bodies, legal practitioners, and policymakers must work towards comprehensive AI
governance to ensure that user data remains secure, private, and protected
against unauthorized exploitation.
Given the emerging complexities in AI-driven data processing, a legislative
approach that aligns with global best practices is the need of the hour. Until
such measures are implemented, users must exercise caution while interacting
with AI platforms, ensuring they do not unknowingly compromise their personal
data rights.