AI should be human-centric; the ultimate goal of its development ought to be centred on people, coupled with the thought that AI is pecuniarily rewarding. With this hopeful idea (and assumption), governments around the world are even allowing AI markets in their countries for their citizens. The ultimate reason for any country to allow an AI market is its own economic and societal benefit. Almost without debate, nearly everyone agrees that the legislature should step in and make laws to regulate AI, because we generally think that anything done under the roof of the law is beneficial (until you know a lawyer), and ultimately we feel safe with that.
To that end, the European Union (EU) brought the EU AI Act into force on August 1, 2024; before that, the European Parliament (EP) had already taken a considerable amount of action. For instance, in October 2020 the EP adopted a number of resolutions related to AI covering areas such as ethics, liability, and copyright. In 2021, further resolutions were adopted on criminal matters, education, culture, and the audiovisual sector. These resolutions addressed the development, deployment, and use of AI.
The law is tailored around a risk-based approach and built on four specific objectives:
- Ensure that AI systems placed on the market are safe and respect existing law on fundamental rights and the values of the EU.
- Foster investment and innovation in AI through clearly laid-down legal principles.
- Enhance governance and the effective enforcement of existing law on fundamental rights and of safety requirements applicable to AI systems.
- Prevent fragmentation of the AI market and facilitate the development of trustworthy, lawful, and safe AI applications.
According to the explanatory memorandum of the AI Act, legal intervention is made where there is a justified cause for concern or where such concern can reasonably be anticipated in the near future. The risk-based regulatory approach is well defined and does not corrode trade by imposing unnecessary restrictions. Throughout, the EU AI Act accentuates the problems faced in both the development of an AI system and the use of that same system.
The proposal defines common mandatory requirements applicable to the design and development of AI. The primary objective of the Act is to ensure the stability of the internal market of the (European) Union. It is not just the Union that is concerned about the risks of AI, present or future; the member states are also developing their own domestic laws and regulations to protect their citizens, which is a good thing. But there is another side to this: with that many laws, legal uncertainty will arise and slow the pace of AI uptake in the market. It will also fragment the EU internal market with respect to essential elements of a particular AI system. This legal bewilderment will certainly affect both users and providers of AI systems.
Turning to the Act itself, it lays down a definition of AI and makes sure that the definition remains adaptable to possible developments in the near future, as AI is a fast-developing field. The participants in the AI market chain are also well defined, at both public and private level. Following that, the Act establishes a list of prohibited AI practices, which follows a risk-based approach. This risk-based approach divides AI systems along a spectrum from unacceptable risk to minimal risk. The law is well established on 'unacceptable-risk AI models', but it is more concerned with 'high-risk AI models', so much so that TITLE III is dedicated specifically to them; it is as if the Union wanted to make this topic especially prominent.
I think the (European) Union assumed that 'unacceptable-risk AI' lies far in the future and that the focus should be on what is currently present or imminent (and there is nothing wrong with that). For this to work, AI providers and the legislature have to develop a significant amount of trust in each other. Above all, the Act will depend largely on its implementation, since Titles VI, VII, and VIII deal with the governance and implementation part.
Title VI establishes a governance system at Union and national level: at Union level, the proposal establishes a 'European Artificial Intelligence Board', while at national level, member states have to designate one or more national competent authorities. Title VII and Title VIII cover the monitoring work and the reporting obligations of AI providers, respectively.
Title I: General Provisions
Title I consists of four articles (Article 4 concerns amendments to the annexes).
- Article 1: Lays down the purpose of the law, focusing on AI, and broadly states the topics considered while framing it.
- Article 2: Covers the scope of the law. The Regulation does not apply to AI systems developed or used exclusively for military purposes.
- Article 3: Defines key terms. Definitions are crucial for clarity in regulations.
What Is AI (according to the Act)
AI is defined as software developed using one or more of the following techniques and approaches:
- Machine learning approaches: Includes supervised, unsupervised, and reinforcement learning, with methods such as deep learning.
- Logic- and knowledge-based approaches: Includes knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, symbolic reasoning, and expert systems.
- Statistical approaches: Includes Bayesian estimation, search, and optimization methods.
AI operates on the basis of human-defined objectives and generates outputs or decisions that influence the environment it interacts with.
Key Entities
- Provider: A natural or legal person, public authority, agency, or body that develops AI systems or markets them under their name or trademark.
- Importer: Acts as a connector between the Union and external entities; an importer is a natural or legal person established in the Union who places AI systems on the Union market.
- Distributor: A person in the supply chain, distinct from the provider and the importer, who makes AI available on the market without altering its properties.
Title II: Prohibited Artificial Intelligence Practices
Title II contains only one article:
Article 5: Prohibited AI Activities
- AI systems that cause a person to harm themselves or others (physically or psychologically), or that exploit vulnerable groups.
- AI deployed by public authorities to assess social behavior or predict personal characteristics, leading to unjustified unfavorable treatment.
- Real-time remote biometric systems in public spaces, except in the following cases:
- (a) Identifying potential crime victims, including missing children.
- (b) Preventing specific or imminent threats to life, physical safety, or terrorist attacks.
- (c) Detecting perpetrators or suspects of criminal offenses punishable by at least three years of custodial sentence or detention.
The use of real-time remote biometric identification systems in publicly accessible spaces shall be subject to prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State.
Title III: High-Risk AI Systems
Classification of AI Systems as High-Risk
Chapter 1 contains two articles, which focus on defining high-risk AI systems. Eight areas of AI use are listed as high risk, covering biometric identification; migration, asylum, and border control; law enforcement; access to private and public services; employment and worker management; education; management and operation of critical infrastructure; and the administration of justice. These are AI systems that pose a risk to health and safety or may adversely impact fundamental rights.
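To make the tiering concrete, here is a minimal sketch in Python of how the eight high-risk areas summarised above could be checked for a given system. The labels and the function are my own illustrative shorthand; the Act prescribes legal categories, not code.

```python
# Illustrative sketch only: the labels below paraphrase the eight high-risk
# areas summarised above; they are not the Act's official wording.
HIGH_RISK_AREAS = {
    "biometric_identification",
    "migration_asylum_border_control",
    "law_enforcement",
    "private_and_public_services",
    "employment_and_worker_management",
    "education",
    "critical_infrastructure",
    "administration_of_justice",
}

def is_high_risk(area_of_use: str) -> bool:
    """Return True if the system's area of use falls within a listed high-risk area."""
    return area_of_use in HIGH_RISK_AREAS

print(is_high_risk("law_enforcement"))  # True: listed as a high-risk area
print(is_high_risk("video_games"))      # False: outside the listed areas
```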
Requirements for High-Risk AI Systems
This chapter is dedicated to risk management for high-risk AI systems. It contains seven articles, ranging from risk management to accuracy, robustness, and cybersecurity. Risk management includes the identification, analysis, estimation, and evaluation of risks and their reduction or elimination, a process that runs through the entire lifecycle of the AI system.
Article 10 deals with data and data governance, elaborating that training, validation, and testing data sets shall be used only for the intended purpose and shall be relevant, representative, free of errors, and complete. The technical documentation for these AI systems shall be drawn up before the system is placed on the market and must demonstrate compliance. There shall also be automatic recording of events to ensure a level of traceability of the AI system.
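As a loose illustration of the Article 10 data-quality criteria (relevant, representative, free of errors, complete), one could imagine a pre-market checklist along these lines. The class and field names are hypothetical and nothing like this is mandated by the Act; it is only a sketch of the criteria as listed above.

```python
from dataclasses import dataclass

# Hypothetical checklist mirroring the Article 10 data-quality criteria
# summarised above; purely illustrative, not an official compliance tool.
@dataclass
class DatasetReview:
    relevant_to_intended_purpose: bool
    representative_of_target_population: bool
    free_of_errors: bool
    complete: bool

    def passes_article_10_checks(self) -> bool:
        # All criteria must hold before the system is placed on the market.
        return all([
            self.relevant_to_intended_purpose,
            self.representative_of_target_population,
            self.free_of_errors,
            self.complete,
        ])

review = DatasetReview(True, True, False, True)
print(review.passes_article_10_checks())  # False: the data set still contains errors
```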
Confidentiality And Penalties
National authorities must respect the confidentiality of the information and data they obtain. Nevertheless, information exchanged on a confidential basis between national authorities and the Commission shall not be disclosed without prior consultation of the originating national authority and the user. These confidentiality obligations apply without affecting the rights of the Commission, the Member States, and the notified bodies with regard to the exchange of information.
Penalties
Member States shall lay down rules on penalties, taking into account the interests of small-scale providers and start-ups and their economic viability.
Fines: non-compliance with Article 5 (prohibited AI practices) or Article 10 (data and data governance) will be subject to administrative fines of up to EUR 30 000 000 or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Non-compliance of an AI system with any other requirement or obligation under the Regulation will lead to administrative fines of up to EUR 20 000 000 or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Supplying incorrect, incomplete, or misleading information to notified bodies and national authorities will attract administrative fines of up to EUR 10 000 000 or, for companies, up to 2% of total worldwide annual turnover, whichever is higher.
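Since each fine tier is expressed as "a fixed ceiling or a percentage of total worldwide annual turnover, whichever is higher", the arithmetic can be sketched as follows. The tier labels and the function are my own illustrative shorthand; the amounts are the figures summarised above.

```python
# Fine tiers as summarised above: (fixed ceiling in EUR, share of worldwide annual turnover)
FINE_TIERS = {
    "article_5_or_10": (30_000_000, 0.06),      # prohibited practices / data governance
    "other_obligations": (20_000_000, 0.04),    # any other requirement of the Regulation
    "incorrect_information": (10_000_000, 0.02) # misleading information to authorities
}

def max_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine: fixed ceiling or turnover share, whichever is higher."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)

# Example: a company with EUR 2 billion turnover infringing Article 5
print(max_fine("article_5_or_10", 2_000_000_000))  # 120000000.0 -> 6% of turnover exceeds the EUR 30M ceiling
```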
Conclusion
The EU AI Act represents a comprehensive and forward-looking legislative
framework aimed at regulating the development, deployment, and use of artificial
intelligence within the European Union. By adopting a risk-based approach, the
Act seeks to balance the promotion of innovation and investment in AI with the
need to ensure safety, respect for fundamental rights, and ethical
considerations. The Act categorizes AI systems based on their potential risks,
from unacceptable to minimal, with a particular focus on high-risk AI systems
that could significantly impact health, safety, and fundamental rights.
Key provisions of the Act include the establishment of clear definitions for AI
and its market participants, the prohibition of certain harmful AI practices,
and the imposition of stringent requirements for high-risk AI systems. These
requirements encompass risk management, data governance, technical
documentation, and cybersecurity, ensuring that AI systems are developed and
used responsibly. Additionally, the Act emphasizes the importance of
transparency, accountability, and traceability in AI operations.
The governance structure outlined in the Act, including the creation of the
European Artificial Intelligence Board and national authorities, aims to
facilitate effective implementation and monitoring of AI systems. Penalties for
non-compliance are substantial, reflecting the seriousness with which the EU
regards the ethical and safe use of AI.
Overall, the EU AI Act is a significant step towards fostering a trustworthy AI
ecosystem within the EU. By addressing potential risks and establishing clear
legal principles, the Act aims to prevent market fragmentation, enhance consumer
confidence, and promote the responsible use of AI technologies. The success of
the Act will largely depend on its implementation and the ongoing collaboration
between AI providers, regulators, and other stakeholders to ensure that AI
development remains aligned with human-centric values and societal benefits.