
The AI Act: Shaping Responsible AI Practices in the European Union

On 13th March 2024, the European Parliament marked a historic moment with the approval of the Artificial Intelligence (“AI”) Act, a ground-breaking law governing the use of AI in the European Union (“EU”). This landmark legislation, the first of its kind globally, aims to provide robust protection for the rights of EU residents against the potential harms of advanced machine learning and AI systems. Members of the European Parliament (“MEPs”) approved the AI Act with 523 votes in favour, 46 against and 49 abstentions, the culmination of efforts initiated in April 2021 when the EU Commission first introduced its proposal.

 

Who is affected by the AI Act?

 

AI developers and providers


The AI Act imposes obligations on companies and organisations involved in developing or providing AI systems within the EU. This includes organisations located within the EU that develop AI systems; importers, distributors and manufacturers of AI systems in the EU; and providers placing AI systems on the EU market. Additionally, the AI Act extends its jurisdiction to providers, whether inside or outside the EU, whose AI systems’ output is intended for use within the EU.

 

Users and operators of AI systems


Companies and organisations using or operating AI systems within the EU are also subject to the provisions of the AI Act and must comply with it to ensure the ethical and responsible use of AI systems.

 

Regulators and supervisory authorities


National authorities within the EU are entrusted with enforcing the AI Act and ensuring compliance with its provisions. These regulatory bodies play a crucial role in overseeing the implementation of the AI Act and monitoring AI systems used within their respective countries. Additionally, the European AI Board will provide guidance and support to national authorities.

 

Consumers and citizens


The AI Act aims to protect the rights and interests of consumers and citizens within the EU who interact with AI systems. This includes ensuring that AI systems are transparent and that users are informed about the use of AI in their interactions with organisations.

 

How does the AI Act classify risks in AI systems?

 

The AI Act categorises AI systems into different bands of risk based on their intended use.


Unacceptable risk


The first category is unacceptable risk, which covers practices that are strictly prohibited. These include AI systems that deploy subliminal, manipulative or deceptive techniques to distort behaviour and impair decision-making, as well as those that exploit vulnerabilities related to age, disability or socio-economic circumstances, causing significant harm. Biometric categorisation systems inferring sensitive attributes such as race or political opinions are also prohibited, except in specific circumstances such as certain law enforcement uses. Likewise prohibited, subject to narrow statutory exceptions, are social scoring, assessing the risk of criminal offences based solely on profiling, compiling facial recognition databases through untargeted scraping, emotion inference in the workplace and real-time remote biometric identification in public spaces for law enforcement.

 

High-risk


The second category is high-risk, where the AI Act imposes stringent rules on AI systems deemed to carry elevated risk levels. High-risk AI systems are subject to a detailed certification regime but are not deemed so fundamentally objectionable that they should be banned. Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g., healthcare and banking), certain systems in law enforcement, migration and border management, and justice and democratic processes (e.g., influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have the right to submit complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights.

 

Limited and minimal risk


The final categories are limited-risk and minimal-risk AI systems. These categories include AI systems that, while not classified as unacceptable or high-risk, still present limited risks to the health, safety or fundamental rights of individuals. Despite their lower risk level, these AI systems are subject to transparency obligations. For instance, providers must disclose when a system is AI-driven or when content has been AI-generated, enabling users to make informed decisions about their use.
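For organisations taking stock of their AI systems against these categories, the four-tier scheme can be pictured as a simple data structure. Below is a minimal sketch in Python; the tier descriptions and the example systems in the mapping are illustrative assumptions drawn loosely from the summaries above, not a legal classification tool.

from enum import Enum

class RiskTier(Enum):
    # The AI Act's four risk bands, as summarised above.
    UNACCEPTABLE = "prohibited outright"
    HIGH = "certification regime, logging, transparency, human oversight"
    LIMITED = "transparency obligations (disclose AI involvement)"
    MINIMAL = "no additional obligations"

# Hypothetical examples only; a real assessment turns on the system's
# intended use under the Act and its annexes.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")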


How does the AI Act regulate General Purpose AI (“GPAI”) systems?

 

The AI Act also addresses the regulation of GPAI systems, and the GPAI models they are based on, which are capable of being used for a variety of purposes. A GPAI model is an AI model, including one trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of how the model is placed on the market, and that can be integrated into a variety of downstream systems or applications.

 

All providers of GPAI models must provide technical documentation and instructions for use, comply with the Copyright Directive and publish a summary of the content used for training. Providers of free and open-licence GPAI models need only comply with the Copyright Directive and publish a sufficiently detailed training data summary, unless their models present a systemic risk. Providers of GPAI models deemed to present a systemic risk must also conduct model evaluations and adversarial testing, track, document and report incidents, and ensure adequate levels of cybersecurity protection.
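The tiered obligations described above follow a simple decision structure: a baseline for all providers, a reduced set for free and open-licence models, and an extended set where a model presents a systemic risk. The sketch below expresses that structure in Python; the function name, flags and obligation labels are illustrative assumptions rather than the Act's own terminology.

def gpai_obligations(open_licence: bool, systemic_risk: bool) -> list[str]:
    # Illustrative mapping of the GPAI obligation tiers described above.
    baseline = [
        "comply with the Copyright Directive",
        "publish a summary of the content used for training",
    ]
    # Free and open-licence models attract only the baseline --
    # unless they present a systemic risk.
    if open_licence and not systemic_risk:
        return baseline
    obligations = [
        "provide technical documentation",
        "provide instructions for use",
    ] + baseline
    if systemic_risk:
        obligations += [
            "conduct model evaluations and adversarial testing",
            "track, document and report incidents",
            "ensure adequate levels of cybersecurity protection",
        ]
    return obligations

print(gpai_obligations(open_licence=True, systemic_risk=False))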


When will the AI Act come into effect?

 

The AI Act is still subject to a final lawyer-linguist check and formal endorsement by the Council of the European Union. Following its publication in the Official Journal, the AI Act will enter into force 20 days later. It will be fully applicable 24 months after its entry into force, with some exceptions (a short sketch after the list below shows how these milestone dates can be computed):

 

Prohibitions on certain practices will take effect 6 months after entry into force.

Codes of practice will be applicable 9 months after entry into force.

Rules governing general-purpose AI, including governance, will take effect 12 months after entry into force.

Obligations for high-risk systems will be enforced 36 months after entry into force.
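For readers mapping out compliance timelines, the sketch below computes these milestone dates in Python from the 20-day entry-into-force rule. The publication date used is a hypothetical placeholder (the actual date was not known at the time of writing), and real legal deadlines may differ slightly depending on how periods are counted under EU rules.

from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    # Return the same day-of-month `months` later, clamped to month end.
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    last_day = [31, 29 if leap else 28, 31, 30, 31, 30,
                31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last_day))

publication = date(2024, 7, 12)  # hypothetical Official Journal date
entry_into_force = publication + timedelta(days=20)

milestones = {
    "Prohibitions on certain practices": add_months(entry_into_force, 6),
    "Codes of practice": add_months(entry_into_force, 9),
    "General-purpose AI rules and governance": add_months(entry_into_force, 12),
    "Full applicability": add_months(entry_into_force, 24),
    "Obligations for high-risk systems": add_months(entry_into_force, 36),
}

for label, deadline in milestones.items():
    print(f"{label}: {deadline:%d %B %Y}")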

 

Aria Grace Law CIC

 

At Aria Grace Law CIC, we offer comprehensive legal services to support organisations in preparing for the implementation of the AI Act. Our experienced data privacy team provides strategic guidance on understanding the implications of the forthcoming AI Act and on developing proactive compliance strategies. Please feel free to get in touch with our team at privacy@aria-grace.com if you have any questions.

 

Article by Puja Modha (Partner) and Sarah Davies (Trainee Solicitor) – 11 April 2024
