Navigating the EU AI Act’s Risk-Based Approach
The EU AI Act adopts a risk-based approach to regulating artificial intelligence, recognizing that different AI systems pose fundamentally different levels of potential harm. Understanding these risk categories is crucial for businesses operating within the European market. This post will break down the four key risk categories outlined in the EU AI Act: prohibited, high-risk, limited-risk, and low-risk.
Prohibited AI Systems: What’s Off-Limits
Prohibited AI systems are those deemed to pose an unacceptable risk to natural persons’ health, safety, or fundamental rights. These systems may not be placed on the market or put into service within the European Union. Examples of prohibited systems include AI models that:
Deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques (EU AIA, Article 5(1)(a))
Perform social scoring, i.e. evaluate or classify natural persons or groups of persons over a certain period of time based on their social behaviour (EU AIA, Article 5(1)(c))
Use biometric categorisation systems that individually categorise natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation (EU AIA, Article 5(1)(g)).
Exceptions are made for law enforcement purposes in specific, justified cases, such as searching for missing persons or preventing terrorist attacks. However, for the most part, the EU AI Act provides little legislative guidance on prohibited AI systems, as they are simply banned from the European market.
High-Risk AI Systems: Strict Regulations and Conformity Assessments
High-risk systems are defined as AI models that (i) have a direct impact on natural persons and (ii) create a risk to the health, safety, or fundamental rights of natural persons. The EU AI Act defines eight distinct categories of high-risk AI systems (see Annex III of the EU AIA). The use cases most relevant to financial services include:
AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud.
AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.
The vast majority of the regulations within the EU AI Act target high-risk AI systems: unlike prohibited systems, they may be placed on the European market, yet they directly affect natural persons through their decision-making. These systems are subject to far-reaching regulatory requirements spanning accuracy, explainability, fairness, robustness, and cybersecurity.
High-risk AI systems must undergo a complete conformity assessment procedure and fully comply with the requirements laid down in the EU AI Act (see EU AIA, Article 8(1)).
Limited-Risk Systems: Transparency and Disclosure
Limited-risk systems are defined as systems that “interact directly with natural persons” (EU AIA, Article 50(1)), such as models, including general-purpose AI systems, “generating synthetic audio, image, video or text content” (EU AIA, Article 50(1)). These systems include AI models such as chatbots or synthetic image or audio generators which directly respond to queries from end-users.
Deployers of limited-risk systems are subject to transparency obligations: they must visibly disclose to end-users that they are interacting with an AI system and that the content they receive has been artificially generated or manipulated. Providers must also ensure that their technical solutions are effective, interoperable, robust, and reliable (EU AIA, Article 50(2)). Compared with high-risk systems, these requirements constitute only a small fraction of the regulatory burden.
Low-Risk Systems: Minimal Regulation
Low-risk systems are defined as AI models that (i) are not covered by any of the definitions of prohibited, high-risk, or limited-risk systems above, or (ii) are used exclusively for private, non-commercial purposes and do not interact with or affect natural persons within the European market. The EU AI Act does not stipulate any specific regulations for these systems.
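Because the four tiers are checked in order of severity, with low-risk as the residual category, the triage described above can be sketched as a simple ordered decision function. This is an illustrative sketch only: the function name, boolean inputs, and simplified criteria are assumptions for clarity, not the Act’s actual legal tests, which turn on detailed definitions in Article 5, Annex III, and Article 50.

```python
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    LOW = "low-risk"


def classify_ai_system(
    uses_prohibited_practice: bool,  # e.g. social scoring (Article 5)
    is_annex_iii_use_case: bool,     # e.g. credit scoring (Annex III)
    interacts_with_persons: bool,    # e.g. a chatbot (Article 50)
) -> RiskTier:
    """Illustrative triage: check the strictest category first;
    low-risk is the residual tier when nothing else applies."""
    if uses_prohibited_practice:
        return RiskTier.PROHIBITED
    if is_annex_iii_use_case:
        return RiskTier.HIGH
    if interacts_with_persons:
        return RiskTier.LIMITED
    return RiskTier.LOW


# A credit-scoring model (an Annex III use case) lands in the high-risk tier,
# even though it also interacts with natural persons:
print(classify_ai_system(False, True, True).value)  # prints "high-risk"
```

The ordering matters: a system that both interacts with end-users and falls under an Annex III use case is regulated as high-risk, because the stricter category takes precedence.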
Conclusion: Preparing for the EU AI Act’s Risk-Based Framework
Understanding the risk categories outlined in the EU AI Act is essential for businesses developing and deploying AI within the European Union. By categorizing AI systems based on their potential for harm, the Act aims to ensure that AI is developed and used responsibly and ethically. Staying informed and proactively addressing compliance requirements will be crucial for navigating the evolving regulatory landscape of AI.