The EU AI Act's Prohibited AI Practices: A Closer Look

Why This Matters

As the EU's Artificial Intelligence Act (AI Act) reaches its first application milestone on 2 February 2025, it is crucial for companies operating in the EU to understand the prohibited AI practices outlined in Article 5.

Non-compliance could result in hefty fines of up to 7% of total worldwide annual turnover or EUR 35 million, whichever is higher. And although the broad exceptions mean the Act does not impose an outright ban on all of these practices, grasping the nuances of each category is essential.

Update (4 February 2025): The European Commission has published its Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act. The full documents are available here.

Overview of AI Act Article 5’s Banned Practices

Article 5 AI Act prohibits the placing on the market, the putting into service, and the use of the following types of AI-powered practices:

a. Manipulative or subliminal AI techniques with the objective of distorting human behaviour:

“an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm”

Potential applications: Recital 29 AI Act gives as examples audio, image, and video stimuli that persons cannot perceive, as well as machine-brain interfaces, which can materially distort human behaviour and affect decision-making.

Key issues to be clarified:

  • The terms “deceptive”, “beyond a person's consciousness” and “significant harm” are vague and broad, leaving room for interpretation. For instance, could personalized content algorithms leading to potentially damaging echo chambers be considered a prohibited practice under these standards?

  • Additionally, Recital 29 AI Act states that common and legitimate commercial practices, such as advertising, should not be regarded as harmful manipulative AI-enabled practices if they comply with applicable law. This raises questions about the interpretation of a legitimate advertising practice. For example, would an emotion recognition system used for advertising purposes be targeted by this prohibition?

b. Exploitative AI practices targeting a specific group of vulnerable persons:

“an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.”

Potential applications: An AI system that tracks users' browsing history to identify individuals in poverty or elderly people and exploit their weaknesses by selling them low-quality or deceptive products for economic gain.

Key issues to be clarified: The limited scope of protected characteristics (age, disability, or a specific social or economic situation) and the requirement of "significant harm" may create loopholes. For instance, would advertising gambling sites to a user identified as "low income" or "with lower-level education" based on their browsing history be considered a prohibited practice?

c. Social scoring AI systems:

“AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity”

Potential applications: An AI tool analysing individuals' driving habits to assign a social score and treat them unfavourably in accessing credit.

Key issues to be clarified: The concept of "unrelated context" will be crucial in assessing such practices. It remains to be seen whether cases like the 2021 Netherlands welfare risk-scoring algorithm, which led to unjustified accusations of fraud, would be considered such a prohibited practice.

d. Predictive policing:

“an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity”

Potential applications: The US COMPAS recidivism tool, found in 2016 to display racial bias in its assessments, is a real-life example highlighting the potential for discrimination in predictive policing AI systems.

e. Untargeted scraping of facial images from the internet or CCTV:

“AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage”

Potential applications: The real-life example of Clearview AI's large-scale scraping of facial images from the internet, which was recently sanctioned by the Dutch Data Protection Authority with a EUR 30.5 million GDPR fine.

f. Emotion recognition AI in the workplace or in educational settings, except when used for medical or safety reasons:

“AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons”

Potential applications: An AI tool analysing employees' emotions based on their facial expressions and voice tones during meetings; an AI-powered e-learning software using webcams to monitor students' emotions based on their facial expressions as they study, adapting content accordingly.

Key issues to be clarified: Despite the clarifications offered by Recitals 18 and 44, the distinction between emotion and sentiment should be clearer, so that emotion recognition systems can be correctly assessed and distinguished from the widely used sentiment analysis techniques. To remove any ambiguity, it should also be clarified that the prohibition refers to emotion recognition systems that rely on biometric data (Article 3(39) AI Act) and not to what is generally referred to as “emotional AI”.

g. Biometric categorization used to infer certain sensitive data:

“use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement;”

Potential applications: An AI tool that uses voice or image recognition systems to infer a person’s race, skin colour or political leaning.

Key issues to be clarified: It's worth noting that this prohibition doesn't fully overlap with GDPR Article 9's special categories of data. Categorizing individuals based on health or genetic data, for example, doesn't currently fall under this prohibition.

h. ‘Real-time’ remote biometric identification systems:

‘Real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes, subject to a series of exceptions and corresponding safeguards: the targeted search for victims of human trafficking or abduction; the prevention of a highly likely safety threat, such as a terrorist attack; and the localisation of a person suspected of committing a serious crime for which a prison sentence of a minimum of four years may be imposed.

Potential applications: Face recognition technology deployed at a political protest to identify individuals and compare the collected data against criminal databases, or to use it for other unjustified purposes.


Next Steps & Commission Guidance

The Commission’s AI Office held a consultation, open from 13 November to 11 December 2024, seeking input on the application of Article 5 AI Act. Official guidelines are expected in early 2025 and should clarify ambiguous terms and provide practical examples, helping both AI providers and deployers comply with the prohibitions.


How Can We Help

With the AI Act’s prohibitions taking effect on 2 February 2025, it is essential to confirm that none of your AI solutions—especially those used in sensitive areas like recruitment or employment—fall into the prohibited categories. We can guide your company through this evolving regulatory environment to ensure your AI practices comply with the law.

Here’s how we can support you:

  • Regulatory Guidance: We monitor the latest EU and Member State interpretations of the AI Act, helping you understand complex or ambiguous provisions like those set in Article 5.

  • Risk Assessment: We conduct tailored reviews of your AI systems to identify potential violations. From data collection to decision-making processes, we highlight areas that could raise compliance red flags.

  • Compliance Strategy: We work with your technical and legal teams to design or refine internal policies, documentation, and workflows, ensuring your AI solutions align with current rules as well as upcoming standards, Commission guidelines, and clarifications.

  • Ongoing Monitoring: As the regulatory landscape continues to evolve, we offer regular check-ins and updates to ensure that any new implementations or modifications in your AI systems remain compliant.

 

Maria Țucă

Experienced lawyer with a proven track record in litigation, commercial matters, and consumer protection. For the past four years, I’ve specialized in tech regulation, focusing on AI and digital services. My commitment to meticulous research and detail-oriented analysis enables me to deliver actionable, reliable legal advice that empowers organizations to thrive in a rapidly transforming digital ecosystem.

https://www.linkedin.com/in/maria-tuca/