On 4 February 2025, the European Commission published guidelines on the Prohibited Artificial Intelligence Practices (the Guidelines).
These Guidelines supplement the European Union Artificial Intelligence Act (the Act, in force since 1 August 2024) and its prohibition of certain so-called "Prohibited Practices" under Article 5 of the Act. Although not binding, the Guidelines provide insight and clarity into how the prohibitions should be interpreted, offering explanations and examples to help organisations assess whether an AI system qualifies as prohibited under the Act. There are heavy penalties for non-compliance with the provisions on Prohibited Practices, including fines of up to EUR 35 million or 7 percent of total worldwide annual turnover for the preceding financial year (whichever is higher).
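By way of illustration only, the minimal Python sketch below shows how the "whichever is higher" cap operates in practice. The function name and the example turnover figure are hypothetical and ours alone; only the EUR 35 million floor and the 7 percent rate come from the Act as summarised above.

```python
# Illustrative sketch only -- not legal advice. Shows the penalty cap for
# Prohibited Practices: the higher of EUR 35 million or 7% of total
# worldwide annual turnover for the preceding financial year.

def max_fine_eur(preceding_year_turnover_eur: float) -> float:
    """Return the maximum administrative fine (EUR) for a Prohibited Practice."""
    fixed_cap = 35_000_000                              # EUR 35 million floor
    turnover_cap = 0.07 * preceding_year_turnover_eur   # 7% of global turnover
    return max(fixed_cap, turnover_cap)

# Hypothetical example: an undertaking with EUR 1 billion turnover faces a
# cap of EUR 70 million, since 7% exceeds the EUR 35 million floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```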
Background
On 2 February 2025, the first tranche of the Act began to apply. It includes an obligation on organisations to ensure that staff interacting with AI have a sufficient level of AI "literacy", and it prohibits the following AI practices:
- Harmful manipulation
- Harmful exploitation of vulnerable persons
- Social scoring
- Individual risk assessment and prediction of criminal offences
- Untargeted scraping to develop facial recognition databases
- Emotion recognition in workplaces and schools
- Biometric categorisation
- Real-time remote biometric identification for law enforcement (not covered in this article)
The Guidelines outline how these prohibitions apply in a practical sense and clarify which practices fall out of scope. Below is a summary of some of the key takeaways from the Guidelines in relation to the most relevant prohibitions.
1 and 2. Harmful manipulation and exploitation of vulnerable persons
The Guidelines group the first two prohibitions together: harmful manipulation and the harmful exploitation of vulnerable persons. These prohibitions cover AI systems that deploy subliminal, purposefully manipulative, or deceptive techniques with the objective or effect of materially distorting behaviour and impairing a person’s ability to make an informed decision, or that exploit vulnerabilities arising from age, disability, or a particular socio-economic situation, where this causes or is likely to cause harm.
Key insights from the Guidelines:
- A person may remain unaware of the subliminal techniques being used to distort their behaviour, or they may be aware of them but unable to control or resist their manipulative effects.
- Purposefully manipulative techniques are those designed to influence behaviour. An intent to cause harm is sufficient but not necessary for the prohibition to apply; it is enough that the technique does cause, or is likely to cause, harm.
- In a similar vein, human intent for the AI system to be manipulative or deceptive is not required; the AI system itself can deploy manipulative or deceptive techniques, even where the humans who designed or use the system did not set out to achieve this.
- As long as the effect of these techniques is to distort behaviour, the AI system will be prohibited, regardless of the intent of the AI system or the human behind it. There must, however, be a likely causal link between the subliminal, purposefully manipulative, or deceptive techniques and the behavioural distortion.
- The Guidelines provide that authorities will assess whether the AI system is likely to appreciably impair the decision-making and free choice of an ‘average’ individual within the targeted group, a test that reflects EU consumer law.
3. Social scoring
This prohibition covers the evaluation or classification of people or groups of people, based on their social behaviour or known, inferred, or predicted personality traits over a period of time, where this leads to the unfavourable treatment of those people.
Key insights from the Guidelines:
- The term "evaluation" also relates to the concept of "profiling"; profiling of people under the EU data protection law may be prohibited if conducted through AI systems.
- The data used must span a certain period of time for the AI system to be prohibited; an evaluation based on a data point taken at a single moment in time will not be caught.
- There must be a causal link between the social score and the unfavourable treatment.
4. Individual risk assessment and prediction of criminal offences
The Act prohibits AI systems that assess or predict the risk of a person committing a criminal offence based solely on profiling or on an assessment of their personality traits and characteristics, unless the system is used to support a human assessment of a person’s involvement in criminal activity that is already based on objective facts directly linked to that activity.
Key insights from the Guidelines:
- Examples of personality traits and characteristics in this prohibition include a person’s nationality, place of birth, place of residence, number of children, level of debt, or type of car.
- Where factors other than profiling, personality traits, or characteristics are added to a risk assessment in order to circumvent the prohibition, those factors must be real, substantial, and meaningful.
- The prohibition may also apply where private actors are assessing or predicting the risk of a person committing a crime for legal compliance purposes such as anti-money laundering.
5. Untargeted scraping of facial images
The untargeted scraping of facial images from the internet or from CCTV footage to create or expand facial recognition databases is prohibited under the Act.
Key insights from the Guidelines:
- A ‘database’ in the context of the Act should be understood to refer to any collection of data that is organised for rapid search and retrieval by a computer.
- The sole purpose of the database does not have to be for facial recognition.
- ‘Scraping’ means the use of web crawlers, bots, or other means to extract data from different sources.
- ‘Untargeted’ means the AI tool absorbs as much data as possible without targeting specific individuals or groups.
- If the scraping targets specific individuals or a pre-defined group, this will not be prohibited.
- The prohibition does not apply to:
- The untargeted scraping of biometric data other than facial images.
- AI systems which obtain large amounts of facial images in order to generate new images of fictional people.
6. Emotion recognition in workplaces and schools
This prohibition covers the placing on the market, the putting into service, or the use of AI systems that infer the emotions of a natural person in the workplace or in education institutions, unless the system is used for medical or safety reasons.
Key insights from the Guidelines:
- The prohibition covers both identifying and/or inferring emotions or intentions through biometric data.
- The Act’s definition of biometric data is broad and includes any biometric data used for emotion recognition, biometric categorisation, or other purposes.
- Under the Act’s definitions, systems that identify or infer emotions or intentions on the basis of biometric data constitute ‘emotion recognition systems’.
- Emotions or intentions do not include physical states such as pain or tiredness.
- AI systems that infer emotions or sentiments other than on the basis of biometric data will not be prohibited.
- AI systems that detect expressions, gestures, or movements, such as smiles, frowns, or movements of the body, will not be caught by the prohibition unless they are then used to identify or infer emotions or intentions.
7. Biometric categorisation
The Act also prohibits biometric categorisation systems that categorise individuals based on their biometric data in order to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
Key insights from the Guidelines:
- The categorisation process involves assigning an individual to a certain category by using their biometric data.
- Biometric data may relate not only to physical characteristics but also to DNA or behavioural aspects.
- Biometric categorisation may fall outside the scope of the prohibition where it is merely a feature intrinsically linked to another commercial service, such that it cannot be used without that principal service.
In conclusion, the Guidelines provide crucial insights into the practical application of the prohibitions outlined in the Act. By clarifying the scope and intent of these prohibitions, the Guidelines help organisations navigate the complexities of compliance, ensuring that AI systems are developed and deployed in a manner that safeguards fundamental rights. As AI technology continues to evolve, adherence to these Guidelines will be essential in preventing harmful practices and promoting ethical AI usage across various sectors.
The full Guidelines can be viewed here: Commission publishes the Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act.