On February 4, 2025, the European Commission released draft guidelines clarifying prohibited AI practices under the EU Artificial Intelligence (AI) Act. These guidelines aim to ensure the consistent and effective application of the AI Act across the European Union. While non-binding, they offer valuable insights into the Commission's interpretation of prohibited practices.
Key Prohibited AI Practices and Employer Risks
The AI Act identifies certain AI practices as posing unacceptable risks to fundamental rights and European values. Notable prohibitions include:
1. Manipulative Techniques
Prohibition: AI systems that deploy subliminal or purposefully manipulative techniques that materially distort an individual's behaviour without their awareness, leading them to take decisions they would not otherwise have made and causing, or being likely to cause, significant harm.
Example: Some AI-powered recruitment platforms claim to predict a candidate’s job suitability based on their facial expressions or voice tone during video interviews. If these systems use subliminal nudges to influence the recruiter’s perception or decision-making, they could fall foul of the AI Act.
2. Exploitation of Vulnerabilities
Prohibition: AI systems that exploit vulnerabilities of individuals or specific groups due to age, disability, or social or economic situations, materially distorting their behaviour in a manner that causes or is likely to cause significant harm.
Example: An AI-driven job-matching tool that intentionally steers lower-income applicants towards low-paying roles, based on assumptions about their socioeconomic status, would be considered exploitative under the Act. Similarly, AI screening tools that disadvantage candidates with disabilities by misinterpreting speech patterns or movement in video interviews could violate the law.
3. Social Scoring
Prohibition: AI systems that evaluate or classify individuals based on their social behaviour or predicted personal characteristics, leading to detrimental or unfavourable treatment unrelated to the original context of data collection, or treatment that is unjustified or disproportionate.
Example: If an employer uses an AI system to analyse employees’ social media activity and assigns them a risk score influencing promotions or disciplinary action, this would be a clear case of unlawful social scoring. Similarly, AI-powered tools that assess employee performance based on personal lifestyle choices, such as credit scores or location tracking outside work hours, could breach the AI Act.
4. Emotion Recognition in the Workplace
Prohibition: AI systems designed to infer emotions of individuals in workplace settings, except where intended for medical or safety purposes.
Example: Some companies deploy AI tools to monitor employees' facial expressions during meetings or track their tone of voice in customer service calls to assess engagement or stress levels. Such systems, if not strictly used for medical or safety reasons, would be prohibited under the AI Act.
Implications for Employers
Employers utilising AI systems must assess their practices to ensure compliance with the AI Act. Key considerations include auditing recruitment and monitoring tools for manipulative or exploitative features, confirming that any emotion-recognition systems are strictly limited to medical or safety purposes, and avoiding scoring or assessment practices based on personal data unrelated to job performance.
Enforcement and Penalties
The AI Act establishes a comprehensive framework for AI governance. Non-compliance can result in significant penalties, including fines up to €35 million or 7% of annual global turnover for serious breaches.
Conclusion
The European Commission's guidelines on prohibited AI practices under the AI Act underscore the EU's commitment to ethical AI deployment. Employers must proactively assess and adjust their AI systems and policies to align with these guidelines, ensuring the protection of individual rights and maintaining public trust in AI technologies. By taking these steps now, businesses can avoid potential legal risks and foster a fair and compliant AI-driven workplace.