The European AI Regulation (AI Act) enters into force on August 1, 2024. This article covers the practices involving Artificial Intelligence (AI) that will be prohibited in the EU from February 2025.

The bans may have consequences for your organization.

Why are AI practices being banned?

There are many useful applications of AI, but this technology can also be misused for manipulation, exploitation and social control. The European AI Regulation aims to ban these types of practices to protect citizens.

What AI practices are prohibited?

In short, these are the prohibited AI practices:

  • Manipulative or deceptive AI systems. While AI systems can help people make targeted and informed decisions, there is also a risk: AI techniques can steer people toward choices they would not otherwise make. Consider, for example, audio, image and video material that people cannot consciously perceive. AI systems that manipulate or deceive using subliminal techniques, and AI systems that manipulate or mislead in a targeted manner, are therefore prohibited. AI systems that exploit vulnerabilities – such as age, disability or socio-economic situation – to influence people are also prohibited.
  • AI systems for social scoring. These are systems that assess people based on their social behavior or personal characteristics. This can lead to problems such as discrimination and the exclusion of certain groups.
  • AI-based crime risk prediction. AI can be used to help predict possible criminal offences, but such predictions must be based on facts that are actually related to criminal activity. AI systems that predict criminal offences solely on the basis of profiling or an assessment of personality traits and characteristics are prohibited.
  • Biometric categorization. Systems that categorize individuals based on biometric data (such as facial images and fingerprints) to infer race, political views, trade union membership, religious or philosophical beliefs, sex life or sexual orientation are prohibited. This ban protects the privacy and fundamental rights of individuals. Exceptions only apply to the lawful labeling or filtering of lawfully obtained biometric datasets, in the field of law enforcement.
  • The use of AI systems for real-time remote biometric identification in public spaces. This is prohibited unless it is strictly necessary in specific, defined situations. Examples include searching for kidnap victims or missing persons, preventing imminent threats to people’s lives, and tracking down suspects of certain very serious crimes. Such use requires a legal basis in national legislation that lays down the conditions in detail.
  • Other prohibited practices. Other AI practices banned from February 2025 include emotion recognition in the workplace and education. The untargeted scraping of facial images from the internet or camera images (CCTV) to create or supplement databases for facial recognition is also prohibited.

More information can be found in Article 5 of the AI Regulation on the EUR-Lex website. The European Commission is yet to issue guidelines for the practical implementation of the bans.

