The purpose of the legislation is to regulate AI technology. The AI Act provides a legal framework for a safe and trustworthy AI environment that protects the fundamental rights of EU citizens. The framework also aims to stimulate innovation, but in a safe manner that takes both people and the environment into account. AI can cause harm in various ways: psychological, physical, social, and economic.
Definition
The EU describes an AI system as a machine-based system that can operate with varying levels of autonomy. After deployment, such a system can adapt and learn from the input it receives in order to generate predictions, content, recommendations, or decisions that can influence the physical or virtual world.
It is important to clearly distinguish AI systems from, for example, conventional software programs or rules defined by natural persons to carry out operations automatically.
Scope of application
The law applies to:
- Deployers (users) of AI systems in the EU.
- Providers of AI systems (putting into service or making available) in the EU (regardless of whether the party is based in the EU or in a third country).
- Providers and deployers of AI systems established in a third country, where the output produced by the system is intended to be used in the EU.
- Importers and distributors who release AI systems onto the EU market.
- Product manufacturers who introduce products with AI systems onto the EU market.
The law is thus applicable to all sectors.
Compliance with the law will be monitored by national supervisory authorities, such as the data protection authority. Non-compliance can result in a fine of up to 35 million euros per infringement or up to 7% of global annual turnover, whichever is higher.
Risk categories
The AI Act is built on a risk-based model: depending on its risk profile, an AI system may be prohibited outright, or it must comply with rules and transparency obligations.
The following risk categories are considered:
Unacceptable risk
These AI systems are prohibited. This includes, for example, social scoring models, which literally assign a social score to natural persons based on their personal characteristics or social behaviour. Running a red light, for example, would lower the score, while helping other people would raise it.
Biometric categorisation systems that use sensitive characteristics such as political, religious, or philosophical beliefs, sexual orientation, or race are also forbidden, as are AI systems that manipulate human behaviour to circumvent free will or that exploit human vulnerabilities.
These systems pose a danger to the rights, safety, and/or health of natural persons. They must be phased out within six months of the Act's entry into force.
High risk
This includes, for example, AI systems that assist in recruitment and selection, such as selecting candidates for vacancies or determining bonuses for employees.
High-risk AI systems must be registered in an EU database before they are placed on the market. They must undergo an assessment, be documented, and be monitored after deployment. The extensive documentation demonstrating compliance with the regulation includes, among other things, records of programming and training methodologies, the datasets used, and the measures taken for oversight and control.
Low risk
These AI systems pose no particular danger and can therefore be developed and used without further obligations. Transparency nevertheless remains crucial.
Whether an AI system is classified as high risk depends on the system's impact on fundamental rights (the right to privacy, the right to freedom, the right to non-discrimination, etc.) and on the safety and health of natural persons. The law also provides many examples to help arrive at the correct classification.
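To make the risk-based model concrete, here is a minimal sketch, in Python, of how an organisation might triage its own systems into these three categories. It is purely illustrative: the category names follow the Act, but the boolean criteria (uses_social_scoring, affects_fundamental_rights, and so on) are hypothetical simplifications of the legal test, not a substitute for it.

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations, EU registration
    LOW = "low"                    # transparency still matters


@dataclass
class AISystemProfile:
    """Hypothetical, simplified questionnaire for a single AI system."""
    name: str
    uses_social_scoring: bool = False
    uses_sensitive_biometric_categorisation: bool = False
    manipulates_behaviour: bool = False
    affects_fundamental_rights: bool = False
    affects_safety_or_health: bool = False


def classify(profile: AISystemProfile) -> RiskCategory:
    """Illustrative triage mirroring the three categories above."""
    if (profile.uses_social_scoring
            or profile.uses_sensitive_biometric_categorisation
            or profile.manipulates_behaviour):
        return RiskCategory.UNACCEPTABLE
    if profile.affects_fundamental_rights or profile.affects_safety_or_health:
        return RiskCategory.HIGH
    return RiskCategory.LOW


# Example: a recruitment tool touches fundamental rights (non-discrimination).
cv_screener = AISystemProfile(name="CV screening assistant",
                              affects_fundamental_rights=True)
print(classify(cv_screener))  # RiskCategory.HIGH
```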
What does this mean for you?
- Once you have mapped all AI systems, you then need to determine the risk category for each. Assess the risks of the system's impact on fundamental rights (and on society) and establish measures to manage these risks (impact assessment). Document this thoroughly, review it regularly, and, most importantly, apply the measures in practice. High-risk AI systems must be registered in advance. In some cases, you will need to have the AI system assessed by an external party. A sketch of what a documented register entry could look like follows this list.
- It is important that your company takes management measures, documents everything correctly, follows up, is transparent, and can demonstrate all of this (accountability). Compliance therefore plays a crucial role for every AI system in your organisation. Monitoring each system so that you can intervene before things go wrong is essential.
- The law emphasises the importance of high-quality (and thus reliable) data to guarantee the reliability of an AI system (data governance). Cybersecurity to protect that data is also part of this.
- The law stresses once again, for anyone to whom this is not yet clear, that the protection of personal data is crucial, among other things by applying data minimisation and by anonymising or encrypting data (see the second sketch after this list).
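As mentioned in the first bullet above, a documented and regularly reviewed register of AI systems is central to demonstrating compliance. The sketch below shows one hypothetical way to structure such a register entry in Python; every field name is an assumption chosen for illustration, not a format prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AISystemRecord:
    """Hypothetical register entry; the Act prescribes what must be
    demonstrable (methodologies, datasets, oversight), not this layout."""
    name: str
    risk_category: str                       # e.g. "high"
    purpose: str
    training_methodology: str
    datasets_used: list[str] = field(default_factory=list)
    oversight_measures: list[str] = field(default_factory=list)
    registered_in_eu_database: bool = False  # required before market entry for high risk
    externally_assessed: bool = False
    last_reviewed: date | None = None

    def review_overdue(self, today: date, max_age_days: int = 365) -> bool:
        """Flag entries whose periodic review is overdue."""
        if self.last_reviewed is None:
            return True
        return (today - self.last_reviewed).days > max_age_days
```

A record like this also makes the accountability obligation in the second bullet tangible: the register itself becomes the evidence you can show a supervisory authority.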
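For the data protection points in the last two bullets, the following sketch illustrates data minimisation, pseudonymisation, and encryption at rest. The record fields and the salted-hash scheme are assumptions for illustration only; the encryption step uses the third-party cryptography package (pip install cryptography).

```python
import hashlib

from cryptography.fernet import Fernet  # pip install cryptography

# Data minimisation: keep only the fields the AI system actually needs.
applicant = {
    "name": "Jane Doe",            # direct identifier
    "email": "jane@example.com",   # direct identifier
    "years_experience": 7,
    "skills": ["python", "sql"],
}
minimised = {k: applicant[k] for k in ("years_experience", "skills")}

# Pseudonymisation (not full anonymisation): replace the identifier with a
# salted hash so records can still be linked without storing the name itself.
SALT = b"store-this-salt-secretly"  # illustrative; manage secrets properly
pseudonym = hashlib.sha256(SALT + applicant["name"].encode()).hexdigest()[:16]
minimised["subject_id"] = pseudonym

# Encryption at rest: encrypt identifiers that must be retained.
key = Fernet.generate_key()         # in practice, load from a key vault
fernet = Fernet(key)
encrypted_email = fernet.encrypt(applicant["email"].encode())

print(minimised)
print(fernet.decrypt(encrypted_email).decode())  # "jane@example.com"
```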