Last week, MEPs and the Council of the European Union reached a historic agreement on proposed legislation for artificial intelligence (AI) regulation in the European Union. Although the deal is provisional at this stage, once the legislation is enacted next year it will become the world's first comprehensive set of AI rules, and this article discusses the expected regulatory landscape. Organisations that develop or use AI systems in the European Union, and organisations elsewhere whose AI systems are placed on the EU market, may be affected by the legislation, making this article of particular interest to them.
AI systems will be categorised as unacceptable, high, limited, or minimal risk, based on the potential harm they pose to health, safety, fundamental rights, the environment, democracy, and/or the rule of law. They will then be regulated according to their categorisation. The proposed legislation will also introduce additional rules for general-purpose artificial intelligence systems (GPAIs).
"The EU is the first in the world to set in place robust regulation on AI, guiding its development and evolution in a human-centric direction. The AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union and offers strong safeguards for our citizens and our democracies against any abuses of technology by public authorities. It protects our SMEs, strengthens our capacity to innovate and lead in the field of AI, and protects vulnerable sectors of our economy. The European Union has made impressive contributions to the world; the AI Act is another one that will significantly impact our digital future.” - Dragos Tudorache, Member of the European Parliament
Unacceptable risk
All AI systems which present an unacceptable risk – for example, because they pose a clear threat to people's safety, livelihoods, and/or rights, or because they contravene the European Union's values – will be banned. Legislators agreed to prohibit:
- biometric categorisation systems that use sensitive characteristics (e.g., political, religious, philosophical beliefs, sexual orientation, race);
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- emotion recognition in the workplace and educational institutions;
- social scoring based on social behaviour or personal characteristics;
- AI systems that manipulate human behaviour to circumvent people's free will; and
- AI used to exploit people's vulnerabilities (for example, due to their age, disability, or social or economic situation).
High risk
High-risk systems, such as AI technology used in safety components of products, essential public and private services, or critical infrastructure, will become subject to stringent rules. These rules aim to ensure that these systems are robust, accurate, and secure. They will require detailed technical documentation, proper data governance mechanisms, and human oversight.
Systems will be evaluated for conformity with the AI Act, and assessments of their impact on fundamental rights will be required. Systems must be registered in a European Union database, carry the CE mark, and utilise high-quality training, validation, and testing data to minimise discriminatory outcomes. Users will have the right to file complaints about AI systems and to receive explanations of decisions taken by an AI system that affect their rights.
Limited risk
Systems with lower risk levels, like chatbots, will face fewer and less demanding requirements. Key obligations relate to transparency: for instance, unless it is obvious from the context, users must be informed that they are interacting with an AI system, and synthetic content must be labelled as having been generated artificially.
Minimal risk
Systems which do not fall into one of the three categories above – for example, AI-enabled video games or spam filters – will be classified as "minimal risk" and the free use of these systems will be permitted, although compliance with voluntary codes of conduct will be encouraged. The European Commission has stated that the "vast majority of AI systems currently used in the EU fall into this category".
GPAIs
GPAI systems will face two levels of regulation:
- The lower tier will require providers to supply technical documentation, comply with European Union copyright law, and share detailed summaries of the content used to train the system.
- The upper tier will apply to GPAIs posing "systemic risk". These systems will be subject to a stricter set of rules, and obligations will include conducting model assessments, assessing and mitigating systemic risks, performing adversarial testing, reporting serious incidents to the European Commission, ensuring system cybersecurity, and reporting on energy efficiency. Until harmonised European Union standards are published, GPAIs with systemic risk must adhere to codes of practice, which will be developed jointly by various stakeholders and the European Commission.
Penalties
Failure to comply with the rules may result in fines that vary with the type of breach, capped at the higher of a set figure or a percentage of yearly global group turnover (a worked example follows the list below). For instance:
- supplying incorrect information may incur a fine of up to €7.5 million or 1.5% of turnover;
- violating obligations in the AI Act could lead to fines of up to €15 million or 3% of turnover; and/or
- breaching the banned AI list might result in fines of up to €35 million or 7% of turnover.
Small and medium-sized businesses will face lower fines.
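To illustrate how the "higher of" cap operates, the short sketch below applies it to a hypothetical group with €1 billion in yearly global turnover. The caps and percentages are taken from the figures above, but the function and the turnover figure are illustrative assumptions, not part of the legislation.

```python
# Illustrative sketch only: the caps and percentages come from the provisional
# agreement as reported above; this function and the example turnover figure
# are hypothetical, for demonstration purposes.

def fine_cap(fixed_cap_eur: int, turnover_share: float, turnover_eur: int) -> float:
    """Return the maximum fine: the higher of a set figure or a
    proportion of yearly global group turnover."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)

turnover = 1_000_000_000  # hypothetical yearly global group turnover (EUR)

print(fine_cap(7_500_000, 0.015, turnover))   # incorrect information: 15,000,000.0
print(fine_cap(15_000_000, 0.03, turnover))   # breach of AI Act obligations: 30,000,000.0
print(fine_cap(35_000_000, 0.07, turnover))   # breach of the banned AI list: 70,000,000.0
```

For a group of this size, the turnover-based limb exceeds the set figure in each case, so the percentage cap would apply; for smaller groups, the set figure would govern instead.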
Next steps
The finalised text of the AI Act is expected to be released next year, followed by a phased two-year implementation period beginning in the summer. Look out for more information from AG next year, but if you have any queries in the meantime, please do not hesitate to contact one of our specialists.