
The EU AI Act has been published!


What’s happening?

It has been a long time coming but the EU AI Act will finally come into force on 1 August 2024. However, there is no immediate impact as the rules will only start to apply on a phased basis:

  • Certain prohibited AI systems, which are considered to present an unacceptable risk, will be banned from 2 February 2025.
  • Provider obligations relating to general-purpose AI models (GPAI), the confidentiality obligations and rules on penalties will apply from 2 August 2025. GPAI models include what are more commonly known as generative AI or foundation models.
  • The majority of the provisions will start to apply from 2 August 2026.

Operators of existing AI systems (excluding those banned from 2 February 2025) may have longer to comply in some limited circumstances pursuant to the Act’s transitional rules.

All that said, operators of AI systems will be encouraged to voluntarily adopt the Act’s requirements before the two-year deadline.

The Act sets out a comprehensive legal framework for the development and use of AI. It aims to foster trustworthy AI and to position Europe as a leader in AI regulation.

The AI Act does not define the term ‘AI’ but rather defines an ‘AI system’ (of which an ‘AI model’ is an essential component) as, “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

AI systems are categorised by reference to four levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk. Each category has corresponding compliance requirements to ensure that the level of oversight is appropriate to the risk level. Compliance obligations also vary depending upon an operator’s role in the AI value chain – provider, deployer, importer, distributor, product manufacturer or authorised representative.

The majority of obligations apply to providers and deployers of designated high-risk AI systems that could significantly affect people’s safety or fundamental rights – including systems used in critical infrastructure and systems used to determine access to education or employment.

In contrast to the EU, there is still no real indication that the UK intends to introduce comprehensive legislation like the EU AI Act to govern AI.

Why is this important?

The EU AI Act has extraterritorial effect (similar to the GDPR) and so it can apply to providers, importers, distributors and deployers of AI systems established or located outside the EU where the system is supplied to, or used in, the EU or where the output of the system is used inside the EU.

Sanctions for non-compliance with the Act are (like the GDPR) significant, with maximum fines ranging from €7.5 million to €35 million, or from 1.5% to 7% of the organisation’s global annual turnover, depending on the infringement.

What should you do?

Organisations involved in the development, supply or use of AI systems should assess the extent to which those activities fall within the scope of the EU AI Act so they can begin working towards compliance.
