The AI Act: the world's first regulation of artificial intelligence

2.3.2024
6 min read
Discover the impact of the AI Act on the evolution and regulation of AI in Europe. This article covers the adoption of the text, the concrete measures it contains for AI providers, and the regulatory transition measures. An essential guide to understanding the current and future challenges of AI in the EU.

The foundations and evolution of Artificial Intelligence

The foundations of artificial intelligence were laid several decades ago, notably with Alan Turing's machine. The arrival of Big Data, the increase in computing power and the shift to an inductive approach (machine learning and deep learning) have set the conditions for a new phase in the development of artificial intelligence.

The applications of AI are numerous and increasingly important for the economy and society. Although the vast majority of AI-related technologies in Europe today present minimal or non-existent risk to individuals, their rapid evolution means that new regulations need to be put in place.

AI regulation in the EU

The AI Act and its controversies

In April 2021, the European Commission became the first body in the world to issue a draft law aimed at regulating artificial intelligence.

The introduction of binding rules, particularly on foundation models, has not met with unanimous approval. During the third quarter of 2023, several European Union countries, including France, opposed any regulations for foundation model providers, preferring compulsory self-regulation measures.

For them, the original AI Act proposal would prevent European companies from developing certain innovations capable of competing on the market with American tech giants.

When was the AI Act passed?

In December 2023, a provisional political agreement was reached. To ensure a legal framework conducive to responsible innovation, this compromise proposed categorizing and regulating AI systems according to the risk they pose to users.

However, it was not until Friday, February 2, 2024 that France finally ratified the AI Act. The text was thus unanimously validated by all EU member states.

Risks and preventive measures under the AI Act

The challenges of this regulation are manifold: it must take into account the advantages of AI, but also the new risks or negative externalities it can generate.

The aim is to ensure that AI systems used in the EU:

  • are safe, transparent, traceable, non-discriminatory and environmentally friendly;
  • respect the fundamental rights and values of the Union.

In particular, the proposal included requirements (on testing, controls, risk management, etc.) designed to reduce the risk of algorithmic discrimination.

The documentary Coded Bias highlights the fact that algorithms, like humans, can have biases. For example, it shows that Amazon used an AI system in its recruitment process to ease HR's workload by pre-screening candidates. After some time, it emerged that the AI selected almost no women.

Why such discrimination? The AI is not inherently sexist; rather, its machine-learning training leads it to reproduce existing social patterns. It turns out that, at the time the algorithm was trained, very few women held positions of responsibility at Amazon.

Thus, the AI Act aims to clarify obligations relating to the development, deployment and use of AI in order to reduce the associated threats. At the same time, it will create legal conditions conducive to investment and the establishment of an innovative market.

Because AI systems evolve rapidly, the rules established by the AI Act are designed to adapt continually to technological change.

European regulation of artificial intelligence

Risk categories and associated regulations

This text attaches specific regulatory measures to an AI system according to the risk it presents to users. Four categories have been established:

  • Unacceptably risky applications will be banned. These applications include cognitive-behavioral manipulation and social scoring, among others.
  • High-risk applications, those posing a threat to safety, livelihoods or fundamental rights, will be assessed before they are put on the market and throughout their lifecycle. These applications can include areas such as transport, education, health, employment and public services. Examples include remote biometric identification, exam grading, and verification of the authenticity of travel documents.
  • Limited-risk applications will have to comply with transparency obligations so that users can make informed decisions following AI intervention. Examples of such applications include chatbots and recommendation systems.
  • Applications with minimal or no risk will not be subject to any specific regulations. Examples include video games and spam filters.

The establishment of "regulatory sandboxes" will serve two purposes: enabling users and suppliers of high-risk AI systems to prepare for compliance with the new rules, and giving EU member states a means of adjusting and approving AI Act measures ahead of the final adoption of the world's first artificial intelligence rules.

In order to meet regulatory requirements, a set of concrete obligations has been drawn up to mitigate the threat posed by high-risk AI systems.

Supplier obligations under the AI Act

Before their AI systems can be placed on the market, suppliers will have to demonstrate:

  • adequate risk assessment and mitigation systems;
  • the high quality of the data sets feeding the system, to minimize risks and discriminatory results;
  • recording of activities to ensure traceability of results;
  • detailed documentation providing all the necessary information on the system and its purpose to enable the authorities to assess its compliance;
  • the provision of clear and appropriate information for the user;
  • appropriate human monitoring measures to minimize risks;
  • a high level of robustness, safety and accuracy.

On an iterative basis, high-risk AI systems will continue to be evaluated according to the criteria above throughout their lifecycle.

Moreover, to ensure that the specific objectives set by the European Commission are met, a monitoring and evaluation mechanism has been set up, which will enable the sanctions provided for in the AI Act to be applied where necessary.

Notably, once an AI system is on the market:

  • National authorities will ensure that the new rules are properly implemented on their territory.
  • The European Commission will ensure coordination at European level through the creation of the European AI Office in February 2024.
  • The European Commission will establish a registration system for high-risk autonomous AI applications in a public database. This will enable anyone to check whether a specific high-risk AI system complies with the requirements of the AI Act.

Consequently, AI suppliers will be required to provide relevant information on their systems so that the competent authorities can assess their compliance. In addition, in the event of a breach of fundamental-rights obligations by an AI system, suppliers will be required to inform the competent authorities as soon as possible.

Sanctions and complementarity with EU law

Failure to comply with the requirements of the AI Act will result in sanctions:

  • For breaches involving prohibited AI applications: up to €35 million or 7% of the company's worldwide annual turnover.
  • For providing incorrect information to the authorities: up to €7.5 million or 1.5% of worldwide annual turnover.
  • For all other breaches of obligations: up to €15 million or 3% of worldwide annual turnover.
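To make the scale of these ceilings concrete, here is a minimal sketch of the arithmetic, assuming the cap is the higher of the fixed amount and the turnover share for a given company (the tier values are those listed above; the function and tier names are illustrative, not legal terminology):

```python
# Illustrative fine ceilings per breach category: (fixed cap in EUR, share of
# worldwide annual turnover). Values taken from the tiers listed above.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # €35M or 7% of turnover
    "incorrect_information": (7_500_000, 0.015),  # €7.5M or 1.5% of turnover
    "other_breach": (15_000_000, 0.03),           # €15M or 3% of turnover
}

def max_fine(breach: str, worldwide_turnover_eur: float) -> float:
    """Return the upper bound of the fine for a given breach category,
    assuming the higher of the two ceilings applies."""
    fixed_cap, turnover_share = FINE_TIERS[breach]
    return max(fixed_cap, turnover_share * worldwide_turnover_eur)

# A company with €2 billion worldwide turnover: 7% = €140M exceeds €35M.
print(max_fine("prohibited_practice", 2_000_000_000))  # 140000000.0
```

The point of the sketch is simply that for large companies the turnover-based percentage, not the fixed amount, determines the ceiling.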

The AI Act complements existing EU law on non-discrimination, data protection and consumer protection with a set of harmonized rules.

These rules will have a double benefit: they will enable suppliers to market their AI-related products and services easily across EU borders, and they will give users greater confidence in the AI systems they interact with.

Conclusion and outlook on AI law in Europe

Entry into force of the regulation and adoption by the European Parliament

The text will come into force 20 days after its publication in the Official Journal of the European Union. It should be noted that it will be fully applicable two years later; however, certain bans will apply after six months, while the provisions relating to general-purpose AI will apply after one year.

How to make the regulatory transition to the AI Act?

To ease the transition to the new regulatory framework, the Commission has launched the AI Pact, a scheme that promotes the rapid implementation of the measures set out in the AI Act by enabling volunteer companies to comply with its obligations in advance.

In concrete terms, companies must provide a statement of commitment detailing a plan of actions taken or to be taken to meet the specific requirements of the future AI law.

In return, the Commission will publish these commitments in order to provide visibility, enhance credibility and strengthen confidence in the technologies developed by companies participating in the pact.

Thank you very much for reading. If you'd like to find out more, or if we can help you decipher any new topics, please don't hesitate to contact us.

Chloé Creuzet

Business intelligence consultant at Dynergie
