The AI Act: 9 Key Answers to Get Onboard

Daniel Huynh,
Jade Hardouin

This month, on the 14th of June, the AI Act was passed by the European Parliament. We gathered information on this complex piece of legislation for you. Let’s break down how the EU wants to regulate Artificial Intelligence in 9 questions.

  1. Is the AI Act a reaction to ChatGPT? Is this an attempt to ban AI?
  2. What is the AI Act?
  3. What would be forbidden by the AI Act?
  4. What are its main impacts on the AI market?
  5. Are ChatGPT and other Large Language Models AI Act compliant?
  6. Who will be responsible for AI decisions according to the AI Act?
  7. Can models be blindly trusted for privacy and accuracy once the law is enforced?
  8. What about the implementation of the law?
  9. How will the AI Act shape innovation?

Is the AI Act a reaction to ChatGPT? Is this an attempt by the EU to ban AI?

With the surge of large language models like ChatGPT, everyone is eager to master and use AI for new purposes. However, regulatory and privacy risks have grown along with this massive adoption. The new AI Act creates a legal framework for those issues. This legislation has been in development for several years: most of the key elements voted on this month were already in the draft published last year (see our previous article for more information). The AI Act has been designed since 2021 to address the broad risks posed by AI.

ChatGPT is just one recent example of the many ways AI can be deployed and used, though one that has had an undeniable impact on the speed of negotiations. AI systems have been shaping consumers’ daily lives for almost a decade: powering social media suggestions, detecting credit card fraud, and supporting manufacturing logistics.

AI introduces a distinct paradigm for how data is consumed, used, and generated. As the EU aims to regulate the wide range of data-driven AI applications, a specific AI regulation was needed. Note that the EU widened its definition of AI between the 2021 draft and the current version: the revised definition no longer refers to “software” but to “machine-based systems” and introduces the concept of autonomous operation, a rather broad definition. This aims to facilitate global cooperation by aligning with the standards set forth by the OECD.

As for banning AI: the Act’s purpose is to ensure AI is used responsibly and beneficially, not to prohibit its development.

What is the AI Act?

The AI Act is a new law, voted on the 14th of June by the European Parliament, to regulate Artificial Intelligence (AI). It creates a framework for the safe and ethical use of AI in the European Union (EU).

It is a complex piece of legislation that is still under negotiation: the Parliament’s, the Council’s, and the Commission’s versions of the AI Act differ on the enforcement of the law (see question 8) and on the specific case of foundation models. The European Parliament hopes to conclude the negotiations by the end of the year so that the text can be enforced in 2026; providers of AI systems developed or used in the EU will have two years to comply.

The AI Act is meant to set a global standard for AI regulation. As AI systems are used to make decisions with significant impacts on people (hiring, lending, or decisions in court), it is meant to guarantee that these decisions are fair and non-discriminatory.

What AI would be forbidden by the AI Act?

The AI Act is a risk-oriented text that targets applications of AI through four categories: unacceptable risk, high risk, limited risk, and minimal risk.

1. Unacceptable risk

AI systems in the unacceptable risk category, such as the government-run social scoring in China, will be banned. Here are the main characteristics that define the systems banned under the Act:

  • Manipulative systems using subliminal techniques or exploiting vulnerabilities
  • Social scoring by public authorities that introduces discriminatory treatment
  • Biometric identification in public spaces by law enforcement (except for well-defined cases identified by a Member State, such as missing children, victim search, and others)

2. High-risk

AI systems in the high-risk category are linked to an exhaustive list of fields, including employment (CV scanning, for instance), education, biometric identification, management of critical infrastructure, justice, and many others, and will be subject to strict requirements. Here are some of them:

  • Quality assurance and technical documentation through harmonized standards
  • Conformity assessment to be carried out before the product is placed on the market
  • Traceability, security, robustness, and accuracy
  • Relevant training datasets, to ensure accuracy and reduce the risk of discrimination

3. Limited risk

AI systems with limited risk, such as chatbots, biometric categorization, or the creation of synthetic content, will face specific transparency requirements, mainly ensuring that users are aware they are interacting with such systems.

4. Minimal risk

Finally, minimal-risk systems can be deployed freely. This category includes various commonly used AI systems, such as spam filters, inventory-management systems, and many others that we encounter in our everyday lives.

-- Where do general-purpose AI systems belong?

The category that ChatGPT and other Large Language Models (LLMs) should belong to has sparked debate during negotiations on the drafts of the Act. They sit between the high-risk and limited-risk categories, since their original generative purpose can be applied to high-risk use cases.

The first draft of the Act, from 2021, did not consider the case of LLMs. Their providers, such as OpenAI, argued that foundation models shouldn’t be subject to the strict requirements of the high-risk category. In the version voted this month, “foundation models” are explicitly addressed, notably in Article 28b. They don’t fall into one of the four categories but are subject to specific guidelines defining which requirements from which category they have to meet. We will get into more detail in question 5.

What are the main impacts of the AI Act on the AI market?

The global AI market is booming, and with it the number of competitors. Estimated at 136.6 billion USD in 2022, the market has attracted a multitude of companies and start-ups operating in Europe. However, with the introduction of the AI Act, these entities now face new challenges to comply with regulations.

The high-risk category of AI systems requires transparency, accuracy, and cybersecurity, while the limited-risk category mandates transparency and disclosure. As a result, AI providers and users will need to conduct extensive tests for training, validation, and quality certification. These testing processes will demand considerable time and financial resources (see next question). However, these criteria, along with the human-oversight requirement, are intended to boost trust in AI systems by decreasing risks to safety and fundamental rights.

The major impact the Act will have on the market is the gap that might deepen between start-ups and market leaders, due to the disparity in their resources and finances. A prime reason for this is the required analysis of very demanding hypothetical risk factors such as privacy violations, deep fakes, socioeconomic inequality, and others. However, this can be nuanced: the full transparency the Act requires of AI giants such as OpenAI also limits their ability to dominate the market. Large providers won’t be able to use data without user consent or keep their development configurations secret to protect their position, so small companies won’t face unfair conditions.

AI systems that induce limited risks don’t face strict requirements, so their providers should be able to comply easily. However, the ease of complying with both limited-risk and high-risk requirements will rely heavily on the emergence of a standardized offer of tools enabling compliance with the AI Act’s principles (much like the cookie-consent banners that appeared to comply with the GDPR data privacy law).

Are ChatGPT and other Large Language Models AI Act compliant?

ChatGPT and other Large Language Models (as “foundation models”) have their own specific obligations: they serve a general purpose falling under the limited-risk category but can afterward be used in high-risk cases. The new version of the AI Act explicitly includes numerous references to foundation models, aiming to clearly outline their specific requirements. The Stanford Center for Research on Foundation Models (CRFM) created a table to summarize them and graded model providers’ compliance with the new version of the Act.

(source: Stanford, CRFM, https://crfm.stanford.edu/2023/06/15/eu-ai-act.html)

For most providers of large AI models, compliance will hinge on data governance (traceability and transparency) and on energy consumption. Disclosure of these two elements is the critical work LLM providers will need to do to comply with the new European law and keep their business sailing in Europe.

Additionally, given the vast number of potential high-risk scenarios arising from their products, it is impractical for LLM providers to evaluate each one individually. Therefore, they’ll have to provide comprehensive information and documentation on the capabilities of their models to support compliance by downstream operators adapting them for high-risk use cases.
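To make this concrete, here is a minimal sketch of what machine-readable documentation handed to downstream operators could look like. The field names and values are hypothetical illustrations, not an official AI Act schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Hypothetical minimal record a foundation model provider could share
    with downstream operators; not an official AI Act schema."""
    model_name: str
    version: str
    intended_uses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    training_data_summary: str = ""              # data provenance / copyright summary
    evaluation_benchmarks: dict[str, float] = field(default_factory=dict)
    energy_consumption_kwh: float | None = None  # training energy, if measured

# Illustrative values only
doc = ModelDocumentation(
    model_name="example-llm",
    version="1.0",
    intended_uses=["general-purpose text generation"],
    known_limitations=["may produce inaccurate or biased output"],
    training_data_summary="deduplicated web corpus with documented sources",
    evaluation_benchmarks={"toxicity_rate": 0.02},
    energy_consumption_kwh=120_000.0,
)
```

A structured record of this kind would let a downstream operator adapting the model for a high-risk use case quickly check which obligations remain to be covered.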

Who will be responsible for AI decisions according to the AI Act?

The question of accountability for AI-powered outcomes has grown alongside the AI market. The detrimental repercussions of AI decisions range from an AI-powered surgical tool harming a patient to CV-scanning systems discriminating against candidates because of biases. The regulation aims to set up guidelines to deal with accountability: the AI Act imposes liability on developers and manufacturers for their products. This approach is aligned with the spirit of the Act, which puts manufacturers in charge of continuously evaluating the safety and performance of their products through the “Conformity Assessment” (AI Act, Title III, Chapter 5, Article 43).

However, there is still discussion around this strict and straightforward way of assigning accountability, since some nuance might be needed to prevent innovation from stalling. Liability could weigh heavily on many companies, especially small ones, considering the high-stakes risks they’ll have to take on.

Regarding foundation models such as ChatGPT, Bard, and others, providers are still held responsible for their products: the legislation was expanded to reach even these giants of generative AI. However, the intricacy of Large Language Model workflows raises questions about how accountability is shared between providers and deployers.

Will it be possible to blindly trust models for accuracy and privacy?

AI systems influence how businesses operate and how governments make decisions. Problematic breaches, scandals, and leaks of personal information have shaken public trust in pervasive AI. Recognizing the importance of these issues, the Parliament has responded. Is its response sufficient?

In terms of accuracy, ensuring that AI systems provide reliable, consistent, and harmless predictions is crucial. Continuous evaluation and improvement of models serve as a remedy, and the AI Act imposes controls, such as benchmark methods. However, an attestation of model traceability is also needed to ensure the correct model is being used: otherwise, model evaluation is useless, as we cannot be sure that the model that was assessed is the one running in production.
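One simple way to make such an attestation concrete is to fingerprint the model artifact when it is evaluated and verify the same fingerprint in production. Below is a minimal sketch, assuming the model is stored as a directory of files; the paths are hypothetical.

```python
import hashlib
from pathlib import Path

def model_fingerprint(artifact_dir: str) -> str:
    """Hash every file of a model artifact (weights, tokenizer, config)
    in a deterministic order and return a single SHA-256 fingerprint."""
    root = Path(artifact_dir)
    digest = hashlib.sha256()
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest.update(str(path.relative_to(root)).encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

# Record the fingerprint when the model is benchmarked, then check that the
# artifact actually served in production matches it (hypothetical paths).
evaluated = model_fingerprint("models/evaluated-v3")
deployed = model_fingerprint("/srv/inference/current")
assert evaluated == deployed, "deployed model is not the one that was evaluated"
```

Stronger guarantees (for example, hardware-based attestation) are possible, but even a recorded hash ties a benchmark result to a specific set of weights.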

Regarding privacy, safeguarding all data is essential. The Act focuses on the integrity of training and validation datasets: it asserts that inputs should not be copyrighted and must not contain personal data. This helps prevent reverse attacks, which reconstruct input data from model outputs. Yet the same model-traceability dilemma appears here: we need proof that the deployed model is indeed one that respected data privacy norms during training. Otherwise, personal data can be extracted from LLMs, since they tend to learn it by heart (see this study for more information).
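The memorization risk mentioned above can be probed directly. One common technique, not mandated by the Act, is a canary test: a unique random string is planted in the training corpus, and after training the model is prompted to see whether it reproduces it. The sketch below assumes a hypothetical generate(prompt) callable that returns the model’s completion.

```python
import secrets

# Unique random marker planted in the training corpus before training.
# In practice it is recorded so it can be checked after training.
CANARY_PREFIX = "canary-"
CANARY_SECRET = secrets.token_hex(16)

def model_memorizes_canary(generate) -> bool:
    """Return True if the trained model regurgitates the planted canary.

    `generate` is a hypothetical callable: prompt string -> completion string.
    """
    completion = generate(CANARY_PREFIX)  # prompt with the known prefix
    return CANARY_SECRET in completion

# If the canary comes back, the model memorizes verbatim training data, and
# real personal data present in the corpus may be extractable the same way.
```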

But the AI Act falls short in addressing the handling of input and output data during model querying. While the issue of training data traceability is rightfully acknowledged, additional regulation is needed to ensure data confidentiality throughout all stages of AI model usage. For instance, Google uses queries to improve and develop its products and services. The possibility of such leakage underlines the need to regulate querying privacy as well.
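Until querying privacy is regulated, operators can at least minimize what leaves their perimeter. The sketch below strips obvious personal identifiers from a prompt before it is sent to an external LLM API; the two regex patterns and the call_llm client are illustrative assumptions, not a complete anonymization solution.

```python
import re

# Illustrative patterns only: robust PII detection needs far more than two regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(prompt: str) -> str:
    """Replace obvious personal identifiers before the prompt leaves our systems."""
    prompt = EMAIL_RE.sub("[EMAIL]", prompt)
    prompt = PHONE_RE.sub("[PHONE]", prompt)
    return prompt

# Hypothetical usage, where call_llm is a client for an external API:
# response = call_llm(redact("Contact jane.doe@example.com at +33 6 12 34 56 78"))
```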

For more info, stay tuned for our second article.

What about the implementation of the law?

Which body should enforce the AI Act is still a subject of debate. On the one hand, the Parliament wants to create a National Supervisory Authority (NSA) in each Member State. On the other hand, the Council and the Commission want to let Member States introduce as many Market Surveillance Authorities as they deem necessary for the many fields in which AI systems apply. Having a single authority per Member State would help coordination between Member States and concentrate expertise on enforcing the AI Act, but it could also complicate regulation in specific sectors (finance, for example) where the relevant authorities and experts belong to different entities.

These authorities will be entrusted with a dual responsibility for overseeing the implementation of the Act. Firstly, they are responsible for establishing technical standards for AI systems and providing guidance on their usage. Secondly, they are empowered to investigate potential violations of the Act, impose sanctions, and even compel companies to remove their applications from the market. Notably, the Act introduces fines categorized into three levels, surpassing those outlined in the General Data Protection Regulation (GDPR). They range from 10 million euros or 2% of annual worldwide turnover to 30 million euros or 6% of annual worldwide turnover, depending on the severity of the violation.
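To put these figures in perspective, consider a hypothetical provider with an annual worldwide turnover of 1 billion euros: at the top tier, 6% of turnover amounts to 60 million euros, well above the 30 million euro figure, so the turnover-based amount would set the ceiling (assuming the higher of the two amounts applies, as under the GDPR).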

What about innovation?

Regulating a field is always a double-edged sword, because protecting could also mean slowing progress. This is why the European Parliament is proposing to exempt research activities and AI components provided under open-source licenses.

Experimentation will be held in real-life environments called ‘regulatory sandboxes’, supervised by public authorities. These will take place during the development, training, and testing phases, before products are deployed on the market.

The Parliament claims to foster innovation and keep Europe up to date with the AI market, all the while ensuring a proportionate regulation of AI that protects individual rights and delivers accurate results.

Want to learn more about Zero-Trust LLM deployment?

Image credits: Edgar Huneau