What To Expect From the EU AI Regulation?
A view on the upcoming EU AI Regulation, and how it is likely to affect data and AI industry practices.
With the GDPR, one could believe the EU had completed its legal arsenal on data privacy. Yet there will soon be a new regulation in town: the AI Regulation... will it be the same big bang for AI practices that the GDPR was for data governance?
What is to be expected from the EU AI Regulation?
The new AI Regulation is still under discussion. Yet its main features can already be analyzed: a classification of AI systems by risk level, with requirements on data quality, data traceability, and system cybersecurity scaled to each level.
The main contribution of the European regulation is a classification of AI systems according to their level of risk:
Prohibited AIs: The most sensitive AI systems, i.e. those that contravene the fundamental principles of the Union, will simply be prohibited. This is the case for real-time remote biometric identification AI, or AIs for social scoring that result in differences in the treatment of individuals (such as the Chinese punishment-and-reward system).
High-Risk AIs: Next come high-risk AIs, which are subject to heavy requirements concerning both data quality and data security. These are AI systems deployed in specific enumerated areas, such as education, justice, law enforcement, and critical infrastructure.
The quality of the data used with high-risk AIs is paramount: incomplete, outdated, or inaccurate data can bias the AI, which can be quite harmful in such critical matters. The future regulation places the responsibility for ensuring data quality on the AI provider. High-risk AIs will also have to remain under human supervision (with high expectations in terms of transparency and explainability) and be accompanied by thorough technical documentation covering their purpose, their features, and their risks... Safety is another key feature of the future regulation: high-risk AIs will have to include protections against attacks targeting their vulnerabilities, such as data poisoning or adversarial examples. They will also have to be CE marked to certify their compliance, and will have to undergo frequent "iterative testing" to evaluate proper functioning and identify risks (see the sketch after this list).
Non-High-Risk AIs: Non-high-risk AIs are much less constrained by the proposal. They are only subject to transparency obligations regarding their data, their architecture, and their use. However, non-high-risk AIs that interact with people, such as chatbots or deepfakes, must disclose their artificial nature (a chatbot must make clear that the user is talking to a machine; a deepfake must be labeled as AI-generated) so as not to mislead people.
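The proposal does not prescribe how the "iterative testing" mentioned above should be performed; it only requires that it happen. As a purely illustrative sketch of what an automated probe for the adversarial-example vulnerability could look like, here is the classic Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression model. The model, its weights, and the perturbation budget `eps` are assumptions made for this example, not anything mandated by the text.

```python
# Illustrative robustness probe: FGSM adversarial examples against a toy
# logistic-regression classifier. Everything here (model, weights, eps)
# is an assumption for the sketch, not a regulatory requirement.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=4)  # assumed model weights
b = 0.1                 # assumed bias

def predict_proba(x: np.ndarray) -> float:
    """Probability of class 1 for one feature vector."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x: np.ndarray, y: int, eps: float = 0.1) -> np.ndarray:
    """Nudge x in the direction that increases the loss, by eps per feature.

    For logistic regression, the gradient of the cross-entropy loss
    with respect to the input is (p - y) * w.
    """
    grad_x = (predict_proba(x) - y) * w
    return x + eps * np.sign(grad_x)

# Probe 100 random inputs and count how many predictions flip.
X = rng.normal(size=(100, 4))
flips = 0
for x in X:
    y_clean = int(predict_proba(x) > 0.5)
    if int(predict_proba(fgsm_perturb(x, y_clean)) > 0.5) != y_clean:
        flips += 1
print(f"{flips}/100 predictions flipped under an eps=0.1 perturbation")
```

A real audit trail would record such results across model versions and over time; the point is simply that "iterative testing" can be as lightweight as re-running automated probes like this one at every release.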
The proposal's heavy focus on high-risk AI can seem surprising. It reflects the EU's intent, for obvious economic reasons, not to hamper the development of low-risk AIs. As for the prohibited AIs, there was little to elaborate on, since they are (almost) completely banned.
What about existing EU regulations on AI?
It is clear that the AI Regulation will introduce new provisions, clarify rules, and unify regulations across Europe. Yet we believe it is unlikely to deeply change industry practices.
The real big bang in the data sector was the GDPR in 2016. This EU regulation created a set of rules for the use of data, which had previously been far less constrained. It imposed new principles, such as mandatory consent for data collection, the security of data processing, and the obligation to appoint staff dedicated to data governance... As a result, it shaped a new culture and new habits around data security and privacy. Companies are now far more data-aware. Above all, the GDPR is about data... and what is a digital world without data? By regulating data itself, the EU has set up a comprehensive legal framework that constrains all data-related products, including AI.
Moreover, in the most sensitive sectors where AI can be used, other regulations already complement the legal arsenal on data. In the medical sector, for example, data are particularly sensitive, yet their analysis opens up promising use cases. This is why the 2017 European regulation on medical devices included a series of measures concerning medical software, with a particular focus on security and certification. The digital world is therefore already highly regulated, thanks to the coexistence of a general regulation laying down fundamental principles, supplemented by specific requirements for the sectors that need them.
What changes can we expect?
The future AI Regulation thus looks like one of those specific regulations complementing a general framework, rather than a new regulatory revolution. The practices it bans outright were in fact already largely blocked by the existing legal framework. For example, real-time remote facial recognition in public spaces was already blocked by the GDPR: such systems process biometric data, whose use is already severely restricted. The future regulation will mainly strengthen the existing safeguards protecting European societies from violations of fundamental rights. Companies already compliant with the legal framework and equipped with strong data governance processes will not see their practices change significantly, although they will face increased documentation requirements. Rather, they will benefit from a unified certification system for AI systems, which will harmonize practices across EU member states.
While the regulation sets out safety requirements, it leaves companies free to choose the technologies and means by which they meet them. This paves the way for the development of a thriving AI cybersecurity ecosystem.