After marathon “final” talks that lasted almost three days, European Union lawmakers tonight reached a political agreement on a risk-based framework for regulating artificial intelligence. The file was originally proposed in April 2021, but it took months of difficult trilogue negotiations to reach an agreement. The development means a pan-European AI law is firmly on the way.
Giving a triumphant if exhausted press conference in the wee hours of Friday night/Saturday morning local time, key representatives of the European Parliament and the Council, the bloc’s co-legislators, along with the Commission hailed the deal as, variously, hard fought, a landmark achievement and historic.
Taking to X to tweet the news, European Commission president Ursula von der Leyen, who made legislating on AI a key priority of her term when she took office in late 2019, also hailed the political deal as a “world first”.
The full details of what has been agreed won’t be confirmed until a final text is drawn up and published, which could take a few weeks. But a press release put out by the European Parliament confirms the deal reached with the Council includes a total ban on the use of artificial intelligence for:
- biometric categorization systems that use sensitive characteristics (e.g. political, religious or philosophical beliefs, sexual orientation, race);
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- emotion recognition in the workplace and educational institutions;
- social scoring based on social behavior or personal characteristics;
- AI systems that manipulate human behavior to circumvent their free will;
- AI used to exploit people’s vulnerabilities (due to their age, disability, or social or economic situation).
The use of remote biometric identification technology in public places by law enforcement has not been banned outright, but the parliament said negotiators had agreed on a series of safeguards and narrow exceptions governing the use of technologies such as facial recognition. These include a requirement for prior judicial authorization, with use restricted to “strictly defined” lists of crimes.
Retrospective (non-real-time) use of remote AI biometrics will be limited to the “targeted investigation of a person who has been convicted of or is suspected of having committed a serious crime”, while real-time use of this intrusive technology will be limited in time and location, and can only be deployed for the following purposes:
- targeted searches for victims (of kidnapping, human trafficking or sexual exploitation),
- prevention of a specific and present terrorist threat, or
- locating or identifying a person suspected of having committed one of the specific crimes listed in the regulation (e.g. terrorism, human trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization, environmental crime).
The agreed package also includes obligations for artificial intelligence systems classified as “high risk” owing to their “significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law”.
“The MEPs successfully managed to include a mandatory impact assessment on fundamental rights, among other requirements, which also apply to the insurance and banking sectors. Artificial intelligence systems used to influence the outcome of elections and voter behavior are also classified as high risk,” the parliament wrote. “Citizens will have the right to file complaints about artificial intelligence systems and receive explanations about decisions based on high-risk artificial intelligence systems that affect their rights.”
There was also agreement on a “two-tier” system of guardrails to be applied to general purpose AI systems, such as the so-called foundational models that underpin the viral explosion in generative AI applications like ChatGPT.
As we reported earlier, the agreement reached on foundational models/general purpose AIs (GPAIs) includes some transparency requirements for what the co-legislators refer to as “low tier” GPAIs, meaning model makers must draw up technical documentation and produce (and publish) detailed summaries of the content used for training, in order to support compliance with EU copyright law.
For “high impact” GPAIs (defined as those where the cumulative amount of compute used for their training, measured in floating point operations, is greater than 10^25) that are deemed to carry so-called “systemic risk”, there are more stringent obligations.
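For a sense of the scale that compute threshold implies, one widely used back-of-the-envelope approximation (an assumption for illustration, not something drawn from the Act itself) puts training compute at roughly six floating point operations per model parameter per training token. The sketch below applies that heuristic to purely hypothetical model sizes to show where the 10^25 FLOP line would fall.

```python
# A rough sketch, NOT from the Act: the ~6 * N * D heuristic for estimating
# training compute (about 6 FLOPs per parameter per training token).
# All model figures below are hypothetical and for illustration only.

THRESHOLD_FLOPS = 1e25  # the Act's "high impact" GPAI compute trigger


def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training compute using the ~6*N*D rule of thumb."""
    return 6.0 * num_parameters * num_training_tokens


def is_high_impact(num_parameters: float, num_training_tokens: float) -> bool:
    """True if the estimated training compute exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(num_parameters, num_training_tokens) > THRESHOLD_FLOPS


# A hypothetical 70B-parameter model trained on 2 trillion tokens comes out
# around 8.4e23 FLOPs, well under the threshold; a hypothetical 1.8T-parameter
# model trained on 10 trillion tokens (~1.1e26 FLOPs) would exceed it.
print(is_high_impact(70e9, 2e12))     # False
print(is_high_impact(1.8e12, 10e12))  # True
```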
“If these models meet certain criteria, they will have to conduct model evaluations, assess and mitigate systemic risks, perform adversarial testing, report serious incidents to the Commission, ensure cybersecurity and report on their energy efficiency,” the parliament wrote. “MEPs also insisted that, until harmonized EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.”
The Commission has been working with industry on a voluntary stop-gap AI Pact for some months, and it confirmed today that this is intended to plug the practical gap until the AI Act comes into force.
While commercialized foundational models/GPAIs face regulation under the Act, R&D is not intended to fall within its scope, and fully open source models will face lighter regulatory requirements than closed source ones, according to today’s announcements.
The agreed package also promotes regulatory sandboxes and real-world testing, established by national authorities, to support start-ups and SMEs in developing and training AI before it goes to market.
Non-compliance can lead to fines ranging from €35 million or 7% of global turnover down to €7.5 million or 1.5% of turnover, depending on the infringement and the size of the company.
The deal agreed today also allows for a phased implementation after the law passes, with six months before the rules on prohibited use cases apply; 12 months for the transparency and governance requirements; and 24 months for all other requirements. So the full impact of the EU’s AI law may not be felt until some time in 2026.
Carme Artigas, Spain’s secretary of state for digitalization and artificial intelligence, who led the Council’s negotiations on the file as the country has held the rotating Council presidency since the summer, hailed the deal on the hotly contested dossier as “the biggest milestone in the history of digital information in Europe”, both for the bloc’s digital single market and also, she suggested, “for the world”.
“We have achieved the world’s first international regulation on artificial intelligence,” she announced during a midnight press conference to confirm the political agreement, adding: “We feel very proud.”
The law will support European developers, start-ups and future scale-ups by giving them “legal security with technical certainty”, she predicted.
Speaking on behalf of the European Parliament, co-rapporteurs Dragoș Tudorache and Brando Benifei said their aim was to deliver AI legislation that would ensure the ecosystem is developed with a “human-centred approach” that respects fundamental rights and European values. Their assessment of the outcome was equally upbeat — citing the inclusion in the agreed text of a complete ban on the use of artificial intelligence for predictive policing and biometric categorization as major wins.
“We are finally on the right track, defending fundamental rights in the necessity that exists for our democracies to withstand such incredible changes,” Benifei said. “We are the first in the world to have horizontal legislation that has this direction for fundamental rights, that supports the development of artificial intelligence on our continent and that is up to date on the frontiers of artificial intelligence, with the most powerful models under clear obligations. So I think we delivered.”
“We’ve always been asked if there’s enough protection, if there’s enough stimulus for innovation in this text, and I can say that that balance is there,” Tudorache added. “We have safeguards, we have all the provisions we need, the remediation we need to give our citizens confidence in interacting with AI, in the products and the services that they’re going to interact with going forward.
“We must now use this plan to pursue global convergence, because this is a global challenge for everyone. And I think with the work that we’ve done, as difficult as it was — and it was difficult, this was a marathon negotiation by any standards, looking at all the precedents so far — but I think we got it done.”
EU Internal Market Commissioner Thierry Breton also chimed in with his two cents, describing the deal reached just before midnight Brussels time as “historic”. “It’s a complete package. It’s a complete deal. And that’s why we spent so much time,” he said. “This balances user safety and innovation for start-ups, while respecting … our fundamental rights and our European values.”
Despite the EU very visibly patting itself on the back tonight for securing a deal on “world first” AI rules, this is not quite the end of the road for the bloc’s legislative process: some formal steps remain, not least votes in the Parliament and the Council to approve the final text. However, given how much division and disagreement there has been over how (or even whether) to regulate AI, the biggest hurdles have been cleared with this political deal, and the path to an EU AI law arriving in the coming months looks clear.
The Commission is certainly projecting confidence. According to Breton, work to implement the agreement begins immediately, with the creation of an AI Office within the EU executive that will be tasked with coordinating with the member states’ oversight bodies which will have to enforce the rules on artificial intelligence companies. “We will be welcoming new colleagues … lots of them,” he said. “We will work – starting tomorrow – to get ready.”