EU set to roll out world’s first AI regulation after 35-hour talks


Member states and the European Parliament have reached a preliminary deal on the Artificial Intelligence Act, the world’s first attempt to regulate the fast-evolving technology in a comprehensive, ethics-based manner.


The agreement was struck at the political level on Friday night after talks that occupied the entire day and followed an unsuccessful marathon session between Wednesday and Thursday afternoon. In total, the push took more than 35 hours.

The breakthrough came amid aggressive lobbying from Big Tech and start-ups, stark warnings from civil society and intense media scrutiny as the legislation from Brussels could very well influence similar regulatory efforts across the world.

“Historic! The EU becomes the very first continent to set clear rules for the use of AI,” said Thierry Breton, the European Commissioner for the internal market who took part in the debate. “The #AIAct is much more than a rulebook — it’s a launchpad for EU startups and researchers to lead the global AI race.”

The negotiations were a hard-fought back-and-forth between governments and lawmakers over a string of highly complex and technical issues.

With a tentative compromise sealed on Thursday to rein in the foundation models that power chatbots like OpenAI’s ChatGPT, Friday’s talks focused largely on the use of real-time biometrics, including facial recognition, in public spaces.

At the core of the debate was the question of whether state authorities should be allowed to deploy AI-powered biometric systems that can identify and categorise people based on sensitive characteristics such as gender, race, ethnicity, religion and political affiliation, as well as systems for emotion recognition and predictive policing.

In their joint mandate, MEPs said these practices were “intrusive and discriminatory” and therefore should be prohibited across EU territory. Member states, though, had quite a different opinion and argued exceptions for law enforcement were necessary to track down criminals and thwart threats against national security.

Earlier this year, France approved legislation to enable the use of biometric surveillance during the 2024 Paris Olympics and Paralympics, a move that Amnesty International said “undermines the EU’s ongoing efforts to regulate AI.”

The clash between national security and fundamental rights absorbed most of the energy on Friday. Spain, the current holder of the Council’s rotating presidency, had the hard task of representing the 27 member states and keeping a united front.

Talks were interrupted by a protracted recess that allowed lawmakers to discuss among themselves the demands made by the Spanish presidency. Meanwhile, scholars and activists took to social media to urge MEPs to resist the exemptions for law enforcement.

In the end, the Parliament and the Council managed to find a way to patch up their differences and claim victory – even if provisionally.

“I welcome the landmark deal on the new AI Act. An avant-garde, responsible, comprehensive legislation that sets global standards,” said Roberta Metsola, the president of the European Parliament, celebrating the news.

Details of the agreement were not immediately available.

Given the complexity of the issue at hand, the compromise that emerged from the drawn-out talks will likely require further consultations and fine-tuning to come up with a piece of legislation that is fully acceptable to all parties around the table.

Once the legal text, which covers hundreds of pages in articles and annexes, is rewritten, it will be sent to the European Parliament for a new vote in the hemicycle, followed by the final green light by the countries in the Council.

The votes are expected to take place in early 2024. The law will then have a grace period before it becomes fully enforceable in 2026.

An ever-evolving technology

First presented in April 2021, the AI Act is a ground-breaking attempt to ensure the most radically transformative technology of the 21st century is developed in a human-centric, ethically responsible manner that prevents and contains its most harmful consequences.

The Act is essentially a product safety regulation that imposes a staggered set of rules that companies need to follow before offering their services to consumers anywhere across the bloc’s single market.


The law proposes a pyramid-like structure that splits AI-powered products into four main categories according to the potential risk they pose to the safety of citizens and their fundamental rights: minimal, limited, high and unacceptable.

Those that fall under the minimal risk category will be freed from additional rules, while those labelled as limited risk will have to follow basic transparency obligations.

The systems considered high risk will be subject to stringent rules that will apply before they enter the EU market and throughout their lifetime, including substantial updates. This group will encompass applications that have a direct and potentially life-changing impact on private citizens, such as CV-sorting software for job interviews, robot-assisted surgery and exam-scoring programmes in universities.

High-risk AI products will have to undergo a conformity assessment, be registered in an EU database, be accompanied by a declaration of conformity and carry the CE marking – all before they reach consumers. Once they become available, they will be under the oversight of national authorities. Companies that violate the rules will face multi-million-euro fines.

AI systems with an unacceptable risk for society, including social-scoring to control citizens and applications that exploit socio-economic vulnerabilities, will be outright banned across all EU territory.


Although this risk-based approach was well received back in 2021, it came under extraordinary pressure in late 2022, when OpenAI launched ChatGPT and unleashed a global furore over chatbots. ChatGPT was soon followed by Google’s Bard, Microsoft’s Bing Chat and, most recently, Amazon’s Q.

Chatbots are powered by foundation models, which are trained with vast troves of data, such as text, images, music, speech and code, to fulfil a wide and fluid set of tasks that can change over time, rather than having a specific, unmodifiable purpose.

The Commission’s original proposal did not introduce any provisions for foundation models, forcing lawmakers to add an entirely new article with an extensive list of obligations to ensure these systems respect fundamental rights, are energy efficient and comply with transparency requirements by disclosing that their content is AI-generated.

This push from Parliament was met with scepticism from member states, who tend to prefer a soft-touch approach to law-making. Germany, France and Italy, the bloc’s three biggest economies, came forward with a counter-proposal that favoured “mandatory self-regulation through codes of conduct” for foundation models. The move sparked an angry reaction from lawmakers and threatened to derail the legislative process.

But the daunting prospect of thrusting the landmark law into limbo in the lead-up to next year’s European elections acted as a motivation to bridge the gaps on foundation models and biometrics and strike a preliminary deal.


“The AI Act is a global first,” said Ursula von der Leyen, the president of the European Commission. “A unique legal framework for the development of AI you can trust. And for the safety and fundamental rights of people and businesses. A commitment we took in our political guidelines – and we delivered.”

This article has been updated with more information about the political deal.


