The European Union would like to do for AI what it did for online privacy: create gold-standard, globally influential regulation. But its planned AI Act is faltering in the home stretch.
EU leaders were hoping the bill might be finalized next week at a behind-closed-doors “trilogue” session involving negotiators from the bloc’s big institutions. However, the three biggest EU economies—Germany, France, and Italy—threw the whole thing into disarray last week by unexpectedly rejecting the push, backed by the European Parliament and other EU countries, to have the AI Act regulate foundation models. Instead, the trio said they wanted foundation model providers such as OpenAI to self-regulate, with stricter rules applying to those who provide “high-risk” applications built on that underlying technology (so OpenAI’s popular ChatGPT app would be covered, for example, but not GPT-4, the underlying model that powers it).
In case you’re marveling at the spectacle of Germany and France being nice to U.S. tech firms for once, their motivations are likely a lot closer to home.
Germany’s government is an enthusiastic champion of local AI outfit Aleph Alpha, which has received funding from national titans such as SAP and Bosch. It also can’t hurt French AI sensation Mistral that its lobbying efforts are being led by cofounder Cédric O, a close ally of Emmanuel Macron who was, until last year, his digital economy minister. “I feel strongly that former office holders should not engage in political activities related to their former portfolio,” sputtered Max Tegmark, the Swedish-American president of the pro-AI-safety Future of Life Institute, in an X argument with O (hey, I didn’t name them) yesterday.
European industry has also lobbied hard against the regulation of foundation models, while its creative sector has taken the opposing stance, largely because it wants foundation model providers to be transparent about the provenance of their training data.
Whatever lies behind the Germany-France-Italy U-turn, the European Parliament is not impressed. “We cannot accept [their] proposal on foundation models,” said Axel Voss, a key German member of the Parliament, in a post last week. “Also, even minimum standards for self regulation would need to cover transparency, cybersecurity and information obligations—which is exactly what we ask for in the AI Act. We cannot close our eyes to the risks.”
Again, this law was supposed to be wrapped up at a final trilogue session on Dec. 6. The European Commission, which made the initial proposal for the AI Act, has now come up with a compromise text that avoids any reference to “foundation models” while obliging the makers of particularly powerful “general-purpose AI models” (i.e., foundation models) to at least document them and submit to official monitoring. That’s weak sauce compared with what Parliament wants, so the chances of the process slipping into next year are pretty high.
The problem is that 2024 will be a bad time for well-considered legislation: European Parliament elections take place in June, after which there will be a new Parliament and a new Commission. So there really isn’t much time to find a compromise here.
“A failure of the ‘AI Act’ project would probably be a bitter blow for everyone involved, as the EU has long seen itself as a global pioneer with its plans to regulate artificial intelligence,” wrote Benedikt Kohn of the law firm Taylor Wessing in a blog post today that noted how the U.S. has recently taken meaningful steps towards AI regulation.
More news below—though if you want to read more about evolving AI rules, a bunch of countries including the U.S., U.K., and Germany just released a set of cybersecurity guidelines for companies building AI applications.
David Meyer
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
ON OUR FEED
“The legal requirements for the extradition of the accused KDH have been met, at the request of South Korea and the United States of America.”
—A Montenegrin court rules that Kwon Do-hyung, better known as fugitive Terraform Labs founder Do Kwon, will be extradited to face trial over the TerraUSD/Luna crypto collapse. It’s not yet clear whether South Korea or the U.S. will get him.
IN CASE YOU MISSED IT
OpenAI staff reportedly warned the board about an AI breakthrough that could threaten humanity before Sam Altman was ousted, by Christiaan Hetzner
New Binance CEO Richard Teng says firm has ‘robust timeline’ for board, financial disclosures, by Jeff John Roberts
Elon Musk dismissed hybrid vehicles as a ‘phase’ while Toyota doubled down on them. Now they’re a ‘smoking-hot market’ as EV demand chills, by Steve Mollman
Elon Musk brands Sweden’s unions ‘insane’ after strikes cripple Tesla operations—but caving to any demands may open the floodgates in the U.S. and Germany, by Christiaan Hetzner
Nvidia sued after senior employee accidentally showed off confidential files taken from previous employer during a video meeting, by Chloe Taylor
OpenAI’s board might have been dysfunctional—but they made the right choice. Their defeat shows that in the battle between AI profits and ethics, it’s no contest, by Ann Skeet (Commentary)
Beware of crypto grifters looking to crash the AI party, by Kathleen Breitman (Commentary)
BEFORE YOU GO
AI vs authenticity. After the Collins Dictionary adopted “AI” as its word of the year, Merriam-Webster has decided to go in the opposite direction with “authentic.”
As editor-at-large Peter Sokolowski explained, people have been looking up the word an awful lot this year: “We see in 2023 a kind of crisis of authenticity. What we realize is that when we question authenticity, we value it even more.”
This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.