Europe’s world-leading AI rules facing do-or-die moment

Published: Tuesday | December 5, 2023 | 12:07 AM
In this file photo, OpenAI CEO Sam Altman (left) appears onstage with Microsoft CEO Satya Nadella at OpenAI’s first developer conference, on November 6, 2023, in San Francisco. Negotiators will meet this week to hammer out details of European Union artificial intelligence rules but the process has been bogged down by a simmering last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI’s ChatGPT and Google’s Bard chatbot.
File - Arthur Mensch, cofounder and CEO of Mistral AI, attends the UK Artificial Intelligence (AI) Safety Summit in Bletchley, England on Nov. 2, 2023. France, Germany and Italy have advocated for self-regulation of artificial intelligence companies in a move seen as an effort to help homegrown generative AI players such as French startup Mistral AI and Germany's Aleph Alpha AI. (Toby Melville/Pool Photo via AP, File)
File - OpenAI CEO Sam Altman speaks at the Asia-Pacific Economic Cooperation (APEC) CEO Summit on Nov. 16, 2023, in San Francisco. Negotiators will meet this week to hammer out details of European Union artificial intelligence rules but the process has been bogged down by a simmering last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI's ChatGPT and Google's Bard chatbot. (AP Photo/Eric Risberg, File)
File - A portion of Google's Bard website is shown in Glenside, Pa. on March 27, 2023. Negotiators will meet this week to hammer out details of European Union artificial intelligence rules but the process has been bogged down by a simmering last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI's ChatGPT and Google's Bard chatbot. (AP Photo/Matt Rourke, File)
File - The OpenAI logo appears on a mobile phone in front of a screen showing part of the company website in this photo taken on Nov. 21, 2023 in New York. Negotiators will meet this week to hammer out details of European Union artificial intelligence rules but the process has been bogged down by a simmering last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI's ChatGPT and Google's Bard chatbot. (AP Photo/Peter Morgan, File)

Hailed as a world first, European Union artificial intelligence rules are facing a make-or-break moment as negotiators try to hammer out the final details this week – talks complicated by the sudden rise of generative AI that produces human-like work.

First suggested in 2019, the EU’s AI Act was expected to be the world’s first comprehensive AI regulations, further cementing the 27-nation bloc’s position as a global trendsetter when it comes to reining in the tech industry.

But the process has been bogged down by a last-minute battle over how to govern systems that underpin general-purpose AI services like OpenAI’s ChatGPT and Google’s Bard chatbot. Big-tech companies are lobbying against what they see as over-regulation that stifles innovation, while European lawmakers want added safeguards for the cutting-edge AI systems those companies are developing.

Meanwhile, the United States, the United Kingdom, China, and global coalitions like the Group of 7 major democracies have joined the race to draw up guardrails for the rapidly developing technology, underscored by warnings from researchers and rights groups of the existential dangers that generative AI poses to humanity as well as the risks to everyday life.

“Rather than the AI Act becoming the global gold standard for AI regulation, there’s a small chance, but growing chance, that it won’t be agreed before the European Parliament elections” next year, said Nick Reiners, a tech policy analyst at Eurasia Group, a political risk advisory firm.

“There’s simply so much to nail down” at what officials are hoping is a final round of talks Wednesday. Even if they work late into the night as expected, they might have to scramble to finish in the new year, Reiners said.

When the European Commission, the EU’s executive arm, unveiled the draft in 2021, it barely mentioned general-purpose AI systems like chatbots. The proposal to classify AI systems by four levels of risk – from minimal to unacceptable – was essentially intended as product-safety legislation.

Brussels wanted to test and certify the information used by algorithms powering AI, much like consumer safety checks on cosmetics, cars, and toys.

That changed with the boom in generative AI, which sparked wonder by composing music, creating images, and writing essays resembling human work. It also stoked fears that the technology could be used to launch massive cyberattacks or create new bioweapons.

The risks led EU lawmakers to beef up the AI Act by extending it to foundation models. Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet.

Foundation models give generative AI systems such as ChatGPT the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

Chaos last month at Microsoft-backed OpenAI, which built one of the most famous foundation models, GPT-4, reinforced for some European leaders the dangers of allowing a few dominant AI companies to police themselves.

While CEO Sam Altman was fired and swiftly rehired, some board members with deep reservations about the safety risks posed by AI left, signalling that AI corporate governance could fall prey to boardroom dynamics.

“At least things are now clear” that companies like OpenAI defend their businesses and not the public interest, European Commissioner Thierry Breton told an AI conference in France days after the tumult.

Resistance to government rules for these AI systems came from an unlikely place: France, Germany, and Italy. The EU’s three largest economies pushed back with a position paper advocating for self-regulation.

The change of heart was seen as a move to help homegrown generative AI players such as French start-up Mistral AI and Germany’s Aleph Alpha.

Behind it “is a determination not to let US companies dominate the AI ecosystem like they have in previous waves of technologies such as cloud (computing), e-commerce and social media”, Reiners said.

Altman has proposed a US or global agency that would license the most powerful AI systems. He suggested this year that OpenAI could leave Europe if it couldn’t comply with EU rules but quickly walked back those comments.

EU negotiators have yet to resolve a few other controversial points, including a proposal to completely ban real-time public facial recognition. Countries want an exemption so law enforcement can use it to find missing children or terrorists, but rights groups worry that it would effectively create a legal basis for surveillance.

AP