Bridging the AI regulation gap
On March 22, the Future of Life Institute published an open letter calling for a six-month pause on the training of AI systems more powerful than GPT-4, citing the potential dangers to humanity.
Since the publication of that letter, numerous high-profile figures have voiced similar concerns, including AI pioneer Geoffrey Hinton, who recently resigned from Google to raise the alarm about the “existential threat” posed by the technology he played so pivotal a role in developing.
The seriousness of these warnings should not be underestimated. Demands for government intervention rarely originate from tech companies, which in recent years have fiercely resisted efforts by American and European policymakers to regulate the industry. But given the economic and strategic promise of generative AI, development cannot be expected to pause or slow down on its own.
Meanwhile, members of the European Parliament have voted in favour of a more stringent version of the AI Act, the landmark regulatory framework designed to address the challenges posed by “traditional” AI systems, trying to adapt it to tackle the so-called “foundation models” and advanced generative AI systems such as OpenAI’s GPT-4.
As one of the main negotiators of the European Union’s groundbreaking Digital Markets Act (DMA) and Digital Services Act (DSA), I recognise the importance of creating a human-centred digital world and mitigating the potential negative impact of new technologies. But the speed at which the EU is developing restrictive measures raises several concerns.
First, large language models (LLMs) like GPT-4 could significantly increase the productivity of white-collar workers.
At a time when developed countries are desperately seeking ways to boost productivity, Europe cannot afford to miss out yet again on a technological breakthrough that could enhance its competitiveness. But the version of the AI Act adopted by the European Parliament would act as a de facto ban on LLM development on the continent.
Second, the EU’s rapid response could result in yet another missed opportunity for the United States and Europe to agree on a common framework for regulating the tech industry. So far, transatlantic regulatory discussions have been plagued by misunderstandings, and both sides have pursued their own initiatives without proper coordination.
In recent years, the EU has enacted several far-reaching laws to regulate the tech sector, such as the General Data Protection Regulation, the Data Governance Act, the DSA, and the DMA. But while Europe has embraced stricter digital regulation, sometimes at the cost of its own competitiveness, the US has been slow to adopt new rules, owing to partisan divisions and concerns about potential infringements on freedom of speech.
Unfortunately, although a common framework is necessary to ensure fair competition among businesses, there is little indication that the US and European approaches to overseeing the industry will converge anytime soon.
To be sure, both sides are expected to seek regulatory frameworks that align with their own needs and priorities. But our inability to find common ground on digital regulation has resulted in short-term inefficiencies and could lead to long-term decoupling because a shared digital space becomes increasingly difficult to maintain when rules and regulations diverge significantly. This also has political implications: when democracies cannot unite around shared values and goals, illiberal forces and regimes thrive.
The transformative power of LLMs, particularly their potential to cause widespread socioeconomic disruption by displacing millions of workers, raises the stakes for European and American policymakers to establish a shared regulatory framework. This would require concessions from both sides.
The EU, for its part, would need to pause its own AI-related legislation. The US, which has struggled to contain the collateral damage of new technologies despite leading the world in innovation, would have to find a way to achieve a bipartisan consensus in Congress.
While regulatory harmonisation would not be easy, it remains the most viable long-term solution. Europe must take advantage of the opportunity to gain a competitive edge, and the US must intervene to halt the race to the bottom that is currently playing out in the AI domain.
Much like the banking sector before it, the digital sector has become integral to the functioning of our economies and societies. But while the finance industry operates under common rules aimed at ensuring stability and fairness, such as anti-fraud protocols, anti-corruption frameworks, and prudential regulations, the regulation of the tech industry is fragmented and therefore ineffective.
The current critical moment could offer a unique opportunity to change that. If we seize it, we could ensure that the US and Europe benefit from generative AI’s immense potential and that the technology develops within an ethical and responsible framework.
Cédric O is a former French secretary of state for the digital economy. © Project Syndicate 2023 Website: www.project-syndicate.org