The Inside Opinion

AI in the common interest

Published: Wednesday | December 28, 2022 | 7:39 AM
Gabriela Ramos and Mariana Mazzucato

LONDON: The tech world has generated a fresh abundance of front-page news in 2022. In October, Elon Musk bought Twitter – one of the main public communication platforms used by journalists, academics, businesses, and policymakers – and proceeded to fire most of its content-moderation staff, indicating that the company would rely instead on artificial intelligence.

Then, in November, a group of Meta employees revealed that they had devised an AI program capable of beating most humans in the strategy game Diplomacy. In Shenzhen, China, officials are using “digital twins” of thousands of 5G-connected mobile devices to monitor and manage flows of people, traffic, and energy consumption in real time. And with the latest iteration of ChatGPT’s language-prediction model, many are declaring the end of the college essay.

In short, it was a year in which already serious concerns about how technologies are being designed and used deepened into even more urgent misgivings. Who is in charge here? Who should be in charge? Public policies and institutions should be designed to ensure that innovations improve the world, yet many technologies are currently being deployed in a vacuum. We need inclusive, mission-oriented governance structures that are centred on a true common good. Capable governments can shape this technological revolution to serve the public interest.

Consider AI, which the Oxford English Dictionary defines broadly as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” AI can make our lives better in many ways. It can enhance food production and management by making farming more efficient and improving food safety. It can help us bolster resilience against natural disasters, design energy-efficient buildings, improve power storage, and optimise renewable energy deployment. And it can enhance the accuracy of medical diagnostics when combined with doctors’ own assessments.

These applications hold enormous promise. But with no effective rules in place, AI is likely to create new inequalities and amplify pre-existing ones. One need not look far to find examples of AI-powered systems reproducing unfair social biases. In one recent experiment, robots powered by a machine-learning algorithm became overtly racist and sexist. Without better oversight, algorithms that are supposed to help the public sector manage welfare benefits may discriminate against families that are in real need. Equally worrying, public authorities in some countries are already using AI-powered facial-recognition technology to monitor political dissent and subject citizens to mass-surveillance regimes.

Market concentration is also a major concern. AI development – and control of the underlying data – is dominated by just a few powerful players in just a few locales. Between 2013 and 2021, China and the United States accounted for 80 per cent of private AI investment globally. There is now a massive power imbalance between the private owners of these technologies and the rest of us.

But AI is being boosted by massive public investment as well. Such financing should be governed for the common good, not in the interest of the few. We need a digital architecture that shares the rewards of collective value creation more equitably. The era of light-touch self-regulation must end. When we allow market fundamentalism to prevail, the state and taxpayers are condemned to come to the rescue after the fact (as we have seen in the context of the 2008 financial crisis and the COVID-19 pandemic), usually at great financial cost and with long-lasting social scarring. Worse, with AI, we do not even know if an ex post intervention will be enough. As The Economist recently pointed out, AI developers themselves are often surprised by the power of their creations.

Fortunately, we already know how to avert another laissez-faire-induced crisis. We need an “ethical by design” AI mission that is underpinned by sound regulation and capable governments working to shape this technological revolution in the common interest, rather than in the shareholders’ interest alone. With these pillars in place, the private sector can and will join the broader effort to make technologies safer and fairer.

Effective public oversight should ensure that digitalisation and AI create opportunities for public value creation. This principle is integral to UNESCO’s Recommendation on the Ethics of Artificial Intelligence, a normative framework that was adopted by 193 member states in November 2021. Moreover, key players are now taking responsibility for reframing the debate, with US President Joe Biden’s administration proposing an AI Bill of Rights, and the European Union developing a holistic framework for governing AI.

Still, we also must keep the public sector’s own uses of AI on a sound ethical footing. With AI supporting more and more decision-making, it is important to ensure that AI systems are not used in ways that subvert democracy or violate human rights.

We also must address the lack of investment in the public sector’s own innovation and governance capacities. COVID-19 has underscored the need for more dynamic public-sector capabilities. Without robust terms and conditions governing public-private partnerships, for example, companies can easily capture the agenda.

The problem, however, is that the outsourcing of public contracts has increasingly become a barrier to building public-sector capabilities. Governments need to be able to develop AI systems without relying on the private sector for sensitive applications, so that they can maintain control over important products and ensure that ethical standards are upheld. Likewise, they must be able to support information sharing and interoperable protocols and metrics across departments and ministries. All of this will require public investment in government capabilities, following a mission-oriented approach.

Given that so much knowledge and experience is now centred in the private sector, synergies between the public and private sectors are both inevitable and desirable. Mission orientation is about picking the willing – by co-investing with partners that recognise the potential of government-led missions. The key is to equip the state with the ability to manage how AI systems are deployed and used, rather than always playing catch-up. To share the risks and rewards of public investment, policymakers can attach conditions to public funding. They also can, and should, require Big Tech to be more open and transparent.

Our society’s future is at stake. We must not only fix the problems and control the downside risks of AI, but also shape the direction of digital transformation and technological innovation more broadly. At the start of a new year, there is no better time to begin laying the foundation for limitless innovation in the interest of all.

Gabriela Ramos is Assistant Director-General for Social and Human Sciences at UNESCO. Mariana Mazzucato, Founding Director of the UCL Institute for Innovation and Public Purpose, is Chair of the World Health Organization’s Council on the Economics of Health for All.


Copyright: Project Syndicate, 2022.

www.project-syndicate.org

For feedback: contact the Editorial Department at onlinefeedback@gleanerjm.com.