The Inside Opinion

AI-augmented disinformation is NATO's new battlefield

Published: Wednesday | July 3, 2024 | 7:59 AM | Ylli Bajraktari for Project Syndicate
Ylli Bajraktari, a former chief of staff to the US National Security Adviser and a former executive director of the US National Security Commission on Artificial Intelligence, is CEO of the Special Competitive Studies Project.

WASHINGTON, DC: NATO’s July summit in Washington marks the 75th anniversary of the alliance’s establishment, and it comes at a critical juncture. As threats to global stability evolve beyond conventional military domains, NATO must confront the barrage of disinformation undermining its unity and values. Specifically, member countries must prevent hostile authoritarian regimes from manipulating public opinion by leveraging technology to wage “cognitive warfare.”

Fittingly, the upcoming summit is expected to focus on the war in Ukraine and the need for NATO’s collective-defence framework to adapt to the realities of today’s information ecosystem. Russia’s brutal war of aggression has revealed the scale of the threat posed by cognitive warfare, with the Kremlin orchestrating a massive social-media campaign aimed at spreading false narratives, fuelling anti-Western and anti-democratic sentiment, and undermining NATO’s greatest strength: its unity.

To be sure, information warfare is not new. During the Cold War, NATO recognised and countered the Soviet Union’s efforts to use propaganda and disinformation to weaken Western democracies. Since then, however, the digital revolution and subsequent rise of artificial intelligence have compounded the problem, enabling bad actors to produce and disseminate deepfakes and other forms of AI-augmented content at unprecedented speed and scale.

While these powerful new tools can yield significant economic benefits, they can also be dangerous weapons. Hostile powers like Russia are already using disinformation against democracies to influence domestic public debate, drive polarisation, erode trust in institutions, and weaken democracies’ ability to address shared challenges. This super election year – with countries home to half the world’s population holding elections – creates a unique opportunity for NATO’s adversaries to undermine democratic processes and fuel political instability.

NATO countries must confront the threat of AI-augmented disinformation head-on. This requires moving beyond the current reactive approach, which focuses on debunking falsehoods. The alliance must develop a comprehensive collective-defence doctrine that treats the information ecosystem as a key front in the battle to protect democratic societies.

Several key changes are needed. First, NATO must develop the capability to monitor and analyse disinformation in real time. Investing in open-source intelligence (OSINT) tools and collaborating with tech companies is crucial to developing the necessary technical expertise to identify and counter malign influence campaigns. Specifically, NATO must invest in content authenticity and transparency tools – such as large language models (LLMs), AI classifiers, and natural language processing for sentiment analysis – that can identify AI-generated or altered content.

Second, countering disinformation effectively requires fast, agile, and far-reaching strategic communications. To this end, NATO must proactively promote its interests and mission, highlight the failures of authoritarian regimes, and offer its own positive narrative.

But to do this effectively, NATO must also take the battle to its adversaries’ information ecosystems. By actively highlighting the malign activities of authoritarian regimes on their own digital platforms, NATO could undermine autocrats’ narratives and expose their tactics. At the same time, amplifying independent voices in these domains could create a powerful multiplier effect, fostering greater resilience against propaganda and disinformation.

Third, because disinformation campaigns transcend national borders, countering them will require NATO to establish collaborative relationships with governments, private companies, and civil-society organisations. These partnerships should focus on developing shared standards, early-warning systems, coordinated responses to massive disinformation campaigns, and mechanisms to trace and mitigate malign activities across multiple sovereign information environments. As the concerted effort to combat Russian disinformation about Ukraine demonstrates, continuous collaboration is vital.

Lastly, the best defence against disinformation is to foster informed, critical-thinking populations. By supporting programmes that promote media and digital literacy, like those already in place in Finland, NATO could help build societal resilience against propaganda and voter manipulation.

To implement this strategy, NATO will also require new organisational structures. For starters, the alliance should establish a disinformation unit responsible for coordinating intelligence, spearheading counter-messaging efforts, and building strategic partnerships. Such a unit could leverage the Five Eyes intelligence alliance (the United States, the United Kingdom, Australia, New Zealand, and Canada), as well as the work of Europol – the European Union’s law-enforcement agency – to bolster and expand information-sharing networks.

Given the threat posed by the emerging “axis of disruptors” – comprising China, Russia, Iran, and North Korea – NATO must combine its military prowess with equally sophisticated mechanisms to protect its members’ information systems against cognitive warfare. Member countries should use the upcoming Washington summit to make this approach a top priority. Confronting disinformation is not just about protecting the integrity of public debate; it is also about defending the very foundations of freedom and security. It is an opportunity we cannot afford to miss, because what is at stake is a battle we cannot afford to lose.


Copyright: Project Syndicate, 2024.

www.project-syndicate.org

For feedback: contact the Editorial Department at onlinefeedback@gleanerjm.com.