Anthony Clayton | The hyperscalers, and why we should worry about them – Part 2
The second part of a two-part series on the way that the hyperscalers now control our digital world.
The hyperscalers that control the Internet have become so rich and powerful because public Internet spaces have a natural tendency towards monopoly: the value of a network rises as more people use it, so users gravitate to the largest platforms. Only major governments can change the rules of the world that technology has made, and the only way they can do so is by introducing new regulations that require technology firms to comply with standards and prevent them from destroying or buying up any potential competition.
In the USA, Section 230 of the Communications Decency Act gave social media platforms immunity from the legal responsibilities of publishers, which gives them an overwhelming competitive advantage over traditional media. One option open to US regulators, therefore, is simply to repeal Section 230. However, one reason why it might now be harder for governments to remove this exemption is that the hyperscalers now host some of their sensitive intelligence data. For example, the CIA has been using Amazon Web Services to back up its data since 2014, while the UK’s spy agencies, GCHQ, MI5, and MI6, are also moving some of their material to Amazon Web Services. There is a risk that this growing dependency might, in future, inhibit governments from regulating these companies as firmly as they should.
The same regulatory weakness has facilitated the growth of even more serious problems, including the migration of organised crime and terrorism into unregulated cyber-space; the use of social media for undermining democracy and encouraging violence; the use of the Internet and social media by authoritarian governments to oppress ethnic minorities and dissenters; and the proliferation of ways to launder the proceeds of crime, defraud people, and distribute narcotics, weapons, and stolen goods. For example, in June 2020, Facebook removed 49 groups that were trafficking historical artefacts looted from Iraq and Syria, but this was less than 30 per cent of the groups in the Middle East and North Africa set up for the sole purpose of looting and trafficking, the largest of which was estimated to have 437,000 members. This new form of transnational organised crime is largely beyond the reach of normal policing as it is anonymous, fluid, adaptable, and can regenerate faster than it can be shut down.
UNDERSTAND THE HARM
The recent revelations from Frances Haugen, a former senior Facebook staff member, that Facebook fully understands the harm it causes have made it much harder for the company to evade responsibility. Haugen testified that Facebook fanned ethnic violence, which led to atrocities in Ethiopia and Nigeria, and that the platform was used by the Myanmar military in its genocidal campaign of mass murder and rape to drive out the Rohingya. Similarly, former Facebook data scientist Sophie Zhang said that she had found “multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry” and included Azerbaijan, Honduras, India, Ukraine, Spain, Brazil, Bolivia, and Ecuador in the list of countries responsible for abuses such as manipulative fake accounts that support authoritarian leaders. The real problem, Haugen explained, is that every time Facebook’s executives see a conflict of interest between profits and people, they choose profits. The Rohingya refugees are now suing Facebook for US$150 billion for failing to take down anti-Rohingya hate speech, but there is no precedent for foreign plaintiffs succeeding in lawsuits against social media companies protected by Section 230, so there is currently little protection against the abuses enabled by companies like Facebook.
It is also clear that Facebook cannot act as an effective self-regulator. The company said that it had spent over two years preparing for the last US election, mobilising dozens of teams and tens of thousands of employees to focus on security, yet all of this failed to detect or block the conspiracies that led to the storming of the Capitol on January 6, 2021.
Governments are starting to take these threats seriously. Over the last five years, there have been competition lawsuits, congressional hearings, and investigations by nearly 40 US states, the US House of Representatives, the Department of Justice, the European Commission, and competition authorities in the EU and UK. The US Federal Trade Commission fined Facebook US$5 billion in 2019 for the Cambridge Analytica scandal, and the European Commission has imposed nearly US$10 billion in fines on Google in the last four years. However, the technology firms are so profitable that even these fines have had very little effect on their behaviour; Facebook alone had US$86 billion in revenue in 2020. Some members of Congress are, therefore, talking about the need for antitrust legislation to put an end to the anti-competitive behaviour of the technology firms, which have a bad habit of buying up smaller rivals before they can grow too strong.
The problem, however, is that regulating the hyperscalers is an extremely complicated and delicate task. No democracy wants to lose the extraordinary benefits that the Internet has created for society, and every government knows that a digital society is the way forward and that a digital economy will be an essential basis for future competitiveness. The challenge, then, is to retain and encourage these gains while restraining the range of harms that have come from the largely unregulated and unchecked exercise of power by the same firms that have enabled these advances. It will also be necessary to build a consensus on the preferred solution, muster sufficient political support to drive through the required legislation, and find the courage to endure the inevitable public criticism.
There are different options. The US may choose between breaking up the technology giants with antitrust law and reforming Section 230 by removing the protection from companies that serve illegal content to their users, probably with a specific focus on the algorithmic amplification that personalises feeds. Removing that protection would mean that Facebook itself could be sued if its algorithm promotes illegal content, so Facebook would have to move to a non-algorithmic feed.

The UK Parliament has agreed that the era of self-regulation by technology firms is over, that the firms are clearly responsible for services they have designed and profit from, and that they will be held accountable for their decisions. The UK’s proposed Online Safety Bill will, therefore, give the technology firms a duty of care to their users, with criminal liability and prosecution for social media executives who allow illegal content to be hosted on their platforms and fail to take it down as soon as it is discovered. Germany now fines platforms up to €50 million if they do not delete posts containing racist, defamatory, or otherwise illegal speech within 24 hours. Australia will table legislation this month that will oblige social media companies to collect identifying details for all users and allow courts to access these identities in defamation cases (which will prevent anonymous trolling), and make social media companies legally liable for hosting defamatory posts. The European Court has just upheld a US$2.8-billion antitrust fine against Google, and the EU will soon be able to levy much more serious penalties, as its Digital Markets Act and Digital Services Act include powers to impose fines of up to 20 per cent of total worldwide turnover, with additional restrictions that could hurt even Google or Facebook.
Jamaica’s Data Protection Act, which is modelled on the European Union’s General Data Protection Regulation and will apply after 2022 to any entity that offers service to individuals in Jamaica, is another step in the direction of greater regulation of the digital world.
Politicians probably cannot rely on much public support for the necessary reforms, however, as most social media users have little idea of how the technology works or how much the platform knows about every aspect of their lives. They do know, however, that they are very dependent on the platforms and services that run on the Internet and would be very vocal if they thought that their access was in some way threatened. Even those users who do realise that they may have compromised their privacy and given away intimate secrets still continue to use social media, indicating the extent to which it has become absolutely essential to modern life.
The underlying problem is that most users are happy to allow the technology companies access to every aspect of their lives, even though they are rarely aware of how much personal information they are disclosing. They will let a technology company track their movements, see their contacts, and monitor their online and offline activity, including their banking, work, social life, shopping, and sexual preferences, all in exchange for a ‘free’ app or access to the public space. This allows a small group of technology companies to control most aspects of the networked society, which puts them in the most powerful and privileged position in modern civilisation. And this, inexorably, has brought with it extraordinary potential for abuses of power.
The hyperscalers promised the world a new era of freedom but inadvertently opened the gates to abuse and malice on an unprecedented scale, giving crime, fundamentalism, and terrorism the opportunity to metastasise into new and more virulent forms. Governments are now moving, belatedly, to bring the digital frontier under the rule of law.
Anthony Clayton is professor of Caribbean Sustainable Development. Send feedback to email@example.com.