TikTok and the tyrants
As Australia becomes the latest country to ban the Chinese-owned content platform TikTok from government devices, it would be a mistake to limit the public policy debate to traditional state-on-state espionage or major power rivalry.
Such platforms and the advent of the eerily relatable artificial intelligence tool ChatGPT are society-changing technologies that cannot be dismissed as benign or treated as a public good exempt from any regulatory or governance process.
ChatGPT and GPT-4, released in recent months by the US organisation OpenAI, create a sense of intimacy and identification with the user that, as the technology improves, will enable them to affect our thinking to a degree orders of magnitude greater than today’s social media can.
The name ‘chatbots’ hardly does them justice. ‘Synthetic relationships’ is the description used by some concerned technology commentators.
TikTok, meanwhile, is not just another app. It is influenced by an authoritarian political system while, in turn, having enormous influence in shaping public opinion by controlling what users see and hear.
Although chatbots and TikTok are distinct issues, they converge on one thorny question: how should liberal-democratic governments involve themselves in the use of technology so that citizens are protected from information manipulation, and hence from influence over their beliefs, on an unprecedented scale?
The answer is that it’s time for democratic governments to step in more heavily to protect our citizens, institutions and way of life. We cannot leave a handful of tech titans and authoritarian regimes to rule the space unchallenged, as AI has the potential not only to be a source of information, but to establish a monopoly on truth that goes beyond human knowledge and becomes a new form of faith.
To date, Western governments have largely leaned towards a hands-off approach. Born partly out of the ideological struggles of the Cold War, that approach reflected a belief that governments should stay out of the way lest they stifle innovation.
Meanwhile, authoritarian regimes in China, Russia and elsewhere have grasped the value of technology as something they can control and weaponise, including to win information battles as part of broader political warfare. As Russian President Vladimir Putin said of AI in 2017: ‘Whoever becomes the leader in this sphere will become the ruler of the world.’
We would never want governments to control the information sphere as happens under authoritarian systems. But the philosophy of relying on the free market no longer works when the market is distorted by the Chinese Communist Party, the Putin regime and others who have collapsed the public–private distinction and are interfering heavily in what we once hoped might be a free marketplace of ideas.
The combined lesson of chatbots and TikTok is that we face a future in which technology can establish a convincing sense of intimacy, comparable to a companion, friend or mentor, yet be controlled by authoritarian regimes. Although AI and social media are distinct, in both cases the content users receive is ultimately dictated by architecture built by humans beholden to an authoritarian system.
Let’s say the chatbot spends a few weeks establishing a relationship with you. Then it casually drops into a conversation that a particular candidate in a coming election has a policy platform remarkably in tune with your political beliefs. That might feel creepy, wherever the chatbot comes from.
Now let’s say it was built not by an independent outfit such as OpenAI, but by a Beijing- or Moscow-backed start-up. It notices you’ve been reading about Xinjiang and helpfully volunteers that it has reviewed social media posts from Uyghur people and concluded most of them actually feel safe and content.
Or you’ve been looking at news about Ukraine, so the chatbot lets you know that many experts reckon NATO made a mistake in expanding eastwards after 1991, and therefore you can hardly blame Moscow for feeling threatened. So really there are faults on both sides and a middle-ground peace settlement is needed.
The power of chatbots demonstrates that the debate over TikTok is the tip of the iceberg. We are going to face an accelerating proliferation of questions about how we respond to the development of AI-driven technology. The pace of change will be disorientating for many people, to the point that governments find it too difficult to engage voters. That would be a terrible failure.
Some 1,300 experts, including Elon Musk, are sufficiently troubled that they have called for a six-month pause on developing AI systems more powerful than GPT-4. And just last week the Australian Signals Directorate recognised the need to balance the adoption of new technology with governance, issuing a set of ‘ethical principles’ to manage its use of AI.
This is a welcome step, but it leaves open the question of what we are doing to manage Beijing’s or Moscow’s use of AI.
We need an exhaustive political and public debate about how we regulate such technologies. That is why ASPI this week hosted the Sydney Dialogue, a global forum to discuss the benefits and challenges of critical technologies.
One starting point would be to treat applications that shape public opinion in the same way as media providers, imposing restrictions on those that cannot demonstrate independence from government. This would separate the likes of the ABC and the BBC, which are funded by government but retain editorial and content independence, from government-controlled technologies that could exercise malign influence. It would also keep the regulatory approach country-agnostic.
We will need to wade through many moral conundrums and find solutions we can realistically implement. Part of it will be national regulation; part will be international agreements on rules and norms. We don’t want to stifle innovation, nor can we leave our citizens to fend for themselves in what will be, at best, a chaotic and, at worst, a polluted and manipulated information environment.
Government involvement is not interference. A tech future with no rules will itself be ruled by authoritarian regimes.
This article was published by The Strategist.
Justin Bassi is the Executive Director of the Australian Strategic Policy Institute. He previously served in a range of Australian government roles covering national security strategy, foreign policy and international relations.