AI in Asia
It is said that when merchants arrived in the port of Alexandria in antiquity, their manuscripts would be seized and taken to the city’s famous library, where scribes would copy them; the library kept the original and graciously handed the copy back to the merchant.
Something of that mercenary spirit is still alive in the software developers behind the wildly successful new generative artificial intelligence (AI) programs that are rewriting the digital economy.
The functionality of ChatGPT and its competitors is built on collections of text and other data that, some allege, has not been properly paid for. A major lawsuit from authors accusing OpenAI of systematically violating copyright to build the corpus on which programs like ChatGPT are based is only the start of a new round of litigation and regulation that will try to place limits on what is and is not permissible in AI.
But two problems complicate matters. The first is that, even more than for earlier digital innovations like the search engine, there are major first-mover advantages and economies of scale that make AI ripe for natural monopolies. An earlier wave of antitrust suits against software makers like Microsoft, which generally ended in weak settlements, did little to establish general principles for the digital economy about where to draw the line between successful innovation and anti-competitive behaviour.
The second problem is that AI has quite obvious national security applications, and if there are monopoly rents to be had, each government would prefer, for security as well as economic reasons, that its own companies hold the dominant market position. Because of the high fixed costs of entry and increasing returns to scale, as well as the national security nexus, established players in the United States and China have the upper hand.
Given the volatile geopolitical situation and the splintering world economy, the new digital frontier has become an arena of contest between the two largest economies in the world, and that entails major risks for smaller economies, particularly in Asia.
New technologies often make existing rules obsolete, but not the values upon which they are based. The rapid spread of AI into every corner of the global economy demands new international economic rules, but they should be based on principles that have proven themselves, like international openness and transparency.
Given the centrality of the United States and China in the AI economy, there is an important role for Asian economic cooperation to play in driving the adoption of new rules of engagement for AI that address legitimate national security concerns without disadvantaging smaller economies. This explains Singapore’s proactivity in this sphere.
In an article in the latest East Asia Forum Quarterly, Jacob Taylor explores some of the potential features that a comprehensive system of AI governance might have. He argues that regional cooperation is needed to counter governments’ tendency to localise data and to ensure the free, well-regulated flow of data across national borders.
This will help to lower the barriers to entry for new, smaller players in the region. There must also be a concerted effort, through effective financing and regulatory assistance, to build capacity in communities that have been excluded from Asia’s emerging digital economy.
Any attempt to devise new rules to govern AI will, of course, come up against the unwillingness of Washington and Beijing to cede any advantage to their geopolitical rival. The United States’ refusal to come to the table to end the gridlock at the World Trade Organization suggests that it might be wishful thinking to imagine a comprehensive set of regulations for AI that has effective buy-in from all of the most important players. The G7 AI initiative, of which the United States is a part, does not meet this test.
As Taylor argues, ‘[t]here are no easy answers to questions of concentration, localisation and exclusion in AI systems. But coordinated AI governance can create incentives for diverse regional stakeholders to actively steward AI systems while increasing transparency around risks.’
Technology is moving faster than regulators can keep pace, particularly given the borderless nature of most digital transactions.
The scope for AI to reshape economies and drive growth is obvious. But effective, efficient and thoughtful regulation is desperately needed to ensure that the benefits are neither monopolised nor squandered by locking data behind national borders, and that the new technology’s potential to include vastly more people in the process of development is realised.
This article was published by the East Asia Forum.
Based out of the Crawford School of Public Policy within the College of Asia and the Pacific at the Australian National University, the Forum is a joint initiative of the East Asian Bureau of Economic Research (EABER) and the South Asian Bureau of Economic Research (SABER).