AI develops in global regulatory black hole

As US retreats from economic leadership, questions arise as to who will supervise tech’s frontiers

This article was first published in Nikkei Asian Review on 23 February 2018.

President Xi Jinping’s strategy to transform China into the world’s leading artificial intelligence economy by 2030 has opened up a technological arms race with the U.S.

Both nations are now ploughing billions into research, while their largest tech companies strain to launch new AI-powered products. The stakes are not merely the economic bounty from this major technological advance, but also a decisive say in how AI will be regulated — and by whom.

President Donald Trump’s isolationist instincts mean the U.S., a traditional cheerleader for global cooperation, is making almost no effort to lead international efforts to think through AI’s future. Instead, AI’s global standards increasingly look set to be written behind closed doors in Silicon Valley and Beijing, leaving everyone else outside in the cold.

This regulatory black hole will be clear to see next month as leaders of the world’s largest economies gather for the Group of 20 meeting in Argentina. Amongst other things, the grouping will discuss plans to regulate cryptocurrencies. But so far neither the G-20 nor any other major international body has grappled with the problems raised by AI, meaning the algorithms that underpin everything from Apple’s voice assistant Siri to Amazon’s online shopping recommendations.

That AI can bring benefits is not in doubt. Automating routine human work, from office tasks to driving cars, will make companies more efficient, boosting productivity and growth in turn. Many of these tools are already being widely deployed around Asia, from Singapore’s plans to develop AI-powered border checks to Malaysia’s news last month that Kuala Lumpur will soon introduce an AI “smart city” system developed by China’s Alibaba.

Yet AI has also been a cause for alarm, especially given the threat automation poses to jobs. Others worry about privacy, noting that AI systems become more effective with access to ever larger data sets, notably those containing sensitive information about customers or citizens. “If you’re not concerned about AI safety, you should be,” as Tesla co-founder Elon Musk put it last year, while calling for sweeping rules to govern the sector.

Much of the alarm raised by Musk and others focuses on theoretical risks from advanced future forms of AI, for instance the use of fully autonomous “killer robots” for military purposes. Yet there are also more prosaic and immediate concerns where international regulation is needed, from the standards that govern AI in self-driving cars and financial markets, to data protection for its use in international medical devices and research.

This is where China comes in. So far, America’s tech giants maintain an edge. The likes of Google and Uber are investing heavily. The U.S. still has far more AI startups than China. In 2016 the consulting group McKinsey found that American companies had gobbled up roughly two thirds of global investment into AI. China took just 16%.

Even so, China’s internet giants are catching up quickly, as Alibaba, Baidu and others race to hire data scientists and build research labs. China’s odds of success are helped by the size of its population, which produces vast quantities of useful data, paired with relatively lax privacy and data rules. Last November, Eric Schmidt, chairman of Google’s parent Alphabet, warned that America risked falling behind.

As the world’s two largest economies do battle, national governments are struggling to work out the rules they need to keep AI in check. Yet there are plenty of areas where even better national laws will not be enough.

Take the example of finance, where AI is already being used to assess creditworthiness for loans and process insurance claims, while also managing trading strategies for hedge funds and banks. Last year, the Financial Stability Board, a group that advises the G-20, warned that automation risked making global financial markets unstable by creating “unexpected forms of interconnectedness” between those investing and trading with AI tools.

The FSB also warned that new “systemically important” financial companies would emerge, potentially including giants like Alibaba, which offer banking and insurance services. The spread of these tech companies into finance brings broader worries, given they have typically been regulated less closely than banks. Alibaba operates one of China’s dominant payment systems, for instance, and can use AI to comb through users’ data in areas like shopping habits and internet usage to assess creditworthiness. As it and other Chinese internet companies expand internationally, for instance into South East Asia, they will take these techniques with them.

There are other more alarming examples where global regulation may be needed, most obviously in defense, where concerns about fully autonomous weapons have brought calls for complete bans. But AI will spill across boundaries in much more mundane ways too, most obviously as other tools developed by American and Chinese companies are deployed elsewhere, from ride-hailing apps and e-commerce to public services, such as Kuala Lumpur’s smart city plan.

The lack of a global response to all this is worrying, but it also points to a broader problem, one especially relevant in Asia: the U.S. can no longer be relied upon to lead global efforts to govern the development of new technologies, as it did during the early days of the internet in the 1990s.

For all its own global aspirations, China shows even less interest in such a world role. Instead, it wants others to adopt Chinese standards, as is increasingly the case in other sectors where China is a world leader, such as the manufacturing of ultra-high voltage power equipment. In the case of AI, there is a clear risk of a race to the bottom, as China’s aim of overtaking the U.S. takes precedence over safeguards in areas like privacy and data protection.

What might be done about all this is less clear. Some academics propose a global AI regulatory agency, similar to those that govern trade or intellectual property. Others call for specific prohibitions, including bans on AI weapons, or moves to increase transparency about the makeup of algorithms and monitoring of how they work once deployed. Either way, most countries, including in Asia, are set to become users of AI tools developed in China and the U.S. Whoever wins that race, everyone else has an interest in having global rules that even the robots must obey.