Incentives and Uncertainties: How might governments seek to advance AI?

With any emerging technology, questions inevitably arise about whether and how nations will try to leverage the opportunity to gain advantage over others, and what effect their actions will have on the technology’s resulting landscape. This topic is of particular relevance now, as nations increasingly view AI as a way to help drive the next major economic expansion. The fear of missing out on this opportunity is undoubtedly fuelling the development of domestic plans to advance AI capabilities through investment, incentives, and talent development. How world leaders ultimately balance the risks and benefits of developing AI technology will likely have far-reaching effects, although it is highly uncertain whether these effects will be net-positive for society.

It’s no secret that excitement is mounting around the development of AI systems that can perform tasks more quickly and efficiently than their human counterparts, with the field attracting substantial investment from both public and private backers in recent years. A few notable examples of this activity include the US Pentagon’s intention to request a $12 billion budget for AI weapon technology in 2017, China’s 2016 plan to create a $15 billion AI market by 2018, and the doubling of private investment in AI between 2020 and 2021, totalling more than $66 billion.

It’s likely that a large part of this investment is driven by confidence that new methods will continue progressing as they have been since the turn of the century. Modern AI techniques most typically involve machine learning: a practice in which systems learn to make predictions from data, without being explicitly programmed to do so. This approach massively increases the complexity of tasks that AI systems are able to execute, with more capable models often being created simply by adding more data and compute. While the fraction of tasks that present-day ML systems can perform as well as or better than humans is fairly small, we’ve seen tremendous progress in what can be achieved with ML. For instance, earlier this year DeepMind released predicted structures, produced by its AlphaFold AI system, for nearly all catalogued proteins known to science, potentially transforming the field of biology, while tools such as DALL-E and Midjourney have prompted interesting questions about the meaning of art after an AI-generated image won a fine arts competition in the US.
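To make the contrast with conventional programming a little more concrete, here is a minimal illustrative sketch – not drawn from any system mentioned in this piece, and using an arbitrary toy dataset generated with the scikit-learn library – of a model learning a decision rule purely from labelled examples rather than from hand-written instructions:

```python
# A minimal sketch of "learning from data rather than explicit rules".
# The dataset and model choices here are arbitrary and purely illustrative.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Generate a toy labelled dataset: 1,000 examples, 20 numeric features each.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out a test set so we can check how well the learned rule generalises.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No decision rule is hand-written: the model infers one from the training data.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Accuracy on unseen examples is the only check we apply to what was learned.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point is not the particular model: nothing in the code specifies how to separate the two classes. The rule is inferred from data, and, broadly speaking, more data and compute allow more capable models of this kind to be trained.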

Government interest of this kind is not unusual. High-technology industries such as nuclear power, semiconductor chip development, and civil aviation have long been the subject of heavy government industrial policy aimed at advancing national interests. It is common for nations to implement industrial policies focused on so-called ‘infant industries’: newly created or emerging industries consisting of firms that may not yet be efficient or profitable, but are expected to flourish in the long run. It is plausible that certain aspects of the AI industry could fall into this category.

A frequently cited example in discussions of government industrial policy is the aircraft subsidies dispute between the US’s Boeing and its European counterpart, Airbus, which lasted nearly two decades and was only resolved in the summer of 2021.

At the crux of the altercation was a disagreement over the fairness of the financial support provided to Airbus (founded in 1970) by the EU. The support was initially intended to bring Airbus to a point at which it could adequately compete in the industry, but was flagged as an issue in 1988 when the firm started to eat into Boeing’s share of the market with its flagship A320 – a model that has gone on to become one of the best-selling airliners in history. The flare-up was temporarily resolved with a bilateral agreement, although this proved not to be enough, and the dispute swung into full force in 2004 when the number of Airbus aircraft deliveries first surpassed those of its US counterpart.

The tit-for-tat relations that ensued, which consisted of back-and-forth punitive tariffs affecting a range of goods beyond aircraft – from wine and whisky to olives and dairy – are perhaps reflective of the high value that nations place on building up technological advantages over their rivals. Indeed, setting aside the tariffs, it’s clear that Airbus has brought substantial economic benefits to the countries in which it is located. An Oxford Economics report (admittedly sponsored by Airbus) found that Airbus directly generated a gross value-added contribution to UK GDP of £2.0 billion in 2019 – a figure that’s hard to ignore. What’s more, figures like these often fail to capture the indirect benefits of hosting an industry giant, understating the true worth of the business. Airbus’ purchases of goods and services stimulate activity throughout the EU economy, supporting hundreds of thousands of technologically sophisticated jobs across the continent. The importance of Airbus to the EU also transcends the purely economic aspects of its business, increasing the EU’s political power.

With AI, the upsides of being an international leader could be even more marked. Nick Bostrom, an Oxford-based philosopher known for his work on existential risks, proposed in his book Superintelligence that the first group to develop an adequately advanced AI system might gain what he termed a ‘decisive strategic advantage’: a level of technological and other advantages sufficient for complete global domination. He went further, describing a scenario in which this group might use its advantage to suppress competitors and form a single global decision-making agency: a ‘singleton’. A singleton could be good or bad: it could support civilisation, or it could obliterate it on coming to power.

If leaders buy into this hypothesis, they might have strong incentives to pursue heavy industrial policy in order to get there first. Some commentators have expressed the concern that this kind of arms race – a winner-takes-all scenario in which each team is incentivised to finish first – might lead development teams to skimp on important safety precautions. Others have suggested that governments might want to avoid imposing regulations on the development of advanced AI, and should instead direct funding towards projects that meet an appropriate safety standard, so as to avoid creating a speed advantage for any project willing to skirt the rules.

However, even if a group does succeed in attaining a decisive strategic advantage, it does not necessarily follow that the group will aim to form a singleton. Bostrom shows the plausibility of this by directing the reader’s attention to the US’s decision not to pursue a nuclear monopoly during the 1945-1949 period in which it was the sole nuclear power. In this period, the US could feasibly have used its technological advantage to construct a singleton – for instance, by building up an extensive nuclear arsenal and then threatening nuclear strikes to destroy the industrial capacity of any nation with a nascent nuclear weapons program. But this isn’t what it chose to do.

There are numerous reasons that an actor (or group of actors) might be deterred from devoting their efforts to forming a singleton, among them cost, internal coordination concerns, and a strong aversion to risk. Moreover, there is additional difficulty in predicting how nations will act with regard to AI. Like nuclear technology, AI is a dual-use technology, and superiority in the field is not synonymous with security: there are substantial risks in the race to the top. But there is also uncertainty about how things could differ if, rather than a human leader or group, it were a superintelligent artificial agent coming into possession of a decisive strategic advantage. For example, the problem of internal coordination is likely to be avoided in an AI takeover. Indeed, researchers at the Machine Intelligence Research Institute have even suggested that the potentially catastrophic destabilising effect of advanced AI development might give leaders reason to be more cooperative than usual.

These considerations make it difficult to assess how countries might act in the coming years. Because of the high degree of unpredictability that surrounds AI research and development, it will probably be tricky for governments to properly weigh up the risks and benefits of funding the accelerated advancement of the field. It must be said that, at present, governments and intelligence agencies do not seem to be paying much attention to the prospect of a meteoric intelligence explosion like the one that could lead to the formation of a singleton. The ‘AI strategies’ and government white papers that nations have published so far have largely focused on improving technological capabilities in individual sectors and industries rather than on the possibility of global takeover – although, admittedly, if states were indeed looking hard into this possibility, it would be hard to imagine them being very public about it. In addition, advanced AI technology is presumably much harder to detect than, for example, nuclear technology (which requires materials subject to stringent access controls), perhaps increasing the odds that a group might try, and succeed, in keeping its development program quiet.

One thing almost certain to affect the balance of power between actors is the speed at which AI ‘takes off’. The small but growing community of researchers focused on the safe development of artificial intelligence has broadly lumped AI takeoff speeds into two categories: ‘soft’ and ‘hard’. The term ‘soft takeoff’ generally refers to a situation in which an advanced AI system self-improves over a period of years or decades, whereas a ‘hard takeoff’ refers to the same kind of expansion happening in a matter of minutes, days, or months at most. The latter is perhaps more likely to result in the formation of a singleton, since it involves the AI system rapidly ascending in power without human control.

As mentioned above, a fear that a small but significant number of researchers hold is that future AI systems might have advanced planning capabilities and strategic awareness, allowing them to gain power over present-day civilisation. This may seem too far-fetched or abstract to properly imagine, so to paint a clearer picture of how it could play out, it is useful to describe a few concrete ways in which an advanced AI, or collection of AIs, could take power.

Paul Christiano, a research associate at the Future of Humanity Institute in Oxford known for his work on aligning AI systems with human values, outlined a possible situation in which ML systems become able to handle a range of tasks currently managed by humans – from running factories, manufacturing processes, and supply chains to designing and making decisions about deploying military drones. In this scenario, ML systems are designing and testing new ML systems, and the financing of this process is also carried out by ML systems. At first, this might seem like an excellent time in history: these automated systems would be far more efficient than their predecessors and could, in theory, improve the health and wealth of human members of society.

However, if the world becomes increasingly complicated in this way, it could become correspondingly difficult for humans to evaluate what is going on. Since we’d be unlikely to understand what these automated systems were doing and why, we’d probably be forced to fall back on judging by results alone: are our cities safe? Are our investments going up? Is our society peaceful?

If everything is taken out of our hands, we won’t have any idea whether the ML systems we created to run our society are actually trying to predict and avoid grave failures – or whether they’re simply being trained to make everything seem OK. The sensors and watchdogs we’d want to rely on would similarly be run by ML systems, so it may become impossible to decipher the ‘intentions’ of faulty systems. In the extreme case, humans who try to intervene with an ML system that is convinced everything is perfectly fine may even be stopped or killed. Christiano admits that his story is rough and imperfect, but his ideas, and their plausibility, remain worrying.

What’s more, as mentioned above, this type of commentary appears to be occurring mainly on the fringes of the discussion on AI, and at present governments are ostensibly focusing on the more ‘narrow’ risks that AI could present. Unless governments and intelligence agencies are made aware of, and take seriously, the huge dangers that advanced AI could pose to civilisation, there is a risk that they will half-blindly implement heavy industrial policy in favour of its development.
