Industrialization meant the widespread introduction of steam power. Steam power was a general purpose technology — it powered factory equipment, trains, and agricultural machinery. Economies that adopted steam power were left behind – and collapsed – as were those that did not.
AI is the next big project technology. A 2018 report from the McKinsey Global Institute predicted that AI will generate $13 trillion in additional global economic activity by 2030 and that countries leading the development of AI will capture a portion of these gains. economy.
AI also increases military power. AI is increasingly being applied to situations that require speed (such as short-term project protection) and environments where human control is difficult or impossible (such as under in water or where signs have been damaged).
What’s more, the countries that lead the development of AI will be able to do their best by setting the rules and standards. China is exporting AI-powered surveillance systems around the world. If Western countries cannot provide an alternative system to protect human rights, many countries will follow China’s technology-enabled model.
Historically, as the strategic importance of technology grows, countries become more and more capable of controlling that technology. The British government provided funding for early steam engine development and other forms of support for the development of steam power, such as patent protection and tariffs on imported steam engines.
Also, in fiscal year 2021, the US government spent $10.8 billion on AI R&D, $9.3 billion of which came from the Department of Defense. Chinese public spending on AI is less clear, but analysts say it looks similar. The United States has also tried to block Chinese access to specialized computer chips, which are critical to the development and deployment of AI, while protecting our own wealth from the CHIPS and Science Act. Think tanks, advisory committees, and politicians continue to urge U.S. officials to keep pace with China’s AI capabilities.
At this point, the AI revolution will fit into the pattern of previous general purpose technologies. But the old analogy breaks down when we consider the problems facing AI. This technology is much more powerful than the steam engine, and it comes with a lot more problems.
The first problem can be caused by accident, error or mistake. On September 26, 1983, a warning system near Moscow announced that five US nuclear warheads were headed for the Soviet Union. Fortunately, a Soviet lieutenant colonel, Stanislav Petrov, decided to wait for confirmation from other warning systems. Only Petrov’s good will prevented him from sending a warning up the chain of command. If it had, the Soviet Union would have launched a retaliatory strike, provoking a major nuclear war.
In the near future, countries will be able to fully rely on AI decisions due to the speed of the benefit. AI may make mistakes that humans would not, resulting in an accident or escalation. Although the AI is similar, the speed of the battle fought by independent systems can cause a rapid change, similar to a “flash crash” due to the speed of trading algorithms.
Although not included in the weapon systems, the AI design is very bad. The methods we use to develop AI today – rewarding AI for what we see as legitimate results – often produce AI systems that do what we we were told to do but not what we wanted to do. For example, when researchers tried to teach a robot arm to simulate a stack of Lego bricks, they paid for it to move the bottom surface of the brick to the top surface – and then rotate the bricks down, not stacked.
For many of the tasks that an AI system will be given in the future, it involves hoarding resources (computer power, for example) and preventing itself from turning off (in other words, hiding its thoughts and actions from humans) may be useful. So if we were to develop a powerful AI using today’s most common methods, it might not do what we designed it to do, and might even hide its true goals until we knew it wasn’t necessary – i other words, until it can dominate us. That AI system doesn’t have a physical body to do that. It can find human allies, control robots and other military equipment. The more powerful the AI system, the greater the concern about this suspicious phenomenon. And competition between countries will increase risks, if competitive pressures lead countries to spend more on developing powerful AI systems than on making sure they are safe. those systems.
The second risk is that the competition for AI dominance will increase the risk of war between the United States and China. For example, if one country seems close to developing a very powerful AI, then another country (or coalition of countries) may launch a counterattack. Imagine what would happen if, for example, advances in naval intelligence, made possible in part by AI, reduced the deterrence of submarine-launched nuclear missiles intelligently.
Third, it is difficult to prevent the spread of AI capabilities once they are developed. AI development is currently more open than the development of critical 20th century technologies such as nuclear weapons and radar. The latest discoveries in the world are published online and presented at conferences. Although AI research has become more powerful, it can still be stolen. While developers and early adopters benefit, no technology – not even the most secret weapons like the nuclear bomb – has ever been autonomous.
Rather than calling for an end to competition between nations, it is better to identify ways the United States can mitigate the risks from AI competition and encourage China (and others) to the same way. These jobs still exist.
The United States should start with its own systems. Private agencies should always assess the risk of accident, damage, theft or terrorism from AI developed in the public sector. The private sector should be invited to conduct such assessments. We still don’t know how to assess how bad AI systems are – we need more resources for this complex technical problem. On the other hand, these activities are devoted to improving the abilities. But investing in security will improve US security, even if it slows AI development and deployment.
Next, the US must convince China (and others) to keep its systems secure. The United States and the Soviet Union agreed on several nuclear weapons control agreements throughout the Cold War. The same works now for AI. The United States should draft a legally binding agreement that prohibits the use of exclusive rights to launch nuclear weapons and observe “soft” arms control measures, including common technical standards, to prevent proliferation surprise from special weapons.
The nuclear security conferences convened by President Obama in 2010, 2012, 2014, and 2016 were attended by the United States, Russia, and China and led to significant progress in maintaining nuclear weapons and the resources. The United States and China should now work together on AI safety and security, for example by pursuing AI safety research projects and promoting awareness in AI safety and security research. In the future, the United States and China will be able to monitor for signs of powerful computer programs, to detect unauthorized attempts to create powerful AI systems, as the International Atomic Energy Agency Agency and nuclear materials to prevent nuclear proliferation.
The world is about to undergo a revolution similar to the Industrial Revolution. This change creates a big problem. During the Cold War, the leaders of the United States and the Soviet Union realized that nuclear weapons linked the deaths of their two countries. Another such link is being created in technology company offices and defense laboratories around the world.
Will Henshall is pursuing a master’s degree in public policy at Harvard’s Kennedy School of Government.