TOKYO -- Drones swarming like bees to attack an aircraft carrier, or taking flight after a missile strike to show soldiers the quickest way to repair a damaged runway, are no longer the stuff of science fiction. They are in the plans of the U.S., China and Russia as they race to develop weapons controlled by artificial intelligence.
But with the pace of progress in military AI systems now outstripping discussion of international rules to govern them, these transformative weapons are likely to make their way onto the battlefield with little oversight.
Unmanned systems are crucial to China, where the decadeslong one-child policy has left the country with many only children whose parents have no desire to send them into combat. The People's Liberation Army is working on turning old tanks due for decommissioning into remote-controlled robots, and with AI, these would have no need for human involvement at all.
This technology also appeals to Japan's Self-Defense Forces, which are struggling to recruit from a graying population.
But the more sophisticated AI-controlled weapons are, the greater the fallout could be. At a United Nations meeting on the topic this past March, Japan called for international restrictions on lethal autonomous weapons systems, which are controlled completely by AI and can use deadly force without human intervention.
If designed or used without sufficient care, autonomous weapons risk becoming so-called killer robots with little regard for the humanitarian principles on which the laws of war are based. Militaries or governments could place too much confidence in the capabilities of AI systems, making them more willing to start conflicts or prone to overlook opportunities to end them.
The potential perils of AI weapons have prompted protests by civic groups. Companies working with the military on AI projects, such as Google, have faced pushback from employees.
Former Google CEO Eric Schmidt has predicted that humans will be capable of controlling runaway killer robots. And some in the U.S. military worry that unless they make human involvement a condition for using AI weapons, they may have trouble securing cooperation from engineers in the private sector, where most AI research is now taking place.
But fears about killer robots remain unabated as Washington, Beijing and Moscow compete for supremacy in the field.
If, for instance, the U.S. mandated human involvement in its AI weapons out of concern for public opinion, China could gain a decisive edge in decision-making speed by opting for fully autonomous systems. This possibility could tempt Washington to leave all the decisions to AI in its own systems as well.
The use of AI in warfare is often described as a major breakthrough on a par with the development of nuclear weapons. But some observers -- including Henry Kissinger, who served as U.S. secretary of state in the 1970s during the Cold War between the U.S. and the Soviet Union -- worry that AI weapons may prove even harder to control.
For all their destructive force, nuclear arms cannot improve their own capabilities. Some fear that autonomous systems could eventually modify and enhance themselves without outside input, or build copies of themselves with the same capabilities.
And in contrast to nuclear missiles, which require large, conspicuous facilities to produce on a large scale, AI weapons can be developed by small teams that are far easier to conceal.
The natural human desire for greater safety and fear of letting the enemy get ahead drive the development of new weapons. This gave the world nuclear weapons, and it is poised to usher in new technology that could prove even more problematic.
Atomic bombs have been used only twice in the 74 years since their creation because those two occasions showed the world the horror of nuclear weapons. A similar shared understanding of the threat posed by AI weapons, and the difficulty of controlling them, would represent a first step toward keeping them in check. This technology will test the wisdom and foresight of humanity like never before.