Indo-Pacific

Killer robots need ethical rules, US and Chinese analysts agree

Prospect of robots run amok raises thorny questions of accountability

Sailors move an X-47B unmanned combat air system demonstrator onto an aircraft elevator aboard the USS George H.W. Bush aircraft carrier in the Atlantic Ocean. (Photo courtesy of the U.S. Navy)

NEW YORK -- Autonomous weapons systems, or "killer robots," have no fear, no anger, and no guilt or hesitation about pulling the trigger. They do as their programming tells them to.

But say a robot is tasked with hunting Scud missiles, choosing the targets it sees against a preloaded "library" of approved types. Who, or what, would keep it from attacking a Scud parked next to a school?

How much is the machine allowed to do on its own, and who among its human overlords takes responsibility for unintended engagements? These questions of "machine permissibility" and "machine accountability" are just some of the issues countries will wrestle with as lethal autonomous weapons are increasingly deployed in theaters of war.
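To make those two questions concrete, here is a minimal, purely illustrative sketch of what a permissibility gate and an accountability trail might look like in code. Every name and threshold in it -- the target library, the confidence floor, the standoff distance from protected sites -- is a hypothetical assumption, not a description of any fielded system.

```python
from dataclasses import dataclass

# Purely illustrative: a toy engagement-permission gate, not any
# fielded targeting system. All names and thresholds are assumptions.

@dataclass
class Detection:
    target_type: str                 # classifier label, e.g. "scud_launcher"
    confidence: float                # classifier confidence, 0.0 to 1.0
    meters_to_protected_site: float  # distance to nearest school, hospital, etc.

TARGET_LIBRARY = {"scud_launcher", "mobile_radar"}  # approved target classes
MIN_CONFIDENCE = 0.95                               # assumed confidence floor
PROTECTED_STANDOFF_M = 500.0                        # assumed keep-out distance

def may_engage(d: Detection) -> bool:
    """Machine permissibility: every condition must hold before the
    system may even request an engagement."""
    return (
        d.target_type in TARGET_LIBRARY
        and d.confidence >= MIN_CONFIDENCE
        and d.meters_to_protected_site >= PROTECTED_STANDOFF_M
    )

def request_engagement(d: Detection, commander: str) -> bool:
    """Machine accountability: log the decision and its inputs so a
    responsible human chain of command can answer for it afterward."""
    decision = may_engage(d)
    print(f"audit: commander={commander} detection={d} engage={decision}")
    return decision

# A Scud spotted 150 meters from a school is refused, whatever the confidence.
request_engagement(Detection("scud_launcher", 0.99, 150.0), "unit-cdr-01")
```

Real systems would be vastly more complicated, but the shape of the problem is the same: the permissibility rules must be written down in advance, and the accountability trail must lead back to people.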

Autonomous weapons systems are expected to play a major role in future combat, since all three players in the great-power competition have incentives to substitute machines for human troops. The U.S. has just endured a grueling two-decade war on terrorism in Iraq and Afghanistan, where many lives were lost. Russia has a declining population and will need robots to sustain its forces.

China, because of its past one-child policy, has a military made up almost entirely of soldiers who are their families' only children.

Two MH-60S Sea Hawk helicopters and an MQ-8B Fire Scout unmanned aerial vehicle conduct hover checks at the Mid-Atlantic Regional Spaceport's Unmanned Aircraft Systems Airfield at NASA Goddard’s Wallops Flight Facility in Virginia. (Photo courtesy of the U.S. Navy)

From U.S. operations in Afghanistan to the recent Israel-Hamas conflict, unmanned vehicles have been in heavy use. But the impact of drones in the autumn 2020 clash between Azerbaijan and Armenia in the Nagorno-Karabakh region caught the eye of many military experts around the world.

"The Azerbaijanis used drones to incredible new effect," Peter Singer, a strategist at the New America think tank and author of "Burn-In: A Novel of the Real Robotic Revolution," told Nikkei Asia. "They took out over 40% of Armenia's tanks and armored fighting vehicles, and over 90% of their artillery and missiles, utilizing a mix of airstrikes and drones."

"Fifteen years ago, the question in war was, 'Will there be a role for drones?'" he said.

"Coming out of that war, no one's questioning whether drones are going to be used in war," Singer said. "They're questioning, 'Do tanks still have a role?'"

The drones used by Azerbaijan included equipment purchased from Turkey and Israel that helped identify, target and attack Armenian defensive positions and armored units, according to a January report by the Congressional Research Service.

In an April paper titled "Principles for the Combat Employment of Weapon Systems with Autonomous Functionalities," former U.S. Deputy Secretary of Defense Robert Work wrote of the need to "develop, debate, and agree upon some commonly accepted principles for the employment of weapon systems with autonomous functionalities in armed conflict."

Work opposed mandating human oversight of every step of the kill chain. For example, if a munition dispenser released 40 skeets over a group of targets, the time between the release of the skeets and their attacks would be measured in seconds.

"Requiring a human-in-the-loop would therefore require 40 human operators to monitor the action of one skeet and permit or abort its attack -- a prohibitive personnel requirement," Work wrote.

An unmanned aerial vehicle of U.S. Customs and Border Protection stands by, ready for patrol along the southern border. (Photo courtesy of the CBP)

Still, Work's paper suggests that there should be "a responsible chain of human command and control" to guide the use of weapon systems with autonomous functionalities, and that it should be clear that human responsibility for decisions over the use of force can never be transferred to machines.

But while targeting an enemy destroyer or submarine at sea is a relatively clear-cut decision for a machine, governing autonomous weapons in urban warfare will be far more complex.

In a 2019 podcast with the Modern War Institute at West Point, Work gave the example of an autonomous weapons system confronted with four people in a city.

"Two people are holding a rifle, but two people are not. Am I going to kill all four, or am I just going to kill the two with the rifles?" The army fights among the people, so the burden on U.S. Army autonomous systems will be much higher than going out after a ship, he said.

"It's much, much more difficult in things like compartmented terrain in megacities."

Similar debates have also taken place in China, where scholars often express concerns over frequent American use of drones.

Defense analysts Chen Dongheng and Li Xin'an argued in an article for the official People's Liberation Army Daily last year that intelligent combat systems should adhere to the basic principle of "people in the loop," and prioritize human judgment, operation and control.

In a 2019 PLA Daily article, analysts Zhao Xiangang and Liu Xiaoxing argued that unmanned combat could incentivize big military powers to use force, further dehumanize the enemy and gamify the act of killing, and lead to high collateral damage.

A Vanilla ultraendurance land-launched unmanned aerial vehicle operates in the Pacific Ocean on April 24. (Photo courtesy of the U.S. Navy)

Zhao and Liu argued that even a highly intelligent system would be hard-pressed to discern intentions on the battlefield -- when faced with enemies who have been injured or disarmed or are using civilians as human shields, for instance. Surrendering such judgments to machines will seriously challenge the civilian-combatant distinction in international humanitarian law, as well as the rule that members of armed forces who have laid down their weapons shall not be made the object of attack, they wrote.

The Washington-based think tank Brookings Institution and the Beijing-based Tsinghua University, together with the Berggruen Institute in Los Angeles and the Minderoo Foundation in Perth, Australia, have been conducting track two (unofficial) dialogues on the issue over the past two years. 

In a joint report, Tsinghua's Fu Ying, a former Chinese vice minister of foreign affairs, wrote that AI has limitations, including the inability to interpret intuition, emotion, responsibility and value. In the human-machine collaborative process, the machine's deficiencies could lead to escalations of international crises, she said.

"China is ready to work with the U.S. and the rest of the world on the governance of AI," she wrote.

Brookings President John Allen wrote: "There is certainly more agreement than disagreement among national security technology experts in the U.S. and China over the risks and challenges posed by AI."

New America's Singer said these are "new legal ethical questions that humans have never really dealt with before" and that they are not limited to future warfare. "They aren't just taking place on an imaginary future battlefield -- they're taking place on our highways, right now," he said.

In one scenario posed by Singer, your self-driving car -- without you inside -- gets into a wreck and someone dies.

"Are you responsible? Is the company responsible that made the car?" he asks.

The Pentagon does have guidance on autonomy in weapon systems: Directive No. 3000.09, which was issued in 2012 and extended in 2017. Commanders and operators should exercise "appropriate levels of human judgment over the use of force," it states.

Singer called for more clarity in a 2016 opinion piece he co-wrote: "The words 'appropriate' and 'judgment' are pretty loaded terms here, given that people might reasonably debate their meanings in all sorts of contexts. In short, if something is going to be banned or not, our definitions of it need to be much more clear and accessible."
