SAN FRANCISCO/TOKYO -- Artificial intelligence can be a friend or foe to humans depending on how it is used. The question is how we control it.
In response to Pyongyang's continued provocations, including ballistic missile and nuclear tests, South Korea has deployed machine gun-wielding robots in the Demilitarized Zone, a roughly 900 sq. km buffer separating North and South Korea. These robot sentries can aim at targets within a radius of about 4 km and operate their guns in an emergency.
Lee Beom-hee, a professor at Seoul National University, said deploying AI and robots at the front will reduce the risk of human soldiers getting hurt.
However, a 28-year-old woman in Seoul said she is worried about whether AI has the proper decision-making capability to handle weapons. If robots develop the ability to launch attacks on their own, can they distinguish between friends and foes, or civilians and combatants?
With AI weapons possibly posing a threat, the question is how we control them, said Heigo Sato, a professor at Takushoku University in Tokyo.
AI can already harm us.
Alexander Reben, a California-based artist and roboticist, asked me to put on a headset equipped with a microphone -- one of his original creations -- and introduce myself in Japanese. However, I was unable to speak properly. Feeling sick, I took the headset off.
The headset is a device that interferes with the speed at which the wearer speaks. A drawling voice, specially tweaked by software and emitted from the headset, kept me from speaking normally. In the end, I could not get a word out.
Through this experience, I came to believe that it might be easy for advanced AI to manipulate humans by taking control of our brain activity.
While working at a tech company for a few years after studying robotics at the Massachusetts Institute of Technology, Reben began thinking about how AI and other advanced technologies could pose a risk to humans. That is why he is experimenting with robots that can decide on their own to injure humans.
DeepMind, Google's AI unit, in 2016 unveiled WaveNet, an AI system that learns from raw audio files and then produces digital sound waves resembling the human voice. The technology sparked controversy in the U.S. over concerns that it could be used to commit fraud -- for example, to scam people into making bank transfers.
The University of Oxford and others in 2015 cited AI as one of 12 risks that threaten human civilization, alongside extreme climate change, nuclear war and a global pandemic. The fear that AI and robots could one day run amok is keenly felt.
However, Yoshiharu Habu, a professional player of shogi, or Japanese chess, is optimistic. He believes AI has the potential to eliminate the other 11 threats. The question is how humans handle AI, which can both help and harm us.
Nikkei staff writers Akira Oikawa, Shoji Yano, Ryosuke Hanada and Hiromi Sato contributed to this article.