TOKYO -- Artificial intelligence is increasingly able to make decisions on its own, presenting its human creators with new and unforeseen questions.
Many view this as an upcoming phase in which AI can help solve various problems facing humanity. But there is also the possibility that the technology will make inappropriate decisions detrimental to society. Researchers are examining these risks, particularly concerned that the inner workings of deep learning remain a mystery, and thus a formidable unknown.
SMRT, a major public transport operator in Singapore, introduced AI technology developed by Japan's NEC to its bus business last autumn to promote safe driving.
The AI analyzes data on the company's 2,000 bus drivers, including information such as native country, age, and braking and accelerating logs, and attempts to identify drivers likely to cause an accident within six months. Drivers flagged by the system then receive training.
"How on earth does the AI make decisions?" an SMRT staff member asked Masahiko Arai, a senior manager at NEC's big data strategy headquarters, during his visit to Singapore in January. "Please tell me so that I can explain this to drivers."
Rumors had recently circulated that Malaysian drivers were often flagged. The SMRT staff member feared that these drivers might complain of being treated unfairly.
The technology, called Heterogeneous Mixture Learning, allows its forecasting method to be expressed in a simple formula and can, therefore, be shown to be unbiased against any particular nationality, according to NEC.
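NEC has published few specifics here, but the general idea behind Heterogeneous Mixture Learning, partitioning the data and fitting a simple, readable formula to each segment, can be sketched in toy form. Everything below (the segment names, feature names and weights) is hypothetical and chosen purely for illustration; it is not NEC's actual model.

```python
# Toy analogue of a "piecewise simple formula" model: one explicit linear
# scoring formula per data segment. All names and weights are hypothetical.

def risk_score(features, weights, bias=0.0):
    """Linear risk score: an explicit, human-inspectable formula."""
    return bias + sum(weights[name] * value for name, value in features.items())

# Hypothetical per-segment formulas (e.g. one per route type).
segment_weights = {
    "urban":   {"hard_brakes_per_100km": 0.8, "rapid_accels_per_100km": 0.5},
    "highway": {"hard_brakes_per_100km": 1.2, "rapid_accels_per_100km": 0.3},
}

def predict(segment, features):
    return risk_score(features, segment_weights[segment])

# Each feature's contribution (0.8 * 3.0 and 0.5 * 2.0) is visible in the score.
score = predict("urban", {"hard_brakes_per_100km": 3.0,
                          "rapid_accels_per_100km": 2.0})
```

Because each prediction is an explicit weighted sum, an auditor can verify directly that no nationality term enters the formula, which is the kind of assurance NEC says its method permits.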
Perhaps this is just another example of the anxiety people feel when exposed to a new and unfamiliar technology.
In the AI age, however, such apprehension is likely to arise and may not easily be overcome. AI researchers, at least those on the front line, are beginning to argue that the technology must be designed to be in harmony with society.
Jeannette Wing, corporate vice president of Microsoft Research, pointed out some problems with deep learning at the company's Japanese subsidiary in Tokyo on Feb. 27. Deep learning is a method by which AI imitates the way the brain works to analyze -- and learn from -- massive amounts of data.
She said she did not know why AI makes one decision over another.
Wing is an influential researcher in the U.S. computer science community and leads Microsoft's 850 researchers. She also teaches at Carnegie Mellon University and the University of Washington.
AlphaGo, Google's AI program for the board game go, defeated one of the world's top professional players in March 2016. The victory was a significant achievement and boosted public recognition of the power of deep learning.
Deep learning is the process by which AI reads data and builds a model on its own, without human instruction, so that it can arrive at the best decision. However, the steps by which it reaches that decision cannot be logically traced, which is why the technique is referred to as a black box.
The mysterious workings behind deep learning present an obstacle to human-AI cooperation in the everyday world. Researchers understand this and are concerned by it, prompting discussions that transcend company boundaries.
Google, Amazon, Facebook, IBM and other entities in September 2016 created the Partnership on AI, which is formulating a development policy to be shared by its members. Apple later joined the group, and various Japanese companies are expected to sign up soon.
Eric Horvitz, the interim head of the partnership and former president of the U.S. Association for the Advancement of Artificial Intelligence, said at a symposium in Tokyo on March 13 that AI must promote the welfare of humanity. He said AI must be fair and fulfill its duty to explain the decisions it makes.
Broadening the scope
In the September 2016 edition of its journal, the Japanese Society for Artificial Intelligence expressed a similar awareness of the situation with what it calls humane artificial intelligence.
On Feb. 28 this year, the society published ethical guidelines for member AI researchers. The idea of humane AI is incorporated in the ninth and last provision of the guidelines, which says that the AI itself also needs to be able to observe the ethical guidelines if it is to be a "member" of the society.
For her part, Wing spoke of examples of research approaches for creating AI that is in harmony with society.
One is a method for examining each of the information processing layers that correspond to the brain's nervous system in the deep learning algorithms and extracting factors that enable humans to understand how the AI processed data. Another is a method for showing in mathematical terms the probability of answers provided by AI.
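Both ideas can be sketched roughly as follows. This is a hypothetical toy, not Microsoft's actual research code: data passes through a small network while each layer's activations are recorded for human inspection, and the final answer is then expressed as explicit probabilities rather than an opaque verdict.

```python
# Illustrative sketch only: layer-by-layer inspection plus probabilistic output.
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer network with random weights; shapes are arbitrary.
weights = [rng.standard_normal((4, 8)),
           rng.standard_normal((8, 8)),
           rng.standard_normal((8, 2))]

def forward_with_traces(x, weights):
    """Return the output plus every intermediate activation for inspection."""
    traces = []
    for w in weights:
        x = np.tanh(x @ w)  # one "layer" of processing
        traces.append(x)
    return x, traces

x = rng.standard_normal((1, 4))
output, traces = forward_with_traces(x, weights)
# traces[i] shows what layer i extracted from the input -- the kind of
# per-layer evidence a human analyst could examine (Wing's first approach).

# Wing's second approach in miniature: state the answer as probabilities.
probs = np.exp(output) / np.exp(output).sum()
```

The point of the sketch is the interface, not the model: exposing intermediate activations and calibrated probabilities gives humans something concrete to audit.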
David Heiner, Microsoft's vice president in charge of regulatory affairs, pointed out that AI reflects our world. When AI uses data from the world, it accepts the prejudices humans have. He said a diverse range of people, rather than only Caucasians in Silicon Valley, should take part in developing AI.
In the past, machines replaced humans in physical labor. Then computers took over a considerable amount of clerical work. Now AI is beginning to make decisions, which raises serious concerns and questions. In this new division of labor, AI itself still has major issues to resolve.