TOKYO -- Myriad works, both fact and fiction, explore the question of whether -- and when -- artificial intelligence will take over from humans.
Two lines of thinking emerge in much of the more serious literature. The first embraces the "singularity" -- a concept proposed by U.S. inventor Ray Kurzweil, who predicts that AI will come to exceed human capabilities around the year 2045. The other holds that the phenomenon is impossible.
The difference may simply be down to a generation gap.
There have been three booms in the history of AI -- in the 1960s, the 1980s, and today.
Most critics of the singularity theory come from the old guard -- the generation of researchers involved in the development of a fifth-generation computer under a failed Japanese government project during the 1980s and their peers.
In contrast, there is a younger generation with no experience of the unsuccessful project -- engineers in their 20s to 40s working to develop general-purpose AI, and economists of a similar age.
Few major companies in Japan have dived headlong into AI. Hitachi, in May, became one of the pacesetters when it launched a service built on its AI system "H," which analyzes big data owned by clients and Hitachi for use in factories, transportation systems and the promotion of smart cities.
In a recent interview, company President Toshiaki Higashihara was very clear in his views on the idea of singularity.
Hitachi holds the belief that singularity "cannot happen."
Higashihara agrees that AI is on course to overtake humans in terms of knowledge processing as robots develop exponentially. However, no matter how much progress AI makes vis-a-vis the human brain, he said, "it will not lead to the emergence of AI with a will of its own."
In any case, AI's progress will continue to depend on humans developing its algorithms.
There is an argument that AI will lead to greater inequality, but Higashihara rejects this idea, too. He assumes that the popularization of AI will bring about an "ambient society," in which robots become part of every aspect of our lives. For example, there would no longer be a need for ID cards as a security check to enter office buildings.
Put simply, wider use of AI could remove the gap between technology haves and have-nots, according to Higashihara.
He notes, however, that the issue of ethics is one area requiring great caution. The spread of AI is set to blur the boundaries between cyberspace and reality. The exchange of money in cyberspace, for example, will become commonplace, meaning people will use virtual transactions and real-life financial institutions differently.
Just as millennials today cannot remember a world without the internet, children will grow up in an environment where cyberspace and the real world mix seamlessly together -- and it will seem completely natural.
They will be the generations coming of age in 2045.
The question arises of how well equipped they will be to differentiate between the virtual and real worlds and, moreover, whether they will be able to use AI in the right way.
Hitachi now places vast importance on educating its engineers on the ethical questions they are set to face.
The company is increasing the number of opportunities for retired engineers to take the lead in telling their younger counterparts what being a good researcher entails. With a pool of 986 retired certified engineers it can call upon, Hitachi is somewhat spoiled for choice.
Appropriate use of AI can, of course, also contribute much to further development and progress for people.
It could ease the transfer of specialized knowledge: AI systems could memorize veteran workers' techniques and help educate junior staff. Analysis of past sales data can help companies understand where orders are won or lost. AI could even one day improve corporate governance.
Its success depends on how well employees and employers learn to work with it, as well as on the education offered to people from all walks of life.
Rather than the point at which AI takes over, singularity may signify a turning point that depends on the level of human maturity.