Science

AI and all its dilemmas are here to stay

John Krafcik, right, CEO of Google's self-driving car project, speaks at the Nikkei Innovation Forum in East Palo Alto, U.S., on Oct. 26.

EAST PALO ALTO, U.S. -- "Artificial intelligence" is on the lips of policymakers and business leaders, many of whom consider it the next big thing. But this isn't the first time artificial intelligence has shot to prominence.

AI first became a hot topic in the 1940s, then again in the 1980s, each time fading away as a technology not yet up to scratch.

This year may be different, according to speakers at Nikkei Innovation Forum: The Future of AI, Robots and Us, held Wednesday in Silicon Valley.

And this time AI is packing some problems.

"I make this analogy that AI is the new electricity," said Andrew Ng, the conference's first speaker and the chief scientist at the Chinese internet giant Baidu. "A lot of years ago, as we started to electrify the U.S., that transformed industry after industry. ... I think that we now see a clear path for AI to transform multiple industries as well."

One of these industries, Ng said, will be transportation. He said Baidu is already working in that area.

Ng's comments are easy to grasp in light of what Google has achieved in the past few years. The internet behemoth has been a front-runner in applying AI to autonomous vehicles; its self-driving cars recently passed the 2 million-mile mark.

John Krafcik, CEO of Google's self-driving car project, described the reaction of newcomers to the technology. "One of the most amazing things to see," he said, "even for industry veterans who have been serving in the [car] industry for two or three decades, [is when] we put them in self-driving cars. ... Within two and a half minutes they are trusting [their AI pilot] and they feel OK.

"That has given us a lot of optimism with the state of the car now."

Krafcik noted that it will take some time for companies to mass-market self-driving cars. When the time comes, though, it will spell trouble for those with jobs related to driving.

The potential impact goes beyond drivers. In a "singularity" world, where AI becomes smarter than humans, many of the jobs that humans now do are likely to be taken by smart machines.

Economists and technologists have been sounding alarms about this.

"I think the jobs issue is a very serious one," Ng said, adding that there are many jobs "in the cross hairs" of AI. "I do think we are facing, and we will face, increased labor displacements."

Ng said changes need to be made in societal safety nets and education systems.

Fetching coffee

Stuart Russell, professor of computer science at the University of California, Berkeley, gave the conference's closing speech. He said that given the rate of investment, the number of people working in the field and the amount of research lavished on it, artificial intelligence will soon be on the same level as human beings.

For Russell, this is a graver concern than job displacement.

"Think about what it means for an incredibly intelligent machine to be given an objective," Russell said. "It is going to do anything in its power to achieve the objective that you gave it."

Russell then told his audience to imagine a scene in which an intelligent machine is ordered to fetch a cup of coffee. "What might prevent it from achieving what you asked it to do?" Russell asked rhetorically. "Maybe someone could switch it off. That is one way it could fail to achieve its objective. So it is going to defend itself against any attempt to switch it off, because if it is a goal as simple as fetching a coffee, it knows it cannot fetch a coffee if it is dead.

"The machine will prevent any attempts to switch it off. So self-preservation is not built into the machines, it is a logical consequence of any objective that we give them."

Russell said we need to stop thinking about AI as an all-encompassing, all-purpose intelligence and start thinking of it as a tool with a single objective: maximizing the realization of human values. "The robot should only have one objective, which is to maximize the realization of human values," he said. "It has no intrinsic preferences of its own. It does not want to preserve its own existence. It does not want to achieve anything for its own means."

The problem is that AI does not know what human values are. Therefore, AI systems "have to learn what the humans want," Russell said, by observing human behavior. This, Russell said, reveals our preferences.

"The actions that the AI researchers are taking now may have a significant impact on the future of the human race," he said. "So we absolutely have to try our best to figure out what to do when this technology comes along."

