
Flaws in AI seen despite AlphaGo victory

Google's AlphaGo beat South Korea's Lee Sedol, one of the best human players of Go, in a best-of-five series held in March in Seoul. © Kyodo

TOKYO -- The victory of Google's Go-playing computer program in the recent best-of-five match against one of the best living human players of the ancient board game demonstrated the amazing power of cutting-edge artificial intelligence. But AlphaGo's performance in the match against South Korea's Lee Sedol, held March 9-15 in Seoul, also shed light on some weaknesses, as well as strengths, of an AI approach known as deep learning.

     AlphaGo, an AI system designed by a team of researchers at Google DeepMind, an AI lab in London, defeated the Go grandmaster by winning four of the five games.

     Google is on the leading edge of research in deep learning, a branch of AI focused on mimicking the activity in layers of neurons in the human brain.

Millions of moves

Go is a game of strategy and intuition that poses a dazzling challenge for AI experts. The DeepMind team taught AlphaGo to play by using a deep neural network -- the kind of system that develops the ability to recognize specific patterns, such as objects in images, as it is fed enough data about them.

     One important advantage of deep learning shown in the Go match is its remarkable versatility. The flexibility exhibited by this AI technology could be used for a broad array of purposes.

     AlphaGo's software was not explicitly programmed with Go strategy. The DeepMind team developed and improved AlphaGo by feeding its artificial neural network millions of moves from expert Go players -- the deep learning part -- and then, in an approach called reinforcement learning, setting up countless matches in which the machine played against itself.

     As of October 2015, AlphaGo had played itself as many as 30 million times, and from each victory and loss the system learned which moves are good and bad in specific situations.

     That means AlphaGo can discover new and innovative strategies by itself and steadily improve its performance by playing a staggering number of games, and in the process gain more practical experience than a human player could hope for in a lifetime. 
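     The self-play loop described above can be sketched at miniature scale. The toy below applies tabular Q-learning to a five-stone game of Nim (take one or two stones; whoever takes the last stone wins). It illustrates the principle of learning from self-play rewards -- it is not DeepMind's far larger neural-network-based system, and all parameters here are invented for the example.

```python
import random

ACTIONS = (1, 2)  # take 1 or 2 stones; whoever takes the last stone wins

def train(episodes=5000, alpha=0.5, epsilon=0.2, seed=0):
    """Learn action values Q[s][a] for the player to move, purely by self-play."""
    rng = random.Random(seed)
    Q = {s: {a: 0.0 for a in ACTIONS if a <= s} for s in range(1, 6)}
    for _ in range(episodes):
        s = 5                                  # start every game with 5 stones
        while s > 0:
            if rng.random() < epsilon:         # explore: occasional random move
                a = rng.choice([a for a in ACTIONS if a <= s])
            else:                              # exploit: current best-known move
                a = max(Q[s], key=Q[s].get)
            s2 = s - a
            if s2 == 0:
                target = 1.0                   # we took the last stone: win
            else:
                target = -max(Q[s2].values())  # zero-sum: opponent moves next
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = train()
print(max(Q[5], key=Q[5].get))  # learned best first move from 5 stones
```

After a few thousand self-played games the table converges on optimal play (leave the opponent a multiple of three stones) without ever being told that rule -- the same way AlphaGo distilled strategy from its 30 million games, only at vastly smaller scale.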

     In other words, at least where the game of Go is concerned, AI has reached the so-called technological singularity -- a point in AI evolution where machines become able to create machines smarter and more powerful than themselves at an accelerating pace.

     The focus now will shift to what kind of innovations a combination of human and artificial intelligence could produce in this game.

     Through a mix of deep learning -- simulating how the human brain works -- and reinforcement learning -- learning by trial and error driven by reward feedback to improve decisions -- AI can acquire amazing abilities in the areas of recognition and detection. These domains have long been the exclusive reserves of human intelligence. AI is now beginning to outperform humans in a wide range of tasks in these areas, including image, voice and facial recognition, analyzing human behaviors captured by security cameras, detecting signs of cyberattacks and analyzing medical images for diagnostic and other purposes.

Not so fast

But the man versus machine Go match in Seoul also highlighted two potential flaws of deep learning that could hamper its practical applications.

     One of them concerns misguided decisions made by AI, and the fact that it is extremely difficult for humans to pinpoint the factors that led to such mistakes.

     During game four of the series, AlphaGo lost to Lee after making a string of clearly bad moves. But even the members of the DeepMind team could not identify the cause of these errors.

     With an ordinary computer program, experts can find and resolve bugs by checking the code. But deep learning involves no logical code that humans can read. The only elements of the software are numerical parameters indicating the strength of the connections between artificial neurons, making the resulting algorithm a black box to us.
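     A tiny hand-built example makes the point concrete. The network below computes the logical function XOR, yet nothing in the code mentions XOR: the behavior lives entirely in the numbers. (The weights here were chosen by hand for illustration; a trained network like AlphaGo's holds millions of such learned values, none individually interpretable.)

```python
def step(x):                       # threshold activation function
    return 1 if x > 0 else 0

# The "program" is just these arrays of numbers.
W1 = [[1, 1], [1, 1]]              # hidden-layer weights
b1 = [-0.5, -1.5]                  # hidden-layer biases
W2 = [1, -1]                       # output-layer weights
b2 = -0.5                          # output bias

def forward(x1, x2):
    """One pass through the network: weighted sums and thresholds, nothing else."""
    h = [step(W1[i][0] * x1 + W1[i][1] * x2 + b1[i]) for i in range(2)]
    return step(W2[0] * h[0] + W2[1] * h[1] + b2)
```

Reading the weights tells you nothing about why the network answers as it does -- which is exactly why pinpointing the cause of a bad move in a network millions of times this size proved impossible even for its creators.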

     The other flaw is that a highly trained AI can take actions that produce good results but that humans cannot understand.

     In game two, AlphaGo made an unusual move that flummoxed a commentator, himself a professional Go player. Later, the commentator repeatedly said that he could not understand why the baffling move had led to the machine's victory. 

     This could be a big problem in an environment where humans and AI work together, or where decisions made by AI are a matter of life and death for people. 

     If, for instance, the AI in a self-driving car operates the vehicle in a way that baffles other drivers, it could cause serious accidents.

Enhancing advantages

The Go match also showed that companies that have tremendous data and computing power, like Google, have overwhelming advantages in AI research and development.

     Traditional deep learning technology does not lend itself well to distributed computing, in which tasks are carried out simultaneously by multiple servers. But Google has enhanced the distributed computing capabilities of its TensorFlow machine learning library, and the distributed version of AlphaGo was the one used for the match.
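     The payoff of parallelism is easy to see in miniature. The sketch below scores candidate moves by random playouts and evaluates them concurrently with a thread pool standing in for a fleet of servers. (This is an illustrative toy, not AlphaGo's or TensorFlow's actual distributed machinery; the win-rate formula is invented for the demo.)

```python
import random
from concurrent.futures import ThreadPoolExecutor

def rollout_value(move, n_rollouts=10_000):
    """Estimate a move's win rate with random playouts (stand-in evaluator)."""
    rng = random.Random(move)          # fixed seed per move: deterministic demo
    # Invented stand-in: pretend higher-numbered moves win slightly more often.
    wins = sum(rng.random() < 0.4 + 0.1 * move for _ in range(n_rollouts))
    return wins / n_rollouts

def best_move(moves, workers=4):
    # Evaluate all candidate moves concurrently, then pick the highest score.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(rollout_value, moves))
    return moves[scores.index(max(scores))]
```

Each candidate's playouts are independent, so the work splits cleanly across workers -- the same property that let DeepMind spread AlphaGo's search over hundreds of processors.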

     When AlphaGo faced Fan Hui, Europe's reigning Go champion, in a machine-versus-man contest in October 2015, the DeepMind researchers used a large network of computers spanning 176 graphics processing units and 1,202 central processing units.

     In AlphaGo's reinforcement learning process, the researchers drew on the vast computing resources of the company's infrastructure-as-a-service offering, Google Cloud Platform. The Go match also served as a major public relations event for Google's AI research.

     In 1997, IBM's Deep Blue beat the reigning world chess champion, and in 2011 the company's Watson supercomputer defeated two former champions on the "Jeopardy!" quiz show.

     In AlphaGo's showdown with the Go grandmaster, Google took a page out of IBM's book and pulled off a spectacular PR feat.

     Obviously, the latest AI technology milestone has further enhanced Google's power to attract deep learning and related AI talent by capitalizing on its stock of astronomical amounts of data and IT resources.

