TOKYO -- Google has demonstrated that artificial intelligence can defeat one of the strongest human players of Go, a game whose myriad positions presented a steep learning curve for computers.
The AlphaGo program won the first round of a best-of-five series in Seoul against a "surprised" Lee Sedol, who told reporters after Wednesday's match that he hadn't expected to lose, nor that the machine "would play the game in such a perfect manner."
"We are very excited about this historic moment," said Demis Hassabis, co-founder of DeepMind, the team that built AlphaGo.
The tournament continues until Tuesday, with the winner taking home a prize of $1 million.
Google revealed in January that AlphaGo had beaten the European Go champion, and said a contest was being planned to test its skills against South Korea's Lee, a top-ranked professional who has won international title after title since the 2000s.
In Go, players take turns placing black or white stones on a grid-like board, aiming to surround more territory than the opponent; stones that are completely encircled are captured.
Lee conceded after three and a half hours of battling the computer. One of those watching the match online was Yuta Iyama, a leading Japanese player, who said he hadn't imagined Lee would lose and that "the shock is unbelievable."
AlphaGo didn't do anything out of the ordinary but rather attacked with "the kind of formidable moves that top pros would think of," according to Iyama. For his part, Lee made no obvious blunders, and "there was no point where he would have won had he played differently," the Japanese player said.
Hitoshi Matsubara, head of the Japanese Society for Artificial Intelligence, called the latest victory "a great milestone" for AI.
"As with chess and shogi, artificial intelligence has caught up with humans in Go, the most difficult game of thought," Matsubara said.
AlphaGo's success stems from deep learning, in which computers loosely emulate the human brain's way of processing information by finding patterns in vast amounts of data. AlphaGo learned where and when to place its stones by analyzing professional Go matches.
Deep learning gained wider prominence in 2012 when a Canadian team adapted it to image recognition and convincingly won an international competition on the first try.
"What happened in image recognition has also happened in the world of Go," said Yutaka Matsuo, a project associate professor working on AI at the University of Tokyo.
"Going forward, the same thing may occur in robotics and other fields as well," Matsuo added.
AlphaGo's potential isn't limited to board games, according to DeepMind's Hassabis.
"Because the methods we've used are general-purpose, our hope is that one day they could be extended to help us address some of society's toughest and most pressing problems, from climate modeling to complex disease analysis," he has argued.