The computer takeover of board games is nearly complete: one of the last bastions of human supremacy has taken a beating, with Google's AlphaGo system besting the European Go champion five games to nil.
Due to the sheer number of moves available on a Go board -- 10^761 possible games, versus 10^120 for chess -- the game has long been seen as harder for computers to crack, and all but impossible to conquer with the brute-force methods used against many other board games.
In a paper published in Nature today, Google has taken the wraps off its AlphaGo system, which trades raw power for a more nuanced approach involving a pair of neural networks and a new search algorithm.
"The key to AlphaGo is reducing the enormous search space to something more manageable," David Silver and Demis Hassabis, from Google DeepMind, said in a blog post. "One neural network, the 'policy network', predicts the next move, and is used to narrow the search to consider only the moves most likely to lead to a win.
"The other neural network, the 'value network', is then used to reduce the depth of the search tree -- estimating the winner in each position in place of searching all the way to the end of the game."
AlphaGo uses a Monte Carlo tree search to look ahead through possible moves, with the neural networks suggesting moves and judging board positions. The system was first trained on 30 million moves from games played by human experts, Google said, until it could predict the human move 57 percent of the time; it then refined itself by playing against itself thousands of times.
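The combination described above -- a policy prior narrowing the breadth of a Monte Carlo tree search while a value estimate cuts its depth -- can be sketched on a toy game. Everything below is illustrative, not AlphaGo's actual code: the race-to-10 game, the uniform policy prior, and the rollout-based value estimate are placeholders for Go and for deep networks trained on expert games and self-play.

```python
import math
import random

TARGET = 10        # toy game: players alternately add 1-3; reaching 10 wins
MOVES = (1, 2, 3)


def legal_moves(total):
    return [m for m in MOVES if total + m <= TARGET]


def policy_prior(total):
    # Stand-in for the "policy network": uniform priors over legal moves.
    # A trained policy net would put more weight on promising moves,
    # narrowing the breadth of the search.
    moves = legal_moves(total)
    return {m: 1.0 / len(moves) for m in moves}


def value_estimate(total, player):
    # Stand-in for the "value network": a random playout to the end of
    # the game. A learned evaluation would replace the playout, cutting
    # the search depth. Returns 1.0 if `player` wins, else 0.0.
    to_move = player
    while total < TARGET:
        total += random.choice(legal_moves(total))
        to_move = 1 - to_move
    winner = 1 - to_move  # the side that just reached TARGET wins
    return 1.0 if winner == player else 0.0


class Node:
    def __init__(self, total, to_move, prior):
        self.total = total
        self.to_move = to_move
        self.prior = prior
        self.children = {}    # move -> Node
        self.visits = 0
        self.value_sum = 0.0  # accumulated win estimate for `to_move`

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0


def select_child(node, c_puct=1.4):
    # PUCT-style selection: exploit high-value children, but let the
    # policy prior steer exploration toward moves it considers likely.
    def score(child):
        # child.q() is from the opponent's viewpoint, so negate it
        explore = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return -child.q() + explore
    return max(node.children.values(), key=score)


def mcts_best_move(total, to_move=0, sims=2000):
    root = Node(total, to_move, 1.0)
    for _ in range(sims):
        node, path = root, [root]
        # 1. Selection: descend through already-expanded nodes.
        while node.children:
            node = select_child(node)
            path.append(node)
        # 2. Expansion and evaluation at the leaf.
        if node.total < TARGET:
            for move, prior in policy_prior(node.total).items():
                node.children[move] = Node(node.total + move,
                                           1 - node.to_move, prior)
            value = value_estimate(node.total, node.to_move)
        else:
            value = 0.0  # the side to move at a finished game has lost
        # 3. Backup: flip perspective at every other ply.
        for n in path:
            n.visits += 1
            n.value_sum += value if n.to_move == node.to_move else 1.0 - value
    # Play the most-visited move at the root.
    return max(root.children, key=lambda m: root.children[m].visits)
```

With the value estimate replacing exhaustive lookahead, each simulation stays cheap; the most-visited root move is returned because visit counts are a more stable choice than raw value averages, the same criterion the AlphaGo paper reports using.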
Against existing Go programs, AlphaGo won all but one of 500 games, and in October it beat 2-dan professional player Fan Hui 5-0 -- replays of which can be viewed -- with a March match locked in against 9-dan professional Lee Sedol.
Google said the most significant part of AlphaGo was not its mastery of Go, but its use of general-purpose learning techniques that could be applied to problems such as climate modelling or disease analysis.
"While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately, we want to apply these techniques to important real-world problems," Silver and Hassabis said. "Because the methods we have used are general purpose, our hope is that one day, they could be extended to help us address some of society's toughest and most pressing problems."
Artificial intelligence systems beating humans at games entered popular consciousness when IBM's Deep Blue system defeated Garry Kasparov in 1997. One of the towers that made up Deep Blue is now in the Smithsonian.