Robohub.org
 

Google’s Go-playing AI system beat Korean grandmaster 4 out of 5 games

by Frank Tobe
17 March 2016




Machines have been beating humans for years in games and contests: chess, Scrabble, checkers, Jeopardy! and, most recently, the Chinese board game Go. Google's AI system won 4 out of 5 games in a $1.3 million match against South Korea's Lee Sedol, rated among the world's best Go players. Sedol commented at a post-game press conference, as reported by ABC News:

I didn’t expect to lose. I didn’t think AlphaGo would play the game in such a perfect manner. Personally, I am regretful about the result, but would like to express my gratitude to everyone who supported and encouraged me throughout the match.

The 2,500-year-old board game is purported to be more complex than chess. Go appears simple: the game begins with an empty board, and two players (one using black stones, the other white) alternate placing stones on the intersections of the grid, each trying to surround more territory than the opponent without having their own stones captured.

According to Alan Levinovitz, writing in Wired a few years ago, “there are 400 possible board positions after the first round of moves in Chess and 129,960 in Go. There are 35 possible moves on any turn in a Chess game, and 250 for Go.” DeepMind’s David Silver and Demis Hassabis noted that the number of possible board configurations in Go is larger than the number of atoms in the universe.
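To put those branching factors in rough perspective, a quick back-of-envelope calculation compares the sizes of the two game trees. The typical game lengths used here (~80 plies for chess, ~150 for Go) are illustrative assumptions, not figures from the article:

```python
import math

# Branching factors quoted above: ~35 legal moves per turn in chess,
# ~250 in Go. Game lengths of ~80 and ~150 plies are assumptions
# added for illustration only.
chess_tree = 35 ** 80
go_tree = 250 ** 150

print(f"chess game tree ~ 10^{int(math.log10(chess_tree))}")  # ~10^123
print(f"go game tree    ~ 10^{int(math.log10(go_tree))}")     # ~10^359
```

With the number of atoms in the observable universe commonly estimated around 10^80, even chess's game tree dwarfs it, and Go's dwarfs chess's by hundreds of orders of magnitude, which is why brute-force search alone cannot crack Go.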

Drake Baer, in a Tech Insider piece, describes DeepMind’s history and role in the development of AlphaGo:

DeepMind didn’t “program” AlphaGo with evaluations of  “good” and “bad” moves. Instead, AlphaGo’s algorithms studied a database of online Go matches, giving it the equivalent experience of doing nothing but playing Go for 80 years straight.

“This deep neural net is able to train and train and run forever on these thousands or millions of moves, to extract these patterns that leads to selection of good actions,” says Carnegie Mellon computer scientist Manuela Veloso, who studies agency in artificial intelligence systems.

Google acquired DeepMind in 2014. Founded in 2010 by chess prodigy-turned-artificial intelligence researcher Demis Hassabis, the company’s mission is to “solve intelligence,” and it claims that “the algorithms we build are capable of learning for themselves directly from raw experience or data.”

In February 2015, DeepMind revealed in Nature that the program learned to play vintage arcade games like Pong or Space Invaders as well as human players. Now it’s about to master a game that once seemed unmasterable for artificial intelligence [Go].

A Google DeepMind paper describes the technical details; in an accompanying Nature video, DeepMind's Demis Hassabis walks through the process required by the AI.

What does it all mean?

Go's appeal lies in depth through simplicity, which is why it was so difficult for computers to master until now. Looking at the board yields limited information, and choosing a good move demands a great deal of intuition. Until AlphaGo, no one had built an effective evaluation function. AlphaGo's use of deep learning and neural networks to teach itself lets it process millions of Go positions and moves from human-played games, then choose moves predicated on that experience.

Sam Byford writes in The Verge:

The twist is that DeepMind continually reinforces and improves the system’s ability by making it play millions of games against tweaked versions of itself. This trains a “policy” network to help AlphaGo predict the next moves, which in turn trains a “value” network to ascertain and evaluate those positions. AlphaGo looks ahead at possible moves and permutations, going through various eventualities before selecting the one it deems most likely to succeed. The combined neural nets save AlphaGo from doing excess work: the policy network helps reduce the breadth of moves to search, while the value network saves it from having to internally play out the entirety of each match to come to a conclusion.
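The division of labor Byford describes can be sketched in miniature. This is a toy illustration, not DeepMind's implementation: the real AlphaGo uses deep neural networks and Monte Carlo tree search, whereas here both "networks" are hypothetical stand-in functions and a position is just the tuple of moves played so far:

```python
import random

def apply_move(position, move):
    """Hypothetical successor function: append the move to the position."""
    return position + (move,)

def policy_network(position, legal_moves):
    """Stand-in policy net: a prior probability for each candidate move.
    A trained net concentrates mass on promising moves, cutting the
    *breadth* of the search; this placeholder is simply uniform."""
    return {m: 1.0 / len(legal_moves) for m in legal_moves}

def value_network(position):
    """Stand-in value net: an estimated win probability for a position,
    sparing the search from playing every game out to the end (*depth*).
    A trained net would return a learned estimate, not a random one."""
    return random.random()

def select_move(position, legal_moves, top_k=3):
    priors = policy_network(position, legal_moves)
    # Search only the top-k moves by prior (breadth reduction) ...
    candidates = sorted(legal_moves, key=priors.get, reverse=True)[:top_k]
    # ... and score each successor with the value net instead of a full
    # playout (depth reduction), keeping the highest-valued move.
    return max(candidates, key=lambda m: value_network(apply_move(position, m)))
```

The point of the sketch is the interplay: the policy function prunes which moves are worth examining at all, and the value function replaces exhaustive lookahead with a single positional estimate.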

Thus, AlphaGo gets better by playing itself. DeepMind believes that the principles it uses in AlphaGo have broader application and implications. Hassabis makes a distinction between “narrow” AIs like Deep Blue and artificial “general” intelligence (AGI), the latter being more flexible and adaptive. He thinks AlphaGo’s machine learning techniques will be useful in robotics, smartphone assistant systems, and healthcare.

Intuition has long been the sole domain of humans. But AlphaGo, Google, and DeepMind have now shown they can build systems with an effective evaluation function that make choices based on a growing universe of experiential data. Reading medical scans and upgrading Siri and other digital assistants are just the tip of the iceberg. This technology has implications for self-driving vehicles of all types (air, land and sea), medical diagnostics, and predictive stock picking, to name just a few.



Frank Tobe is the owner and publisher of The Robot Report, and is also a panel member for Robohub's Robotics by Invitation series.





©2021 - ROBOTS Association