Thursday, January 28, 2016

Go-playing AI

Back when I was a student I was a fairly enthusiastic Go player, and I always liked the fact that the game seemed resistant to efforts to build a strong Go-playing computer program. (At any rate, it resisted my own effort to write one.) Having followed the progress of Go-playing programs, I was of course interested in the success of the Google DeepMind program AlphaGo (articles in the BBC and Guardian) against Fan Hui, Europe’s top Go player. As noted in Neil Lawrence’s Guardian article, the DeepMind program doesn’t achieve the data efficiency of human players, so there is still work to do. And for those of us who like to imagine that Go really is supposed to be hard to program, there is a glimmer of hope: previous Go-playing programs performed better against a human opponent during the first few games, before the opponent learned their weaknesses. Could Fan Hui eventually start winning, with a bit more practice?

2 comments:

Unknown said...

Indeed. It might well be that humans will figure out how to play against AlphaGo once they have been given the opportunity to see it play a few more times.

In the video DeepMind released, Fan Hui seems to be trying to figure out whether playing aggressively is the right approach. Watching this, I did wonder whether trying to second-guess the algorithm's biases might have worked against him, in a way it wouldn't have if he hadn't known he was playing against an AI.

Paul Goldberg said...

Thanks; it's interesting to get a comment on the video (much as I am interested, I didn't have time to watch it!)