Back when I was a student I was a fairly enthusiastic Go player, and I always liked the fact that the game seemed resistant to efforts to build a strong Go-playing computer program. (At any rate, it resisted my own effort to write one.) Having followed the progress of Go-playing programs, I was naturally interested in the success of the Google DeepMind program (articles in the BBC and Guardian) against Fan Hui, Europe’s top Go player. As Neil Lawrence notes in his Guardian article, the DeepMind program doesn’t achieve the data efficiency of human players, so there is still work to do. And for those of us who like to imagine that Go really is supposed to be hard to program, there is a glimmer of hope: previous Go-playing programs performed better against a human opponent during the first few games, before the human opponent learned their weaknesses. Could Fan Hui eventually start winning, with a bit more practice?
What price nostalgia?