3 Questions Maybe AlphaGo Can Answer (If It Gets Strong Enough)



A few hours from now, AlphaGo will face Lee Sedol in the first game of a five-game match. Man vs. Machine. Humanity’s Glorious Mind vs. Artificial Intelligence. Humanity’s last stand against our machine overlords (exaggeration intended). Last time, I talked about how this event will be a victory for the human mind no matter the result. While I believe Lee Sedol will win the $1 million and probably defeat AlphaGo 5-0, I am still excited to see how AlphaGo performs against one of the top Go players today. However, if AlphaGo does defeat Sedol, things will get much more interesting in the Go world.

If AlphaGo learns the game better than any human being can, then maybe, just maybe (programmers help me on this), AlphaGo can answer the following questions:

What is the correct komi?

A komi is the additional score White gets as compensation for going second, since Black’s first move on an empty board gives Black an advantage. In theory, the correct komi is the margin by which Black would win under a given ruleset assuming perfect play from both sides. Since perfect play is unknown, the komi values in use today are rough estimates drawn from statistical analysis of professional games.

If AlphaGo becomes the closest thing we have to perfect play, maybe it can play against itself a million or even a billion times and determine from the results the proper komi White should get. However, someone must reprogram AlphaGo to aim for the “perfect play” instead of a win.
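As a rough sketch of the idea (programmer friends, correct me), here is a toy self-play sweep in Python. The margin distribution is entirely invented — a normal distribution with mean 7 points stands in for Black’s raw score advantage — since real self-play results would have to come from AlphaGo itself:

```python
import random

def play_selfplay_game(komi, rng):
    # Toy stand-in for one self-play game: Black's raw margin (before
    # komi) is drawn from an invented normal distribution with mean 7.
    # Real AlphaGo self-play would produce this margin from actual games.
    margin = rng.gauss(7.0, 12.0)
    return margin > komi  # Black wins if the margin survives the komi

def black_winrate(komi, games, seed=0):
    # Estimate Black's win rate at a given komi over many games.
    rng = random.Random(seed)
    wins = sum(play_selfplay_game(komi, rng) for _ in range(games))
    return wins / games

# Sweep plausible komi values; the "correct" komi would be wherever
# Black's win rate crosses 50%.
for komi in (4.5, 5.5, 6.5, 7.5, 8.5, 9.5, 10.5):
    print(f"komi {komi:4}: Black wins {black_winrate(komi, 10_000):.1%}")
```

With a serious engine in place of the toy model, the same sweep would locate the tipping point empirically, no notion of “perfect play” required.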

What is your objective rating?

Every country and Go organization probably has its own rating system, so a difference in rating does not necessarily reflect a true difference in strength. The strength spectrum from kyu to dan, and even from 1 dan professional to 9 dan professional, is vague. Just look at how a few players dominate the Go scene in their respective countries despite holding the same rating as their peers.

AlphaGo’s strength, frozen at a specific point in time, could serve as the basis for player ratings all over the world. Imagine receiving an objective rating after playing a series of games against AlphaGo. This could widen the rating range, perhaps down to 50 kyu, and reveal whether any 13 dan professional players actually exist.
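One way to turn results against a fixed-strength reference into a number is the standard Elo curve. This is only a sketch: the reference rating of 3500 for a frozen AlphaGo is a made-up figure, and the function names are mine:

```python
import math

def expected_score(rating, reference_rating):
    # Standard Elo expectation: probability of scoring against the reference.
    return 1.0 / (1.0 + 10 ** ((reference_rating - rating) / 400))

def rating_from_results(wins, games, reference_rating):
    # Invert the Elo curve: given a score rate against a fixed-strength
    # reference (AlphaGo frozen at some version), solve for the rating.
    score = wins / games
    # Clamp away from 0% and 100% so the formula stays finite.
    score = min(max(score, 1 / (games + 1)), games / (games + 1))
    return reference_rating - 400 * math.log10(1 / score - 1)

# A player who wins 25 of 100 games against a reference rated 3500:
print(round(rating_from_results(25, 100, 3500)))  # ≈ 3309
```

Because every player would be measured against the same frozen opponent, ratings computed this way would be directly comparable across countries and organizations.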

Who is the strongest Go player that ever lived?

Another potential use of AlphaGo (or at least I think so) is imitating the playing styles of famous players. Say, after approaching perfect play, maybe AlphaGo could train on the game records of a specific player and adopt their style (Is that hard for artificial intelligence to “deep learn,” programmer friends?). If this is possible, maybe we could see the style of Go Seigen versus the style of Honinbo Shusaku. At the very least, we could watch simulations of the greatest Go players in history, each in their prime, battle it out for the title of greatest Go player ever. Interesting, if not exciting.

Better yet, we can play against Go Seigen or Honinbo Shusaku at home. If AlphaGo becomes the strongest player in the world, every aspiring Go player can have their very own portable Go Master.



2 thoughts on “3 Questions Maybe AlphaGo Can Answer (If It Gets Strong Enough)”

  1. Re: “However, someone must reprogram AlphaGo to aim for the “perfect play” instead of a win.”

    That’s not entirely right, and I’ll try to show why.

    First, (“hard”) perfect play is impossible. To aim for it, as you write, you would need to evaluate the position accurately enough to decide the maximum advantage that can be gained. This is similarly impossible, since winning by anything more than 0.5 points is riskier unless you read the game out to the end.

    However, if you allow backtracking (in case you actually lose after going for more than 0.5), then the above is possible, and you can go for it. Sadly, this is now exponentially difficult, making it equivalent to perfect play. So this is not useful either.

    But we’re in luck! We don’t need perfect play at all! It suffices to have it play itself with the various thinkable komi options (suppose anything between, say, 4.5 and 10.5 points). This will lead to a histogram that shows the tipping point where the advantage shifts from black to white.

    Of course, you need many games, and due to the approach chosen, the result is not guaranteed to be correct. However, with enough samples, the error probability becomes smaller. (It can be calculated how small.)
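    The parenthetical above can be made concrete with the normal approximation to the binomial distribution: the 95% margin of error on an observed win rate shrinks with the square root of the number of games. A minimal Python sketch:

    ```python
    import math

    def winrate_error(games, winrate=0.5, z=1.96):
        # Normal approximation to the binomial: 95% margin of error
        # on an observed win rate after `games` self-play games.
        # winrate=0.5 is the worst case (largest error).
        return z * math.sqrt(winrate * (1 - winrate) / games)

    for n in (1_000, 10_000, 1_000_000):
        print(f"{n:>9} games: ±{winrate_error(n):.2%}")
    ```

    So a million self-play games pins the win rate at each komi to within about a tenth of a percentage point, easily enough to spot the tipping point.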

    So finally, I agree that what you propose is possible, and I’d even guess it will be possible in the near future. Definitely an interesting question!

