
Sunday, March 13, 2016

Sedol vs. AlphaGo - Is this really "fair" or an "AI"...?

Sedol is beaten multiple times by AlphaGo.

Is this really a fair contest?

Looking here, we see that twenty or more people worked on AlphaGo (or at least got their names on the paper).

Sedol is an early-thirties professional Go player with a wife and child.  Even if he had spent full time on Go for twenty years, that would only be 40,000 or so hours of "learning" to become the best (or one of the best) Go players in the world.

If each person on the AlphaGo paper spent two years working on the project (which has been around for at least three years), they would have at least as much time into it - twenty people putting in roughly 2,000 working hours a year for two years is 80,000 hours.  Plus they presumably have a nearly unlimited amount of compute and historical Go information to work from.

Despite all the hype, AlphaGo can't do anything else.  It is much like the mechanical baseball-throwing machine you find at a batting cage: neither can really do anything but what it was designed for.

Are they "intelligent?"

I don't think so... I got a new dog from the pet store perhaps nine months ago.  The dog has learned a lot: how to go in and out the dog door, how to run around the property and come home, where the food comes from, and many other things.

While you could argue this is merely "mimicry" of what the other dogs do, I think you'd be wrong.

The dog only consumes about 1/2 cup of food per day - no team of scientists to shepherd her along, no giant bay of cheap PC clones to process what other dogs have done over the years.  Nothing like that.

Then there is the question of "insight."  If we attached a logger to the AlphaGo processors we could record every single step in the processing it used to beat Sedol (except where the Monte Carlo sampling choices were truly random).  While there might be an enormous number of computations involved, it's certainly a finite, closed list of steps.

Will we find "insight" there?

Nope!
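To make the point concrete, here is a minimal sketch (in Python, and emphatically not AlphaGo's actual code) of a Monte Carlo-style move choice with a logger attached.  The move names, scores, and function are invented for illustration; the point is that once the random seed is fixed, every step in the trace is reproducible - a finite, closed list of operations and nothing more.

import random

def logged_playout(moves, n_samples=5, seed=42):
    """Pick a move by crude Monte Carlo sampling, logging every step."""
    rng = random.Random(seed)           # fixed seed: the "randomness" is reproducible
    log = []                            # the hypothetical logger
    for i in range(n_samples):
        choice = rng.choice(moves)      # the Monte Carlo sampling step
        score = rng.random()            # stand-in for the result of a rollout
        log.append((i, choice, score))  # every single step gets recorded
    best = max(log, key=lambda entry: entry[2])
    return best, log

best, log = logged_playout(["D4", "Q16", "C3"])
for step in log:
    print(step)                         # replaying the log recovers the entire "thought process"
print("chosen move:", best[1])

Run it twice and you get an identical log both times - the "randomness" is just a pseudo-random sequence determined by the seed, and the whole record is a list of mechanical steps you can read end to end.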

Perhaps there is insight on the part of the people who wrote the code - but the code is not writing itself.  AlphaGo is a simple machine programmed by clever individuals to accomplish one thing - beating the best human Go player.

The human coders also have decades of research to build upon - Deep Blue, Watson, and so forth.

Today Google and its mighty search engines are still fundamentally flawed: if I spend a lot of time searching for things I find dangerous, or that I hate and never want to see again, Google Ads happily displays these items to me every time there is an opportunity.

On the other hand, the dog generally knows if I am mad at her or just mad in general.

I think this is all really about sales: make it sound impressive and mysterious and people will buy it.

About thirty years ago I came up with the ultimate test for "AI."

At the time there was a program called "Prospector," often touted as "AI."

My thought was "ProctSpector" - an "AI"-based proctologist that would perform a colonoscopy.

My gold standard of "Turing test equivalent AI" was an AI researcher allowing himself or herself to be examined (given a colonoscopy) by "ProctSpector."

(Of course, that would not necessarily really be "AI" either - but it's certainly a lot more interesting than Go or chess.)

EDIT: QuantaMagazine: https://www.quantamagazine.org/20160329-why-alphago-is-really-such-a-big-deal/