The world is abuzz with discussion of Google’s AlphaGo program beating Korean Go player Lee Sedol, considered by many to be one of the best players on Earth, in the first three matches they played. Lee was able to take the fourth match, however, by deliberately playing lower-probability moves that managed to confuse the AI.
In a Facebook post, Eliezer Yudkowsky coined the term “Kasparov Window” to describe a range of systems with superhuman abilities that nevertheless have flaws that human players can discover and exploit. Pondering this concept, I had a different idea:
Say you had a way of measuring how “unintuitive” a given move is for a human player. That is, if a move is minimally unintuitive, even a novice player could reasonably be expected to make it in the same situation; if a move is maximally unintuitive, not even an expert player could reasonably be expected to make it.
Using this measure, might it be possible to calibrate AI systems to gradually introduce more and more unintuitive moves into a game? If so, it seems like you might be able to train the best human players to become even better by getting them to think way outside the box.
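One way to make the idea concrete: if you had a model of human play that assigns each candidate move a probability, you could define unintuitiveness as one minus that probability, then cap how unintuitive the AI’s suggested moves are allowed to be and raise the cap over time. The sketch below is purely hypothetical; the move names and probabilities are made-up placeholders, not output from any real Go engine.

```python
# Hypothetical sketch: score a move's "unintuitiveness" as one minus the
# probability a model of human play assigns to it, then filter an AI's
# candidate moves to a gradually widening unintuitiveness band.
# All probabilities below are invented placeholder values.

def unintuitiveness(human_policy_prob):
    """0.0 ~ even a novice would play it; 1.0 ~ not even an expert would."""
    return 1.0 - human_policy_prob

def curriculum_moves(candidates, max_unintuitiveness):
    """Keep candidate (move, prob) pairs whose unintuitiveness is under the cap."""
    return [(move, p) for move, p in candidates
            if unintuitiveness(p) <= max_unintuitiveness]

# Candidate moves paired with a (hypothetical) human-policy probability.
candidates = [("D4", 0.60), ("Q16", 0.30),
              ("tenuki", 0.08), ("shoulder hit", 0.02)]

# Early in training: only surface moves a human would plausibly consider.
early = curriculum_moves(candidates, max_unintuitiveness=0.75)

# Later: widen the band so more surprising moves start to appear.
late = curriculum_moves(candidates, max_unintuitiveness=0.95)
```

Raising `max_unintuitiveness` over successive training sessions is the “gradual introduction” step: the student keeps seeing moves just beyond their current intuition rather than moves so alien they carry no lesson.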
And if you used a similar technique with something like an automated theorem prover, you might be able to get skilled human mathematicians to produce proofs that a human normally couldn’t, because such a proof simply wouldn’t occur to them.
One problem with these scenarios is that while the training itself may be feasible, it may not be possible to extend the range of what counts as “intuitive for a human” very far. Lee Sedol learns to make some unintuitive moves, but the quality of his gameplay increases only slightly.
Another problem is that the training may turn out to be feasible, but AI technology progresses so rapidly that there simply isn’t any point.