I’m writing a series of posts summarizing my position on the Intelligence Explosion, and in addition to the actual empirical, real-world achievements discussed in the last post, significant advances have also been made in developing the theoretical underpinnings of superintelligent agents. Two of particular interest are Marcus Hutter’s AIXI and Jürgen Schmidhuber’s Gödel Machine.
Now, I have to confess at this point that I’m very much out of my depth here mathematically. But as I understand it, AIXI is a set of surprisingly compact equations which describe how an optimal reasoner would gather evidence, update its beliefs, and choose actions.
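For the curious, the central equation (in the form Hutter gives it) can be written down in one line. At each cycle $k$, AIXI picks the action that maximizes expected total reward out to its horizon $m$, weighting every environment-program $q$ consistent with its history by $2^{-\ell(q)}$, a Solomonoff-style prior favoring simpler programs:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\big[ r_k + \cdots + r_m \big]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here $U$ is a universal Turing machine, $a_i$, $o_i$, $r_i$ are actions, observations, and rewards, and $\ell(q)$ is the length of program $q$. The incomputability comes from that last sum: it ranges over all programs.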
These equations turn out not to be computable, but they can be approximated. MC-AIXI, for example, is a scaled-down, Monte-Carlo-based version of AIXI that managed to learn how to play a number of games on its own from scratch.
The Gödel Machine is a theoretical piece of software with two components: one devoted to performing some arbitrary task, like calculating digits of pi, and another called a proof searcher, which is capable of rewriting any part of the Gödel Machine's code, including itself, as soon as it has found a proof that the rewrite would be an improvement.
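The two-component structure is easier to see in code. Below is a toy sketch in Python, very much my own illustration rather than Schmidhuber's construction: a "solver" computes digits of pi, and a stand-in "proof searcher" swaps in a candidate rewrite only after verifying it is an improvement. (A real Gödel Machine demands a formal proof of improvement, not the empirical check used here; the function names are mine.)

```python
# Toy sketch of the Gödel Machine's solver / proof-searcher split.
# NOT the real construction: the "proof" here is an empirical error check.
import math
from fractions import Fraction

def leibniz_pi(n_terms: int) -> float:
    """Slow baseline solver: approximate pi with the Leibniz series."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

def machin_pi(n_terms: int) -> float:
    """Candidate rewrite: Machin's arctan formula, which converges far faster."""
    def arctan(inv_x: int) -> Fraction:
        x = Fraction(1, inv_x)
        return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1)
                   for k in range(n_terms))
    return float(4 * (4 * arctan(5) - arctan(239)))

class ToyGoedelMachine:
    def __init__(self):
        self.solver = leibniz_pi  # the task-performing component

    def proof_searcher(self, candidate, n_terms: int = 10) -> bool:
        """Accept the rewrite only once we've verified it does strictly
        better on the same budget; if so, rewrite our own solver."""
        old_err = abs(self.solver(n_terms) - math.pi)
        new_err = abs(candidate(n_terms) - math.pi)
        if new_err < old_err:
            self.solver = candidate  # the self-rewrite
            return True
        return False

machine = ToyGoedelMachine()
machine.proof_searcher(machin_pi)  # verified improvement, so solver is replaced
```

The point of the real design is that the proof requirement makes the self-rewrite safe by construction: the machine only changes itself when it can prove the change helps.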
The first superintelligence might bear little mathematical resemblance to AIXI or the Gödel Machine, but these theoretical successes, combined with all the progress that’s been made in AI in recent years, lend weight to the notion that smarter-than-human machines will be a part of the future.