Fast Losing Ground

I’m writing a series of posts summarizing my position on the Intelligence Explosion, and here I want to give a few examples of recent AI developments which should make even hardened skeptics consider the possibility that our creations might soon catch up with us.

But first, I want to point out that while the history of early AI research is marred by over-confident prognostications that failed to pan out, contributing to several “AI winters”, it is also true that AI skeptics have a long history of declaring that ‘machines will never do X’, only to have machines do X not long thereafter.

This is humorously captured in the following cartoon, attributed to Ray Kurzweil:

[Image: Kurzweil AI cartoon]

Most of us are rapidly becoming acquainted with living in a world suffused with increasingly smart software. But many would be surprised to learn that there are computer programs in existence right now which can write compelling classical music. Emily Howell is the product of several decades’ work by David Cope, who conceived the idea of creating software to help with his music after experiencing a particularly bad case of composer’s block. The results speak for themselves:

[Embedded audio: a composition by Emily Howell]

Granted, this is not exactly breathtaking; it might be what we’d expect from an advanced piano student who was still leaning primarily on technique because she hadn’t yet found her creative voice. But it’s a long way from the soundtracks of the 8-bit video games I grew up playing, and it was written by a computer program.

But what about natural language? Computer-generated music is impressive, but can computers rise to the challenge of processing and responding to speech in real time? IBM’s Watson, a truly monumental achievement, managed not only to do this, but to utterly stomp two of the best Jeopardy! players of all time. Last I checked, the technology was being turned to helping doctors make better diagnoses.

To my mind the most impressive example is the lesser-known Adam (King et al., 2004), an almost fully autonomous science laboratory which, when fed data on yeast genetics, managed to form a hypothesis, design and carry out an experiment to test it, and in the process discover something no scientist had known before. Though this may seem light-years away from an AI doing, say, astrophysics research, the difference is one of degree, not kind.
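To make the shape of that achievement concrete, here is a toy simulation of the hypothesize-test-update loop that Adam automates. To be clear, this is emphatically not Adam’s actual code or architecture: the ‘lab’ is a hidden lookup table, and the gene and enzyme names are invented purely for illustration.

```python
import random

# Toy caricature of a closed-loop "robot scientist": propose a hypothesis,
# run an experiment to test it, and update beliefs with the result. The
# lab is simulated by a hidden lookup table; the gene and enzyme names
# are made up for illustration.

GENES = ["g1", "g2", "g3"]
ENZYMES = ["e1", "e2", "e3"]
TRUTH = dict(zip(GENES, random.sample(ENZYMES, len(ENZYMES))))  # hidden ground truth

def run_experiment(gene, enzyme):
    """Simulated assay: is `enzyme` the one encoded by `gene`?"""
    return TRUTH[gene] == enzyme

# Start with every candidate gene -> enzyme pairing as a live hypothesis.
hypotheses = {(g, e) for g in GENES for e in ENZYMES}
discoveries = {}

while len(discoveries) < len(GENES):
    gene, enzyme = next(iter(hypotheses))   # propose a live hypothesis
    if run_experiment(gene, enzyme):        # test it in the (simulated) lab
        discoveries[gene] = enzyme          # confirmed: record the finding
        hypotheses = {h for h in hypotheses if h[0] != gene}
    else:
        hypotheses.discard((gene, enzyme))  # falsified: prune it

print("Discovered mapping:", discoveries)
```

The real system replaces the lookup table with robotic lab equipment and the brute enumeration with much smarter experiment selection, but the hypothesize-test-update loop itself is the point.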

Admittedly, we’re still not talking about general intelligences like human beings here. But the weight of the evidence points to a future where increasingly large chunks of civilization are being managed by intelligent machines. This may come to include the production of art, science, and even the design of new intelligent systems.


Your Intelligence Isn’t Magical

I’m writing a series of posts summarizing my views on the Intelligence Explosion, and the first claim I want to defend is that we should take seriously the possibility of human-level artificial intelligence because fundamentally human intelligence is not magic.

Human intelligence is the product of the brain, an object of staggering complexity which, nevertheless, is built up from thoroughly non-magical components. When neurons are networked together into more and more sophisticated circuitry, there is no point at which magic enters the process and gives rise to intelligence.
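To see how far ‘non-magical’ goes, consider the basic unit that artificial neural networks borrow, very loosely, from the brain. Here is a minimal sketch of one such neuron; the weights are arbitrary numbers chosen just to make the example run, and nothing here is a claim about how real neurons encode information.

```python
import math

# A single artificial neuron: a weighted sum of inputs pushed through a
# squashing function. Real neurons are enormously more complicated, but
# every ingredient here is ordinary arithmetic; no magic enters anywhere.

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid "firing rate"

# Networking neurons into circuitry is just function composition: the
# outputs of one layer become the inputs of the next.
inputs = [0.5, 0.9]
hidden = [neuron(inputs, [1.2, -0.7], 0.1),
          neuron(inputs, [-0.4, 0.8], -0.2)]
output = neuron(hidden, [0.6, 1.1], 0.0)
print(output)
```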

Furthermore, human intelligence is the product of the blind, brute-force search algorithm that is evolution. Organisms are born with random mutations into environments which act as fitness functions. Beneficial mutations preserve themselves by leading to greater reproductive success, while deleterious ones eliminate themselves by lowering it. Evolution slowly explores the space of possible organisms by acting on and modifying existing DNA.
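That description is literally an algorithm, and a few lines of code can run it. Below is a minimal sketch of evolution-as-search: a population of random bit strings climbs toward a target purely through mutation and selection. The fixed target stands in for an environment’s fitness function; this is not meant as a model of real biology.

```python
import random

# Evolution as blind search, in miniature: random variation plus
# selection, with no designer anywhere. Fitness is simply the number
# of bits matching a fixed target string.

TARGET = [1] * 20
POP_SIZE = 50
MUTATION_RATE = 0.02  # chance that any given bit flips when copied

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
generation = 0

while max(fitness(g) for g in population) < len(TARGET):
    population.sort(key=fitness, reverse=True)  # selection: fittest first
    parents = population[: POP_SIZE // 2]       # the fitter half reproduces
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]
    generation += 1

print(f"Perfect genome reached after {generation} generations")
```

Real evolution has no fixed target and vastly messier fitness functions, of course; the point is only that blind variation plus selection is enough to climb.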

Even without engineering oversight, evolution managed to produce Homo sapiens, primates with the ability to reason across a wide variety of domains and to use their intelligence in ways radically different from the uses for which it evolved.

This is not to imply that our intelligence is well understood; my impression is that great strides have been made in modeling brain activity, but we are surely still a long way from having probed these mysteries fully.

Nor does it imply that building a human-level intelligence will be easy. For decades now, AI researchers and computer scientists have been trying; they have made progress on various narrowly defined tasks like chess, but they are still nowhere near creating a general reasoner on par with humans.

Additionally, it doesn’t imply that a human-level AI must actually resemble human intelligence in any way. AI research is a vast field, and within it there are approaches which draw on neuroscience and mathematical psychology, as well as de novo approaches which aim to build an AI ‘from the ground up’, as it were.

But don’t lose sight of this key fact: the intelligence which produced these words is the non-magical product of a brain made of non-magical components, which was itself produced by a non-magical process. It is hard for me to see where, or why, a skeptic could draw a special line in the sand at the level of a human and say ‘machines won’t ever get this far’.