Profundis: The Nexus Trilogy

Feeling a little like a set of thrillers coauthored by Tom Clancy and Greg Egan, Ramez Naam’s “Nexus” trilogy follows a scattered group of hackers, ex-soldiers, government officials, and artificial intelligences as they struggle to cope with the implications of a world changed by a powerful new technology.

Nexus is a drug composed of nano-scale robots that bind with the nervous system, allowing individuals to interact with their own brains through something like a command-line interface and allowing groups of humans to share thoughts and emotions directly.

Naam does a good job of painting a realistic portrait of the secondary and tertiary ripples of having such a drug in play: Near the beginning of the trilogy we observe genetically enhanced supersoldiers becoming vegetarians and pacifists after being dosed with Nexus and realizing first-hand the suffering caused by their actions. At the climax of the final book a distributed intelligence made up of thousands of Nexus-linked humans tries to save the world by healing a posthuman AI goddess who was tortured into madness by her short-sighted human captors. In between, autistic children are healed by being able to feel the minds of other people, mothers connect with the budding consciousness of their unborn children, and sociopaths dose with Nexus so they can feel the pain they inflict on others.

I found this seriousness refreshing, because too often science fiction is confined to riffing on one or two implications of a new technology while leaving almost everything else unchanged.

The 2013 film “Her”, in which Joaquin Phoenix plays a writer who falls in love with an advanced AI operating system, is a good example. While it may seem far-fetched that a human could form a romantic connection with a disembodied intelligence, if such a being were advanced enough to pass the Turing test, would being in love with one be that different from being in love with a person living on the other side of an ocean?

The problem, though, is that beyond this we don’t see much change as a result of extremely advanced AIs being turned loose. A few people become attached to them and complications arise which serve to move the plot forward. But where is the vastly accelerated research in mathematics and computer science? Where are the internecine struggles between AIs competing for resources? Where are the panicked, reactionary governments trying desperately to cling to power?

I realize the film wasn’t meant to be a comprehensive meditation on the changes human-level AIs will usher in, but I still found its extremely limited scope unsatisfying.

The “Nexus” trilogy explores these questions and much more besides. It doesn’t convey the vast, smoldering existential horror of Peter Watts’s “Blindsight”, nor does it quite live up to the narrative majesty of a Vernor Vinge book, to whose “Rainbows End” it is most comparable, but it is an ably crafted, fast-paced international spy story filled to the brim with plausible near-future technology centered on advances in neuroscience and nanotechnology.

There is a decent chance I’ll reread the whole trilogy at some point in the future, which is a high recommendation indeed.

Is Evolution Stoopid?

In a recent post I made the claim that evolution is a blind, stupid process that does what it does by brute-forcing through adjacent regions of possibility space with a total lack of foresight. When I said this during a talk I gave on superintelligence I met with some resistance, along the lines of ‘calling evolution stupid is a mistake because sometimes there are design features in an evolved organism or process which are valuable even if human engineers are not sure why’.

This is true, but it doesn’t conflict with the characterization of evolution as stupid, because by ‘stupid’ I just meant that evolution is incapable of the sort of planning and self-reflection that a human is capable of.

This is very different from saying that it’s trivial for a human engineer to out-think evolution on any arbitrary problem. So far as I know, nobody has figured out how to make replicators as good as RNA or how to make things that can heal themselves, both of which are problems evolution has solved.

The difference is not unlike the difference between intelligence, which is something like processing speed, and wisdom, which is something like intelligence applied to experience.

You can be a math prodigy at the age of 7, but you must accrue significant experience before you can be a wisdom prodigy, and that has to happen at the rate of a human life. If one person is much smarter than another they may become wiser faster, but there’s still a hard limit to how fast you can become wise.

I’ve personally found myself in situations where I’ve been out-thought by someone who I’m sure isn’t smarter than me, simply because that other person has seen so many more things than I have.

Evolution is at one limit of the wisdom/intelligence distinction. Even zero intelligence can produce amazing results given a head start of multiple billions of years, and thus we can know ourselves to be smarter than evolution while humbly admitting that its designs are still superior to our own in many ways.

Whither Discontinuity?

I’m writing a series of posts clarifying my position on the intelligence explosion hypothesis and today I want to discuss discontinuity.

This partially addresses the ‘explosion’ part of ‘intelligence explosion’. Given the fact that most developments in the history of the universe have not been discontinuous, what reason do we have to suspect that an AI takeoff might be?

Eliezer Yudkowsky identifies the following five sources of discontinuity:

Cascades – Cascades occur when ‘one thing leads to another’. A (possibly untrue) example is the evolution of human intelligence.

It is conceivable that other-modeling abilities in higher primates became self-modeling abilities, which allowed the development of complex language, which allowed for the development of politics, which put selection pressure on the human ability to outwit opponents in competition for food and mates, which caused humans to ‘fall up the stairs’ and quickly become much smarter than the next smartest animal.

Cycles – Cycles are like cascades, but with the output hose connected to the input hose. It’s possible for businesses or even individual people to capture enormous parts of a market by reinvesting large fractions of their profits into infrastructure and research. Of course this isn’t the sort of extreme discontinuity we’re interested in, but it’s the same basic idea; a toy sketch of this kind of compounding loop appears below.

Insight – An insight is something like the theory of evolution by natural selection which, once you have it, dissolves lots of other mysteries which before might’ve looked only loosely connected. The resultant gain in knowledge can look like a discontinuity to someone on the outside who doesn’t have access to the insight.

Recursion – Recursion is the turning of a process back on itself. An AI that manages to produce strong, sustained recursive self-improvements could rapidly become discontinuous with humans.

Magic – Magic is a term of art for any blank spaces in our maps. If something smarter than me turns its intelligence to the project of becoming smarter, then there should be results not accounted for in my analysis. I should expect to be surprised.

Any one of these things can produce apparent discontinuities, especially if they occur together. A self-improving AI could produce novel insights, make use of cascades and cycles, and might be more strongly recursive than any other known process.
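To make the cycle idea a bit more concrete, here is a minimal toy sketch in Python. The numbers (reinvestment fraction, efficiency, number of periods) are entirely made up; the only point is that feeding a fixed slice of each period’s output back into capability produces compounding growth, which can read as a discontinuity to an observer who only checks in occasionally.

```python
# Toy model of a cycle: part of each period's output is reinvested in
# capability, so growth compounds. All parameters are arbitrary.

def reinvestment_cycle(capability=1.0, reinvest_fraction=0.3,
                       efficiency=0.15, periods=100):
    """Return the capability trajectory when a fixed fraction of output
    is fed back into improving the thing that produces the output."""
    history = []
    for _ in range(periods):
        output = capability                       # output scales with capability
        capability += reinvest_fraction * efficiency * output
        history.append(capability)
    return history

if __name__ == "__main__":
    trajectory = reinvestment_cycle()
    for period in (0, 24, 49, 74, 99):
        print(f"period {period + 1:3d}: capability = {trajectory[period]:9.2f}")
    # The early checkpoints look nearly flat next to the later ones, which is
    # why a smooth compounding process can look like a sudden jump from outside.
```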

Takeoff Speed II: Recalcitrance in AI Pathways to Superintelligence.

I’m writing a series of posts clarifying my position on the intelligence explosion hypothesis. Last time I took a look at various non-AI pathways to Superintelligence and concluded that the recalcitrance profile for most of them was moderate to high.

This doesn’t mean it isn’t possible to reach Superintelligence via these routes, but it does indicate that doing so will probably be difficult even by the standards of people who think about building Superintelligences all day long.

AI-based pathways to Superintelligence might have lower recalcitrance than these alternatives, because of a variety of advantages a software mind could have over a biological one.

These advantages have been discussed at length elsewhere, but relevant to the present discussion is that software minds could have far greater introspective access to their own algorithms than humans do.

Of course programmers building such a mind might fear an intelligence explosion and endeavor to prevent this sort of deep introspection. But in principle an AI with such capabilities could become smart enough to start directly modifying and improving its own code.

Humans can only do a weak sort of introspection, and therefore can only do a weak sort of optimization to their thinking patterns. So far, anyway.

At a futurist party recently I was discussing these ideas with someone and they asked me what might happen if a recursively self-improving AI hit diminishing returns on each optimization. Might an intelligence explosion just sort of… fizzle out?

The answer is yes, that might happen. But so far as I can tell there isn’t any good reason to assume the returns will diminish that quickly, so the safer bet is to act as though an intelligence explosion probably will happen and to start thinking hard about how to steer such a runaway process in a direction that leads to a valuable future.
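To put the fizzle question in toy form, here is a small Python sketch with entirely made-up numbers. It compares two regimes: one where each round of self-optimization yields the same fractional gain as the last, and one where the gains shrink geometrically. The first compounds without any obvious ceiling; the second levels off almost immediately, which is the fizzle scenario.

```python
# Toy comparison of recursive self-improvement regimes. The parameters are
# arbitrary illustrations, not estimates of anything real.

def self_improve(rounds=50, first_gain=0.10, decay=1.0):
    """Apply `rounds` of optimization; each round's fractional gain equals
    the previous round's gain times `decay` (decay < 1 = diminishing returns)."""
    intelligence, gain = 1.0, first_gain
    for _ in range(rounds):
        intelligence *= (1.0 + gain)
        gain *= decay
    return intelligence

if __name__ == "__main__":
    print(f"sustained returns:   {self_improve(decay=1.0):8.2f}x after 50 rounds")
    print(f"diminishing returns: {self_improve(decay=0.5):8.2f}x after 50 rounds")
```

Which regime the real world resembles is exactly the open empirical question the paragraph above gestures at.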

Convergent AI Drives.

I’m writing a series of posts clarifying my position on the Intelligence Explosion, and here I want to discuss some theoretical work on the types of goals self-improving systems might converge upon.

Stephen Omohundro has made a convincing case that we can expect a wide variety of systems, with different utility functions and different architectures, to manifest a very similar set of sub-goals, because such sub-goals are required to achieve almost any macro-goal.

These sub-goals are commonly referred to as the AI ‘drives’, and my discussion below isn’t exhaustive. Consult Omohundro (2008) and Bostrom (2014) for more lengthy treatments.

Imagine two different systems, one designed to solve the Goldbach conjecture and another to manufacture solar panels. Both systems are at about the intelligence level of a reasonably bright human, and both are capable of making changes to their own code.

These systems find that they can better accomplish their goals if they improve themselves by acquiring more resources and optimizing their reasoning algorithms. Further, they become protective of themselves and their utility functions because, well, they can’t accomplish their current goals if those goals change or they allow themselves to be shut off.

Despite how very different the terminal goals of these two systems are, each of them nevertheless develops drives to self-improve, defend itself, and preserve its utility function, even though neither system had these drives explicitly programmed in at the beginning.

Now, to my knowledge no one is claiming that each and every AI system will manifest all the drives in the course of self-improving. But Omohundro’s analysis might furnish a way to think about the general contours of recursive self-improvement in intelligent machines.

Thinking about the drives in advance is important because we might find that, to our surprise, the first Artificial General Intelligences we make have goals we didn’t give them. They might resist being unplugged or having their goals tampered with.

The drive to self improve is particularly important, though, because it could be a catalyst for an Intelligence Explosion.

The Mathematics of Superintelligence.

I’m writing a series of posts summarizing my position on the Intelligence Explosion, and in addition to the actual empirical, real-world achievements discussed in the last post, significant advances have also been made in developing the theoretical underpinnings of superintelligent agents. Two of particular interest are Marcus Hutter’s AIXI and Jürgen Schmidhuber’s Gödel Machine.

Now, I have to confess at this point that I’m very much out of my depth here mathematically. But as I understand it, AIXI is a set of surprisingly compact equations describing how an optimal agent would gather evidence, update its beliefs, and choose actions to maximize its expected reward.
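For the curious, the action-selection rule is usually written something like the following (this is the standard form from Hutter’s work, reproduced here from memory, so treat it as a pointer to the original rather than a definitive rendering): at each step the agent picks the action that maximizes expected future reward, where the expectation runs over every computable environment consistent with its history, each weighted by its simplicity via the term 2^(-length of the program).

```latex
% AIXI's choice of action at time k, planning to a horizon m.
% U is a universal Turing machine, the a's are actions, the o's and r's are
% observations and rewards, and \ell(q) is the length of program q.
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl[\, r_k + \cdots + r_m \,\bigr]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```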

These equations turn out not to be computable, but they can be approximated. MC-AIXI, for example, is a scaled-down version of AIXI that managed to learn how to play a number of games on its own, from scratch.

The Gödel Machine is a theoretical piece of software with two components: one devoted to performing some arbitrary task, like calculating digits of pi, and another, called a proof searcher, which is capable of rewriting any part of the Gödel Machine’s code, including itself, as soon as it has found a proof that the rewrite would be an improvement.
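Purely as an illustration of that two-component architecture (this is not Schmidhuber’s formalism, and the proof search is reduced to a stub that merely stands in for a real theorem prover), the control flow looks something like this:

```python
# Toy sketch of the Gödel Machine's two-component structure. The real machine
# searches for formal proofs about its own code; here `proven_improvement` is
# a stub, so only the control flow is illustrated.
import random

def solve_task(progress):
    """Component 1: grind away at some arbitrary object-level task."""
    return progress + 1

def propose_rewrite():
    """Stand-in for the proof searcher proposing a self-modification."""
    return random.choice([None, {"speedup": 1.1}, {"speedup": 1.3}])

def proven_improvement(rewrite):
    """Stand-in for verifying a proof that the rewrite raises expected utility."""
    return rewrite is not None

progress, speed = 0, 1.0
for _ in range(10):
    progress = solve_task(progress)        # do object-level work
    candidate = propose_rewrite()          # search for a self-rewrite
    if proven_improvement(candidate):      # apply a rewrite only once it is 'proven'
        speed *= candidate["speedup"]

print(f"task progress: {progress}, cumulative self-speedup: {speed:.2f}x")
```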

The first superintelligence might bear little mathematical resemblance to AIXI or the Gödel Machine, but these theoretical successes, combined with all the progress that’s been made in AI in previous years, lend weight to the notion that smarter-than-human machines will be a part of the future.

 

Peripatesis: Narrow And General AI, Maximus ‘The Delayer’ Avoids Battle With Hannibal.

‘Peripatesis’ is a made-up word related to the word ‘peripatetic’, which is an adjective that means ‘roaming’ or ‘meandering’. I’ve always liked to think of knowledge as a huge structure through which a person could walk, sprint, dive, climb, or fly in as straightforward or peripatetic a fashion as they like.

Here are my recent wanderings and wonderings:

Bostrom, N., Superintelligence, p. 1-22

The book’s far-ranging introduction spends most of its time taking a high-altitude look at the history and state-of-the-art of AI. After its founding, the field cycled through boom periods of high investment and optimism and ‘winters’ during which funding disappeared and AI research fell out of favor. Behind the scenes, however, the actual nitty-gritty of AI development continued, resulting in more sophisticated expert systems, better neural nets, and numerous problems of the ‘computers will never do X’ variety being solved.

While surveying some of the astonishing successes of modern AI Bostrom introduces the distinction between a ‘narrow AI’, one with extremely high performance in a single domain like chess playing, and ‘general AI’, software able to reason across a wide variety of domains like humans can. No matter how impressive Watson or Deep Blue might be, they are only able to outperform humans in very limited ways; the real interest lies in machines that are as good or better than humans in lots of different ways.

Chapter 1 ends with a discussion of three different surveys taken of the opinions of AI experts. One survey was on when the experts thought human-level AI would be developed, one was on how long it would take human-level machines to become superintelligent, and another was on the overall impact of superintelligent AIs. It is notoriously difficult to predict when and what progress will be made in AI and so expert opinions were, predictably, all over the place. But the results do hint that the problem of AI safety is worth thinking seriously about.

Goldsworthy, A. The Fall of Carthage, p. 190-196

In the face of several crushing defeats the Roman government elected a dictator, Quintus Fabius Maximus, who would spend his six-month term carefully avoiding engagements with Hannibal, a passivity for which he received the nickname ‘the delayer’. This strategy, while causing much consternation among war-hungry Roman aristocrats, later came to be seen as Rome’s salvation, giving her time to recover from the defeats at Trebia and Trasimene.

Hannibal spent this period criss-crossing the Apennines, pillaging and looting freely. During one particularly crafty maneuver, he managed to move through a pass blocked by Fabius’ army by tying wooden branches to the horns of oxen, lighting the branches on fire, and sending the animals into the pass ahead of his men. The Roman troops occupying the pass believed the Carthaginian army was on the move and so descended to engage. In the resulting confusion Hannibal managed to slip the main column of his army through to the other side, carrying with them the spoils of war gathered over the previous weeks.

Peripatesis: Suffering And The Self, Hannibal v. Longus In Northern Italy.

‘Peripatesis’ is a made-up word related to the word ‘peripatetic’, which is an adjective that means ‘roaming’ or ‘meandering’. I’ve always liked to think of knowledge as a huge structure through which a person could walk, sprint, dive, climb, or fly in as straightforward or peripatetic a fashion as they like.

Here are my recent wanderings and wonderings:

Harris, S. Waking Up, p. 1-118:

Sam Harris opens his book on secular spirituality by discussing his early experiments in contemplative practice, and sets the context for the discussion by clearing away some troublesome underbrush.

It’s become fashionable to view all religions as variations on an underlying theme, and the intellectual edifices of the world’s religions do look the same, in the sense that forests look the same when viewed at high altitude from the passenger seat of a supersonic jet. If one parachutes out of the jet, however, the requirements for survival vary greatly depending on whether the forest one lands in is of the deciduous, evergreen, or tropical rainforest variety.

But there is a sense in which the experiences people have in the context of religious practice really are universal. Better still, when lifted from the philosophical ruins in which they’re normally found, these experiences can be viewed as the empirical, verifiable outcome of certain ways of paying attention.

Mindfulness is probably the most widely known attention-based practice here in the West. It doesn’t require the adoption of any religious beliefs; it only requires that you learn to experience each moment simply and directly, without being lost in a never-ending cascade of discursive thought. This is a deceptively simple set of instructions. Harris claims, however, that if one learns to do so, one can find a kind of happiness that is available regardless of what direction one’s life is going. This is the point of spirituality.

But spiritual practices also furnish an indispensable set of tools for studying consciousness. No one can rule out the possibility that we’ll some day develop information-theoretic or neuroscientific concepts that allow us to speak of mind and matter as one thing, but that day is not today. We are stuck simply poking brains and asking subjects what is happening between their ears, and those with the ability to make fine-grained introspective distinctions will be able to provide better first-person data.

In chapter 2 Harris discusses a fascinating implication of the split-brain phenomenon that hadn’t occurred to me before: it’s possible that a functionally normal human brain harbors multiple centers of consciousness. It’s already known that when a person is put to sleep to have their corpus callosum cut, (at least) two people wake up. Further, there is reason to believe that even an intact corpus callosum is insufficient to integrate all the information occurring in both hemispheres. This raises the possibility that each of us is walking around with not only a first-person point of view but also one or more silent intelligences inhabiting the circuitry of our brains.

Harris gets down to what is really his primary philosophical objective in chapter 3: painting a bull’s eye on the sense of self.

As a matter of subjective experience most people feel like they are a ghostly presence hovering behind their eyes, in possession of a body but not identical to it, watching a stream of consciousness but distinct from it. Harris believes this to not only be incorrect, but to be one of the largest tributaries of human suffering.

If I understand Harris’s arguments, he is claiming that the illusion of the self persists because most of us spend so much of our lives buffeted by hurricanes of discursive thinking, inner monologues, memories, speculation, and emotion that we never stop to inspect it. Once one develops the contemplative tools necessary to actually begin looking for the self, it disappears in much the same way many optical illusions do when examined closely.

With this disappearance comes recognition of the impermanence of the states of mind through which we cartwheel from one moment to the next, and it then becomes possible to glimpse an ego-less consciousness prior to and between the arrival of thoughts. Navigating to this space is profoundly restful, because one can cease, however briefly, to be a slave to the chatter of one’s mind.

Goldsworthy, A., The Fall of Carthage, p. 173-181.

Caught unawares by Hannibal’s appearance in northern Italy after his famous crossing of the Alps in 218 BC, the Roman senate ordered the return of one of the consuls, Sempronius Longus, who joined forces with Scipio just a few miles from Hannibal’s camp. Hannibal, suspicious that the Gallic tribes in the area might be courting the Romans, sent parties to loot and plunder the Gauls, who then did request Roman help. Roman velites engaged Hannibal’s raiding parties, and the ensuing skirmish would have erupted into a full-scale conflict but for Hannibal’s brilliant leadership and unwillingness to fight unprepared.

Both Longus and Hannibal had good reasons for wanting to force an engagement, but it was Hannibal who emerged victorious when the two finally squared off at the battle of the Trebia, this despite the fact that a large chunk of the Roman legionaries managed to punch through the Carthaginian lines late in the day.

 

Fast Losing Ground

I’m writing a series of posts summarizing my position on the Intelligence Explosion, and here I want to give a couple of examples of recent AI developments which should make even hardened skeptics consider the possibility that our creations might soon catch up with us.

But first, I want to point out that while the history of early AI research is marred by over-confident prognostications that failed to pan out, helping to bring about several “AI winters”, it is also true that AI skeptics have a history of believing that ‘machines will never do X’, only to have machines do X not very long thereafter.

This is humorously captured in the following cartoon, attributed to Ray Kurzweil:

 

[Image: Kurzweil AI cartoon]

Most of us are rapidly becoming acquainted with living in a world suffused with increasingly smart software. But many would be surprised to learn that there are computer programs in existence right now which can write compelling classical music. Emily Howell is the product of several decades’ work by David Cope, who conceived of the idea of creating software to help with his music after experiencing a particularly bad case of composer’s block. The results speak for themselves:

[Embedded Emily Howell recording]

Granted this is not exactly breathtaking; it might be what we’d expect from an advanced piano student who was still primarily leaning on technique because she hadn’t found her creative voice yet. But it’s a long way from the soundtracks of 8-bit video games I grew up playing, and it was written by a computer program.

But what about natural language? Computer-generated music is impressive, but can computers rise to the challenge of processing and responding to speech in real time? IBM’s Watson, a truly monumental achievement, managed not only to do this, but to utterly stomp two of the best Jeopardy! players of all time. Last I checked, the technology was being turned to helping doctors make better diagnoses.

To my mind the most impressive example is the lesser-known Adam (King et al., 2004), an almost fully autonomous science laboratory which, when fed data on yeast genetics, managed to form a hypothesis, design and carry out an experiment to test that hypothesis, and in the process discover something that had been unknown to any scientist before. Though this may seem light-years away from an AI doing, say, astrophysics research, the difference is one of degree, not kind.

Admittedly, we’re still not talking about general intelligences like human beings here. But the weight of the evidence points to a future where increasingly large chunks of civilization are being managed by intelligent machines. This may come to include the production of art, science, and even the design of new intelligent systems.

 

Your Intelligence Isn’t Magical.

I’m writing a series of posts summarizing my views on the Intelligence Explosion, and the first claim I want to defend is that we should take seriously the possibility of human-level artificial intelligence because fundamentally human intelligence is not magic.

Human intelligence is the product of the brain, an object of staggering complexity which, nevertheless, is built up from thoroughly non-magical components. When neurons are networked together into more and more sophisticated circuitry, there is no point at which magic enters the process and gives rise to intelligence.

Furthermore, human intelligence is the product of the blind, brute-force search algorithm that is evolution. Organisms are born with random mutations into environments which act as fitness functions. Beneficial mutations preserve themselves by leading to greater reproductive success, while deleterious ones eliminate themselves by lowering it. Evolution slowly explores possibilities by acting on and changing existing DNA patterns.
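A minimal ‘weasel’-style sketch in Python (an old teaching toy, not a model of real biology; the target string, mutation rate, and population size are all arbitrary) shows how much blind mutation plus selection can accomplish with zero foresight:

```python
# Blind search: random mutation plus selection against a fixed fitness
# function, with no planning or look-ahead anywhere in the loop.
import random

TARGET = "methinks it is like a weasel"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(genome):
    """Count positions that already match the target."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    """Copy the genome, randomly flipping each character with small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for generation in range(2000):
    population.sort(key=fitness, reverse=True)       # selection: fitter half survives
    survivors = population[:100]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(100)]
    if fitness(population[0]) == len(TARGET):
        break

print(f"generation {generation}: {population[0]!r}")
```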

Even without engineering oversight, evolution managed to produce Homo sapiens, primates with the ability to reason across a wide variety of domains and to use their intelligence in ways radically different from the uses for which it evolved.

This is not to imply that our intelligence is well understood; my impression is that great strides have been made in modeling brain activity, but we are surely still a long way from having probed these mysteries fully.

Nor does it imply that building a human-level intelligence will be easy. For decades now AI researchers and computer scientists have been trying, making progress on various narrowly defined tasks like chess, but they are still nowhere near creating a general reasoner on par with humans.

Additionally, it doesn’t imply that a human-level AI must actually resemble human intelligence in any way. AI research is a vast field, and within it there are approaches which draw on neuroscience and mathematical psychology, as well as de novo approaches which aim to build an AI ‘from the ground up’, as it were.

But don’t lose sight of this key fact: the intelligence which produced these words is a non-magical product of a brain made of non-magical components which was produced by a non-magical process. It is hard for me to see where or why a skeptic could draw a special line in the sand at the level of a human and say ‘machines won’t ever get this far’.