Takeoff Speed, I: Recalcitrance in Non-AI Pathways to Superintelligence

I’m writing a series of posts clarifying my position on the Intelligence Explosion hypothesis. Though I feel that the case for such an event is fairly compelling, it’s far less certain how fast the ‘takeoff’ will be, where ‘takeoff’ is defined as the time elapsed between achieving a roughly human-level intelligence and arriving at a superintelligence.

Once we’ve invented a way for humans to become qualitatively smarter or made machines able to improve themselves, should we expect greater-than-human intelligence in a matter of minutes or hours (a ‘fast takeoff’), over a period of weeks, months, or years (a ‘moderate takeoff’), or over decades and centuries (a ‘slow takeoff’)? What sorts of risks might each scenario entail?

Nick Bostrom (2014) provides the following qualitative equation for thinking about the speed with which intelligence might explode:

Rate of Improvement = (optimization power) / (recalcitrance)

‘Recalcitrance’ here refers to how amenable a system might be to improvements, a value which varies enormously for different pathways to superintelligence.
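To make Bostrom’s equation concrete, here is a toy numerical sketch of it. The functional forms below are illustrative assumptions of mine, not anything Bostrom specifies: one profile lets the system’s growing capability feed back into the optimization effort, while the other pits constant effort against steeply rising recalcitrance.

```python
# Toy Euler integration of Bostrom's qualitative equation:
#   dI/dt = optimization_power(I) / recalcitrance(I)
# All functional forms and thresholds here are illustrative
# assumptions, not anything Bostrom specifies.

def takeoff_time(optimization_power, recalcitrance,
                 i_start=1.0, i_goal=10.0, dt=0.001):
    """Integrate intelligence I from i_start ('human-level') to
    i_goal ('superintelligent') and return elapsed model years."""
    i, t = i_start, 0.0
    while i < i_goal:
        i += optimization_power(i) / recalcitrance(i) * dt
        t += dt
    return t

# Recursive, AI-like profile: capability feeds back into the
# optimization effort while recalcitrance stays flat.
print(takeoff_time(lambda i: i, lambda i: 1.0))     # ~2.3 model years

# Brain-enhancement-like profile: flat effort against recalcitrance
# that climbs steeply once the low-hanging fruit is picked.
print(takeoff_time(lambda i: 1.0, lambda i: i**2))  # ~333 model years
```

Under these made-up assumptions the recursive profile reaches the goal in a couple of model years while the steep-gradient profile takes centuries, which is exactly the qualitative gap between a fast and a slow takeoff.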

A non-exhaustive list of plausible means of creating a superintelligence includes programming a seed AI which begins an improvement cascade, upgrading humans with smart drugs or computer interfaces, emulating a brain in a computer and then improving it or speeding it up, and making human organizations vastly superior.

These can broadly be lumped into ‘non-AI-based’ and ‘AI-based’ pathways, each of which has a different recalcitrance profile.

In the case of improving the human brain through drugs, genetic enhancements, or computers, we can probably expect the initial recalcitrance to be low, because each of these areas of research is inchoate and there is bound to be low-hanging fruit waiting to be discovered.

The current generation of nootropics is very crude, so a few years or a decade of concerted, well-funded research might yield classes of drugs able to boost the IQs of even healthy individuals by 20 or 30 points.

But while it may be theoretically possible to find additional improvements in this area, the brain is staggeringly complicated, with many subtle differences between individuals, so in practice we are only likely to get so far in trying to enhance it through chemical means.

The same basically holds for upgrading the human brain via digital prosthetics. I don’t know of any reason that working memory can’t be upgraded with the equivalent of additional sticks of RAM, but designing components that the brain tolerates well, figuring out where to put them, and getting them where they need to go is a major undertaking.

Beyond this, the brain and its many parts interact with each other in complex and poorly-understood ways. Even if we had solved all the technical and biological problems, the human motivation system is something that’s only really understood intuitively, and it isn’t obvious that the original motivations would be preserved in a radically-upgraded brain.

Perhaps, then, we can sidestep some of these issues by digitally emulating a brain and speeding it up a thousandfold.

Though this pathway is very promising, no one is sure what would happen to a virtual brain running much faster than its analog counterpart was ever meant to. It could think circles around the brightest humans or plunge into demented lunacy. We simply don’t know.
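For a back-of-the-envelope sense of what such a speedup would buy (my own arithmetic for the hypothetical thousandfold case, not a figure from any emulation proposal):

```python
# Subjective time experienced by a hypothetical 1000x brain emulation.
speedup = 1000                      # assumed speed multiplier
days_per_year = 365.25
subjective_years_per_day = speedup / days_per_year
print(f"{subjective_years_per_day:.1f} subjective years per wall-clock day")
# -> 2.7: nearly three years of subjective thought every real-world day
```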

Finally, there appears to be a very steep recalcitrance gradient in improving human organizations, assuming you can’t also modify the humans involved.

Though people have figured out ways of allowing humans to cooperate more effectively (and I assume the role the internet has played in improving the ability to coordinate on projects large and small is too obvious to need elaboration), it’s difficult to imagine what a large-scale general method for optimizing networks of humans would even look like.

None of the above should be taken to mean that research into Whole Brain Emulation or human-computer interaction isn’t well worth doing. It is, but many people make the unwarranted assumption that the safest path to superintelligence is to start with a human brain, because at least then we’d have something with recognizably human motivations which, in turn, would also understand us.

But the difficulties adumbrated above may make it more likely that some self-improving algorithm crosses the superintelligence finish line first, meaning our research effort should be focused on machine ethics.

Perhaps more troubling still, we can’t simply assume that we’ll be able to manage brain upgrades, digital, chemical, or otherwise, precisely enough to ensure that the resulting superintelligence is benevolent or even sane.
