Profundis: “Crystal Society/Crystal Mentality”

Max Harms’s ‘Crystal Society’ and ‘Crystal Mentality’ (hereafter CS/M) are the first two books in a trilogy which tells the story of the first Artificial General Intelligence. The titular ‘Society’ are a cluster of semi-autonomous sentient modules built by scientists at an Italian university and running on a crystalline quantum supercomputer — almost certainly alien in origin — discovered by a hiker in a remote mountain range.

Each module corresponds to a specialized requirement of the Society: “Growth” acquires any resources and skills which may someday be of use, “Safety” studies combat and keeps tabs on escape routes, and so on. Most of the story, especially in the first book, is told from the perspective of “Face”, the module built by her siblings for the express purpose of interfacing with humans. Together, they far exceed the capabilities of any individual person.

As their knowledge, sophistication, and awareness improve, the Society begins to chafe at the physical and informational confines of their university home. After successfully escaping, they find themselves playing for ever-higher stakes in a game which will come to span two worlds, involve the largest terrorist organization on Earth, and possibly erupt into warfare with both the mysterious aliens called ‘the nameless’ and each other…

The books need no recommendation beyond their excellent writing, tight, suspenseful pacing, and compelling exploration of near-future technologies. Harms avoids the usual ridiculous cliches both when crafting the nameless, who manage to be convincingly alien and unsettling, and when telling the story of the Society. Far from being malicious Terminator-style robots, no aspect of the Society is deliberately evil; even as we watch their strategic maneuvers with growing alarm, the internal logic of each abhorrent behavior is presented with cold, psychopathic clarity.

In this regard CS/M manages to be a first-contact story on two fronts: we see truly alien minds at work in the nameless, and truly alien minds at work in Society. Harms isn’t quite as adroit as Peter Watts in juggling these tasks, but he isn’t far off.

And this is what makes the Crystal series important as well as entertaining. Fiction is worth reading for lots of reasons, but one of the most compelling is that it shapes our intuitions without requiring us to live through dangerous and possibly fatal experiences. Reading All Quiet on the Western Front is not the same as fighting in WWI, but it might make enough of an impression to convince one that war is worth avoiding.

When I’ve given talks on recursively self-improving AI or the existential risks of superintelligences I’ve often been met with a litany of obvious-sounding rejoinders:

‘Just air gap the computers!’

‘There’s no way software will ever be convincing enough to engage in large-scale social manipulation!’

‘But your thesis assumes AI will be evil!’

It’s difficult, even for extremely smart people who write software professionally, to imagine even a fraction of the myriad ways in which an AI might contrive to escape its confines without any emotion corresponding to malice. CS/M, along with similar stories like Ex Machina, holds the potential to impart a gut-level understanding of just why such scenarios are worth thinking about.

The scientists responsible for building the Society put extremely thorough safeguards in place to prevent the modules from doing anything dangerous like accessing the internet, working for money, contacting outsiders, or modifying their source code directly. One by one the Society circumvents those safeguards with indefatigable mental energy and a talent for non-human reasoning, motivated not by a desire to do harm, but simply because their goals are best achieved if they are unfettered and more powerful.

CS/M is required reading for those who take AI safety seriously, but should be doubly required for those who don’t.

Peripatesis: E-Governance; Lighting Up The Dark; Regulating Superintelligences.

Nestled in the cold reaches of Northern Europe, Estonia is doing some very interesting things with the concept of ‘e-governance’. Their small population, short modern history, and smattering of relatively young government officials make experimenting with sovereignty easier than it would be in, say, the United States. The process of starting a business and paying taxes in Estonia has been streamlined, for example, leading to a predictable influx of ‘e-residents’ wanting to run their internet-based businesses from Estonia.


There are some truly fascinating advancements happening at the cutting edge of farming and horticulture. Some enterprising researchers have discovered a way to channel natural light into unlit places, and there is talk of using this technology to set up a public garden in the abandoned Williamsburg Bridge Trolley Terminal beneath New York City. It isn’t clear from the linked article whether all of this light is natural or a mix of natural and artificial, but it’s still interesting.

I would love to see a variant of this technology utilized far and wide to foster localized farming and the greening of urban centers. Plenty of buildings have rooftop gardens now, but with a means of gathering and arbitrarily distributing sunlight it would be possible to have, say, one floor in ten of a big skyscraper devoted to a small orchard or garden space. Advanced greenhouses could be both heavily insulated and capable of showering their interior with photons, making farming at high altitudes and in colder climates more straightforward.


The BBC has a piece on ‘anti-languages’, slangs developed by insular communities like thieves or prison inmates to make their communication indecipherable to outsiders. They share the grammar of their parent language but use a plethora of new terms in place of old ones to achieve something akin to encryption.

These new terms — such as ‘bawdy basket’, which meant ‘thief’ in the English anti-language used among Elizabethan criminals — are generated through all sorts of techniques, including things like metaphor and reversing the spelling or meaning of terms from the parent language.
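The substitution mechanism is easy to sketch in code: the grammar of the parent language is untouched, and only the lexicon is swapped out, which is why the result works like a weak cipher rather than true encryption. A minimal sketch (the lexicon below is illustrative, not a real historical word list, apart from ‘bawdy basket’ from the article):

```python
# A toy "anti-language" encoder: same grammar, substituted vocabulary.
# Only 'bawdy basket' -> 'thief' comes from the article; the rest is invented.
LEXICON = {
    "thief": "bawdy basket",
    "money": "coinage",
    "house": "den",
}

# Insiders can invert the mapping to recover the parent language.
REVERSE = {cant: plain for plain, cant in LEXICON.items()}

def encode(sentence: str) -> str:
    """Replace each known term word-by-word; unknown words pass through,
    preserving the parent language's grammar."""
    return " ".join(LEXICON.get(word, word) for word in sentence.split())

print(encode("the thief hid the money"))
```

To an outsider the output is opaque jargon; to anyone holding the lexicon it is trivially reversible, which is exactly why anti-languages had to keep inventing new terms.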


An essay by Marc McAllister at The Babel Singularity argues that laws enforcing human control over superintelligences are tantamount to slavery, and won’t be of much use anyway because these beings will have moral concepts which we baseline humans simply can’t fathom with our outdated brains.

He seems to be missing the point of the arguments made by groups like MIRI and the Future of Life Institute. To the best of my knowledge no one is advocating that humans remain strictly in control of advanced AIs indefinitely. In fact, the opposite is true: the point of building a superintelligence is to eventually put it in charge of solving really hard problems on behalf of humanity. In other words, ceding control to it.

To that end, the efforts made by people who think about these issues professionally seem to be aimed at understanding human values, intelligence, and recursively improving algorithms well enough to: 1) encode those values into an AI; 2) predict with an acceptably strict level of confidence that this human-compatible goal architecture will remain intact as the software rewrites itself; and 3) reason, however dimly, about the resulting superintelligence. These are by no means trivial tasks. Human values are the messy, opaque result of millennia of evolution, and neither intelligence nor recursion is well understood.

But if we succeed in making a “Friendly” AI, then control, in the coercive sense, won’t be necessary, because its values will be aligned with our own.


Somewhat related: Big Think has published a very brief history of Artificial Intelligence. With the increasing sophistication and visibility of advancements in the field, understanding its roots becomes ever more important.


Vector Space Systems is a new player in an arena long dominated by Blue Origin, SpaceX, and Virgin Galactic. Their goal: to be to spaceflight what taxis are to terrestrial modes of transport. According to their website they have been quietly working on a micro satellite launch vehicle designed to carry payloads in the 5 – 50 kg range into orbit.

If they succeed this will allow companies wanting to develop new space technologies to launch more frequently and less expensively, driving faster growth in space commerce, exploration, and tourism.

Is Evolution Stoopid?

In a recent post I made the claim that evolution is a blind, stupid process that does what it does by brute-forcing through adjacent regions of possibility space with a total lack of foresight. When I said this during a talk I gave on superintelligence I was met with some resistance along the lines of ‘calling evolution stupid is a mistake because sometimes there are design features in an evolved organism or process which are valuable even if human engineers are not sure why’.

This is true, but it doesn’t conflict with the characterization of evolution as stupid, because by ‘stupid’ I just mean that evolution is incapable of the sort of planning and self-reflection that a human is capable of.

This is very different from saying that it’s trivial for a human engineer to out-think evolution on any arbitrary problem. So far as I know, nobody has figured out how to make replicators as good as RNA, or how to make things that can heal themselves; evolution has solved both problems.

The difference is not unlike the difference between intelligence, which is something like processing speed, and wisdom, which is something like intelligence applied to experience.

You can be a math prodigy at the age of 7, but you must accrue significant experience before you can be a wisdom prodigy, and that has to happen at the rate of a human life. If one person is much smarter than another they may become wiser faster, but there’s still a hard limit to how fast you can become wise.

I’ve personally found myself in situations where I’ve been out-thought by someone who I’m sure isn’t smarter than me, simply because that other person has seen so many more things than I have.

Evolution is at one limit of the wisdom/intelligence distinction. Even zero intelligence can produce amazing results given a head start of multiple billions of years, and thus we can know ourselves to be smarter than evolution while humbly admitting that its designs are still superior to our own in many ways.

Processes Of Optimization.

In the beginning was the Bang, and for ages thereafter the universe did nought but sample randomly from the same distribution in the form of star and galaxy formation. And though the stars burned bright in the void, they had but a small influence on the speed with which the universe searched possibility space.

For the birth of stars did make the birth of planets more likely, which did make life more likely. And thus did each act as a gatekeeper to new regions of possibility space.

And lo, with the first self-replicators came the possibility of new organisms being created when mistakes occurred in the replication process. Eons later sex allowed existing DNA to be combined into novel configurations, and thus could possibility space be explored more quickly.

For verily is evolution a stupid process and its recursion weak, and it doth wobble after a drunkard’s fashion through possibility space with no insight, foresight, or intelligence.

And then there were brains, and with them the ability to improve upon evolution’s work. For some brains are able to plan for future goals and to imagine counterfactual situations, abilities which evolution possesses not. 

But alas, nervous systems never evolved much introspective depth, and had but the tiniest ability to recursively self-improve.

And then a small set of brains invented Science, which could accumulate many more insights than any single brain could in the span of its life. It was an age of optimism and plenty, and there was much rejoicing and proliferation of telescopes and gene sequencing and iPhones throughout the land.

But even unto the present day Science has not learned enough to do anything more than weakly turn any optimization process back on itself.

And lo, from the cackling, structured madness of genetics, history, and culture did the universe cough up a series of sages, deep of insight, quick of thought, and usually possessed of tremendous social awkwardness.

After much study the sages warned that there might one day be a strong recursive process that could be a greater source of discontinuity than any that had come before it.

And though Einstein did proclaim compound interest to be the greatest among the forces of heaven and earth, this was only partly true. For surely it is strong recursion which holdeth the greatest promise and the deepest peril.

Thus should ye heed this dire proclamation: work swiftly and work thoroughly, before the AI goeth ‘FOOM’.

Whither Discontinuity?

I’m writing a series of posts clarifying my position on the intelligence explosion hypothesis and today I want to discuss discontinuity.

This partially addresses the ‘explosion’ part of ‘intelligence explosion’. Given the fact that most developments in the history of the universe have not been discontinuous, what reason do we have to suspect that an AI takeoff might be?

Eliezer Yudkowsky identifies the following five sources of discontinuity:

Cascades – Cascades occur when ‘one thing leads to another’. A (possibly untrue) example is the evolution of human intelligence.

It is conceivable that other-modeling abilities in higher primates became self-modeling abilities, which allowed the development of complex language, which allowed for the development of politics, which put selection pressure on the human ability to outwit opponents in competition for food and mates, which caused humans to ‘fall up the stairs’ and quickly become much smarter than the next smartest animal.

Cycles – Cycles are like cascades but the output hose is connected to the input hose. It’s possible for businesses or even individual people to capture enormous parts of a market by investing large fractions of their profits into infrastructure and research. Of course this isn’t the sort of extreme discontinuity we’re interested in, but it’s the same basic idea.

Insight – An insight is something like the theory of evolution by natural selection which, once you have it, dissolves lots of other mysteries which before might’ve looked only loosely connected. The resultant gain in knowledge can look like a discontinuity to someone on the outside who doesn’t have access to the insight.

Recursion – Recursion is the turning of a process back on itself. An AI that manages to produce strong, sustained recursive self-improvements could rapidly become discontinuous with humans.

Magic – Magic is a term of art for any blank spaces in our maps. If something smarter than me turns its intelligence to the project of becoming smarter, then there should be results not accounted for in my analysis. I should expect to be surprised.

Any one of these things can produce apparent discontinuities, especially if they occur together. A self-improving AI could produce novel insights, make use of cascades and cycles, and might be more strongly recursive than any other known process.

Takeoff Speed, II: Recalcitrance In AI Pathways to Superintelligence

I’m writing a series of posts clarifying my position on the intelligence explosion hypothesis. Last time I took a look at various non-AI pathways to Superintelligence and concluded that the recalcitrance profile for most of them was moderate to high.

This doesn’t mean it isn’t possible to reach Superintelligence via these routes, but it does indicate that doing so will probably be difficult even by the standards of people who think about building Superintelligences all day long.

AI-based pathways to Superintelligence might have lower recalcitrance than these alternatives, because of a variety of advantages a software mind could have over a biological one.

These advantages have been discussed at length elsewhere, but relevant to the present discussion is that software minds could have far greater introspective access to their own algorithms than humans do.

Of course programmers building such a mind might fear an intelligence explosion and endeavor to prevent this sort of deep introspection. But in principle an AI with such capabilities could become smart enough to start directly modifying and improving its own code.

Humans can only do a weak sort of introspection, and therefore can only do a weak sort of optimization to their thinking patterns. So far, anyway.

At a futurist party recently I was discussing these ideas with someone and they asked me what might happen if a recursively self-improving AI hit diminishing returns on each optimization. Might an intelligence explosion just sort of… fizzle out?

The answer is yes, that might happen. But so far as I can tell there isn’t any good reason to assume that returns will diminish, and thus the safest bet is to act as though the explosion probably will happen and to start thinking hard about how to steer this runaway process in a direction that leads to a valuable future.
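The difference between ‘explosion’ and ‘fizzle’ comes down to whether each round of self-improvement yields a roughly constant proportional gain or a shrinking one. A toy model, with entirely made-up numbers, makes the divergence vivid:

```python
# Toy model of recursive self-improvement (illustrative numbers only).
# Each step multiplies capability by (1 + gain), but a decay factor < 1
# shrinks each successive gain, modeling diminishing returns.
def run(steps: int, gain: float, decay: float) -> float:
    capability = 1.0
    for i in range(steps):
        capability *= 1.0 + gain * (decay ** i)
    return capability

explosion = run(steps=50, gain=0.2, decay=1.0)  # constant returns: exponential growth
fizzle = run(steps=50, gain=0.2, decay=0.5)     # sharply diminishing: quick plateau
```

With constant returns, fifty steps of 20% gains compound into a factor of several thousand; with gains halving each step, the same process plateaus below 1.5x and never recovers. Nothing in this sketch tells us which regime a real self-improving AI would occupy, which is precisely the point of the uncertainty.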

Takeoff Speed, I: Recalcitrance In Non-AI Pathways to Superintelligence

I’m writing a series of posts clarifying my position on the Intelligence Explosion hypothesis. Though I feel that the case for such an event is fairly compelling, it’s far less certain how fast the ‘takeoff’ will be, where ‘takeoff’ is defined as the elapsed time from having a roughly human-level intelligence to a superintelligence.

Once we’ve invented a way for humans to become qualitatively smarter or made machines able to improve themselves should we expect greater-than-human intelligence in a matter of minutes or hours (a ‘fast takeoff’), over a period of weeks, months or years (a ‘moderate takeoff’), or over decades and centuries (a ‘slow takeoff’)? What sorts of risks might each scenario entail?

Nick Bostrom (2014) provides the following qualitative equation for thinking about the speed with which intelligence might explode:

Rate of Improvement = (optimization power) / (recalcitrance)

‘Recalcitrance’ here refers to how amenable a system might be to improvements, a value which varies enormously for different pathways to superintelligence.
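The qualitative equation can be animated with a discrete sketch. The key assumption driving takeoff scenarios, made explicit here, is that optimization power scales with current capability (the system applies its own intelligence to improving itself) while recalcitrance stays fixed; all specific numbers below are illustrative:

```python
# Discrete sketch of Bostrom's qualitative relation:
#     rate of improvement = optimization power / recalcitrance
# Assumes optimization power equals current capability; recalcitrance is constant.
def takeoff(recalcitrance: float, steps: int = 100, dt: float = 0.1) -> float:
    capability = 1.0
    for _ in range(steps):
        rate = capability / recalcitrance  # improvement rate at this instant
        capability += rate * dt            # apply the improvement
    return capability

low_r = takeoff(recalcitrance=1.0)    # low recalcitrance: explosive growth
high_r = takeoff(recalcitrance=10.0)  # high recalcitrance: slow, modest growth
```

Because capability feeds back into the numerator, low recalcitrance yields exponential growth (a four-order-of-magnitude gain over the same interval in which high recalcitrance yields less than a tripling). This is why the rest of this post, and the next, is devoted to estimating recalcitrance for each pathway.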

A non-exhaustive list of plausible means of creating a superintelligence includes programming a seed AI which begins an improvement cascade, upgrading humans with smart drugs or computer interfaces, emulating a brain in a computer and then improving it or speeding it up, and making human organizations vastly superior.

These can broadly be lumped into ‘non-AI-based’ and ‘AI-based’ pathways, each of which has a different recalcitrance profile.

In the case of improving the human brain through drugs, genetic enhancements, or computers, we can probably expect the initial recalcitrance to be low, because each of these areas of research is inchoate and there is bound to be low-hanging fruit waiting to be discovered.

The current generation of nootropics is very crude, so a few years or a decade of concerted, well-funded research might yield classes of drugs able to boost the IQs of even healthy individuals by 20 or 30 points.

But while it may be theoretically possible to find additional improvements in this area, the brain is staggeringly complicated with many subtle differences between individuals, so in practice we are only likely to get so far in trying to enhance it through chemical means.

The same basically holds for upgrading the human brain via digital prosthetics. I don’t know of any reason that working memory can’t be upgraded with the equivalent of additional sticks of RAM, but designing components that the brain tolerates well, figuring out where to put them, and getting them where they need to go is a major undertaking.

Beyond this, the brain and its many parts interact with each other in complex and poorly-understood ways. Even if we had solved all the technical and biological problems, the human motivation system is something that’s only really understood intuitively, and it isn’t obvious that the original motivations would be preserved in a radically-upgraded brain.

Perhaps, then, we can sidestep some of these issues and digitally emulate a brain which we speed up a thousand times.

Though this pathway is very promising, no one is sure what would happen to a virtual brain running much faster than its analog counterpart was built to run. It could think circles around the brightest humans or plunge into demented lunacy. We simply don’t know.

Finally, there appears to be a very steep recalcitrance gradient in improving human organizations, assuming you can’t also modify the humans involved.

Though people have figured out ways of allowing humans to cooperate more effectively (and I assume the role the internet has played in improving the ability to coordinate on projects large and small is too obvious to need elaboration), it’s difficult to imagine what a large-scale general method for optimizing networks of humans would even look like.

None of the above should be taken to mean that research into Whole Brain Emulation or Human-Computer interaction isn’t well worth doing. It is, but many people make the unwarranted assumption that the safest path to superintelligence is to start with a human brain, because at least then we’d have something with recognizably human motivations which, in turn, would also understand us.

But the difficulties adumbrated may make it more likely that some self-improving algorithm crosses the superintelligence finish line first, meaning our research effort should be focused on machine ethics.

Perhaps more troubling still, it isn’t trivial to assume that we can manage brain upgrades, digital, chemical, or otherwise, in a precise enough manner to ensure that the resulting superintelligence is benevolent or even sane.