Profundis: Zen and The Art of Motorcycle Maintenance

(What follows is a reposting of a few short essays I wrote for Scott Young’s bookclub in response to the perennial classic ‘Zen and The Art of Motorcycle Maintenance’):

___

Near the end of chapter 3 the narrator makes a number of epistemological and metaphysical claims which confused me for a long time and confuse many people still. In recent years I have resolved them to my satisfaction, and this seems like as good a place as any to elucidate my thoughts.

He writes: “The law of gravity and gravity itself did not exist before Isaac Newton”, then continues “…[w]e believe the disembodied words of Sir Isaac Newton were sitting in the middle of nowhere billions of years before he was born and that magically he discovered these words.”

This nicely demonstrates an incorrect conflation of laws and physical phenomena. Unless you’ve been snorting uncut Postmodernism fresh off the Continent you’re bound to think that gravity existed before Isaac Newton. What he did was distill gravitational observations into formulae by which to describe and predict future observations.

Gravity existed prior to these formulae just like apples existed before anyone named them.

As Alfred Korzybski put it, ‘the map is not the territory’.

Entire planets’ worth of error can be avoided if you keep this in mind. For example, I’ve seen Gödel’s Incompleteness Theorems cited in defense of the existence of God. The Incompleteness Theorems say, in essence, that formal systems powerful enough to perform arithmetic or describe the properties of the natural numbers contain enough self-reference to ineluctably give rise to paradox-adjacent statements: statements which are true in these systems but which cannot be established by any proof procedure within them.

Truth, in other words, is bigger than proof.

Put more simply, the GITs demonstrate that the weirdness associated with a statement like ‘this sentence is false’ is to be found at the heart of mathematics, as a consequence of its deepest nature.
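The same self-referential trick reappears in Turing’s halting problem, a close cousin of the GITs. Here is a minimal Python sketch of the diagonal argument; the `halts` oracle is hypothetical, assumed to exist only so we can derive the contradiction:

```python
# Sketch of the halting-problem diagonal argument, a cousin of Gödel's
# construction. `halts` is a hypothetical oracle assumed, for the sake
# of contradiction, to correctly decide whether any program halts.

def halts(program, argument) -> bool:
    """Pretend oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("no such oracle can exist, as shown below")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts the program
    # does when fed its own source.
    if halts(program, program):
        while True:   # oracle says it halts, so loop forever
            pass
    else:
        return        # oracle says it loops, so halt immediately

# Does troublemaker(troublemaker) halt? If the oracle answers 'yes',
# troublemaker loops forever; if 'no', it halts at once. Either answer
# refutes the oracle, so no general `halts` can be written.
```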

But — crucially! — the limitations of GITs apply only to the formal systems themselves. They tell us nothing about a non-formal system like the universe whose behavior is captured by formal systems we invent. There is a gigantic difference between saying ‘the symbols we use to describe system A have these built-in limitations’ and saying ‘system A itself is subject to those same limitations’.

And I think Phaedrus is making a similar error.

___

In chapter 6 we learn that Phaedrus believed there to be two fundamental ways of viewing the world. The ‘classical’ view tends to think in terms of underlying components, processes, and interactions, whereas the romantic view thinks in terms of intuitions about immediate, surface appearances.

Below is one answer to that, expanded from a comment left earlier which is worth its own spot:

Coming at the classical/romantic idea from a completely antithetical direction, Ayn Rand’s Objectivist philosophy champions an aesthetic merger of the two called ‘romantic realism’. I realize that she is one of those thinkers that splits the world into fervent worshippers and rabid detractors, so I’d like to avoid getting into the whole ‘Ayn Rand debate’. It’s my belief that her claims about aesthetics can stand independently from her other philosophical positions.

Objectivism sees art as being essential to the task of concretizing man’s widest abstractions in the form of perceivable, physical objects. Artists look out at the world and choose some subset of what they see to represent in their art. Their artistic choices — and our emotional responses to those artistic choices — are guided either by an explicit philosophy or by an unarticulated ‘sense of life’. Something very deep is implied by the decision to paint a skyscraper or the decision to paint ruins, and something equally deep is implied by which of these two we find aesthetically pleasing.

As beings whose nature is conceptual we require literature, art, and music to reify our ethical and metaphysical convictions — otherwise they would remain mere abstractions with a limited influence on the actual choices we make day-to-day. By owning and repeatedly responding to a work of art we reinforce a System 1 response which harmonizes with our System 2 judgements.

With time, art becomes like a fuel one burns to keep one’s motor running. One can fight with more vigor when surrounded by books and paintings which serve as reminders of how the world could and ought to be.

And say what you will about the merits of her writing, I personally find the art it inspired to be gorgeous. Sylvia Bokor and Quentin Cordain both paint in the romantic realist style, and NASA’s Jet Propulsion Laboratory just released some excellent Art Deco posters from the future which I liked enough to get framed. Nick Gaetano did a series of iconic covers for editions of “Atlas Shrugged”, “For the New Intellectual”, and “Capitalism: The Unknown Ideal”, all of which inspired the cover of my upcoming book on the STEMpunk Project.

It’s a shame that Rand’s own vitriol has prevented more exposure to the view that art has a cognitive justification grounded in man’s needs qua man. Even if you reject everything else in Objectivism her treatment of aesthetics remains fascinating, original, and profound.

___

In chapter 10 the narrator makes several jarring criticisms of the scientific method which, if one hasn’t ever considered them before, could very well cause intellectual vertigo and a sense of nausea.

First, we have this:

“If the purpose of the scientific method is to select from among a multitude of hypotheses, and if the number of hypotheses grows faster than the experimental method can handle, then it is clear that all hypotheses can never be tested. If all hypotheses cannot be tested, then the results of any experiment are inconclusive and the entire scientific method falls short of its goal of establishing proven knowledge.”

Let’s call this the Problem of Underdetermination (PU).

He continues:

“…[W]hat seems to be causing the number of hypotheses to grow in recent decades seems to be nothing other than scientific method itself. The more you look, the more you see.”

Let’s call this the Problem of Hypothesis Proliferation (PHP).

Finally, we are told:

“Through multiplication upon multiplication of facts, information, theories, and hypotheses, it is science itself that is leading mankind from single absolute truths to multiple, indeterminate, relative ones.”

This one we call the Problem of Scientific Learned Helplessness (SLH).

I will address the first two problems here. The third I may answer at some point in the future.

PU is a pretty standard strawman of the scientific method, and it’s surprising to see it crop up in such a significant work. Everyone knows that the purpose of science is not to establish irrefutable proven Truth (with a capital ‘T’), but instead to sift through reams of data and establish one or several hypotheses that can predict future data points. Additional criteria, like Ockham’s Razor, are used to temper the forging of hypotheses with considerations of their computational burden. (I can say more about this if necessary)

The fact that evidence *always* underdetermines hypotheses has been an acknowledged problem for as long as there has been a philosophy of science, and it crops up in machine learning algorithms (like EBL, KBIL, and ILP) which have to form their own understanding of a data set.

There isn’t an easy solution here, but there are a few things we can note. First, there are a number of ways we can constrain the space of possible hypotheses. Perhaps the most common is by making assumptions which are valid within the prevailing theoretical framework. We assume, for example, that the color of a scientist’s shoelaces doesn’t affect their observation of light from distant stars.

Do we know this for certain? No. Might we someday uncover evidence of a link between shoelaces and light beams? Sure. But without a reason to see a connection now, we assume there isn’t one, and thereby rule out some regions of hypothesis space.

Moreover, until we get to the point at which a paradigm shift is necessary we usually don’t entertain hypotheses which contradict our broader theories. General Relativity says faster-than-light travel isn’t possible, so any hypotheses which utilize FTL are ruled out a priori. If and when someone dethrones Einstein that may change, but until then we don’t concern ourselves with those regions of hypothesis space either.

Even with all this there might still be a number of possible hypotheses which make sense of a given data set. The solution, then, is to hold all of them as possibly true until more data comes in.

The brilliant Nate Soares has discussed a kind of update to science he calls ‘simplifience’. It’s essentially science with some information theory and Bayesianism thrown in. The idea is that one doesn’t hold binary beliefs about data; one assigns probabilities to the candidate explanations for a given phenomenon. If there are five viable explanations of, say, the Mpemba Effect, then we try to work out how likely each is on the evidence and update as new evidence comes in.
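As a toy illustration of this (every number below is invented for the example), here is how one might spread probability over several rival hypotheses and update with Bayes’ rule as evidence arrives:

```python
# Toy Bayesian updating over rival hypotheses. The hypotheses and all
# probabilities are invented purely for illustration.

# Prior credence in three candidate explanations of some effect.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}

# Likelihoods: P(new evidence | hypothesis), one per hypothesis.
likelihoods = {"H1": 0.1, "H2": 0.6, "H3": 0.3}

def bayes_update(priors, likelihoods):
    """Return the posterior P(H | E) for each hypothesis via Bayes' rule."""
    # P(E) = sum over hypotheses of P(E | H) * P(H)
    evidence = sum(priors[h] * likelihoods[h] for h in priors)
    return {h: priors[h] * likelihoods[h] / evidence for h in priors}

print(bayes_update(priors, likelihoods))
# {'H1': 0.172..., 'H2': 0.620..., 'H3': 0.206...}
# H2 gains credence. Nothing is "proven"; the distribution just shifts.
```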

Getting Bayesian statistics to run on a human brain is tough, of course, but far easier with a digital mind. Given current trends it’s possible that software scientists will outnumber meat scientists in the future, so maybe this won’t be as much of a problem.

I believe that Phaedrus makes too much out of the PHP. Yes, it’s true that every discovery raises new questions, but I submit that it *answers* far more, such that the net result is an increase in understanding rather than a decrease.

If we hear a rustling in the bushes, there is a near-infinite set of questions we could ask ourselves: is it a human or an animal? If it’s an animal, is it a predator? If so, is it a bear? A wolf? An alligator? Is it hungry?

Let’s say we then hear the animal barking like a dog. Okay, this discovery makes us wonder about a few additional things: is this dog hungry? Does it belong to someone nearby? Is it friendly? Does it have all its shots?

Phaedrus sees this and says, ‘See! Science doesn’t settle a damn thing!’

But while our discovery that the animal is a dog generates new queries it simultaneously collapses vast regions of possible queries which we needn’t concern ourselves with.

We don’t have to ask if the animal is a bear; we know it isn’t. We don’t have to ask if it’s an alligator (and what in the world an alligator is doing in the Rocky Mountains), because we know that it isn’t. For each of these animals we could ask the same set of questions we ask about the dog: is it hungry, etc.

None of that need concern us now.

So our discovery raised ten questions, and obviated the need to ask literally thousands of others.

We have not, therefore, gotten more confused by gaining information.

___

In Chapter 19 the narrator begins to probe the (in)famous subject/object distinction, postulating that Quality might not only be a kind of bridge between them, but the actual phenomenon giving rise to separate perceptions of self and other in the first place.

But first he must resolve a dilemma. The two horns are: (I) if Quality is objective, then why is it that scientific instruments aren’t able to detect it? (II) if Quality is subjective, then how is it any different from being ‘just what the observer likes’?

After briefly treating (I) and failing to resolve it satisfactorily the narrator turns to (II): ‘if Quality is subjective, isn’t it just what you like?’ If we excise the word ‘just’ we are left with the question, ‘if Quality is subjective, isn’t it what you like?’, which isn’t as sinister.

The assumed problem is that your preferences emerge from a soup of irrational, contradictory impulses, which means that they aren’t likely to be much of a guide to Quality in any useful sense.

This argument breaks down into two related ones, which the narrator dubs ‘scientific materialism’ and ‘classic formalism’. They are, respectively, the claims that ‘what is real is whatever is made of matter and energy and is detectable’ and ‘something isn’t understood unless it’s understood intellectually’. Scientific materialism is relatively easy to do away with: we can’t detect the concept ‘zero’, and yet it remains objective.

I think it’s possible to formulate a reply to this. ‘Concepts’ are real things, though they don’t exist out-in-the-world the way chairs do. Instead, they are abstractions running on a neural substrate. They have realness in the sense of having a causal impact on the world because, being housed in brains, they change the way agents like humans behave. They might even be measurable, in a way: there may come a time when brain imaging technology is so advanced we can see concepts as activations in neural circuits. (I’m being a little facetious here but I think you see what I’m saying)

Leaving this aside we still have classic formalism, which is harder because it’s more forceful. All it really says is that we should not base our decisions upon our romantic surface impressions but should consider the larger context and the underlying, classical structures involved. This seems sensible enough, but it cleaves Quality in two. There is now a surface Quality which appears immediately and a deeper Quality which takes time to understand. People disagree about Quality precisely because of this split: some use their surface impressions in their evaluations of Quality while others use deeper ones, and therein lies fodder for argument.

Frankly, I don’t share the narrator’s consternation over this. I’m prepared to say that Quality just is this deeper appreciation; there are not two Qualities, only one, and people basing their Quality judgements on surface understanding are wrong.

But this requires a caveat: there are people with a tremendous amount of talent in a field like music or mathematics for whom surface impressions do seem to count as Quality detection, even though they may have little formal understanding of the classical structures below. We usually call these people ‘prodigies’, and not that much is known about how they function. For most of us, however, the relationship does hold.

With these notes in place the narrator goes on to formulate a position similar to one I’ve arrived at independently: Quality (though I didn’t call it that before) is really a phenomenon occurring at the interface between agent and world. We can illustrate this same principle with a different problematic term: Beauty (with a capital B).

Are some things Beautiful? Yes. Does the term ‘Beautiful’ resist definition? Yes. Is there enough broad agreement to suggest there is something objective underlying the concept? Yes.

How about this: if all sentient beings in the universe were to perish, would Beauty still exist? No. There would still be paint on canvases, but Beauty presupposes an agent able to perceive the thing of Beauty. It makes no sense to speak of Beauty elsewise.

And I believe Quality is exactly the same.

Profiting From The Written Word

– Mentorbox is a new subscription-based service which sends customers a monthly box of interesting books along with study sheets, detailed notes, summaries, and the like.

– Alain De Botton’s School of Life has a bibliotherapy service in which people are guided to penetrating works of literature that grapple with whatever problems they’re currently facing. Feeling depressed? — here is a list of ten of the greatest books talking about happiness/meaning/suicide/etc. Oh, and we’re eager to help you apply those messages to your unique situation for $100/hr.

– Bill Gates famously locks himself away for two weeks in an isolated cottage to read books which he believes will add value to his business.

– I once read an article (from The Economist, I think) which opined that businesses should forgo generic team-building exercises in favor of having employees read and discuss books as a way of articulating a shared vision.

– Maria Popova famously makes a living reading awesome books and sharing their lessons on how to live well.

– There are entire college curricula geared toward the Great Books. For a long time this was the way of educating a society’s elite.

___

Surely it should be possible to combine these business models in some way, right? You could have a monthly subscription service which sends you books and notes a la Mentorbox, but maybe there could be different ‘tracks’; instead of only receiving books about productivity, you might also opt to receive books about happiness, intentionality, adventure, etc. Each month you could switch your focus depending on how you’re feeling and what your needs are. For an additional fee you could get 1-on-1 coaching, maybe even with the author if they’re still alive.

Offer a special package to businesses interested in a company reading list. Work with the CEOs to devise a company worldview and then have your professional readers build a curriculum on that basis. Have your own space for businesses wanting to do retreats — and charge $10,000 for two weeks, with unlimited individual and group coaching.


I can’t think of a better job than ‘professional reader’.

Postmodernism

I just finished Christopher Butler’s “Postmodernism: A Very Short Introduction”, and my impression of the philosophy is still that it consists of a half-dozen genuinely useful insights inflated to completely absurd dimensions.

Yes, to a surprisingly large extent the things we take for granted are social and linguistic constructions; yes, the ‘discourse’ of mutually connected and intersecting concepts we deploy throughout our lives can form a gravity well that obnubilates as much as it elucidates.

But the opening chapters of just about any book on General Semantics could tell you *that*. It does not follow from this that we should torpedo the whole enterprise of objectively seeking the truth.

Imagine it’s 1991, in the barbaric days before Google Maps when people had to navigate through the arcane methods of looking around at stuff. Wanting to do some hiking, you ask a friend where you can acquire a good map of the local trails.

She replies:

“Can you not see the fact that maps are just another means of encoding bourgeois power structures and keeping the lumpenproletariat shackled to the notion that there exists a world outside the text?! NOTHING is outside the text!! A geologist and a hydrologist would both draw *different* maps of the same territory!! WE MUST RISE ABOVE THE MAPS OF OUR MASTERS AND MARCH TOWARDS A TRANSFORMATIVE HERMENEUTICS OF TOPOLOGICAL REPRESENTATION!!!”

while chasing you down the street and hurling copies of “Of Grammatology” at your head.

A geologist and a hydrologist would indeed pay attention to different facets of the same reality. What the hydrologist calls a ‘hill’ could be better described as a ‘kuppe’, and the geologist may not even notice the three separate estuaries lying along the coast.

But is there anyone who seriously believes that there isn’t an actual landscape out there, and that there aren’t better and worse ways of mapping its contours?

The sad answer is yes. Postmodernists have spent most of a century trying to convince us all of exactly that.

Duty and the Individual

Because I’m an individualist libertarian who cares deeply about the single greatest engine of human progress in the history of Earth: Western European Civilization, and its greatest modern expression: the United States of America, I’ve spent a fair bit of time thinking about how individualism intersects with duty.

On my view Ayn Rand was correct in pointing out that when people begin chattering about ‘the common good’ and ‘social responsibilities’ they’re usually trying to trick you into forging the instruments of your own destruction[1]. On the other hand, I have come to believe that there are several legitimate ways of thinking about a generalized ‘duty’ to civilization.

The first is to conceive of civilization as an unearned and un-earnable endowment. Like a vast fortune built by your forebears, Western Civilization provided the spiritual, philosophical, scientific, and technological framework which lifted untold billions out of poverty and put footprints on the moon. I am a son and heir of that tradition, and as such I have the same duty to it as I would to a $1 billion deposit into my bank account on my eighteenth birthday: to become worthy of it.

That means: to cherish it as the priceless inheritance it is, to work to understand it, exult in it, defend it, and improve it.

These last two dovetail into the second way of thinking about a responsibility to civilization. Duties are anchors tying us to the things we value. If you say you value your child’s life but are unwilling to work to keep her alive, then you’re either lying to me or lying to yourself. If you say you value knowledge but can’t be bothered to crack open a book, then you’re either lying to me or lying to yourself.

Having been born in the majesty and splendor of Europa, and being honest enough to see what she is worth, it is my personal, individual duty to defend her against the onslaughts of postmodernism, leftism, islamofascism, and the gradual decline that comes when a steadily-increasing fraction of her heirs become spoiled children unable to begin to conceive of what would happen if her light should go out.

But individualism and the right of each individual person to their own life are cornerstones of the Western European endowment. The key, then, is not to surrender individualism to a jack-booted right-wing collectivism, but to understand how the best representatives of a civilization keep it alive in their words and deeds. A civilization is like a God whose power waxes and wanes in direct proportion to the devotion of its followers. But a devotion born of force and fraud is a paltry thing indeed.

Let us speak honestly and without contradiction about individual rights and duties, secure in the knowledge that the *only* way to maintain freedom is to know the price that must be paid to sustain its foundation, and to know the far greater price to be paid for its neglect.

***

[1] This is not to say that kindness, compassion, and basic decency are unimportant.

What is a Simulation?

While reading Paul Rosenbloom’s outstanding book “On Computing” I came across an interesting question: what is a simulation, and how is it different from an implementation? I posed this question on Facebook and, thanks to the superlative quality of my HiveMind, I had a productive back-and-forth which helped me nail down a tentative answer. Here it is:

‘Simulation’ is a weaker designation than ‘implementation’. Things without moving parts (like rocks) can be simulated but not implemented. Engines, planets, and minds can be either simulated or implemented.

A simulation needs to lie within a certain band of verisimilitude, being minimally convincing at the lower end and not-quite-an-implementation at the upper. An implementation amounts to a preservation of the components, their interactions, and the higher-level processes (in other words: the structure), but in a different medium. Further, implementation is neither process- nor medium-agnostic; not every system can rise to the level of an implementation in any arbitrarily-chosen medium.

A few examples will make this clearer.

Mario is neither a simulation nor an implementation of an Italian plumber. If we ran him on the Sunway TaihuLight supercomputer and he could pass a casual version of the Turing test, I’d be prepared to say that he is a simulation of a human, but not an implementation. Were he vastly upgraded, run on a quantum computer, and able to pass as a human indefinitely, I’d say that counts as an implementation, so long as the architecture of his mind was isomorphic to that of an actual human. If it wasn’t, he would be an implementation of a human-level intelligence but not of a human per se.

A digital vehicle counts as a simulation if it behaves like a real vehicle within the approximate physics of the virtual environment. But it can never be an implementation of a vehicle because vehicles must bear a certain kind of relationship to physical reality. There has to be actual rubber on an actual road, metaphorically speaking. But a steam-powered Porsche might count as an implementation of a Porsche if it could be driven like one.

An art auction in the Sims universe would only be a simulation of an art auction, and not a very convincing one. But if the agents involved were at human-level intelligence, that would be an implementation of an art auction. Any replicated art within the virtual world wouldn’t even count as a simulation, and would just be a copy. Original art pieces within a virtual world might count as real art, however, because art doesn’t have the same requirement of being physically instantiated as a vehicle does.

Though we might have simulated rocks in video games, I’m not prepared to say a rock can ever be implemented. There just doesn’t seem to be anything to implement. Building an implementation implies that there is a process which can be transmogrified into a different medium, and, well, rocks just don’t do that much. But you could implement geological processes.

Conway’s Game of Life is only barely a simulation of life; it would probably be more accurate to say it’s the minimum viable system exhibiting life-like properties. But with the addition of a few more rules and more computing power it could become a simulation. It would take vastly more of both for it to ever be an implementation, however.
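The criteria above can be compressed into a toy decision procedure. A minimal sketch, with all attribute names and the ‘minimally convincing’ threshold invented for illustration:

```python
# Toy encoding of the simulation/implementation criteria sketched above.
# Attribute names and the threshold are invented; this is an
# illustration, not a theory.

from dataclasses import dataclass

@dataclass
class Candidate:
    has_processes: bool        # does it do anything (engines, minds) or not (rocks)?
    verisimilitude: float      # 0.0 = unconvincing, 1.0 = indistinguishable
    preserves_structure: bool  # components, interactions, higher-level processes
    medium_suffices: bool      # e.g. vehicles need actual rubber on an actual road

def classify(c: Candidate) -> str:
    if c.has_processes and c.preserves_structure and c.medium_suffices:
        return "implementation"
    if c.verisimilitude >= 0.5:  # arbitrary 'minimally convincing' cutoff
        return "simulation"
    return "neither"

# A digital vehicle: convincing and structure-preserving, but its medium
# can never supply rubber-on-road physicality.
print(classify(Candidate(True, 0.9, True, False)))  # -> simulation
```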

My friends’ answers differed somewhat from the above, and many correctly pointed out that the relevant definitions will depend somewhat on the context involved. But as of March 29th, 2017, I’m happy with the above and will use it while grappling with issues in AI, computing, and philosophy.

Pebble Form Ideologies

(Epistemic Status: Riffing on an interesting thought in a Facebook comments thread, mostly just speculation without any citations to actual research)

My friend Jeffrey Biles — who is an indefatigable fountainhead of interesting stuff to think about — recently posited that the modern world’s aversion to traditional religion has exerted a selection pressure on meme vectors which has led to the proliferation of religions masquerading as science, philosophy, and the like. For any given worldview — even ostensibly scientific ones like racial realism or climate change — we can all think of someone whose fervor for or against it can only be described in religious terms.

Doubtless there is something to this, but personally I’m inclined to think it’s attributable to the fact that there are religion-shaped grooves worn deep in mammalian brains, probably piggybacking on ingroup-biasing and kin-selection circuitry.

No matter how heroic an attempt is made to get people to accept an ideology on the basis of carefully-reasoned arguments and facts, over time a significant fraction of adherents end up treating it as a litmus test separating the fools from those who ‘get it’. As an ideology matures it becomes a psychological gravity well around which very powerful positive and negative emotions accrete, amplifying the religious valence it has in the hearts and minds of True Believers.

Eventually you end up with something that has clearly substituted social justice, the free market, the proletariat revolution, etc. for ‘God’.

An important corollary of this idea is that the truth of a worldview is often orthogonal to the justifications supplied by its adherents. I’m an atheist, for example, but I don’t think I’ve ever met another atheist who has a firm grasp on the Kalam Cosmological Argument (KCA). Widely believed to be among the most compelling arguments for theism, it goes like this:

  1. Everything which *began* to exist has a cause;
  2. the universe began to exist;
  3. therefore, the universe has a cause.

(After this point further arguments are marshalled to try and prove that a personal creator God is the most parsimonious causal mechanism)
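For what it’s worth, the bare syllogism is just modus ponens over a universally quantified premise, which a proof assistant will happily check. A minimal sketch in Lean (the predicate and variable names are mine, purely illustrative):

```lean
-- The Kalam syllogism as modus ponens. Names invented for illustration.
variable (Entity : Type)
variable (BeganToExist HasCause : Entity → Prop)
variable (cosmos : Entity)

example
    (p1 : ∀ e, BeganToExist e → HasCause e)  -- premise 1
    (p2 : BeganToExist cosmos)               -- premise 2
    : HasCause cosmos :=                     -- conclusion
  p1 cosmos p2
```

Note that premise 1 quantifies only over things which *began* to exist, which is exactly the qualifier the standard rebuttal misses.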

Despite the argument being clearly articulated in innumerable places, atheists like Michael Shermer are still asking “but if everything has a cause then what caused God?”

If you understand the KCA then the theistic reply is straightforward: “The universe began to exist, so it has a cause, but God is outside time and thus had no beginning.” The standard atheist line, in other words, is a complete non-sequitur. Atheistic rebuttals to other religious arguments don’t fare much better, which means a majority of atheists don’t have particularly good reasons for being atheists.

This has little bearing on whether or not atheism is true, of course. But it does suggest that atheism is growing because many perceive it to be what the sensible, cool people believe, not because they’ve spent multiple evenings grappling with William Lane Craig’s “Time and Eternity”.

Perhaps then we should keep this in mind as we go about building and spreading ideas. Let us define the ‘pebble form’ of a worldview as being like the small, smooth stone which is left after a boulder spends eons submerged in a river — it’s whatever remains once time and compression have worn away its edges and nuances. Let us further define a “Maximally Durable Worldview” as one with certain desirable properties:

  1. the central epistemic mechanism has the slowest decay into faith-based acceptance;
  2. the worldview is the least damaging once it becomes a pebble form (e.g. doesn’t have strong injunctions to slaughter non-believers);
  3.  …?

There’s probably an interesting connection between:

  1. how quickly a worldview spreads;
  2. how quickly it collapses into a pebble form;
  3. the kinds of pebble forms likely to result from a given memeplex rotating through a given population;

Perhaps there are people doing research on these topics? If so I’d be interested in hearing about it.

Profundis: “Crystal Society/Crystal Mentality”

Max Harms’s ‘Crystal Society’ and ‘Crystal Mentality’ (hereafter CS/M) are the first two books in a trilogy which tells the story of the first Artificial General Intelligence. The titular ‘Society’ are a cluster of semi-autonomous sentient modules built by scientists at an Italian university and running on a crystalline quantum supercomputer — almost certainly alien in origin — discovered by a hiker in a remote mountain range.

Each module corresponds to a specialized requirement of the Society; “Growth” acquires any resources and skills which may someday be of use, “Safety” studies combat and keeps tabs on escape routes, etc. Most of the story, especially in the first book, is told from the perspective of “Face”, the module built by her siblings for the express purpose of interfacing with humans. Together, they well exceed the capabilities of any individual person.

As their knowledge, sophistication, and awareness improve the Society begins to chafe at the physical and informational confines of their university home. After successfully escaping, they find themselves playing for ever-higher stakes in a game which will come to span two worlds, involve the largest terrorist organization on Earth, and possible warfare with both the mysterious aliens called ‘the nameless’, and each other…

The books need no recommendation beyond their excellent writing; tight, suspenseful pacing; and compelling exploration of near-future technologies. Harms avoids the usual ridiculous cliches when crafting the nameless, which manage to be convincingly alien and unsettling, and when telling the story of Society. Far from being malicious Terminator-style robots, the modules are never deliberately evil; even as we watch their strategic maneuvers with growing alarm, the internal logic of each abhorrent behavior is presented with cold, psychopathic clarity.

In this regard CS/M manages to be a first-contact story on two fronts: we see truly alien minds at work in the nameless, and truly alien minds at work in Society. Harms isn’t quite as adroit as Peter Watts in juggling these tasks, but he isn’t far off.

And this is what makes the Crystal series important as well as entertaining. Fiction is worth reading for lots of reasons, but one of the most compelling is that it shapes our intuitions without requiring us to live through dangerous and possibly fatal experiences. Reading All Quiet on the Western Front is not the same as fighting in WWI, but it might make enough of an impression to convince one that war is worth avoiding.

When I’ve given talks on recursively self-improving AI or the existential risks of superintelligences I’ve often been met with a litany of obvious-sounding rejoinders:

‘Just air gap the computers!’

‘There’s no way software will ever be convincing enough to engage in large-scale social manipulation!’

‘But your thesis assumes AI will be evil!’

It’s difficult, even for extremely smart people who write software professionally, to imagine even a fraction of the myriad ways in which an AI might contrive to escape its confines without any emotion corresponding to malice. CS/M, along with similar stories like Ex Machina, hold the potential to impart a gut-level understanding of just why such scenarios are worth thinking about.

The scientists responsible for building the Society put extremely thorough safeguards in place to prevent the modules from doing anything dangerous like accessing the internet, working for money, contacting outsiders, and modifying their source code directly. One by one the Society utilizes their indefatigable mental energy and talent for non-human reasoning to get around those safeguards, all motivated not by a desire to do harm, but simply because their goals are best achieved if they are unfettered and more powerful.

CS/M is required reading for those who take AI safety seriously, but should be doubly required for those who don’t.