What is a Simulation?

While reading Paul Rosenbloom’s outstanding book “On Computing” I came across an interesting question: what is a simulation, and how is it different from an implementation? I posed this question on Facebook and, thanks to the superlative quality of my HiveMind I had a productive back-and-forth which helped me nail down a tentative answer. Here it is:

‘Simulation’ is a weaker designation than ‘implementation’. Things without moving parts (like rocks) can be simulated but not implemented. Engines, planets, and minds can be either simulated or implemented.

A simulation needs to lie within a certain band of verisimilitude, being minimally convincing at the lower end and not-quite-an-implementation at the upper. An implementation amounts to a preservation of the components, their interactions, and the higher-level processes (in other words: the structure), but in a different medium. Further, implementation is neither process- nor medium-agnostic; not every system can rise to the level of an implementation in an arbitrarily-chosen medium.

A few examples will make this clearer.

Mario is neither a simulation nor an implementation of an Italian plumber. If we ran him on the Sunway TaihuLight supercomputer and he could pass a casual version of the Turing test, I’d be prepared to say that he is a simulation of a human, but not an implementation. Were he vastly upgraded, run on a quantum computer, and able to pass as a human indefinitely, I’d say that counts as an implementation, so long as the architecture of his mind was isomorphic to that of an actual human. If it wasn’t, he would be an implementation of a human-level intelligence but not of a human per se.

A digital vehicle counts as a simulation if it behaves like a real vehicle within the approximate physics of the virtual environment. But it can never be an implementation of a vehicle because vehicles must bear a certain kind of relationship to physical reality. There has to be actual rubber on an actual road, metaphorically speaking. But a steam-powered Porsche might count as an implementation of a Porsche if it could be driven like one.

An art auction in the Sims universe would only be a simulation of an art auction, and not a very convincing one. But if the agents involved were at human-level intelligence, that would be an implementation of an art auction. Any replicated art within the virtual world wouldn’t even count as a simulation, and would just be a copy. Original art pieces within a virtual world might count as real art, however, because art doesn’t have the same requirement of being physically instantiated as a vehicle does.

Though we might have simulated rocks in video games, I’m not prepared to say a rock can ever be implemented. There just doesn’t seem to be anything to implement. Building an implementation implies that there is a process which can be transmogrified into a different medium, and, well, rocks just don’t do that much. But you could implement geological processes.

Conway’s Game of Life is only barely a simulation of life; it would probably be more accurate to say it’s the minimum viable system exhibiting life-like properties. But with the addition of a few more rules and more computing power it could become a simulation. It would take vastly more of both for it to ever be an implementation, however.
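For reference, the entire current rule set fits in a dozen lines of Python. What follows is a minimal sketch of the standard rules, mine rather than anything from Rosenbloom or the Facebook thread:

from collections import Counter

def life_step(live):
    # Conway's rules: a dead cell with exactly 3 live neighbors is born;
    # a live cell with 2 or 3 live neighbors survives; everything else dies.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider" crawls diagonally forever under these rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(life_step(glider))

That so much life-like behavior falls out of so little code is exactly why the system makes such a good minimum viable example.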

My friends’ answers differed somewhat from the above, and many correctly pointed out that the relevant definitions will depend somewhat on the context involved. But as of March 29th, 2017, I’m happy with the above and will use it while grappling with issues in AI, computing, and philosophy.

Pebble Form Ideologies

(Epistemic Status: Riffing on an interesting thought in a Facebook comments thread, mostly just speculation without any citations to actual research)

My friend Jeffrey Biles — who is an indefatigable fountainhead of interesting stuff to think about — recently posited that the modern world’s aversion to traditional religion has exerted a selection pressure on meme vectors which has led to the proliferation of religions masquerading as science, philosophy, and the like. For any given worldview — even ostensibly scientific ones like racial realism or climate change — we can all think of someone whose fervor for or against it can only be described in religious terms.

Doubtless there is something to this, but personally I’m inclined to think it’s attributable to the fact that there are religion-shaped grooves worn deep in mammalian brains, probably piggybacking on ingroup-biasing and kin-selection circuitry.

No matter how heroic an attempt is made to get people to accept an ideology on the basis of carefully-reasoned arguments and facts, over time a significant fraction of adherents end up treating it as a litmus test separating the fools from those who ‘get it’. As an ideology matures it becomes a psychological gravity well around which very powerful positive and negative emotions accrete, amplifying the religious valence it has in the hearts and minds of True Believers.

Eventually you end up with something in which ‘God’ has clearly been swapped out for social justice, the free market, the proletariat revolution, etc.

An important corollary of this idea is that the truth of a worldview is often orthogonal to the justifications supplied by its adherents. I’m an atheist, for example, but I don’t think I’ve ever met another atheist who has a firm grasp on the Kalam Cosmological Argument (KCA). Widely believed to be among the most compelling arguments for theism, it goes like this:

  1. Everything which *began* to exist has a cause;
  2. the universe began to exist;
  3. therefore, the universe has a cause.

(After this point, further arguments are marshalled to try to prove that a personal creator God is the most parsimonious causal mechanism.)

Despite the argument having been clearly articulated in innumerable places, atheists like Michael Shermer are still asking, “but if everything has a cause, then what caused God?”

If you understand the KCA then the theistic reply is straightforward: “The universe began to exist, so it has a cause, but God is outside time and thus had no beginning.” The standard atheist line, in other words, is a complete non-sequitur. Atheistic rebuttals to other religious arguments don’t fare much better, which means a majority of atheists don’t have particularly good reasons for being atheists.
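The scope restriction is easier to see written out. In rough first-order notation (my rendering, not Craig’s):

\forall x\,(\mathrm{Began}(x) \rightarrow \exists y\,\mathrm{Causes}(y,x))
\mathrm{Began}(\mathrm{universe})
\therefore\ \exists y\,\mathrm{Causes}(y,\mathrm{universe})

The Shermer-style rebuttal attacks the much stronger premise \forall x\,\exists y\,\mathrm{Causes}(y,x), which the argument never asserts; on the theist’s view \neg\mathrm{Began}(\mathrm{God}), so God falls outside the quantifier entirely.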

This has little bearing on whether or not atheism is true, of course. But it does suggest that atheism is growing because many perceive it to be what the sensible, cool people believe, not because they’ve spent multiple evenings grappling with William Lane Craig’s Time and Eternity.

Perhaps then we should keep this in mind as we go about building and spreading ideas. Let us define the ‘pebble form’ of a worldview as being like the small, smooth stone which is left after a boulder spends eons submerged in a river — it’s whatever remains once time and compression have worn away its edges and nuances. Let us further define a “Maximally Durable Worldview” as one with certain desirable properties:

  1. its central epistemic mechanism has the slowest decay into faith-based acceptance;
  2. the worldview is the least damaging once it becomes a pebble form (i.e., it doesn’t carry strong injunctions to slaughter non-believers);
  3.  …?

There’s probably an interesting connection between:

  1. how quickly a worldview spreads;
  2. how quickly it collapses into a pebble form;
  3. the kinds of pebble forms likely to result from a given memeplex rotating through a given population;

Perhaps there are people doing research on these topics? If so I’d be interested in hearing about it.
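In the meantime, here is a back-of-the-envelope sketch of how such a model might start. It is a pure toy: the mechanism (each retelling independently drops each remaining nuance) and every number in it are invented for illustration:

import random

def pebble(nuances=100, p_loss=0.05, generations=60, seed=0):
    # Each generation, every surviving nuance of the worldview is
    # independently lost with probability p_loss.
    random.seed(seed)
    history = [nuances]
    for _ in range(generations):
        nuances = sum(1 for _ in range(nuances) if random.random() > p_loss)
        history.append(nuances)
    return history

# Nuance decays roughly as 100 * (1 - p_loss)**t, so a memeplex that spreads
# through many retellings collapses quickly toward its pebble form.
print(pebble()[::10])

A real model would need transmission between agents, fidelity differences across media, and selection effects, but even this much suggests how spread rate and pebbling rate could be linked.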

Profundis: “Crystal Society/Crystal Mentality”

Max Harms’s ‘Crystal Society’ and ‘Crystal Mentality’ (hereafter CS/M) are the first two books in a trilogy which tells the story of the first Artificial General Intelligence. The titular ‘Society’ are a cluster of semi-autonomous sentient modules built by scientists at an Italian university and running on a crystalline quantum supercomputer — almost certainly alien in origin — discovered by a hiker in a remote mountain range.

Each module corresponds to a specialized requirement of the Society; “Growth” acquires any resources and skills which may someday be of use, “Safety” studies combat and keeps tabs on escape routes, etc. Most of the story, especially in the first book, is told from the perspective of “Face”, the module built by her siblings for the express purpose of interfacing with humans. Together, they well exceed the capabilities of any individual person.

As their knowledge, sophistication, and awareness improve, the Society begins to chafe at the physical and informational confines of their university home. After successfully escaping, they find themselves playing for ever-higher stakes in a game which will come to span two worlds, involve the largest terrorist organization on Earth, and risk warfare with both the mysterious aliens called ‘the nameless’ and each other…

The books need no recommendation beyond their excellent writing, tight, suspenseful pacing, and compelling exploration of near-future technologies. Harms avoids the usual ridiculous cliches in crafting the nameless, who manage to be convincingly alien and unsettling, and in telling the story of the Society. Far from being malicious Terminator-style robots, no aspect of the Society is deliberately evil; even as we watch their strategic maneuvers with growing alarm, the internal logic of each abhorrent behavior is presented with cold, psychopathic clarity.

In this regard CS/M manages to be a first-contact story on two fronts: we see truly alien minds at work in the nameless, and truly alien minds at work in Society. Harms isn’t quite as adroit as Peter Watts in juggling these tasks, but he isn’t far off.

And this is what makes the Crystal series important as well as entertaining. Fiction is worth reading for lots of reasons, but one of the most compelling is that it shapes our intuitions without requiring us to live through dangerous and possibly fatal experiences. Reading All Quiet on the Western Front is not the same as fighting in WWI, but it might make enough of an impression to convince one that war is worth avoiding.

When I’ve given talks on recursively self-improving AI or the existential risks of superintelligences I’ve often been met with a litany of obvious-sounding rejoinders:

‘Just air gap the computers!’

‘There’s no way software will ever be convincing enough to engage in large-scale social manipulation!’

‘But your thesis assumes AI will be evil!’

It’s difficult, even for extremely smart people who write software professionally, to imagine even a fraction of the myriad ways in which an AI might contrive to escape its confines without any emotion corresponding to malice. CS/M, along with similar stories like Ex Machina, hold the potential to impart a gut-level understanding of just why such scenarios are worth thinking about.

The scientists responsible for building the Society put extremely thorough safeguards in place to prevent the modules from doing anything dangerous like accessing the internet, working for money, contacting outsiders, or modifying their source code directly. One by one, the Society utilizes their indefatigable mental energy and talent for non-human reasoning to get around those safeguards, motivated not by a desire to do harm, but simply because their goals are best achieved if they are unfettered and more powerful.

CS/M is required reading for those who take AI safety seriously, but should be doubly required for those who don’t.

Reason and Emotion

One of the most pervasive misconceptions about the rationalist community is that we consider reason and emotion to be incontrovertibly opposed to one another, as if an action were irrational in direct proportion to how much feelings are taken into account. This is so common that it’s been dubbed ‘the Straw Vulcan of rationality’.

While it’s true that people reliably allow anger, jealousy, sadness, etc. to cloud their judgment, it does not follow that aspiring rationalists should always and forever disregard their emotions in favor of clear, cold logic. I’m not even sure it’s possible to deliberately cultivate such an extreme paucity of affect, and if it is, I’m even less sure that it’s desirable.

The heart is not the enemy of the head, and as I see it, the two resonate in a number of different ways which any mature rationality must learn to understand and respect.

1) Experts often have gut-level reactions which are informative and much quicker than conscious reasoning. The art critic who finds something vaguely unsettling about a statue long before anyone notices it’s a knockoff and the graybeard hacker who declares code to be ‘ugly’ two weeks before he manages to spot any vulnerabilities or shoddy workmanship are both drawing upon vast reservoirs of experience to make snap judgments which may be hard to justify explicitly.

Here, the job of the rationalist is to know when their expertise qualifies them to rely on emotional heuristics and when it does not [1].

2) Human introspection is shallow. There isn’t a list of likes and dislikes hidden in your brain somewhere, nor any inspectable algorithm which takes a stimulus as an input and returns a verdict of ‘good’ or ‘bad’. Emotions therefore convey personal information which otherwise would be impossible to gather. There are only so many ways to discover what you prefer without encountering various stimuli and observing the emotional valence you attach to them.

3) It’s relatively straightforward to extend point 2) to other people; in most cases, your own emotional response is your best clue as to how others would respond in similar circumstances [2].

4) Emotional responses like disgust often point to evolutionarily advantageous strategies. No one has to be taught to feel revolted at the sight of rotting meat, and few people feel any real attraction to near-relatives. Of course these responses are often spectacularly miscalibrated. People are unreasonably afraid of snakes and unreasonably unafraid of vehicles because snakes were a danger to our ancestors whereas vehicles were not. But this means that we should be amending our rational calculations and our emotional responses to be better in line with the facts, not trying to lobotomize ourselves.

5) Emotions form an essential component of meaningful aesthetic appreciation [3]. It’s possible to appreciate a piece of art, an artist, an artistic movement, or even an entire artistic medium in a purely cerebral fashion on the basis of technical accomplishments or historical importance. But I would argue that this process is not complete until you feel an appropriate emotion in answer to the merits of whatever it is you’re contemplating.

Take the masonry work on old-world buildings like the National Cathedral in Washington, D.C. You’d have to be a troglodyte to not feel some respect for how much skill must have gone into its construction. But you may have to spend a few hours watching the light filter through the stained-glass windows and feeling the way the architecture ineluctably pulls your gaze towards the sky before you can viscerally appreciate its grandeur.

This does not mean that the relationship between artistic perception and emotional response is automatic or unidirectional. Good art won’t always reduce you to tears, and art you initially enjoyed may seem to be vapid and shallow after a time. Moreover, the object of your aesthetic focus may not even be art in a traditional sense; I have written poetically about combustion engines, metal washers, and the constructed world in general. But being in the presence of genuine or superlative achievement should engender reverence, admiration, and their kin [4].

6) Some situations demand certain emotional responses. One might reasonably be afraid or angry when confronting a burglar in their home, but giddy joy would be the mark of a lunatic. This truth becomes even more stark if you are the head of household and responsible for the wellbeing of its occupants. What, besides contempt, could we feel for a man or woman who left their children in danger out of fear for their own safety?

***

If you’ve been paying attention you’ll notice that the foregoing actually splits into two broad categories: one in which emotions provide the rationalist with actionable data of one sort or another (1-4) and one in which the only rational response involves emotions (5 and 6). This latter category probably warrants further elaboration.

As hard as it may be to believe there are people in the world who are too accommodating and deferential, and need to learn to get angry when circumstances call for it. Conversely, most of us know at least one person to whom anger comes too easily and out of all reasonable proportion. Aristotle noted:

“Anybody can become angry – that is easy, but to be angry with the right person and to the right degree and at the right time and for the right purpose, and in the right way – that is not within everybody’s power and is not easy.”

This is true of sadness, melancholy, exuberance, awe, and the full palette of human emotions, which can be rational or irrational depending on the situation. To quote C.S. Lewis:

“And because our approvals and disapprovals are thus recognitions of objective value or responses to an objective order, therefore emotional states can be in harmony with reason (when we feel liking for what ought to be approved) or out of harmony with reason (when we perceive that liking is due but cannot feel it). No emotion is, in itself, a judgment; in that sense all emotions and sentiments are alogical. But they can be reasonable or unreasonable as they conform to Reason or fail to conform. The heart never takes the place of the head: but it can, and should, obey it.”

-The Abolition of Man

I don’t endorse his view that no emotion is a judgment; points 1-4 were examples in which they are. But the overall spirit is correct. Amidst all the thorny issues a rationalist faces, perhaps the thorniest is examining their portfolio of typical emotional responses, deciding how they should be responding, gauging the distance between these two views, and devising ways of closing that distance.

Extirpating our emotions is neither feasible nor laudable. We must instead learn to interpret them when they are correct and sculpt them when they are not.

***

[1] Of course no matter how experienced you are and how good your first impressions have gotten there’s always a chance you’re wrong. By all means lean on emotions when you need to and can, but be prepared to admit your errors and switch into a more deliberative frame of mind when warranted.

[2] Your emotions needn’t be the only clue as to how others might act in a given situation. You can have declarative knowledge about the people you’re trying to model which overrides whatever data is provided by your own feelings. If you know your friend loves cheese then the fact that you hate it doesn’t mean your friend won’t want a cheese platter at their birthday party.

[3] I suppose it would be more honest to say that I can’t imagine a ‘meaningful aesthetic appreciation’ which doesn’t reference emotions like curiosity, reverence, or awe.

[4] In “Shop Class as Soulcraft”, Matthew Crawford takes this further, and claims that part of being a good mechanic is having a normative investment in the machines on which you work:

“…finding [the] truth requires a certain disposition in the individual: attentiveness, enlivened by a sense of responsibility to the motorcycle. He has to internalize the well working of the motorcycle as an object of passionate concern. The truth does not reveal itself to idle spectators”.

Literary Criticism as Applied Apophenia

Growing up I had far more books than friends, and have been writing regularly since I was about seventeen. In high school I was a voracious reader of “the classics”; with the lamp on late into the night I’d turn the pages of Hemingway and Dickens, not caring to wait for the English class in which they’d be taught. Owing to some high test scores I started college studying masterpieces of world literature with more advanced students, which necessitated much in the way of paper writing and classroom debate.

So it may be a surprise to learn that I’ve never had much patience for literary criticism. Upon hearing someone say “the author is using the bridge as a metaphor to…” or “the lion’s jaw is clearly an expressive vehicle for…”, I would think to myself, how could anyone possibly know that? Yes, a bridge could be a metaphor, but it could also just, y’know, be a bridge.

Now, literary criticism is a vast field and I admit to having explored little of it. But I have had many friends who enjoy literature and film, a nontrivial fraction of whom were themselves steeped in the relevant theory. In an honest effort to understand, I’ve often asked them about the basis of their interpretations, but they’ve rarely provided answers which I found satisfactory.

But with time and experience I’ve learned much. This essay is an attempt to answer my younger self’s skepticism by providing two different mechanisms which can justify the literary critic’s perception of metaphorical significance.

Semi-permeable cognitive membranes

I’ve written before about the fact that human introspection is shallow and that much of what’s going on between our ears must be inferred. If we envision the mind as a kind of machine, then many of its components sit below the waterline and can only be understood indirectly. Further, the cognitive processes utilized for things like crafting a story are not cleanly partitioned from each other.

A corollary of the foregoing is that layers of meaning and metaphor can creep into a work even if the author fails to realize this. I see two ways this could happen, the first being through what may be called “leaky empathy”.

As an author tries to model characters and situations they may themselves begin to drift into corresponding emotional states. The process of writing about a group of horribly oppressed villagers preparing to travel through the forest surrounding their town could well give rise to feelings of despair or anger, albeit probably mild versions. If so, when the author conjures up an image of the forest, their brain will be more likely to produce one that is dark, caliginous, and perhaps vaguely sinister.

The setting has become a metaphor for the internal states of the characters even though the author may not be remotely aware of this dynamic.

Second, and for basically the same reason, a work might reflect an author’s convictions and knowledge even when those convictions are ostensibly unrelated to the work, through what may be called “leaky concepts”.

Imagine an author has just spent a year thinking about how Communism is/isn’t the greatest/worst idea anyone has ever had. When the same author sits down to design a world and plan out a story arc, is there any serious chance they’ll be able to keep these political beliefs from influencing their depictions of kingdoms, economies, and states?

Of course many authors write with the explicit purpose of promulgating a worldview or exploring some complex theme. But even if an author fails to see the lessons implicit in their work, that does not mean that the lessons aren’t there.

Reflective patternicity 

There was supposed to be some rational explanation to justify the mumbo-jumbo. Left-hemisphere pattern-matching sub-routines amped beyond recognition; the buggy wetware that made you see faces in clouds or God’s wrath in thunderstorms, tweaked to walk some fine line between insight and pareidolia. Apparently there were fundamental insights to be harvested along that razor’s edge, patterns that only Bicamerals could distinguish from hallucination.

-Peter Watts, “Echopraxia”

Another corollary to the shallowness of human introspection is that you may be surprised by the contents of your own consciousness. Sometimes the only way to explore your mind is to twist dials until lights start coming on.

Everyone has had the experience of being unusually moved by a song they’ve heard many times before. If a loved one has just passed away, then heightened emotional sensitivity is to be expected. But this isn’t always the case; sometimes, life is progressing as normal and a snatch of conversation, the light of the sun reflected in the glass windows of a skyscraper, or a memory from childhood grabs hold of you and stops you dead in your tracks. Besides being profound and worth experiencing for their own sake these moments also hint at a range of emotional states which most people don’t realize they’re capable of.

If you’re tempted to resist my claim that you don’t know yourself as well as you believe, read through this characteristically thoughtful post from Scott Alexander. It relates the story of a boy who lived his entire life without a sense of smell and didn’t realize it until his late teens, despite the fact that he used all sorts of olfactory expressions, like saying fresh bread “smells good” or teasing his sister by telling her she stinks.

But how was he to know that his sensory experience was different from anyone else’s? He can’t borrow someone else’s nose. He can’t just open a neural command line, run ‘$ cat feelz.txt’, and get back a schematic of his perceptual apparatus, complete with a little blinking cursor in the spots where there are gaps.

Put more plainly: there are numerous facets of your own mind that you aren’t aware of, so it’s worth reading poetry, listening to new music, and going to art museums, just to see how you react. Likewise, it can be useful to try and interpret a piece of literature just to see what your brain comes up with.

This first began to dawn on me in a major way while I was living in South Korea. I had just re-read Dancing With the Gods and it came to my attention that an unusually careful and prolific neopagan scholar had taken up residence in a town not far from mine. We spent a day hiking and discussing all manner of recondite issues in philosophy and religion.

It was a blast.

Near the end I half-jokingly made a disparaging remark about tarot cards. He calmly pulled a deck from his backpack and told me he always carries it with him. During the return walk he made a compelling argument for the utility of reading cards which was rooted entirely in a secular, non-mystical understanding of human psychology.

His reasoning was that superimposing an interpretive framework over cards as they come out can yield genuinely useful information. The mental dots being connected were there all along; the cards emphatically do not provide access to knowledge of the future. But, in the same way that you can agonize for weeks over an important decision and then realize that the answer is obvious after a five minute conversation, sometimes you just need that initial spark.

This is the key point behind Scott Alexander’s essay “Random Noise is Our Most Valuable Resource”. He specifically mentions tarot cards as a source of noise which can help break us out of our mental ruts. Vivian Caethe has tried to leverage this for profit by inventing a tarot deck calibrated for aspiring authors stricken with writer’s block. Both of these are examples of outwardly-focused processes which can also usefully be turned inward.
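In that spirit, you don’t even need a physical deck: a few lines of Python make a crude noise source. The prompt list below is entirely my own invention, not Caethe’s:

import random

# Draw three random "cards" and see what your brain connects them to.
PROMPTS = ["a locked door", "a debt repaid", "an empty chair",
           "a journey interrupted", "a borrowed tool", "a late harvest"]

print(random.sample(PROMPTS, 3))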

And when viewed a certain way I think literary criticism can be a similar sort of introspective scaffolding. Whether or not you believe that the author intended the lion’s jaw as a metaphor, seeing how your brain interprets it metaphorically can be akin to performing a literary version of the Rorschach test. I imagine that, as with tarot cards, doing this long enough will yield an increasingly subtle familiarity with the folds and wrinkles of your psychology.

It’s important not to get too excited about this. Just as people can form incorrect hypotheses about physical data, they can form incorrect ones about introspective data; all the usual rationalist warnings apply. But I have come to believe that this sort of “applied apophenia” can be a tool in the arsenal of those wanting a better understanding of their phenomenological field.


Machine Ethics is Still a Hard Problem

I have now read Digital Wisdom’s essay “Yes, Virginia, We *DO* Have a Deep Understanding of Morality and Ethics” twice, and I am unable to find even one place where the authors do justice to the claims they are criticizing.

With respect to selfishness they write:

“Followers of Ayn Rand (as well as most so-called “rationalists”) try to conflate the distinction between the necessary and healthy self-interest and the sociopathic selfish.”

This is simply untrue. The heroes of Atlas Shrugged work together to bring down a corrupt and parasitic system, John Galt refuses to be made an economic dictator even though doing so would allow him limitless power, and in The Fountainhead Howard Roark financially supports his friend, a sculptor, who otherwise would be homeless and starving.

Nothing — nothing — within Objectivism, Libertarianism, or anarcho-capitalism rules out cooperation. A person’s left and right hand may voluntarily work together to wield an axe, people may voluntarily work together to construct a house, and a coalition of multi-national corporations may voluntarily work together to establish a colony on the moon. Individuals uniting in the pursuit of a goal which is too large to be attempted by any of them acting alone is wonderful, so long as no one is being forced to act against their will. The fact that people are still misunderstanding this point must be attributed to outright dishonesty.

Things do not improve from here. AI researcher Steven Omohundro’s claim that without explicit instructions to do otherwise an AI system would behave in ways reminiscent of a human psychopath is rebutted with a simple question: “What happens when everyone behaves this way?” Moreover, the AI alarmists — a demimonde of which I count myself a member — “totally miss that what makes sense in micro-economics frequently does not make sense when scaled up to macro-economics (c.f. independent actions vs. cartels in the tragedy of the commons).”

I simply have no idea what the authors think they’re demonstrating by pointing this out. Are we supposed to assume that recursively self-improving AI systems of the kind described by Omohundro in his seminal “The Basic AI Drives” will only converge on subgoals which would make sense if scaled up to a full macroeconomic system? Evidently anyone who fails to see that an AI will be Kantian is a fear-mongering Luddite.

To make the moral turpitude of the “value-alignment crowd” all the more stark, we are informed that “…speaking of slavery – note that such short-sighted and unsound methods are exactly how AI alarmists are proposing to “solve” the “AI problem”.”

Again, this is just plain false. Coherent Extrapolated Volition and value alignment are not about slavery; they’re about trying to write computer code which, even after going through billions of rewrites by an increasingly powerful recursive system, still results in a goal architecture that can be safely implemented by a superintelligence.

And therein lies the rub. Given the title of the essay, what exactly does our “deep understanding of morality and ethics” consist of? Prepare yourself, because after you read the next sentence your life will never be the same:

“At essence, morality is trivially simple – make it so that we can live together.”

I know, I know. Please feel free to take a moment to regain your sense of balance and mop up the blood that inevitably results from having such a railroad spike of thermonuclear insight driven into your brain.

In the name of all the gods Olde, New, and Forgotten, can someone please show me where in the voluminous Less Wrong archives anyone says that there won’t be short natural-language sentences which encapsulate human morality?

Proponents of the thesis that human values are complex and fragile are not saying that morality can’t be summarized in a way that is comprehensible to humans. They’re saying that those summaries prove inadequate when you start trying to parse them into conceptual units which are comprehensible to machines.

To see why, let’s descend from the rarefied terrain of ethics and discuss a more trivial problem: writing code which produces the Fibonacci sequence. Any bright ten-year-old could accomplish this task with a simple set of instructions: “Start with the numbers 0 and 1. Each additional number is the sum of the two numbers that precede it. So the sequence goes 0, 1, 1, 2, 3, 5, 8…”

But pull up a command-line interface and try typing in those instructions. Computers, you see, are really rather stupid. Each and every little detail has to be accounted for when telling them which instructions to execute and in which order. Here is one Python script which produces the Fibonacci sequence:


def fib(n):
    # Return the first n Fibonacci numbers, starting from 0 and 1 as in
    # the natural-language instructions above.
    a, b = 0, 1
    fib_list = []
    for _ in range(n):
        fib_list.append(a)
        a, b = b, a + b
    return fib_list
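Run it and you get the promised sequence:

print(fib(8))   # [0, 1, 1, 2, 3, 5, 8, 13]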

You must explicitly store the initial values in two variables or the program won’t run at all. You must build some kind of iterating structure or the program won’t do anything. The two values have to be updated together, in the correct order, or intermediate results will be silently lost. And if you mess something up, the program might start throwing errors, or worse, it may output a number sequence that looks correct but isn’t.

And really, this isn’t even that great of an example because the code isn’t that much longer than the natural language version and the Fibonacci sequence is pretty easy to identify. The difficulties become clearer when trying to get a car to navigate city traffic, read facial expressions, or abide by the golden rule. These are all things that can be explained to a human in five minutes because humans filter the instructions through cognitive machinery which would have to be rebuilt in an AI.

Digital Wisdom ends the article by saying that detailed rebuttals of Yudkowsky and Stuart Russell as well as a design specification for ethical agents will be published in the future. Perhaps those will be better. Based on what I’ve seen so far, I’m not particularly hopeful.

The STEMpunk Project: As The World Opens.

She moved slowly along the length of the motor units, down a narrow passage between the engines and the wall. She felt the immodesty of an intruder, as if she had slipped inside a living creature, under its silver skin, and were watching its life beating in gray metal cylinders, in twisted coils, in sealed tubes, in the convulsive whirl of blades in wire cages. The enormous complexity of the shape above her was drained by invisible channels, and the violence raging within it was led to fragile needles on glass dials, to green and red beads winking on panels, to tall, thin cabinets stenciled “High Voltage.”

Why had she always felt that joyous sense of confidence when looking at machines?—she thought. In these giant shapes, two aspects pertaining to the inhuman were radiantly absent: the causeless and the purposeless. Every part of the motors was an embodied answer to “Why?” and “What for?”—like the steps of a life-course chosen by the sort of mind she worshipped. The motors were a moral code cast in steel.

They are alive, she thought, because they are the physical shape of the action of a living power—of the mind that had been able to grasp the whole of this complexity, to set its purpose, to give it form. For an instant, it seemed to her that the motors were transparent and she was seeing the net of their nervous system. It was a net of connections, more intricate, more crucial than all of their wires and circuits: the rational connections made by that human mind which had fashioned any one part of them for the first time.

They are alive, she thought, but their soul operates them by remote control. Their soul is in every man who has the capacity to equal this achievement. Should the soul vanish from the earth, the motors would stop, because that is the power which keeps them going—not the oil under the floor under her feet, the oil that would then become primeval ooze again—not the steel cylinders that would become stains of rust on the walls of the caves of shivering savages—the power of a living mind —the power of thought and choice and purpose.

-Atlas Shrugged, Ayn Rand

In the classic film American Beauty there is a famous scene wherein one character shows another a video of a plastic bag as it’s blown about by the wind. In whispers he describes how beautiful he found the experience of watching it as it danced, and amidst platitudes about “a benevolent force” he notes that this was the day he fully learned that there is a hidden universe behind the objects which most people take for granted.

One of the chief benefits of The STEMpunk Project has been that it has reinforced this experience in me. While I have thoroughly enjoyed gaining practical knowledge of gears, circuits, and CPUs, perhaps the greater joy has come from a heightened awareness of the fact that the world is shot through with veins of ingenuity and depth.

Understanding the genesis of this awareness requires a brief detour into psychology. Many people seem to labor under the impression that perception happens in the sense organs. Light or sound from an object hits someone and that person observes the object. Cognitive science shows definitively that this is not the case. Perception happens in the brain, and sensory data are filtered heavily through the stock of concepts and experiences within the observer. This is why an experienced mechanic can listen to a malfunctioning engine and hear subtle clues which point to one possible underlying cause or another where I only hear a vague rattling noise.

As my conceptual toolkit increases, therefore, I can expect to perceive things that were invisible to me before I had such knowledge. And this has indeed been the case. More than once I have found myself passing some crystallized artifact of thought — like a retaining wall, or an electrical substation — and wondering how it was built. That this question occurs to me at all is one manifestation of a new perspective on the infrastructure of modern life which is by turns fascinating, humbling, and very rewarding.

I have begun to see and appreciate the symmetry of guard rails on a staircase, the system of multicolored pipes carrying electricity and water through a building, the lattice of girders and beams holding up a bridge; each one the mark of a conscious intelligence, each one a frozen set of answers to a long string of “whys” and “hows”.

This notion can be pushed further: someone has to make not just the beams, but also the machinery that helps to make the beams, and the machinery which mines the materials to make the beams, and the machinery which makes the trucks which carry raw materials and finished products to where they are needed, like ripples in a fabric of civilization pulsing across the world [1].

It’s gorgeous.

A corollary to the preceding is an increased confidence in my own ability to understand how things work, and with it a more robust sense of independent agency. For most of my life I have been a very philosophical person: I like symbols and abstractions, math, music, and poetry. But if every nut and bolt in my house was placed there in accordance with the plans of a human mind, then as the possessor of a (reasonably high-functioning) human mind I ought to be able to puzzle out the basic idea.

Don’t misunderstand me: I know very well that poking around in a breaker box without all the appropriate precautions in place could get me killed. I still approach actual physical systems carefully. But I like to sit in an unfinished basement and trace the path from electrical outlet to conduit to box to subpanel to main panel. On occasion I even roll up my sleeves and actually fix things, albeit after doing a lot of research first.

In fact, you can do a similar exercise right now, wherever you are, to experience some of what I’ve been describing without going through the effort of The STEMpunk Project. Chances are if you’re reading this you’re in a room, probably one built with modern techniques by a contractor’s crew.

Set a timer on your phone for five minutes, and simply look around you. Perhaps your computer is sitting on a table or a desk. What kind of wood is the desk made out of? Were the legs and top machine-made or crafted by hand? If it has a rolling top, imagine how difficult it must have been for the person who made the first prototype.

Does the room have carpet or hardwood floors? Have you ever seen the various materials that go under carpets? Could you lay carpet, if you needed to replace a section? Are different materials used beneath carpet and beneath hardwood? If so, why?

You’re probably surrounded by four walls. Look at where they meet the floor. Is there trim at the seam? What purpose does it serve, and how was it installed so tightly? Most people know that behind their walls there are evenly-spaced boards called “studs”. Who figured out the optimum amount of space between studs? How do you locate studs when you want to hang a picture or a poster on your wall? Probably with a stud finder. How did they find studs before the stud finder was invented?

Does the ceiling above you lay flat or rise up to a point? If it’s a point, have you ever wondered how builders get the point of the ceiling directly over the center of the room? Sure, they probably took measurements of the length and width of the room and did some simple division to figure out where the middle lies. But actually cutting boards and rafters and arranging them so that they climb to an apex directly over the room’s midpoint is much harder than it sounds.

If you do this enough you’ll hopefully find that the mundane and quotidian are surprisingly beautiful in their own way. Well-built things, even just dishwashers and ceiling fans, possess an order and exactness to rival that of the greatest symphonies.

I’m glad I learned to see it.

***

[1] See Leonard Read’s classic essay I, Pencil, for more.

Profundis: A Beautiful Planet

This past Saturday I went on a pleasant little outing to the Denver Museum of Nature and Science with my girlfriend and my younger brother. We decided to see the short, one-hour documentary “A Beautiful Planet” in 3D on IMAX, and it was fantastic.

Narrated by Jennifer Lawrence, the film follows a group of astronauts on their half-year-long stay at the International Space Station. We see how they adapt to life in zero gravity, the rigorous exercise routines they must undertake each day to prevent muscle atrophy and loss of bone density, and get a first-person view as they climb along the outside of the station in iconic white spacesuits. Sprinkled throughout are breathtaking shots of thunderstorms, coastlines, cloud cover, sunrises, snowcaps, and deserts.

Three moments stood out to me as particularly awe-inspiring. In reverse order they were: the view of the Earth at night, an Italian astronaut drinking an espresso made in a special machine, and the first view of the window the crew uses to videotape and photograph home.

As one of the crew members remarks, it can be difficult to even tell that humans live on Earth during the day. But our cities shine like luminous chunks of gold at night. The spark of human intelligence has kindled bonfires of civilization so white hot that a few of us have ridden its flames into space. And as if that weren’t achievement enough, we took our espresso makers with us.

This is the swaggering optimism of a being not content to take its place as just one unusually hairless primate. Instead it has had the audacity to pierce the sky with arrows of steel and leave its footprints on the moon. This same spirit is what led me to fill the walls of my house with stylized ‘space tourism’ posters from NASA’s Jet Propulsion Lab, and what brings me back to the work of Ayn Rand despite my reservations about her underlying philosophy.

We need more of this attitude. And we need it soon.


Ancient Peoples Could Probably See Blue, But Cognitive Archaeology is Still Awesome

According to Richard Carrier, the proud holder of a PhD in ancient history, the speculation that ancient peoples couldn’t see blue is nonsense.

He points out that descriptions of blue-eyed barbarians can be found in the memoirs of Julius Caesar, that ancient Greek not only had words distinguishing blue from green but that roots for those words can be found all the way back in Proto-Indo-European, that blue “cobalt” glass was a hot commodity for thousands of years, and that blue objects are frequently depicted in classical art.

This discussion, however, is a great excuse to introduce the phrase-of-the-day: “Cognitive Archaeology”.

Cognitive archaeology is exactly what it sounds like: a field which harnesses various branches of science along with linguistics and psychology to try and piece together the unique worldview of a group of people from whatever cultural fragments remain.

For the intellectually adventurous, the most extreme cognitive archaeology I’m aware of is to be found in Julian Jaynes’s surreal volume “The Origin of Consciousness in the Breakdown of the Bicameral Mind”. He posits that ancient peoples were not conscious in the way we are today, and then makes a surprisingly difficult-to-dismiss case for this thesis.

My Epistemic Status as of the end of 2015

The following is a list of things I learned or became more convinced of in 2015:

* The Christian god is real, and so are all the others, just not outside anyone’s head. Almost everyone I’ve ever come across, theist or atheist, misinterprets the implications of this.

Religions furnish both a ritual apparatus and introspective scaffolding, and you need both: cultivating a human soulscape is difficult precisely because human introspection is so shallow.

Some operations are best performed via the mythopoetic command line interface, and religions have a monopoly on this.

* Mantras, meditation, and visualizations work because they create depressions in your cognitive manifold towards which the liquids of attention, energy, and motivation flow. The fact that these things are enthusiastically embraced by mushy flower-power hippies doesn’t mean they don’t work.

* I have a suspicion that attention is more poorly understood and more important than most of us realize. I think it might be the mechanism undergirding Sapir-Whorf effects, and I have noticed that ubermenschen like Richard Feynman, Elon Musk, and Josh Waitzkin are capable of a level of focus I can’t seem to reach.

We need a Dictionary of Internal Events in order to better categorize our failings of attention and target our interventions. We need a more concerted effort to understand the algorithms and circuitry undergirding attention so that we can develop ways of training it.

* Huge amounts of race-level differences in performance are attributable to race-level differences in genes. Conversely, almost none of the gender wage gap is attributable to structural discrimination.

* The division of labor should be applied to power. It kind of already is but nobody is honest about it. I’d rather live in a sovereign startup with a national CEO than a tepid democracy where every problem is addressed via an interminable carnival of special committees and hearings.

* Having learned more about the rise and fall of communism, I like the ideology even less.

* Proposed definition of ‘civilization’: “a concatenation of black boxes”. Proposed definition of “culture”: “a constellation of Schelling points hanging in the space between two or more minds”.

* There is such a thing as social technology, and tradition is an example of it. The ‘black boxes’ I mentioned in the previous point can include technologies of this sort. The argument known as “Chesterton’s Fence” has teeth, at least if you don’t like Manticores.

* It is inappropriate to categorize systems on the basis of their being “fragile” or “robust”. Rather, think of them as exhibiting what I call ‘vector-dependent fragility’.

A rocket is designed to withstand many g’s of force and enormous temperatures upon reentry into the atmosphere, but if an O-ring is out of place the whole thing might explode.

If words in a language are mispronounced in one way, it’s a regional dialect; if they’re mispronounced in another way, they’re incomprehensible.

* The causal structure of a system can be more or less opaque. In cases where causality is well-understood you can be more daring. In cases where it’s not, you should be more cautious.

Or, when facing Knightian Uncertainty the proper response is Talebian Conservatism.

Or, maybe we shouldn’t be broadcasting messages into space for aliens to pick up because we have no clue what’s out there.

* Rather than thinking about emotions in gestalt, model them as hyperdimensional shapes with bulges, edges, corners, and wrinkles along different axes.

A dear friend of mine and I once spent the better part of an hour taking ‘ambition’ and breaking it down in terms of its ‘direction’, ‘magnitude’, and ‘volatility’. By conversation’s end we had both done this analysis on ourselves and thought about ways we could try and bring our efforts at being productive more in line with the natural shape of our ambition.
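As a toy rendering of that exercise in code (the three axes are the ones from our conversation; the class and the numbers are invented for illustration):

from dataclasses import dataclass

@dataclass
class Emotion:
    name: str
    direction: str    # what the feeling points toward
    magnitude: float  # 0.0 (faint) to 1.0 (overwhelming)
    volatility: float # how quickly it flares and fades

# A hypothetical self-analysis, not a real measurement.
ambition = Emotion("ambition", direction="creative projects",
                   magnitude=0.8, volatility=0.3)
print(ambition)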

The connection to the idea for a Dictionary of Internal Events is probably obvious.