The STEMpunk Project: Sixth Month’s Progress

You may have noticed that I’m getting worse about this: these monthly progress reports keep appearing well after the month is finished. I apologize for that; other things have come up which had to take precedence.

Here is a sampling of what I built in August:

[Six photos of August’s builds: a car, a catapult, and a crane, plus car suspension, car steering, and miscellaneous car models]

The first three are from an erector set and the last three are from a charming little children’s book about cars. I also built a model of a jet engine, but since I had six pictures and they make such a nice 3 x 2 grid I decided not to include it 🙂 In addition I read sections of Basic Machines and How They Work and The Basics of Mechanical Engineering.

I was set to begin Robotics in September but recent life events have caused me to reconsider and make the first major structural change to The STEMpunk Project so far.

Each of the modules was designed to expose me to theory while allocating plenty of time to actually tinkering with physical devices. Even though that hasn’t always turned out the way I’d hoped, I have basically been successful. Robotics was included because it seemed like a natural extension of computing, electronics, and mechanics; but the more research I do, the more I realize that building a foundation in robotics requires a lot of programming skill.

There are good robotics kits out there, but most of them don’t seem like they would be as effective in cultivating useful intuitions as the model engines and electronics kits have been because they don’t bear the same relationship to the actual physical systems which they represent. A toy engine may be wildly oversimplified but real engines also have cylinders, valves, a crankshaft, etc. As far as I can tell, however, code is the heart of robotics, and most of the kits I’ve examined don’t factor that in.

So I’ve been thinking: if I’m going to have to do a bunch of programming anyway, I may as well shift my focus to Artificial Intelligence instead of robotics. AI was one of the fields I was thinking about exploring post-STEMpunk, and I may have successfully corrupted a dear friend into moving to Boulder and working on AI safety professionally. If either comes to pass, the work I do now will put me in a better position moving forward.

Moreover, I’m twenty-eight years old and must therefore give thought to the long-term stability of the people whose lives are bound up with mine. I haven’t started a family yet but I suspect that it won’t be long now, and besides that, with my ebbing youth comes the fact that I have a finite number of years left in which to develop the skills I’m going to develop and make the contributions I’m going to make. Since AI is a serious interest of mine, it would behoove me to spend the last leg of The STEMpunk Project working on it.

Finally, these days no one’s job is really safe. The STEMpunk Project probably hasn’t done that much to make me more employable, but a few months spent programming and playing with Machine Learning libraries — especially if I continue on after the main project is finished — probably will.

This is all very new so I haven’t chosen my learning goals and charted a course yet. But I was thinking I’d spend about a month brushing up on python, then maybe read Russell and Norvig’s “Artificial Intelligence: A Modern Approach”, then maybe start exploring some of the AI work being done with python, possibly going as far as to get a Machine Learning Nanodegree from Udacity.

Reason and Emotion

One of the most pervasive misconceptions about the rationalist community is that we consider reason and emotion to be incontrovertibly opposed to one another, as if an action were irrational in direct proportion to how much feeling went into it. This is so common that it’s been dubbed ‘the straw vulcan of rationality’.

While it’s true that people reliably allow anger, jealousy, sadness, etc. to cloud their judgment, it does not follow that aspiring rationalists should always and forever disregard their emotions in favor of clear, cold logic. I’m not even sure it’s possible to deliberately cultivate such an extreme paucity of affect, and if it is, I’m even less sure that it’s desirable.

The heart is not the enemy of the head, and as I see it, the two resonate in a number of different ways which any mature rationality must learn to understand and respect.

1) Experts often have gut-level reactions which are informative and much quicker than conscious reasoning. The art critic who finds something vaguely unsettling about a statue long before anyone notices it’s a knockoff and the graybeard hacker who declares code to be ‘ugly’ two weeks before he manages to spot any vulnerabilities or shoddy workmanship are both drawing upon vast reservoirs of experience to make snap judgments which may be hard to justify explicitly.

Here, the job of the rationalist is to know when their expertise qualifies them to rely on emotional heuristics and when it does not [1].

2) Human introspection is shallow. There isn’t a list of likes and dislikes hidden in your brain somewhere, nor any inspectable algorithm which takes a stimulus as an input and returns a verdict of ‘good’ or ‘bad’. Emotions therefore convey personal information which otherwise would be impossible to gather. There are only so many ways to discover what you prefer without encountering various stimuli and observing the emotional valence you attach to them.

3) It’s relatively straightforward to extend point 2) to other people; in most cases, your own emotional response is your best clue as to how others would respond in similar circumstances [2].

4) Emotional responses like disgust often point to evolutionarily advantageous strategies. No one has to be taught to feel revolted at the sight of rotting meat, and few people feel any real attraction to near-relatives. Of course these responses are often spectacularly miscalibrated. People are unreasonably afraid of snakes and unreasonably unafraid of vehicles because snakes were a danger to our ancestors whereas vehicles were not. But this means that we should be amending our rational calculations and our emotional responses to be better in line with the facts, not trying to lobotomize ourselves.

5) Emotions form an essential component of meaningful aesthetic appreciation [3]. It’s possible to appreciate a piece of art, an artist, an artistic movement, or even an entire artistic medium in a purely cerebral fashion on the basis of technical accomplishments or historical importance. But I would argue that this process is not complete until you feel an appropriate emotion in answer to the merits of whatever it is you’re contemplating.

Take the masonry work on old-world buildings like the National Cathedral in Washington, D.C. You’d have to be a troglodyte to not feel some respect for how much skill must have gone into its construction. But you may have to spend a few hours watching the light filter through the stained-glass windows and feeling the way the architecture ineluctably pulls your gaze towards the sky before you can viscerally appreciate its grandeur.

This does not mean that the relationship between artistic perception and emotional response is automatic or unidirectional. Good art won’t always reduce you to tears, and art you initially enjoyed may seem to be vapid and shallow after a time. Moreover, the object of your aesthetic focus may not even be art in a traditional sense; I have written poetically about combustion engines, metal washers, and the constructed world in general. But being in the presence of genuine or superlative achievement should engender reverence, admiration, and their kin [4].

6) Some situations demand certain emotional responses. One might reasonably be afraid or angry when confronting a burglar in their home, but giddy joy would be the mark of a lunatic. This truth becomes even more stark if you are the head of household and responsible for the wellbeing of its occupants. What, besides contempt, could we feel for a man or woman who left their children in danger out of fear for their own safety?


If you’ve been paying attention you’ll notice that the foregoing actually splits into two broad categories: one in which emotions provide the rationalist with actionable data of one sort or another (1-4) and one in which the only rational response involves emotions (5 and 6). This latter category probably warrants further elaboration.

As hard as it may be to believe, there are people in the world who are too accommodating and deferential, and who need to learn to get angry when circumstances call for it. Conversely, most of us know at least one person to whom anger comes too easily and out of all reasonable proportion. Aristotle noted:

“Anybody can become angry – that is easy, but to be angry with the right person and to the right degree and at the right time and for the right purpose, and in the right way – that is not within everybody’s power and is not easy.”

This is true of sadness, melancholy, exuberance, awe, and the full palette of human emotions, which can be rational or irrational depending on the situation. To quote C.S. Lewis:

“And because our approvals and disapprovals are thus recognitions of objective value or responses to an objective order, therefore emotional states can be in harmony with reason (when we feel liking for what ought to be approved) or out of harmony with reason (when we perceive that liking is due but cannot feel it). No emotion is, in itself, a judgment; in that sense all emotions and sentiments are alogical. But they can be reasonable or unreasonable as they conform to Reason or fail to conform. The heart never takes the place of the head: but it can, and should, obey it.”

-The Abolition of Man

I don’t endorse his view that no emotion is a judgment; arguments 1-4 were examples in which they are. But the overall spirit is correct. Amidst all the thorny issues a rationalist faces, perhaps the thorniest is examining their portfolio of typical emotional responses, deciding how they should be responding, gauging the distance between these two views, and devising ways of closing that distance.

Extirpating our emotions is neither feasible nor laudable. We must instead learn to interpret them when they are correct and sculpt them when they are not.


[1] Of course no matter how experienced you are and how good your first impressions have gotten there’s always a chance you’re wrong. By all means lean on emotions when you need to and can, but be prepared to admit your errors and switch into a more deliberative frame of mind when warranted.

[2] Your emotions needn’t be the only clue as to how others might act in a given situation. You can have declarative knowledge about the people you’re trying to model which overrides whatever data is provided by your own feelings. If you know your friend loves cheese then the fact that you hate it doesn’t mean your friend won’t want a cheese platter at their birthday party.

[3] I suppose it would be more honest to say that I can’t imagine a ‘meaningful aesthetic appreciation’ which doesn’t reference emotions like curiosity, reverence, or awe.

[4] In “Shop Class as Soulcraft” Matthew Crawford takes this further, and claims that part of being a good mechanic is having a normative investment in the machines on which you work:

“…finding [the] truth requires a certain disposition in the individual: attentiveness, enlivened by a sense of responsibility to the motorcycle. He has to internalize the well working of the motorcycle as an object of passionate concern. The truth does not reveal itself to idle spectators”.

Literary Criticism as Applied Apophenia

Growing up I had far more books than friends, and have been writing regularly since I was about seventeen. In high school I was a voracious reader of “the classics”; with the lamp on late into the night I’d turn the pages of Hemingway and Dickens, not caring to wait for the English class in which they’d be taught. Owing to some high test scores I started college studying masterpieces of world literature with more advanced students, which necessitated much in the way of paper writing and classroom debate.

So it may be a surprise to learn that I’ve never had much patience for literary criticism. Upon hearing someone say “the author is using the bridge as a metaphor to…” or “the lion’s jaw is clearly an expressive vehicle for…”, I would think to myself, how could anyone possibly know that? Yes, a bridge could be a metaphor, but it could also just, y’know, be a bridge.

Now, literary criticism is a vast field and I admit to having explored little of it. But I have had many friends who enjoy literature and film, a nontrivial fraction of whom were themselves steeped in the relevant theory. In an honest effort to understand, I’ve often asked them about the basis of their interpretations, but they’ve rarely provided answers which I found satisfactory.

But with time and experience I’ve learned much. This essay is an attempt to answer my younger self’s skepticism by providing two different mechanisms which can justify the literary critic’s perception of metaphorical significance.

Semi-permeable cognitive membranes

I’ve written before about the fact that human introspection is shallow and much of what’s going on between our ears must be inferred. If we envision the mind as a kind of machine then many of its components are submerged under water and can only be understood indirectly. Further, the cognitive processes utilized for things like crafting a story are not cleanly partitioned from each other.

A corollary of the foregoing is that layers of meaning and metaphor can creep into a work even if the author fails to realize this. I see two ways this could happen, the first being through what may be called “leaky empathy”.

As an author tries to model characters and situations they may themselves begin to drift into corresponding emotional states. The process of writing about a group of horribly oppressed villagers preparing to travel through the forest surrounding their town could well give rise to feelings of despair or anger, albeit probably mild versions. If so, when the author conjures up an image of the forest, their brain will be more likely to produce one that is dark, caliginous, and perhaps vaguely sinister.

The setting has become a metaphor for the internal states of the characters even though the author may not be remotely aware of this dynamic.

Second, and for basically the same reason, a work might reflect an author’s convictions and knowledge even when those convictions are ostensibly unrelated to the work, through what may be called “leaky concepts”.

Imagine an author has just spent a year thinking about how Communism is/isn’t the greatest/worst idea anyone has ever had. When the same author sits down to design a world and plan out a story arc, is there any serious chance they’ll be able to keep these political beliefs from influencing their depictions of kingdoms, economies, and states?

Of course many authors write with the explicit purpose of promulgating a worldview or exploring some complex theme. But even if an author fails to see the lessons implicit in their work, that does not mean that the lessons aren’t there.

Reflective patternicity 

There was supposed to be some rational explanation to justify the mumbo-jumbo. Left-hemisphere pattern-matching sub-routines amped beyond recognition; the buggy wetware that made you see faces in clouds or God’s wrath in thunderstorms, tweaked to walk some fine line between insight and pareidolia. Apparently there were fundamental insights to be harvested along that razor’s edge, patterns that only Bicamerals could distinguish from hallucination.

-Peter Watts, “Echopraxia”

Another corollary to the shallowness of human introspection is that you may be surprised by the contents of your own consciousness. Sometimes the only way to explore your mind is to twist dials until lights start coming on.

Everyone has had the experience of being unusually moved by a song they’ve heard many times before. If a loved one has just passed away, then heightened emotional sensitivity is to be expected. But this isn’t always the case; sometimes, life is progressing as normal and a snatch of conversation, the light of the sun reflected in the glass windows of a skyscraper, or a memory from childhood grabs hold of you and stops you dead in your tracks. Besides being profound and worth experiencing for their own sake these moments also hint at a range of emotional states which most people don’t realize they’re capable of.

If you’re tempted to resist my claim that you don’t know yourself as well as you believe, read through this characteristically thoughtful post from Scott Alexander. It relates the story of a boy who lived his entire life without a sense of smell and didn’t realize it until his late teens. This despite the fact that he used all sorts of olfactive expressions like saying fresh bread “smells good” or teasing his sister by telling her she stinks.

But how was he to know that his sensory experience was different from anyone else’s? He can’t borrow someone else’s nose. He can’t just open a neural command line, run ‘$ grep feelz.txt’, and get back a schematic of his perceptual apparatus, complete with a little blinking cursor in the spots where there are gaps.

Put more plainly: there are numerous facets of your own mind that you aren’t aware of, so it’s worth reading poetry, listening to new music, and going to art museums, just to see how you react. Likewise, it can be useful to try and interpret a piece of literature just to see what your brain comes up with.

This first began to dawn on me in a major way while I was living in South Korea. I had just re-read Dancing With the Gods and it came to my attention that an unusually careful and prolific neopagan scholar had taken up residence in a town not far from mine. We spent a day hiking and discussing all manner of recondite issues in philosophy and religion.

It was a blast.

Near the end I half-jokingly made a disparaging remark about tarot cards. He calmly pulled a deck from his backpack and told me he always carries it with him. During the return walk he made a compelling argument for the utility of reading cards which was rooted entirely in a secular, non-mystical understanding of human psychology.

His reasoning was that superimposing an interpretive framework over cards as they come out can yield genuinely useful information. The mental dots being connected were there all along; the cards emphatically do not provide access to knowledge of the future. But, in the same way that you can agonize for weeks over an important decision and then realize that the answer is obvious after a five minute conversation, sometimes you just need that initial spark.

This is the key point behind Scott Alexander’s essay “Random Noise is Our Most Valuable Resource”. He specifically mentions tarot cards as a source of noise which can help break us out of our mental ruts. Vivian Caethe has tried to leverage this for profit by inventing a tarot deck calibrated for aspiring authors stricken with writer’s block. Both of these are examples of outwardly-focused processes which can also usefully be turned inward.

And when viewed a certain way I think literary criticism can be a similar sort of introspective scaffolding. Whether or not you believe that the author intended the lion’s jaw as a metaphor, seeing how your brain interprets it metaphorically can be akin to performing a literary version of the Rorschach test. I imagine that, as with tarot cards, doing this long enough will yield an increasingly subtle familiarity with the folds and wrinkles of your psychology.

It’s important not to get too excited about this. Just as people can form incorrect hypotheses about physical data, they can form incorrect ones about introspective data; all the usual rationalist warnings apply. But I have come to believe that this sort of “applied apophenia” can be a tool in the arsenal of those wanting a better understanding of their phenomenological field.


Machine Ethics is Still a Hard Problem

I have now read Digital Wisdom’s essay “Yes, Virginia, We *DO* Have a Deep Understanding of Morality and Ethics” twice, and I am unable to find even one place where the authors do justice to the claims they are criticizing.

With respect to selfishness they write:

“Followers of Ayn Rand (as well as most so-called “rationalists”) try to conflate the distinction between the necessary and healthy self-interest and the sociopathic selfish.”

This is simply untrue. The heroes of Atlas Shrugged work together to bring down a corrupt and parasitic system, John Galt refuses to be made an economic dictator even though doing so would allow him limitless power, and in The Fountainhead Howard Roark financially supports his friend, a sculptor, who otherwise would be homeless and starving.

Nothing — nothing — within Objectivism, Libertarianism, or anarcho-capitalism rules out cooperation. A person’s left and right hand may voluntarily work together to wield an axe, people may voluntarily work together to construct a house, and a coalition of multi-national corporations may voluntarily work together to establish a colony on the moon. Individuals uniting in the pursuit of a goal which is too large to be attempted by any of them acting alone is wonderful, so long as no one is being forced to act against their will. The fact that people are still misunderstanding this point must be attributed to outright dishonesty.

Things do not improve from here. AI researcher Steven Omohundro’s claim that without explicit instructions to do otherwise an AI system would behave in ways reminiscent of a human psychopath is rebutted with a simple question: “What happens when everyone behaves this way?” Moreover, the AI alarmists — a demimonde of which I count myself a member — “totally miss that what makes sense in micro-economics frequently does not make sense when scaled up to macro-economics (c.f. independent actions vs. cartels in the tragedy of the commons).”

I simply have no idea what the authors think they’re demonstrating by pointing this out. Are we supposed to assume that recursively self-improving AI systems of the kind described by Omohundro in his seminal “The Basic AI Drives” will only converge on subgoals which would make sense if scaled up to a full macroeconomic system? Evidently anyone who fails to see that an AI will be Kantian is a fear-mongering Luddite.

To make the moral turpitude of the “value-alignment crowd” all the more stark, we are informed that “…speaking of slavery – note that such short-sighted and unsound methods are exactly how AI alarmists are proposing to “solve” the “AI problem”.”

Again, this is just plain false. Coherent Extrapolated Volition and Value Alignment are not about slavery; they’re about trying to write computer code which, even after billions of rewrites by an increasingly powerful recursive system, still results in a goal architecture that can be safely implemented by a superintelligence.

And therein lies the rub. Given the title of the essay, what exactly does our “deep understanding of morality and ethics” consist of? Prepare yourself, because after you read the next sentence your life will never be the same:

“At essence, morality is trivially simple – make it so that we can live together.”

I know, I know. Please feel free to take a moment to regain your sense of balance and stanch the blood loss that inevitably results from having such a railroad spike of thermonuclear insight driven into your brain.

In the name of all the gods Olde, New, and Forgotten, can someone please show me where in the voluminous Less Wrong archives anyone says that there won’t be short natural-language sentences which encapsulate human morality?

Proponents of the thesis that human values are complex and fragile are not saying that morality can’t be summarized in a way that is comprehensible to humans. They’re saying that those summaries prove inadequate when you start trying to parse them into conceptual units which are comprehensible to machines.

To see why, let’s descend from the rarefied terrain of ethics and discuss a more trivial problem: writing code which produces the Fibonacci sequence. Any bright ten-year-old could accomplish this task with a simple set of instructions: “Start with the numbers 0 and 1. Each additional number is the sum of the two numbers that precede it. So the sequence goes 0, 1, 1, 2, 3, 5, 8…”

But pull up a command-line interface and try typing in those instructions. Computers, you see, are really rather stupid. Each and every little detail has to be accounted for when telling them which instructions to execute and in which order. Here is one python script which produces the Fibonacci sequence:

def fib(n):
    # Start with the first two Fibonacci numbers, as the instructions say.
    a, b = 0, 1
    fib_list = []
    for _ in range(n):
        fib_list.append(a)  # store each value as it's computed
        a, b = b, a + b
    return fib_list

You must explicitly store the initial values in two variables or the program won’t even start. You must build some kind of loop or the program won’t do anything at all. Each value has to be updated and stored one at a time, or intermediate results will be overwritten and lost. And if you mess something up, the program might start throwing errors, or worse, it may output a number sequence that looks correct but isn’t.
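To make that last failure mode concrete, here is a small sketch (the function names are mine, invented for illustration): reordering just two lines produces output that looks plausibly Fibonacci but is shifted by one position, the kind of bug a casual glance won’t catch.

```python
def fib_correct(n):
    # Append before updating: yields 0, 1, 1, 2, 3, 5, 8, ...
    a, b = 0, 1
    result = []
    for _ in range(n):
        result.append(a)
        a, b = b, a + b
    return result

def fib_subtly_wrong(n):
    # Same code, but the append happens after the update,
    # so every value is shifted forward by one position.
    a, b = 0, 1
    result = []
    for _ in range(n):
        a, b = b, a + b
        result.append(a)
    return result

print(fib_correct(7))       # [0, 1, 1, 2, 3, 5, 8]
print(fib_subtly_wrong(7))  # [1, 1, 2, 3, 5, 8, 13]
```

Both versions run without errors; only comparing the output against the definition reveals that one of them is wrong.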

And really, this isn’t even that great of an example because the code isn’t that much longer than the natural language version and the Fibonacci sequence is pretty easy to identify. The difficulties become clearer when trying to get a car to navigate city traffic, read facial expressions, or abide by the golden rule. These are all things that can be explained to a human in five minutes because humans filter the instructions through cognitive machinery which would have to be rebuilt in an AI.

Digital Wisdom ends the article by saying that detailed rebuttals of Yudkowsky and Stuart Russell as well as a design specification for ethical agents will be published in the future. Perhaps those will be better. Based on what I’ve seen so far, I’m not particularly hopeful.