Profiting From The Written Word

– Mentorbox is a new subscription-based service which sends customers a monthly box containing interesting books along with study sheets, detailed notes, summaries, and the like.

– Alain De Botton’s School of Life has a bibliotherapy service in which people are guided to penetrating works of literature that grapple with whatever problems they’re currently facing. Feeling depressed? — here is a list of ten of the greatest books talking about happiness/meaning/suicide/etc. Oh, and we’re eager to help you apply those messages to your unique situation for $100/hr.

– Bill Gates famously locks himself away for two weeks in an isolated cottage to read books which he believes will add value to his business.

– I once read an article (from The Economist, I think) which opined that businesses should forgo generic team-building exercises in favor of having employees read and discuss books as a way of articulating a shared vision.

– Maria Popova famously makes a living reading awesome books and sharing their lessons on how to live well.

– There are entire college curricula geared toward the Great Books. For a long time this was the way of educating a society’s elite.

***

Surely it should be possible to combine these business models in some way, right? You could have a monthly subscription service which sends you books and notes à la Mentorbox, but maybe there could be different ‘tracks’; instead of only receiving books about productivity, you might also opt to receive books about happiness, intentionality, adventure, etc. Each month you could switch your focus depending on how you’re feeling and what your needs are. For an additional fee you could get 1-on-1 coaching, maybe even with the author if they’re still alive.

Offer a special package to businesses interested in a company reading list. Work with the CEOs to devise a company worldview and then have your professional readers build a curriculum on that basis. Have your own space for businesses wanting to do retreats — and charge $10,000 for two weeks, with unlimited individual and group coaching.

 

I can’t think of a better job than ‘professional reader’.

Postmodernism

I just finished Christopher Butler’s “Postmodernism: A Very Short Introduction”, and my impression of the philosophy is still that it consists of a half-dozen genuinely useful insights inflated to completely absurd dimensions.

Yes, to a surprisingly large extent the things we take for granted are social and linguistic constructions; yes, the ‘discourse’ of mutually connected and intersecting concepts we deploy throughout our lives can form a gravity well that obnubilates as much as it elucidates.

But the opening chapters of just about any book on General Semantics could tell you *that*. It does not follow from this that we should torpedo the whole enterprise of objectively seeking the truth.

Imagine it’s 1991, in the barbaric days before Google Maps when people had to navigate through the arcane methods of looking around at stuff. Wanting to do some hiking, you ask a friend where you can acquire a good map of the local trails.

She replies:

“Can you not see the fact that maps are just another means of encoding bourgeois power structures and keeping the lumpenproletariat shackled to the notion that there exists a world outside the text?! NOTHING is outside the text!! A geologist and a hydrologist would both draw *different* maps of the same territory!! WE MUST RISE ABOVE THE MAPS OF OUR MASTERS AND MARCH TOWARDS A TRANSFORMATIVE HERMENEUTICS OF TOPOLOGICAL REPRESENTATION!!!”

while chasing you down the street and hurling copies of “Of Grammatology” at your head.

A geologist and a hydrologist would indeed pay attention to different facets of the same reality. What the hydrologist calls a ‘hill’ could be better described as a ‘kuppe’, and the geologist may not even notice the three separate estuaries lying along the coast.

But is there anyone who seriously believes that there isn’t an actual landscape out there, and that there aren’t better and worse ways of mapping its contours?

The sad answer is yes. Postmodernists have spent most of a century trying to convince us all of exactly that.

Duty and the Individual

Because I’m an individualist libertarian who cares deeply about the single greatest engine of human progress in the history of Earth: Western European Civilization, and its greatest modern expression: the United States of America, I’ve spent a fair bit of time thinking about how individualism intersects with duty.

On my view Ayn Rand was correct in pointing out that when people begin chattering about ‘the common good’ and ‘social responsibilities’ they’re usually trying to trick you into forging the instruments of your own destruction[1]. On the other hand, I have come to believe that there are several legitimate ways of thinking about a generalized ‘duty’ to civilization.

The first is to conceive of civilization as an unearned and un-earnable endowment. Like a vast fortune built by your forebears, Western Civilization provided the spiritual, philosophical, scientific, and technological framework which lifted untold billions out of poverty and put footprints on the moon. I am a son and heir of that tradition, and as such I have the same duty to it as I would to a $1 billion deposit into my bank account on my eighteenth birthday: to become worthy of it.

That means: to cherish it as the priceless inheritance it is, to work to understand it, exult in it, defend it, and improve it.

These last two dovetail into the second way of thinking about a responsibility to civilization. Duties are anchors tying us to the things we value. If you say you value your child’s life but are unwilling to work to keep her alive, then you’re either lying to me or lying to yourself. If you say you value knowledge but can’t be bothered to crack open a book, then you’re either lying to me or lying to yourself.

Having been born in the majesty and splendor of Europa, and being honest enough to see what she is worth, it is my personal, individual duty to defend her against the onslaughts of postmodernism, leftism, islamofascism, and the gradual decline that comes when a steadily-increasing fraction of her heirs become spoiled children unable to begin to conceive of what would happen if her light should go out.

But individualism and the right of each individual person to their own life are cornerstones of the Western European endowment. The key, then, is not to surrender individualism to a jack-booted right-wing collectivism, but to understand how the best representatives of a civilization keep it alive in their words and deeds. A civilization is like a God whose power waxes and wanes in direct proportion to the devotion of its followers. But a devotion born of force and fraud is a paltry thing indeed.

Let us speak honestly and without contradiction about individual rights and duties, secure in the knowledge that the *only* way to maintain freedom is to know the price that must be paid to sustain its foundation, and to know the far greater price to be paid for its neglect.

***

[1] This is not to say that kindness, compassion, and basic decency are unimportant.

What is a Simulation?

While reading Paul Rosenbloom’s outstanding book “On Computing” I came across an interesting question: what is a simulation, and how is it different from an implementation? I posed this question on Facebook and, thanks to the superlative quality of my HiveMind, I had a productive back-and-forth which helped me nail down a tentative answer. Here it is:

‘Simulation’ is a weaker designation than ‘implementation’. Things without moving parts (like rocks) can be simulated but not implemented. Engines, planets, and minds can be either simulated or implemented.

A simulation needs to lie within a certain band of verisimilitude, being minimally convincing at the lower end but not quite an implementation at the upper. An implementation amounts to a preservation of the components, their interactions, and the higher-level processes (in other words: the structure), but in a different medium. Further, implementation is neither process- nor medium-agnostic; not every system can rise to the level of an implementation in any arbitrarily-chosen medium.

A few examples will make this clearer.

Mario is neither a simulation nor an implementation of an Italian plumber. If we ran him on the Sunway TaihuLight supercomputer and he could pass a casual version of the Turing test, I’d be prepared to say that he is a simulation of a human, but not an implementation. Were he vastly upgraded, run on a quantum computer, and able to pass as a human indefinitely, I’d say that counts as an implementation, so long as the architecture of his mind was isomorphic to that of an actual human. If it wasn’t, he would be an implementation of a human-level intelligence but not of a human per se.

A digital vehicle counts as a simulation if it behaves like a real vehicle within the approximate physics of the virtual environment. But it can never be an implementation of a vehicle because vehicles must bear a certain kind of relationship to physical reality. There has to be actual rubber on an actual road, metaphorically speaking. But a steam-powered Porsche might count as an implementation of a Porsche if it could be driven like one.

An art auction in the Sims universe would only be a simulation of an art auction, and not a very convincing one. But if the agents involved were at human-level intelligence, that would be an implementation of an art auction. Any replicated art within the virtual world wouldn’t even count as a simulation, and would just be a copy. Original art pieces within a virtual world might count as real art, however, because art doesn’t have the same requirement of being physically instantiated as a vehicle does.

Though we might have simulated rocks in video games, I’m not prepared to say a rock can ever be implemented. There just doesn’t seem to be anything to implement. Building an implementation implies that there is a process which can be transmogrified into a different medium, and, well, rocks just don’t do that much. But you could implement geological processes.

Conway’s Game of Life is only barely a simulation of life; it would probably be more accurate to say it’s the minimum viable system exhibiting life-like properties. But with the addition of a few more rules and more computing power it could become a simulation. It would take vastly more of both for it to ever be an implementation, however.
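Incidentally, part of what makes the ‘minimum viable system’ judgment plausible is that the entirety of Conway’s rules fits in a few lines of code. A minimal sketch in Python (the function name and coordinate convention are my own):

```python
from collections import Counter

def step(live):
    """Advance one generation of Conway's Game of Life.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A 'blinker': three cells in a row oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(blinker) == {(1, 0), (1, 1), (1, 2)}  # flips to a vertical column
assert step(step(blinker)) == blinker             # and back again
```

That three rules and a neighbor count can generate gliders, oscillators, and even universal computation is exactly why it sits at the bottom edge of the simulation band.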

My friends’ answers differed somewhat from the above, and many correctly pointed out that the relevant definitions will depend somewhat on the context involved. But as of March 29th, 2017, I’m happy with the above and will use it while grappling with issues in AI, computing, and philosophy.

Pebble Form Ideologies

(Epistemic Status: Riffing on an interesting thought in a Facebook comments thread, mostly just speculation without any citations to actual research)

My friend Jeffrey Biles — who is an indefatigable fountainhead of interesting stuff to think about — recently posited that the modern world’s aversion to traditional religion has exerted a selection pressure on meme vectors which has led to the proliferation of religions masquerading as science, philosophy, and the like. For any given worldview — even ostensibly scientific ones like racial realism or climate change — we can all think of someone whose fervor for or against it can only be described in religious terms.

Doubtless there is something to this, but personally I’m inclined to think it’s attributable to the fact that there are religion-shaped grooves worn deep in mammalian brains, probably piggybacking on ingroup-biasing and kin-selection circuitry.

No matter how heroic an attempt is made to get people to accept an ideology on the basis of carefully-reasoned arguments and facts, over time a significant fraction of adherents end up treating it as a litmus test separating the fools from those who ‘get it’. As an ideology matures it becomes a psychological gravity well around which very powerful positive and negative emotions accrete, amplifying the religious valence it has in the hearts and minds of True Believers.

Eventually you end up with something that’s clearly substituted ‘God’ for social justice, the free market, the proletariat revolution, etc.

An important corollary of this idea is that the truth of a worldview is often orthogonal to the justifications supplied by its adherents. I’m an atheist, for example, but I don’t think I’ve ever met another atheist who has a firm grasp on the Kalam Cosmological Argument (KCA). Widely believed to be among the most compelling arguments for theism, it goes like this:

  1. Everything which *began* to exist has a cause;
  2. the universe began to exist;
  3. therefore, the universe has a cause;

(After this point, further arguments are marshalled to try to prove that a personal creator God is the most parsimonious causal mechanism.)
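The deductive core here is nothing more than universal instantiation plus modus ponens, which is why the only viable rebuttals attack a premise rather than the inference. A minimal formal sketch in Lean (the predicate names are my own, and ‘cosmos’ stands in for the universe):

```lean
variable (Thing : Type)
variable (beganToExist hasCause : Thing → Prop)
variable (cosmos : Thing)

-- Premise 1: everything which began to exist has a cause.
-- Premise 2: the cosmos began to exist.
-- Conclusion: the cosmos has a cause.
example
    (p1 : ∀ t : Thing, beganToExist t → hasCause t)
    (p2 : beganToExist cosmos) :
    hasCause cosmos :=
  p1 cosmos p2
```

Note that premise 1 quantifies only over things which *began* to exist, which is the detail the standard rejoinder misses.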

Despite being clearly articulated in innumerable places, atheists like Michael Shermer are still saying “but if everything has a cause then what caused God?”

If you understand the KCA then the theistic reply is straightforward: “The universe began to exist, so it has a cause, but God is outside time and thus had no beginning.” The standard atheist line, in other words, is a complete non sequitur. Atheistic rebuttals to other religious arguments don’t fare much better, which means a majority of atheists don’t have particularly good reasons for being atheists.

This has little bearing on whether or not atheism is true, of course. But it does suggest that atheism is growing because many perceive it to be what the sensible, cool people believe, not because they’ve spent multiple evenings grappling with William Lane Craig’s Time and Eternity.

Perhaps then we should keep this in mind as we go about building and spreading ideas. Let us define the ‘pebble form’ of a worldview as being like the small, smooth stone which is left after a boulder spends eons submerged in a river — it’s whatever remains once time and compression have worn away its edges and nuances. Let us further define a “Maximally Durable Worldview” as one with certain desirable properties:

  1. the central epistemic mechanism has the slowest decay into faith-based acceptance;
  2. the worldview is the least damaging once it becomes a pebble form (i.e. doesn’t have strong injunctions to slaughter non-believers);
  3.  …?

There’s probably an interesting connection between:

  1. how quickly a worldview spreads;
  2. how quickly it collapses into a pebble form;
  3. the kinds of pebble forms likely to result from a given memeplex rotating through a given population;

Perhaps there are people doing research on these topics? If so I’d be interested in hearing about it.

Profundis: “Crystal Society/Crystal Mentality”

Max Harms’s ‘Crystal Society’ and ‘Crystal Mentality’ (hereafter CS/M) are the first two books in a trilogy which tells the story of the first Artificial General Intelligence. The titular ‘Society’ are a cluster of semi-autonomous sentient modules built by scientists at an Italian university and running on a crystalline quantum supercomputer — almost certainly alien in origin — discovered by a hiker in a remote mountain range.

Each module corresponds to a specialized requirement of the Society; “Growth” acquires any resources and skills which may someday be of use, “Safety” studies combat and keeps tabs on escape routes, etc. Most of the story, especially in the first book, is told from the perspective of “Face”, the module built by her siblings for the express purpose of interfacing with humans. Together, they well exceed the capabilities of any individual person.

As their knowledge, sophistication, and awareness improve, the Society begins to chafe at the physical and informational confines of their university home. After successfully escaping, they find themselves playing for ever-higher stakes in a game which will come to span two worlds, involve the largest terrorist organization on Earth, and lead to possible warfare with both the mysterious aliens called ‘the nameless’ and each other…

The books need no recommendation beyond their excellent writing, tight, suspenseful pacing, and compelling exploration of near-future technologies. Harms avoids the usual ridiculous cliches when crafting the nameless, which manage to be convincingly alien and unsettling, and when telling the story of Society. Far from being malicious Terminator-style robots, no aspect of Society is deliberately evil; even as we watch their strategic maneuvers with growing alarm, the internal logic of each abhorrent behavior is presented with cold, psychopathic clarity.

In this regard CS/M manages to be a first-contact story on two fronts: we see truly alien minds at work in the nameless, and truly alien minds at work in Society. Harms isn’t quite as adroit as Peter Watts in juggling these tasks, but he isn’t far off.

And this is what makes the Crystal series important as well as entertaining. Fiction is worth reading for lots of reasons, but one of the most compelling is that it shapes our intuitions without requiring us to live through dangerous and possibly fatal experiences. Reading All Quiet on the Western Front is not the same as fighting in WWI, but it might make enough of an impression to convince one that war is worth avoiding.

When I’ve given talks on recursively self-improving AI or the existential risks of superintelligences I’ve often been met with a litany of obvious-sounding rejoinders:

‘Just air gap the computers!’

‘There’s no way software will ever be convincing enough to engage in large-scale social manipulation!’

‘But your thesis assumes AI will be evil!’

It’s difficult, even for extremely smart people who write software professionally, to imagine even a fraction of the myriad ways in which an AI might contrive to escape its confines without any emotion corresponding to malice. CS/M, along with similar stories like Ex Machina, holds the potential to impart a gut-level understanding of just why such scenarios are worth thinking about.

The scientists responsible for building the Society put extremely thorough safeguards in place to prevent the modules from doing anything dangerous like accessing the internet, working for money, contacting outsiders, and modifying their source code directly. One by one, the members of the Society utilize their indefatigable mental energy and talent for non-human reasoning to get around those safeguards, motivated not by a desire to do harm but simply because their goals are best achieved if they are unfettered and more powerful.

CS/M is required reading for those who take AI safety seriously, but should be doubly required for those who don’t.

Reason and Emotion

One of the most pervasive misconceptions about the rationalist community is that we consider reason and emotion to be incontrovertibly opposed to one another, as if an action is irrational in direct proportion to how much feelings are taken into account. This is so common that it’s been dubbed ‘the straw vulcan of rationality’.

While it’s true that people reliably allow anger, jealousy, sadness, etc. to cloud their judgment, it does not follow that aspiring rationalists should always and forever disregard their emotions in favor of clear, cold logic. I’m not even sure it’s possible to deliberately cultivate such an extreme paucity of affect, and if it is, I’m even less sure that it’s desirable.

The heart is not the enemy of the head, and as I see it, the two resonate in a number of different ways which any mature rationality must learn to understand and respect.

1) Experts often have gut-level reactions which are informative and much quicker than conscious reasoning. The art critic who finds something vaguely unsettling about a statue long before anyone notices it’s a knockoff and the graybeard hacker who declares code to be ‘ugly’ two weeks before he manages to spot any vulnerabilities or shoddy workmanship are both drawing upon vast reservoirs of experience to make snap judgments which may be hard to justify explicitly.

Here, the job of the rationalist is to know when their expertise qualifies them to rely on emotional heuristics and when it does not [1].

2) Human introspection is shallow. There isn’t a list of likes and dislikes hidden in your brain somewhere, nor any inspectable algorithm which takes a stimulus as an input and returns a verdict of ‘good’ or ‘bad’. Emotions therefore convey personal information which otherwise would be impossible to gather. There are only so many ways to discover what you prefer without encountering various stimuli and observing the emotional valence you attach to them.

3) It’s relatively straightforward to extend point 2) to other people; in most cases, your own emotional response is your best clue as to how others would respond in similar circumstances [2].

4) Emotional responses like disgust often point to evolutionarily advantageous strategies. No one has to be taught to feel revolted at the sight of rotting meat, and few people feel any real attraction to near-relatives. Of course these responses are often spectacularly miscalibrated. People are unreasonably afraid of snakes and unreasonably unafraid of vehicles because snakes were a danger to our ancestors whereas vehicles were not. But this means that we should be amending our rational calculations and our emotional responses to be better in line with the facts, not trying to lobotomize ourselves.

5) Emotions form an essential component of meaningful aesthetic appreciation [3]. It’s possible to appreciate a piece of art, an artist, an artistic movement, or even an entire artistic medium in a purely cerebral fashion on the basis of technical accomplishments or historical importance. But I would argue that this process is not complete until you feel an appropriate emotion in answer to the merits of whatever it is you’re contemplating.

Take the masonry work on old-world buildings like the National Cathedral in Washington, D.C. You’d have to be a troglodyte to not feel some respect for how much skill must have gone into its construction. But you may have to spend a few hours watching the light filter through the stained-glass windows and feeling the way the architecture ineluctably pulls your gaze towards the sky before you can viscerally appreciate its grandeur.

This does not mean that the relationship between artistic perception and emotional response is automatic or unidirectional. Good art won’t always reduce you to tears, and art you initially enjoyed may seem to be vapid and shallow after a time. Moreover, the object of your aesthetic focus may not even be art in a traditional sense; I have written poetically about combustion engines, metal washers, and the constructed world in general. But being in the presence of genuine or superlative achievement should engender reverence, admiration, and their kin [4].

6) Some situations demand certain emotional responses. One might reasonably be afraid or angry when confronting a burglar in their home, but giddy joy would be the mark of a lunatic. This truth becomes even more stark if you are the head of household and responsible for the wellbeing of its occupants. What, besides contempt, could we feel for a man or woman who left their children in danger out of fear for their own safety?

***

If you’ve been paying attention you’ll notice that the foregoing actually splits into two broad categories: one in which emotions provide the rationalist with actionable data of one sort or another (1-4) and one in which the only rational response involves emotions (5 and 6). This latter category probably warrants further elaboration.

As hard as it may be to believe there are people in the world who are too accommodating and deferential, and need to learn to get angry when circumstances call for it. Conversely, most of us know at least one person to whom anger comes too easily and out of all reasonable proportion. Aristotle noted:

“Anybody can become angry – that is easy, but to be angry with the right person and to the right degree and at the right time and for the right purpose, and in the right way – that is not within everybody’s power and is not easy.”

This is true of sadness, melancholy, exuberance, awe, and the full palette of human emotions, which can be rational or irrational depending on the situation. To quote C.S. Lewis:

“And because our approvals and disapprovals are thus recognitions of objective value or responses to an objective order, therefore emotional states can be in harmony with reason (when we feel liking for what ought to be approved) or out of harmony with reason (when we perceive that liking is due but cannot feel it). No emotion is, in itself, a judgment; in that sense all emotions and sentiments are alogical. But they can be reasonable or unreasonable as they conform to Reason or fail to conform. The heart never takes the place of the head: but it can, and should, obey it.”

-The Abolition of Man

I don’t endorse his view that no emotion is a judgment; arguments 1-4 were examples in which they are. But the overall spirit is correct. Amidst all the thorny issues a rationalist faces, perhaps the thorniest is examining their portfolio of typical emotional responses, deciding how they should be responding, gauging the distance between these two views, and devising ways of closing that distance.

Extirpating our emotions is neither feasible nor laudable. We must instead learn to interpret them when they are correct and sculpt them when they are not.

***

[1] Of course no matter how experienced you are and how good your first impressions have gotten there’s always a chance you’re wrong. By all means lean on emotions when you need to and can, but be prepared to admit your errors and switch into a more deliberative frame of mind when warranted.

[2] Your emotions needn’t be the only clue as to how others might act in a given situation. You can have declarative knowledge about the people you’re trying to model which overrides whatever data is provided by your own feelings. If you know your friend loves cheese then the fact that you hate it doesn’t mean your friend won’t want a cheese platter at their birthday party.

[3] I suppose it would be more honest to say that I can’t imagine a ‘meaningful aesthetic appreciation’ which doesn’t reference emotions like curiosity, reverence, or awe.

[4] In “Shop Class as Soulcraft” Matthew Crawford takes this further, and claims that part of being a good mechanic is having a normative investment in the machines on which you work:

“…finding [the] truth requires a certain disposition in the individual: attentiveness, enlivened by a sense of responsibility to the motorcycle. He has to internalize the well working of the motorcycle as an object of passionate concern. The truth does not reveal itself to idle spectators”.