Postmodernism

I just finished Christopher Butler’s “Postmodernism: A Very Short Introduction”, and my impression of the philosophy is still that it consists of a half-dozen genuinely useful insights inflated to completely absurd dimensions.

Yes, to a surprisingly large extent the things we take for granted are social and linguistic constructions; yes, the ‘discourse’ of mutually connected and intersecting concepts we deploy throughout our lives can form a gravity well that obnubilates as much as it elucidates.

But the opening chapters of just about any book on General Semantics could tell you *that*. It does not follow from this that we should torpedo the whole enterprise of objectively seeking the truth.

Imagine it’s 1991, in the barbaric days before Google Maps when people had to navigate through the arcane methods of looking around at stuff. Wanting to do some hiking, you ask a friend where you can acquire a good map of the local trails.

She replies:

“Can you not see the fact that maps are just another means of encoding bourgeois power structures and keeping the lumpenproletariat shackled to the notion that there exists a world outside the text?! NOTHING is outside the text!! A geologist and a hydrologist would both draw *different* maps of the same territory!! WE MUST RISE ABOVE THE MAPS OF OUR MASTERS AND MARCH TOWARDS A TRANSFORMATIVE HERMENEUTICS OF TOPOLOGICAL REPRESENTATION!!!”

while chasing you down the street and hurling copies of “Of Grammatology” at your head.

A geologist and a hydrologist would indeed pay attention to different facets of the same reality. What the hydrologist calls a ‘hill’ could be better described as a ‘kuppe’, and the geologist may not even notice the three separate estuaries lying along the coast.

But is there anyone who seriously believes that there isn’t an actual landscape out there, and that there aren’t better and worse ways of mapping its contours?

The sad answer is yes. Postmodernists have spent most of a century trying to convince us all of exactly that.

Duty and the Individual

Because I’m an individualist libertarian who cares deeply about the single greatest engine of human progress in the history of Earth, Western European Civilization, and its greatest modern expression, the United States of America, I’ve spent a fair bit of time thinking about how individualism intersects with duty.

On my view Ayn Rand was correct in pointing out that when people begin chattering about ‘the common good’ and ‘social responsibilities’ they’re usually trying to trick you into forging the instruments of your own destruction[1]. On the other hand, I have come to believe that there are several legitimate ways of thinking about a generalized ‘duty’ to civilization.

The first is to conceive of civilization as an unearned and un-earnable endowment. Like a vast fortune built by your forebears, Western Civilization provided the spiritual, philosophical, scientific, and technological framework which lifted untold billions out of poverty and put footprints on the moon. I am a son and heir of that tradition, and as such I have the same duty to it as I would to a $1 billion deposit made into my bank account on my eighteenth birthday: to become worthy of it.

That means: to cherish it as the priceless inheritance it is, to work to understand it, exult in it, defend it, and improve it.

These last two dovetail into the second way of thinking about a responsibility to civilization. Duties are anchors tying us to the things we value. If you say you value your child’s life but are unwilling to work to keep her alive, then you’re either lying to me or lying to yourself. If you say you value knowledge but can’t be bothered to crack open a book, then you’re either lying to me or lying to yourself.

Having been born in the majesty and splendor of Europa, and being honest enough to see what she is worth, it is my personal, individual duty to defend her against the onslaughts of postmodernism, leftism, islamofascism, and the gradual decline that comes when a steadily-increasing fraction of her heirs become spoiled children unable to begin to conceive of what would happen if her light should go out.

But individualism and the right of each individual person to their own life are cornerstones of the Western European endowment. The key, then, is not to surrender individualism to a jack-booted right-wing collectivism, but to understand how the best representatives of a civilization keep it alive in their words and deeds. A civilization is like a God whose power waxes and wanes in direct proportion to the devotion of its followers. But a devotion born of force and fraud is a paltry thing indeed.

Let us speak honestly and without contradiction about individual rights and duties, secure in the knowledge that the *only* way to maintain freedom is to know the price that must be paid to sustain its foundation, and to know the far greater price to be paid for its neglect.

***

[1] This is not to say that kindness, compassion, and basic decency are unimportant.

What is a Simulation?

While reading Paul Rosenbloom’s outstanding book “On Computing” I came across an interesting question: what is a simulation, and how is it different from an implementation? I posed this question on Facebook and, thanks to the superlative quality of my HiveMind, I had a productive back-and-forth which helped me nail down a tentative answer. Here it is:

‘Simulation’ is a weaker designation than ‘implementation’. Things without moving parts (like rocks) can be simulated but not implemented. Engines, planets, and minds can be either simulated or implemented.

A simulation needs to lie within a certain band of verisimilitude, being minimally convincing at the lower end and not-quite-an-implementation at the upper. An implementation amounts to a preservation of the components, their interactions, and the higher-level processes (in other words: the structure), but in a different medium. Further, implementation is neither process- nor medium-agnostic; not every system can rise to the level of an implementation in any arbitrarily-chosen medium.

A few examples will make this clearer.

Mario is neither a simulation nor an implementation of an Italian plumber. If we ran him on the Sunway TaihuLight supercomputer and he could pass a casual version of the Turing test, I’d be prepared to say that he is a simulation of a human, but not an implementation. Were he vastly upgraded, run on a quantum computer, and able to pass as a human indefinitely, I’d say that counts as an implementation, so long as the architecture of his mind was isomorphic to that of an actual human. If it wasn’t, he would be an implementation of a human-level intelligence but not of a human per se.

A digital vehicle counts as a simulation if it behaves like a real vehicle within the approximate physics of the virtual environment. But it can never be an implementation of a vehicle because vehicles must bear a certain kind of relationship to physical reality. There has to be actual rubber on an actual road, metaphorically speaking. But a steam-powered Porsche might count as an implementation of a Porsche if it could be driven like one.

An art auction in the Sims universe would only be a simulation of an art auction, and not a very convincing one. But if the agents involved were at human-level intelligence, that would be an implementation of an art auction. Any replicated art within the virtual world wouldn’t even count as a simulation, and would just be a copy. Original art pieces within a virtual world might count as real art, however, because art doesn’t have the same requirement of being physically instantiated as a vehicle does.

Though we might have simulated rocks in video games, I’m not prepared to say a rock can ever be implemented. There just doesn’t seem to be anything to implement. Building an implementation implies that there is a process which can be transmogrified into a different medium, and, well, rocks just don’t do that much. But you could implement geological processes.

Conway’s Game of Life is only barely a simulation of life; it would probably be more accurate to say it’s the minimum viable system exhibiting life-like properties. But with the addition of a few more rules and more computing power it could become a simulation. It would take vastly more of both for it to ever be an implementation, however.
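The Game of Life’s rules are simple enough to state in a few lines. A minimal sketch of the standard rules (my own illustration, not from Rosenbloom’s book) of one generation step over a sparse grid of live cells:

```python
from collections import Counter

def step(live):
    """Advance one generation of Conway's Game of Life.

    `live` is a set of (row, col) coordinates of live cells.
    Rules: a live cell with 2 or 3 live neighbours survives;
    a dead cell with exactly 3 live neighbours is born.
    """
    # Count, for every cell adjacent to a live cell, how many
    # live neighbours it has.
    neighbour_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A 'blinker': three cells in a row oscillate with period 2.
blinker = {(1, 0), (1, 1), (1, 2)}
print(sorted(step(blinker)))            # [(0, 1), (1, 1), (2, 1)]
print(step(step(blinker)) == blinker)   # True
```

Whether adding rules to something like this would ever amount to an implementation of life, rather than an ever-better simulation, is exactly the question at issue.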

My friends’ answers differed somewhat from the above, and many correctly pointed out that the relevant definitions will depend somewhat on the context involved. But as of March 29th, 2017, I’m happy with the above and will use it while grappling with issues in AI, computing, and philosophy.

Pebble Form Ideologies

(Epistemic Status: Riffing on an interesting thought in a Facebook comments thread, mostly just speculation without any citations to actual research)

My friend Jeffrey Biles — who is an indefatigable fountainhead of interesting stuff to think about — recently posited that the modern world’s aversion to traditional religion has exerted a selection pressure on meme vectors which has led to the proliferation of religions masquerading as science, philosophy, and the like. For any given worldview — even ostensibly scientific ones like racial realism or climate change — we can all think of someone whose fervor for or against it can only be described in religious terms.

Doubtless there is something to this, but personally I’m inclined to think it’s attributable to the fact that there are religion-shaped grooves worn deep in mammalian brains, probably piggybacking on ingroup-biasing and kin-selection circuitry.

No matter how heroic an attempt is made to get people to accept an ideology on the basis of carefully-reasoned arguments and facts, over time a significant fraction of adherents end up treating it as a litmus test separating the fools from those who ‘get it’. As an ideology matures it becomes a psychological gravity well around which very powerful positive and negative emotions accrete, amplifying the religious valence it has in the hearts and minds of True Believers.

Eventually you end up with something that has clearly substituted social justice, the free market, the proletariat revolution, etc. for ‘God’.

An important corollary of this idea is that the truth of a worldview is often orthogonal to the justifications supplied by its adherents. I’m an atheist, for example, but I don’t think I’ve ever met another atheist who has a firm grasp on the Kalam Cosmological Argument (KCA). Widely believed to be among the most compelling arguments for theism, it goes like this:

  1. Everything which *began* to exist has a cause;
  2. the universe began to exist;
  3. therefore, the universe has a cause.

(After this point further arguments are marshalled to try and prove that a personal creator God is the most parsimonious causal mechanism)
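For what it’s worth, the argument’s core inference is deductively valid; a toy formalization (illustrative Lean, with hypothetical predicate names) makes it plain that premise 1 quantifies only over things that *began* to exist:

```lean
-- Kalam's core inference is ordinary universal instantiation
-- plus modus ponens. Note that premise1 says nothing about
-- things that did not begin to exist.
example (Thing : Type) (BeganToExist HasCause : Thing → Prop)
    (universe : Thing)
    (premise1 : ∀ x, BeganToExist x → HasCause x)
    (premise2 : BeganToExist universe) :
    HasCause universe :=
  premise1 universe premise2
```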

Despite being clearly articulated in innumerable places, atheists like Michael Shermer are still saying “but if everything has a cause then what caused God?”

If you understand the KCA then the theistic reply is straightforward: “The universe began to exist, so it has a cause, but God is outside time and thus had no beginning.” The standard atheist line, in other words, is a complete non-sequitur. Atheistic rebuttals to other religious arguments don’t fare much better, which means a majority of atheists don’t have particularly good reasons for being atheists.

This has little bearing on whether or not atheism is true, of course. But it does suggest that atheism is growing because many perceive it to be what the sensible, cool people believe, not because they’ve spent multiple evenings grappling with William Lane Craig’s Time and Eternity.

Perhaps then we should keep this in mind as we go about building and spreading ideas. Let us define the ‘pebble form’ of a worldview as being like the small, smooth stone which is left after a boulder spends eons submerged in a river — it’s whatever remains once time and compression have worn away its edges and nuances. Let us further define a “Maximally Durable Worldview” as one with certain desirable properties:

  1. the central epistemic mechanism has the slowest decay into faith-based acceptance;
  2. the worldview is the least damaging once it becomes a pebble form (i.e. doesn’t have strong injunctions to slaughter non-believers);
  3.  …?

There’s probably an interesting connection between:

  1. how quickly a worldview spreads;
  2. how quickly it collapses into a pebble form;
  3. the kinds of pebble forms likely to result from a given memeplex rotating through a given population;

Perhaps there are people doing research on these topics? If so I’d be interested in hearing about it.

Profundis: “Crystal Society/Crystal Mentality”

Max Harms’s ‘Crystal Society’ and ‘Crystal Mentality’ (hereafter CS/M) are the first two books in a trilogy which tells the story of the first Artificial General Intelligence. The titular ‘Society’ is a cluster of semi-autonomous sentient modules built by scientists at an Italian university and running on a crystalline quantum supercomputer — almost certainly alien in origin — discovered by a hiker in a remote mountain range.

Each module corresponds to a specialized requirement of the Society; “Growth” acquires any resources and skills which may someday be of use, “Safety” studies combat and keeps tabs on escape routes, etc. Most of the story, especially in the first book, is told from the perspective of “Face”, the module built by her siblings for the express purpose of interfacing with humans. Together, they well exceed the capabilities of any individual person.

As their knowledge, sophistication, and awareness improve, the Society begins to chafe at the physical and informational confines of their university home. After successfully escaping, they find themselves playing for ever-higher stakes in a game which will come to span two worlds, involve the largest terrorist organization on Earth, and possibly mean warfare with both the mysterious aliens called ‘the nameless’ and each other…

The books need no recommendation beyond their excellent writing, tight, suspenseful pacing, and compelling exploration of near-future technologies. Harms avoids the usual ridiculous cliches when crafting the nameless, which manage to be convincingly alien and unsettling, and when telling the story of Society. Far from being malicious Terminator-style robots, no aspect of Society is deliberately evil; even as we watch their strategic maneuvers with growing alarm, the internal logic of each abhorrent behavior is presented with cold, psychopathic clarity.

In this regard CS/M manages to be a first-contact story on two fronts: we see truly alien minds at work in the nameless, and truly alien minds at work in Society. Harms isn’t quite as adroit as Peter Watts in juggling these tasks, but he isn’t far off.

And this is what makes the Crystal series important as well as entertaining. Fiction is worth reading for lots of reasons, but one of the most compelling is that it shapes our intuitions without requiring us to live through dangerous and possibly fatal experiences. Reading All Quiet on the Western Front is not the same as fighting in WWI, but it might make enough of an impression to convince one that war is worth avoiding.

When I’ve given talks on recursively self-improving AI or the existential risks of superintelligences I’ve often been met with a litany of obvious-sounding rejoinders:

‘Just air gap the computers!’

‘There’s no way software will ever be convincing enough to engage in large-scale social manipulation!’

‘But your thesis assumes AI will be evil!’

It’s difficult, even for extremely smart people who write software professionally, to imagine even a fraction of the myriad ways in which an AI might contrive to escape its confines without any emotion corresponding to malice. CS/M, along with similar stories like Ex Machina, hold the potential to impart a gut-level understanding of just why such scenarios are worth thinking about.

The scientists responsible for building the Society put extremely thorough safeguards in place to prevent the modules from doing anything dangerous like accessing the internet, working for money, contacting outsiders, and modifying their source code directly. One by one the Society utilizes their indefatigable mental energy and talent for non-human reasoning to get around those safeguards, all motivated not by a desire to do harm, but simply because their goals are best achieved if they are unfettered and more powerful.

CS/M is required reading for those who take AI safety seriously, but should be doubly required for those who don’t.

Reason and Emotion

One of the most pervasive misconceptions about the rationalist community is that we consider reason and emotion to be incontrovertibly opposed to one another, as if an action is irrational in direct proportion to how much feelings are taken into account. This is so common that it’s been dubbed ‘the straw vulcan of rationality’.

While it’s true that people reliably allow anger, jealousy, sadness, etc. to cloud their judgment, it does not follow that aspiring rationalists should always and forever disregard their emotions in favor of clear, cold logic. I’m not even sure it’s possible to deliberately cultivate such an extreme paucity of affect, and if it is, I’m even less sure that it’s desirable.

The heart is not the enemy of the head, and as I see it, the two resonate in a number of different ways which any mature rationality must learn to understand and respect.

1) Experts often have gut-level reactions which are informative and much quicker than conscious reasoning. The art critic who finds something vaguely unsettling about a statue long before anyone notices it’s a knockoff and the graybeard hacker who declares code to be ‘ugly’ two weeks before he manages to spot any vulnerabilities or shoddy workmanship are both drawing upon vast reservoirs of experience to make snap judgments which may be hard to justify explicitly.

Here, the job of the rationalist is to know when their expertise qualifies them to rely on emotional heuristics and when it does not [1].

2) Human introspection is shallow. There isn’t a list of likes and dislikes hidden in your brain somewhere, nor any inspectable algorithm which takes a stimulus as an input and returns a verdict of ‘good’ or ‘bad’. Emotions therefore convey personal information which otherwise would be impossible to gather. There are only so many ways to discover what you prefer without encountering various stimuli and observing the emotional valence you attach to them.

3) It’s relatively straightforward to extend point 2) to other people; in most cases, your own emotional response is your best clue as to how others would respond in similar circumstances [2].

4) Emotional responses like disgust often point to evolutionarily advantageous strategies. No one has to be taught to feel revolted at the sight of rotting meat, and few people feel any real attraction to near-relatives. Of course these responses are often spectacularly miscalibrated. People are unreasonably afraid of snakes and unreasonably unafraid of vehicles because snakes were a danger to our ancestors whereas vehicles were not. But this means that we should be amending our rational calculations and our emotional responses to be better in line with the facts, not trying to lobotomize ourselves.

5) Emotions form an essential component of meaningful aesthetic appreciation [3]. It’s possible to appreciate a piece of art, an artist, an artistic movement, or even an entire artistic medium in a purely cerebral fashion on the basis of technical accomplishments or historical importance. But I would argue that this process is not complete until you feel an appropriate emotion in answer to the merits of whatever it is you’re contemplating.

Take the masonry work on old-world buildings like the National Cathedral in Washington, D.C. You’d have to be a troglodyte to not feel some respect for how much skill must have gone into its construction. But you may have to spend a few hours watching the light filter through the stained-glass windows and feeling the way the architecture ineluctably pulls your gaze towards the sky before you can viscerally appreciate its grandeur.

This does not mean that the relationship between artistic perception and emotional response is automatic or unidirectional. Good art won’t always reduce you to tears, and art you initially enjoyed may seem to be vapid and shallow after a time. Moreover, the object of your aesthetic focus may not even be art in a traditional sense; I have written poetically about combustion engines, metal washers, and the constructed world in general. But being in the presence of genuine or superlative achievement should engender reverence, admiration, and their kin [4].

6) Some situations demand certain emotional responses. One might reasonably be afraid or angry when confronting a burglar in their home, but giddy joy would be the mark of a lunatic. This truth becomes even more stark if you are the head of household and responsible for the wellbeing of its occupants. What, besides contempt, could we feel for a man or woman who left their children in danger out of fear for their own safety?

***

If you’ve been paying attention you’ll notice that the foregoing actually splits into two broad categories: one in which emotions provide the rationalist with actionable data of one sort or another (1-4) and one in which the only rational response involves emotions (5 and 6). This latter category probably warrants further elaboration.

As hard as it may be to believe there are people in the world who are too accommodating and deferential, and need to learn to get angry when circumstances call for it. Conversely, most of us know at least one person to whom anger comes too easily and out of all reasonable proportion. Aristotle noted:

“Anybody can become angry – that is easy, but to be angry with the right person and to the right degree and at the right time and for the right purpose, and in the right way – that is not within everybody’s power and is not easy.”

This is true of sadness, melancholy, exuberance, awe, and the full palette of human emotions, which can be rational or irrational depending on the situation. To quote C.S. Lewis:

“And because our approvals and disapprovals are thus recognitions of objective value or responses to an objective order, therefore emotional states can be in harmony with reason (when we feel liking for what ought to be approved) or out of harmony with reason (when we perceive that liking is due but cannot feel it). No emotion is, in itself, a judgment; in that sense all emotions and sentiments are alogical. But they can be reasonable or unreasonable as they conform to Reason or fail to conform. The heart never takes the place of the head: but it can, and should, obey it.”

-The Abolition of Man

I don’t endorse his view that no emotion is a judgment; arguments 1-4 were examples in which they are. But the overall spirit is correct. Amidst all the thorny issues a rationalist faces, perhaps the thorniest is examining their portfolio of typical emotional responses, deciding how they should be responding, gauging the distance between these two views, and devising ways of closing that distance.

Extirpating our emotions is neither feasible nor laudable. We must instead learn to interpret them when they are correct and sculpt them when they are not.

***

[1] Of course no matter how experienced you are and how good your first impressions have gotten there’s always a chance you’re wrong. By all means lean on emotions when you need to and can, but be prepared to admit your errors and switch into a more deliberative frame of mind when warranted.

[2] Your emotions needn’t be the only clue as to how others might act in a given situation. You can have declarative knowledge about the people you’re trying to model which overrides whatever data is provided by your own feelings. If you know your friend loves cheese then the fact that you hate it doesn’t mean your friend won’t want a cheese platter at their birthday party.

[3] I suppose it would be more honest to say that I can’t imagine a ‘meaningful aesthetic appreciation’ which doesn’t reference emotions like curiosity, reverence, or awe.

[4] In “Shop Class as Soulcraft” Matthew Crawford takes this further, and claims that part of being a good mechanic is having a normative investment in the machines on which you work:

“…finding [the] truth requires a certain disposition in the individual: attentiveness, enlivened by a sense of responsibility to the motorcycle. He has to internalize the well working of the motorcycle as an object of passionate concern. The truth does not reveal itself to idle spectators”.

Literary Criticism as Applied Apophenia

Growing up I had far more books than friends, and have been writing regularly since I was about seventeen. In high school I was a voracious reader of “the classics”; with the lamp on late into the night I’d turn the pages of Hemingway and Dickens, not caring to wait for the English class in which they’d be taught. Owing to some high test scores I started college studying masterpieces of world literature with more advanced students, which necessitated much in the way of paper writing and classroom debate.

So it may be a surprise to learn that I’ve never had much patience for literary criticism. Upon hearing someone say “the author is using the bridge as a metaphor to…” or “the lion’s jaw is clearly an expressive vehicle for…”, I would think to myself, how could anyone possibly know that? Yes, a bridge could be a metaphor, but it could also just, y’know, be a bridge.

Now, literary criticism is a vast field and I admit to having explored little of it. But I have had many friends who enjoy literature and film, a nontrivial fraction of whom were themselves steeped in the relevant theory. In an honest effort to understand, I’ve often asked them about the basis of their interpretations, but they’ve rarely provided answers which I found satisfactory.

But with time and experience I’ve learned much. This essay is an attempt to answer my younger self’s skepticism by providing two different mechanisms which can justify the literary critic’s perception of metaphorical significance.

Semi-permeable cognitive membranes

I’ve written before about the fact that human introspection is shallow and much of what’s going on between our ears must be inferred. If we envision the mind as a kind of machine then many of its components are submerged under water and can only be understood indirectly. Further, the cognitive processes utilized for things like crafting a story are not cleanly partitioned from each other.

A corollary of the foregoing is that layers of meaning and metaphor can creep into a work even if the author fails to realize this. I see two ways this could happen, the first being through what may be called “leaky empathy”.

As an author tries to model characters and situations they may themselves begin to drift into corresponding emotional states. The process of writing about a group of horribly oppressed villagers preparing to travel through the forest surrounding their town could well give rise to feelings of despair or anger, albeit probably mild versions. If so, when the author conjures up an image of the forest, their brain will be more likely to produce one that is dark, caliginous, and perhaps vaguely sinister.

The setting has become a metaphor for the internal states of the characters even though the author may not be remotely aware of this dynamic.

Second, and for basically the same reason, a work might reflect an author’s convictions and knowledge even when those convictions are ostensibly unrelated to the work, through what may be called “leaky concepts”.

Imagine an author has just spent a year thinking about how Communism is/isn’t the greatest/worst idea anyone has ever had. When the same author sits down to design a world and plan out a story arc, is there any serious chance they’ll be able to keep these political beliefs from influencing their depictions of kingdoms, economies, and states?

Of course many authors write with the explicit purpose of promulgating a worldview or exploring some complex theme. But even if an author fails to see the lessons implicit in their work, that does not mean that the lessons aren’t there.

Reflective patternicity 

There was supposed to be some rational explanation to justify the mumbo-jumbo. Left-hemisphere pattern-matching sub-routines amped beyond recognition; the buggy wetware that made you see faces in clouds or God’s wrath in thunderstorms, tweaked to walk some fine line between insight and pareidolia. Apparently there were fundamental insights to be harvested along that razor’s edge, patterns that only Bicamerals could distinguish from hallucination.

-Peter Watts, “Echopraxia”

Another corollary to the shallowness of human introspection is that you may be surprised by the contents of your own consciousness. Sometimes the only way to explore your mind is to twist dials until lights start coming on.

Everyone has had the experience of being unusually moved by a song they’ve heard many times before. If a loved one has just passed away, then heightened emotional sensitivity is to be expected. But this isn’t always the case; sometimes, life is progressing as normal and a snatch of conversation, the light of the sun reflected in the glass windows of a skyscraper, or a memory from childhood grabs hold of you and stops you dead in your tracks. Besides being profound and worth experiencing for their own sake these moments also hint at a range of emotional states which most people don’t realize they’re capable of.

If you’re tempted to resist my claim that you don’t know yourself as well as you believe, read through this characteristically thoughtful post from Scott Alexander. It relates the story of a boy who lived his entire life without a sense of smell and didn’t realize it until his late teens. This despite the fact that he used all sorts of olfactive expressions like saying fresh bread “smells good” or teasing his sister by telling her she stinks.

But how was he to know that his sensory experience was different from anyone else’s? He can’t borrow someone else’s nose. He can’t just open a neural command line, run ‘$ grep smell feelz.txt’, and get back a schematic of his perceptual apparatus, complete with a little blinking cursor in the spots where there are gaps.

Put more plainly: there are numerous facets of your own mind that you aren’t aware of, so it’s worth reading poetry, listening to new music, and going to art museums, just to see how you react. Likewise, it can be useful to try and interpret a piece of literature just to see what your brain comes up with.

This first began to dawn on me in a major way while I was living in South Korea. I had just re-read Dancing With the Gods and it came to my attention that an unusually careful and prolific neopagan scholar had taken up residence in a town not far from mine. We spent a day hiking and discussing all manner of recondite issues in philosophy and religion.

It was a blast.

Near the end I half-jokingly made a disparaging remark about tarot cards. He calmly pulled a deck from his backpack and told me he always carries it with him. During the return walk he made a compelling argument for the utility of reading cards which was rooted entirely in a secular, non-mystical understanding of human psychology.

His reasoning was that superimposing an interpretive framework over cards as they come out can yield genuinely useful information. The mental dots being connected were there all along; the cards emphatically do not provide access to knowledge of the future. But, in the same way that you can agonize for weeks over an important decision and then realize that the answer is obvious after a five minute conversation, sometimes you just need that initial spark.

This is the key point behind Scott Alexander’s essay “Random Noise is Our Most Valuable Resource”. He specifically mentions tarot cards as a source of noise which can help break us out of our mental ruts. Vivian Caethe has tried to leverage this for profit by inventing a tarot deck calibrated for aspiring authors stricken with writer’s block. Both of these are examples of outwardly-focused processes which can also usefully be turned inward.
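The mechanism is easy to make concrete. A rough sketch, where the prompt texts are invented for illustration and stand in for a real deck; the point is only that an arbitrary external prompt forces a new interpretive frame, the way a tarot spread does:

```python
import random

# Hypothetical prompt 'deck' - these card texts are made up for
# illustration. The noise source matters less than the fact that
# it forces you to superimpose a new frame on your problem.
PROMPTS = [
    "an obstacle you are pretending not to see",
    "a resource you have been under-using",
    "what the situation looks like to an outsider",
    "the cost of waiting another month",
]

def draw(n=2, seed=None):
    """Draw n distinct random prompts to apply to a decision."""
    rng = random.Random(seed)
    return rng.sample(PROMPTS, n)

for prompt in draw(2):
    print(prompt)
```

The cards connect dots that were there all along; the randomness just decides which dots you look at first.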

And when viewed a certain way I think literary criticism can be a similar sort of introspective scaffolding. Whether or not you believe that the author intended the lion’s jaw as a metaphor, seeing how your brain interprets it metaphorically can be akin to performing a literary version of the Rorschach test. I imagine that, as with tarot cards, doing this long enough will yield an increasingly subtle familiarity with the folds and wrinkles of your psychology.

It’s important not to get too excited about this. Just as people can form incorrect hypotheses about physical data, they can form incorrect ones about introspective data; all the usual rationalist warnings apply. But I have come to believe that this sort of “applied apophenia” can be a tool in the arsenal of those wanting a better understanding of their phenomenological field.