Is Mathematics a Young (Wo)man’s Game?

I began giving the titular question a great deal of thought when, eight years ago, I took my first stab since high school at learning new mathematical concepts. I was 22, and because I had become interested in issues lying at the foundation of artificial intelligence, I decided to pick up a textbook on discrete math.

Along with continuous mathematics, discrete math forms one of the two great branches in the Tree of Mathematics. The canonical example of continuous mathematics is calculus, used to model and predict the behavior of continuous systems like fluids or rockets with variable speeds. Discrete math includes subfields like set theory, logic, and combinatorics, which are applied to discrete domains like cryptography and probability theory.

You’re likely familiar with the rule of thumb that anyone destined to do important work in mathematics has probably done it by their mid-twenties. Groundbreaking work does occasionally come from people who have to dye the grey out of their hair, but it’s uncommon.

This used to trouble me. I was only just beginning to discover these topics at an age when most serious mathematicians are at the height of their powers. Would there be any point? Would I prove able to probe the Truth beneath the Greek symbols and braids of deduction, or would this be a Goddess that eluded me?

To properly answer this question we must draw distinctions between 1) being smart enough to invent a field; 2) being smart enough to work in a field; 3) being smart enough to successfully study a field.

These are three distinct levels with three distinct cognitive thresholds.

There’s a big difference between being insightful and prescient enough to invent quantum field theory, being able to do professional research in quantum field theory, and being able to grok a book written about quantum field theory.

Or consider an analogous question: is music a young (wo)man’s game? If you’re starting to learn the guitar at 35 you’re probably not going to become the next Eric Clapton. Does that mean it isn’t worth pursuing? Of course not. Does that mean you can’t achieve a significant degree of skill? Not if you’re willing to put in the time.

As I’m approaching 30, there may not be any chance left for me to contribute in a significant way to mathematics or to work in mathematics professionally. I wouldn’t say it’s completely out of the question, but I’d have to end up being more talented than I currently estimate myself to be.

Profiting from the study of mathematics, however, is something anyone can begin to do at any point in their life — I recently learned that Ayn Rand was taking algebra lessons in her 70s!

And it’s worth saying that you shouldn’t be discouraged by bad experiences in high school math classes. While I do sympathize with the obstacles facing the legions of underpaid and overworked teachers staffing the public school system (I’m an ex-teacher myself — the struggle is real), it’s hard not to feel just a little bitter at how badly they routinely mangle the teaching of mathematics.

I made my first attempt to learn calculus in sixth grade, before I’d entered high school, and picked up Stephen Hawking’s A Brief History of Time at around the age I hit puberty. I took the most advanced physics and math classes I could, several years earlier than usual, to prepare myself for what I was sure would be a career in theoretical physics.

In my case, the same man taught both subjects; the cocktail of boredom and obnubilation he served in lieu of teaching managed to simultaneously strangle every bit of enthusiasm I brought with me and convince me that I just wasn’t cut out for Actual Science. Luck and stubbornness are all that saved me — luck, in that I ended up befriending someone willing to expertly teach me the rudiments of discrete mathematics in his spare time, for free; stubbornness, because I resolved not to let impressions formed by experiences at a school in rural Arkansas drive my decisions about what to learn.

Don’t let your years or your past experiences stop you from studying mathematics. It is the most beautiful, most powerful set of abstractions ever to have been invented. I know of almost nothing that better imparts a sense of the awesome capacity of the human mind and the breathtaking scope of man’s creative vision. It undergirds huge swathes of philosophy, science, and technology, codifying and generalizing them into the tools that will someday dismantle stars, stop death, and light up the cold void of space with the fire of the human spirit.

Even if you make but modest progress, you’ll be better for it.

I was.

The Structure of Science as a Gnostic Manifold

Part I

While reading Paul Rosenbloom’s excellent “On Computing”, I feel as though I’ve glimpsed the outlines of something big.

In the book Rosenbloom advances the argument that computing should be counted among the ‘Great Domains’ of science, instead of being considered a mere branch of engineering. During the course of advancing this thesis he introduces two remarkable ideas: (1) a ‘relational architecture’, describing how the Great Domains relate to one another; (2) an accompanying ‘metascience expression language’ which defines the overlap of the Physical (P), Social (S), Life (L), and Computing (C) Domains in terms of two fundamental processes: implementation and interaction. 

Though I’m only a few chapters in, I’ve already seen that his method of generating monadic, dyadic, and polyadic tuples of the Great Domains could be used to create a near-comprehensive list of every area of research possible within the boundaries limned by our current scientific understanding.

Let me explain: ‘pure computing’ consists of any overlap of computing with itself (C + C), and subsumes such areas as computational complexity and algorithmic efficiency analysis. ‘Mixed computing’ would be the combination of computing with any of the other Great Domains: computer hardware would be Computing (C) + Physical (P), a simulation of predator/prey population dynamics would be Life (L) + Computing (C), computer security and AI would be Social (S) + Computing (C), a physics simulation would be Physical (P) + Computing (C), brain-computer interaction would be Computing (C) + Social (S) + Physical (P), and so forth.

A simple program could list every possible combination of C + P + S + L (including standalones like ‘P’ and pure overlaps like ‘P + P’), and you might be able to spot gaps in the current edifice of scientific research — there might be certain kinds of C + L + S research that aren’t being done anywhere, for example. With this in hand you could begin to map all the research being done in, say, Boulder, CO onto the resulting structure, with extensive notes on which labs are doing what research and for whom.
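Here’s a minimal sketch of that simple program in Python. I’m assuming order doesn’t matter (L + C and C + L name the same area of research), which makes these combinations rather than permutations; the P/S/L/C labels are Rosenbloom’s:

```python
from itertools import combinations_with_replacement

# Rosenbloom's four Great Domains.
DOMAINS = ["P", "S", "L", "C"]  # Physical, Social, Life, Computing

# Every way of overlapping one to four domains, order ignored.
# Size-1 entries are the standalones ("P"); repeated entries give
# the "pure" overlaps ("P + P").
for size in range(1, len(DOMAINS) + 1):
    for combo in combinations_with_replacement(DOMAINS, size):
        print(" + ".join(combo))
```

Each printed line is a candidate research area you could check against what’s actually being studied.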

(Bear in mind that I still haven’t gotten to the parts where he really elucidates his metascience expression language or the relational architecture, so these ideas are very preliminary.) 

Part II

This alone would prove enlightening, but its effectiveness would be compounded enormously by the addition of an ‘autodidact’s toolkit’ of primitive concepts which, when learned together, open up the greatest possible regions of the gnostic manifold [1]. In a post on generative science, guy-who-knows-literally-everything Eric Raymond briefly explores this idea. In a nutshell, the concepts from some sciences can be used in many more endeavors than the concepts from others. As beautiful as astronomy is, its concepts are mostly only useful in astronomy. Concepts from evolutionary biology, however, have found use in cognitive psychology, economics, memetics, and tons of other places. So perhaps a person interested in science could begin their study by mastering a handful of concepts from across the sciences which are generative enough to make almost any other concept more understandable.

Eric and I have been in talks for several years now to design and build a course for exactly this purpose. Someday when my funds and his schedule are in sync we are going to get this done. 

This relates to the ideas from section I because a mastery of the autodidact’s toolkit would allow one to dip into an arbitrary point in the gnostic manifold and feel confident that they could learn the material relatively quickly. Imagine being able to look at research being done at a major university and then get up to speed in a month because it’s just variations on concepts 3 – 6 from the toolkit [2].

But I think we can go even further. Based on discussions of hyperuniformity and the unusual places it appears, I began to wonder whether there might be special branches of mathematics, drawn from systems theory, chaos theory, and possibly information theory, which could act as bridges between some of the concepts in the autodidact’s toolkit. The linked article discusses how a certain kind of pattern crops up in places as far apart as the distribution of cones in the avian retina and the formation of certain kinds of unusual crystalline solids.

My question is: if you had a map of the gnostic manifold, you’d mastered the autodidact’s toolkit, and you understood the relevant math, might you not have been able to hop into a research gap, spend a month or two looking for hyperuniformity, learn about quasicrystals in a third of the time anyone else would’ve required, and glimpse the pattern ahead of the competition? If so, you could’ve had a startup in place to exploit the new knowledge by the time the first research papers were coming out.

Part III

Organizing, representing, gathering, and communicating this wealth of knowledge would be much easier with an ‘acquisition infrastructure’. Here I’m imagining a still-theoretical integration of the best mnemonics systems, a supercharged version of Anki, whatever the best knowledge-mapping software is, and MATLAB/Mathematica (or open-source alternatives like Octave), all running on a supercomputer with insane amounts of both memory and storage.

Furthermore, I want to develop the concept of a ‘Drexler audit’, the baby version of which is advanced by Eric Drexler in “How to Understand Everything”. The basic idea is that rather than trying to understand the details of a given field, you instead use a series of object- and meta-level questions to get a firm grasp on what the goals of the field are, what obstacles stand in the way of those goals, and what gaps remain in the knowledge required to move forward.

This absolutely does not count as expert-level knowledge, but it does give you the kind of overview which can prove useful in future exploration and investment.

With a map of the gnostic manifold you could choose some fields on which to perform a Drexler audit and others to explore deeply with the combination of systems math and the autodidact’s toolkit. With a breakdown of the who/what/where/why of the research community in a given region, you’d be in a position to bring the right minds together to solve whatever tractable problems stand in the way of giving a field a jumpstart. And if you understood the economics of scientific research and the basics of investing, the resulting machinery might, with a bit of luck, start coughing up wads of money while doing enormous amounts of good.

(Of course it could also crash and burn, but so could SpaceX — nothing great is accomplished without a healthy dose of risk.)

Part IV

I’ve said all of the above because it points to a tremendous opportunity: an amalgamation of Y-Combinator, Berkshire Hathaway, TED, and Slate Star Codex. If it works out the way I think it might, whoever manages the beast could make Elon Musk look like a lazy, sharecropping half-wit. 

The STEMpunk Project helped lay the foundation for the research required. If I can make the necessary contacts and get the funds together, I’d like to flesh this out in the next five years.

***

[1] See this related idea.

[2] Of course I’m likely underplaying the difficulty here. Brian Ziman, perhaps the most technically accomplished person I know, has pushed back on my optimism on this point. My view is that even if it proves orders of magnitude more difficult to construct, the Gnostic Manifold is a framework worth fleshing out.

A Taxonomy Of AI Systems

(NOTE: the following is all highly speculative and not researched very well.)

In a blog post on domain-specific programming languages, author Eric Raymond made a distinction between the kinds of problems best solved through raw automation and the kinds of problems best solved by making a human perform better.

This gave me an idea for a 4-quadrant graph that could serve as a taxonomy of various current and future AI systems. Here’s the setup: the horizontal axis runs Expert <–> Crowd and the vertical axis runs Judgment Enhancement <–> Automation.

Quadrant one (Q1) would contain quintessential human judgment amplifiers, like the kinds of programs talked about by Shyam Sankar in his TED talk or the fascinating-but-unproven-as-far-as-I-know “Chernoff faces”.

In Q2 we have mechanisms for improving the judgments of crowds. The only example I could really think of was prediction markets, though I bet you could make a case for market prices working as exactly this sort of mechanism.

In Q3 we have automated experts, the obvious example of which would be an expert system or possibly a strong artificial general intelligence.

And in Q4 we have something like a swarm of genetic algorithms evolved by making random or pseudo-random changes to a seed code and then judging the results against some fitness function.
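To make the Q4 idea concrete, here’s a toy sketch in Python. Nothing in it describes a real system; the target string, mutation rate, and population size are all illustrative stand-ins. A population of candidate ‘codes’ is randomly mutated and culled against a fitness function:

```python
import random

TARGET = "judge me by my fitness"  # an illustrative goal
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    """Score a candidate by how many characters match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    """Randomly change each character with probability `rate`."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else ch
        for ch in candidate
    )

# Start from random "seed codes" and evolve the swarm.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the fittest half; refill with mutated copies of survivors.
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(50)]

print(generation, repr(population[0]))
```

Swap the string-matching fitness function for a circuit simulator and you have roughly the evolvable-hardware setup discussed below.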

Now, how should we match these systems with different problem domains?

It seems to me that Q1 systems would be better at solving problems that either a) have finite amounts of information that can be gathered by a single computer-aided human or b) are problems humans are uniquely suited to solve, like intuiting and interpreting the emotional states of other humans.

Chernoff faces, if we ever get them working right, are an especially interesting Q1 system because what they are supposed to do is take statistical information, which humans are notoriously dreadful at working with, and transform it into a ‘facial’ format, which humans have enormously powerful built-in machinery for processing.
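As a rough illustration (not a faithful reproduction of Chernoff’s scheme, which maps up to eighteen variables onto facial features), here’s a Python/matplotlib sketch that maps three made-up statistics onto face width, eye size, and mouth curvature:

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Arc, Circle, Ellipse

def chernoff_face(ax, width, eye_size, smile):
    """Draw a crude face from three statistics normalized to 0..1."""
    ax.add_patch(Ellipse((0.5, 0.5), 0.3 + 0.4 * width, 0.8, fill=False))
    for x in (0.38, 0.62):  # eye radius scales with the second statistic
        ax.add_patch(Circle((x, 0.62), 0.02 + 0.05 * eye_size, fill=False))
    if smile >= 0.5:        # mouth bends up (smile) or down (frown)
        ax.add_patch(Arc((0.5, 0.35), 0.25, 0.3 * (smile - 0.5) + 0.01,
                         theta1=180, theta2=360))
    else:
        ax.add_patch(Arc((0.5, 0.3), 0.25, 0.3 * (0.5 - smile) + 0.01,
                         theta1=0, theta2=180))
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    ax.set_aspect("equal")
    ax.axis("off")

# Three rows of invented, pre-normalized data.
data = [(0.9, 0.8, 0.9), (0.5, 0.5, 0.5), (0.1, 0.2, 0.1)]
fig, axes = plt.subplots(1, 3)
for ax, row in zip(axes, data):
    chernoff_face(ax, *row)
plt.show()
```

Even this crude version shows the trick: a row of numbers becomes a face you can read at a glance.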

Q2 systems should be used to solve problems that require more information than any single human can work with. Prediction markets are meant to use a profit motive to incentivize human experts to incorporate as much information as they can, as honestly as they can, and over a span of time there are enough rounds of updates that the system as a whole produces a price which contains the aggregate wisdom of the individuals making up the system. (At least I think that’s how they work.)
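I haven’t committed to any particular mechanism above, but a standard example is Hanson’s logarithmic market scoring rule (LMSR). Here’s a hedged Python sketch, with invented numbers, in which each purchase of YES shares pushes the market’s probability estimate upward:

```python
import math

B = 100.0  # liquidity parameter: larger B means prices move more slowly

def cost(q_yes: float, q_no: float) -> float:
    """LMSR cost function; a trader pays cost(after) - cost(before)."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def price(q_yes: float, q_no: float) -> float:
    """Current market probability of YES."""
    e_yes, e_no = math.exp(q_yes / B), math.exp(q_no / B)
    return e_yes / (e_yes + e_no)

# Three traders who think YES is underpriced buy shares in turn.
q_yes = q_no = 0.0
for shares in (20, 50, 10):
    paid = cost(q_yes + shares, q_no) - cost(q_yes, q_no)
    q_yes += shares
    print(f"bought {shares} YES for {paid:.2f}; price is now {price(q_yes, q_no):.3f}")
```

The final price is the market’s aggregated probability that the event occurs, which is exactly the ‘wisdom’ being extracted from the bettors.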

Why can’t we have a prediction market that performs heart surgery? Because huge amounts of the relevant information are ‘organic’, i.e. muscle memory built up over dozens and eventually hundreds of similar procedures. This information isn’t written down anywhere and thus can’t be aggregated and incorporated into a ‘bet’ by a human non-surgeon.

Based on some cursory research, expert systems (my example of a Q3 system) appear to be subdivided into knowledge bases and inference engines. I’d venture to guess that they are suitable wherever knowledge can be gathered and encoded in a way that allows computers to perform inferences and logical calculations on it. Wikipedia’s article contains a chart detailing some areas where expert systems have been used, and also points out that one drawback to expert systems is that they are unable to acquire new knowledge.

That’s a pretty serious handicap, and it places further limits on what types of problems a Q3 system could solve.
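To see the knowledge-base/inference-engine split in miniature, here’s a toy forward-chaining engine in Python. The medical rules are invented for the example; note that the facts and rules are fixed by hand, which is exactly the knowledge-acquisition bottleneck just mentioned:

```python
# Knowledge base: known facts plus if-then rules, kept separate
# from the engine that reasons over them.
facts = {"fever", "cough"}
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

# Inference engine: forward-chain until no rule adds a new fact.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'fever', 'cough', 'flu_suspected'}
```

The engine never learns a new rule on its own; a human has to hand it every piece of knowledge it will ever use.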

Finally, Q4 systems are probably the strangest entities we’ve discussed so far, and the only examples I’m familiar with are from the field of evolvable hardware. IIRC using evolutionary algorithms to evolve circuits yields workable results which no human engineer would’ve thought of. That has to be useful somewhere, if only when trying to solve an exotic problem that’s stymied every attempt at a solution, right?

Profundis: “Starfish”, by Peter Watts.

I’m closing in on the end of Peter Watts’s “Starfish” and I have to say it exerts the exact same psychic gravity his other books do. Once I really dive into a Watts story I find myself picking it up almost involuntarily — putting off getting into the shower to finish a chapter; delaying work on the World Systems Project for what I tell myself is fifteen minutes, only to look up an hour and a half later.
 
The plausible science, morbid characters, and terrifying philosophical implications he weaves have all the urgency of a hand shooting out of the dirt of a fresh grave. And through every line is a quiet voice saying …there aren’t any obvious flaws here; this could happen.
 
Yet even his monsters are portrayed with a depth and nuance that make them relatable (though not particularly likeable). If Ramsay Snow had been sent to the bottom of the ocean to live on an energy station straddling a thermal vent, we might have systems ecologist Michael Brander; if it had been Stranger Things’s Dr. Sam Owens, we would have Dr. Yves Scanlon. And though Patricia Rowan does much for which she could be condemned, still we can’t help but experience little shivers of sympathy for a woman forced by wretched luck to make decisions that will impact all life on Earth.
 
The book jacket of my edition compares it to Arthur C. Clarke’s “The Deep Range”, but Watts has penned the better SF. The first half of “The Deep Range” reads like a single book, with the rest feeling more like scattered vignettes giving Clarke an excuse to talk about plankton herding and sea monsters. I enjoyed “The Deep Range” quite a lot, but the mounting tension of “Starfish” carries you resolutely to the final pages.
 
Highly recommend.
 

Is History a Science?

People twist themselves into knots on the question of whether or not history is a science. I’m not prepared to defend the claim that history is *always* a science, but it certainly can be.

As we all know, arguably the most important defining feature of a science is that it is ‘falsifiable’ — it makes predictions about future sensory states which could in principle turn out to be wrong.

One major source of confusion here is that history makes what we might call ‘retrodictions’, i.e. predictions about events that happened in the past. This seems vaguely screwy, somehow.

But the fact that arrowheads or other artifacts are thousands of years old shouldn’t concern the historian-scientist any more than the fact that the light hitting a telescope is millions of years old should concern the astronomer-scientist.

The predictions yielded by a historical theory constrain *future* sensory states in a falsifiable way. If you subscribe to the idea that humans first crossed Beringia into the Americas 20,000 years ago, you should never, in the future, find a human artifact in the Americas older than that. If you do, your theory is falsified.

So history passes at least one of the more significant tests by which we separate science from non-science.

Kanizsa Inferences

A while back a friend of mine was advancing the controversial thesis that Darwinian social dynamics necessitated religiosity (or something like that).

His essay was structured in such a way that there were several fallacious inferences kind of… implied, but not actually stated anywhere.

I think we need a term for this kind of thing, and I have a proposal:

‘Kanizsa Inference’.

Kanizsa figures are those ghostly shapes, like the famous Kanizsa triangle, which the brain can’t help but see because of how surrounding shapes are arranged.

 


Knowing about Kanizsa inferences might help in crafting more lucid arguments and avoiding pointless tangents (though of course nothing can prevent the deliberately dishonest from misinterpreting your ideas).

Profundis: Zen and The Art of Motorcycle Maintenance

(What follows is a reposting of a few short essays I wrote for Scott Young’s book club in response to the perennial classic “Zen and The Art of Motorcycle Maintenance”):

___

Near the end of chapter 3 the narrator makes a number of epistemological and metaphysical claims which confused me for a long time and confuse many people still. In recent years I have resolved them to my satisfaction, and this seems like as good a place as any to elucidate my thoughts.

He writes: “The law of gravity and gravity itself did not exist before Isaac Newton”, then continues “…[w]e believe the disembodied words of Sir Isaac Newton were sitting in the middle of nowhere billions of years before he was born and that magically he discovered these words.”

This nicely demonstrates the conflation of laws with physical phenomena. Unless you’ve been snorting uncut Postmodernism fresh off the Continent, you’re bound to think that gravity existed before Isaac Newton. What Newton did was distill gravitational observations into formulae by which to describe and predict future observations.

Gravity existed prior to these formulae just like apples existed before anyone named them.

As Alfred Korzybski put it, ‘the map is not the territory’.

Entire planets’ worth of error can be avoided if you keep this in mind. For example, I’ve seen Gödel’s Incompleteness Theorems cited in defense of the existence of God. The Incompleteness Theorems say, in essence, that formal systems powerful enough to perform arithmetic or describe the properties of the natural numbers contain enough recursion to ineluctably give rise to self-referential sentences. There are statements which are true in these systems but which cannot be proved within them by any algorithmic procedure.

Truth, in other words, is bigger than proof.

Put more simply, the GITs demonstrate that the weirdness associated with a statement like ‘this sentence is false’ is to be found at the heart of mathematics, as a consequence of its deepest nature.

But — crucially! — the limitations of GITs apply only to the formal systems themselves. They tell us nothing about a non-formal system like the universe whose behavior is captured by formal systems we invent. There is a gigantic difference between saying ‘the symbols we use to describe system A have these built-in limitations’ and saying ‘system A itself is subject to those same limitations’.
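For reference, here is the standard informal shape of the first theorem, where G_T is the usual Gödel sentence:

```latex
% Gödel's First Incompleteness Theorem, informally stated:
% if T is a consistent, effectively axiomatized theory strong
% enough to do basic arithmetic, then there is a sentence G_T with
T \nvdash G_T
\qquad \text{and} \qquad
T \nvdash \lnot G_T,
% even though G_T is true of the natural numbers. Via Gödel
% numbering, G_T in effect says: "I am not provable in T".
```

Note that both unprovability claims are about T, the formal system, not about the world the system is used to describe.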

And I think Phaedrus is making a similar error.

___

In chapter 6 we learn that Phaedrus believed there to be two fundamental ways of viewing the world. The ‘classical’ view tends to think in terms of underlying components, processes, and interactions, whereas the romantic view thinks in terms of intuitions about immediate, surface appearances.

Below is one answer to that, expanded from a comment left earlier which is worth its own spot:

Coming at the classical/romantic idea from a completely antithetical direction, Ayn Rand’s Objectivist philosophy champions an aesthetic merger of the two called ‘romantic realism’. I realize that she is one of those thinkers that splits the world into fervent worshippers and rabid detractors, so I’d like to avoid getting into the whole ‘Ayn Rand debate’. It’s my belief that her claims about aesthetics can stand independently from her other philosophical positions.

Objectivism sees art as being essential to the task of concretizing man’s widest abstractions in the form of perceivable, physical objects. Artists look out at the world and choose some subset of what they see to represent in their art. Their artistic choices — and our emotional responses to those artistic choices — are guided either by an explicit philosophy or by an unarticulated ‘sense of life’. Something very deep is implied by the decision to paint a skyscraper or the decision to paint ruins, and something equally deep is implied by which of these two we find aesthetically pleasing.

As beings whose nature is conceptual we require literature, art, and music to reify our ethical and metaphysical convictions — otherwise they would remain mere abstractions with a limited influence on the actual choices we make day-to-day. By owning and repeatedly responding to a work of art we reinforce a System 1 response which harmonizes with our System 2 judgements.

With time, art becomes like a fuel we burn to keep our motors running. We can fight with more vigor when surrounded by books and paintings which remind us of how the world could and ought to be.

And say what you will about the merits of her writing, I personally find the art it inspired to be gorgeous. Sylvia Bokor and Quentin Cordain both paint in the romantic realist style, and NASA’s Jet Propulsion Laboratory just released some excellent Art Deco posters from the future which I liked enough to get framed. Nick Gaetano did a series of iconic covers for editions of “Atlas Shrugged”, “For the New Intellectual”, and “Capitalism: The Unknown Ideal”, all of which inspired the cover of my upcoming book on the STEMpunk Project.

It’s a shame that Rand’s own vitriol has prevented more exposure to the view that art has a cognitive justification grounded in man’s needs qua man. Even if you reject everything else in Objectivism her treatment of aesthetics remains fascinating, original, and profound.

___

In chapter 10 the narrator makes several jarring criticisms of the scientific method which, if one hasn’t ever considered them before, could very well cause intellectual vertigo and a sense of nausea.

First, we have this:

“If the purpose of the scientific method is to select from among a multitude of hypotheses, and if the number of hypotheses grows faster than the experimental method can handle, then it is clear that all hypotheses can never be tested. If all hypotheses cannot be tested, then the results of any experiment are inconclusive and the entire scientific method falls short of its goal of establishing proven knowledge.”

Let’s call this the Problem of Underdetermination (PU).

He continues:

“…[W]hat seems to be causing the number of hypotheses to grow in recent decades seems to be nothing other than scientific method itself. The more you look, the more you see.”

Let’s call this the Problem of Hypothesis Proliferation (PHP).

Finally, we are told:

“Through multiplication upon multiplication of facts, information, theories, and hypotheses, it is science itself that is leading mankind from single absolute truths to multiple, indeterminate, relative ones.”

This one we call the Problem of Scientific Learned Helplessness (SLH).

I will address the first two problems here. The third I may answer at some point in the future.

PU is a pretty standard strawman of the scientific method, and it’s surprising to see it crop up in such a significant work. Everyone knows that the purpose of science is not to establish irrefutable proven Truth (with a capital ‘T’), but instead to sift through reams of data and establish one or several hypotheses that can predict future data points. Additional criteria, like Ockham’s Razor, are used to temper the forging of hypotheses with considerations of their computational burden. (I can say more about this if necessary.)

The fact that evidence *always* underdetermines hypotheses has been an acknowledged problem for as long as there has been a philosophy of science, and it crops up in algorithms (like EBL, KBIL, and ILP) which have to form their own understanding of a data set.

There isn’t an easy solution here, but there are a few things we can note. First, there are a number of ways we can constrain the space of possible hypotheses. Perhaps the most common is by making assumptions which are valid within the prevailing theoretical framework. We assume, for example, that the color of a scientist’s shoelaces doesn’t affect their observation of light from distant stars.

Do we know this for certain? No. Might we someday uncover evidence of a link between shoelaces and light beams? Sure. But without a reason to see a connection now, we assume there isn’t one, and thereby rule out some regions of hypothesis space.

Moreover, until we get to the point at which a paradigm shift is necessary, we usually don’t entertain hypotheses which contradict our broader theories. General Relativity says faster-than-light travel isn’t possible, so any hypothesis which utilizes FTL is ruled out a priori. If and when someone dethrones Einstein that may change, but until then we don’t concern ourselves with those regions of hypothesis space either.

Even with all this there might still be a number of possible hypotheses which make sense of a given data set. The solution, then, is to hold all of them as possibly true until more data comes in.

The brilliant Nate Soares has discussed a kind of update to science he calls ‘simplifience’. It’s essentially science with some information theory and Bayesianism thrown in. The idea is that one doesn’t hold beliefs about data; one assigns probabilities to candidate explanations for a given phenomenon. If there are five viable explanations of, say, the Mpemba Effect, then we try to work out how likely each is on the evidence and update those probabilities as new evidence comes in.
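A minimal sketch of that bookkeeping in Python; the hypotheses, priors, and likelihoods are all invented for illustration:

```python
# Candidate explanations for some phenomenon, with prior probabilities.
priors = {"H1": 0.40, "H2": 0.35, "H3": 0.25}

# P(observed evidence | hypothesis) -- invented numbers.
likelihoods = {"H1": 0.7, "H2": 0.2, "H3": 0.1}

# Bayes' rule: posterior is proportional to prior * likelihood.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

print(posteriors)  # H1 rises to ~0.75; H2 and H3 shrink accordingly
```

No hypothesis ever gets ‘proven’; the probability mass just keeps flowing toward whichever explanations survive the evidence.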

Getting Bayesian statistics to run on a human brain is tough, of course, but far easier with a digital mind. Given current trends it’s possible that software scientists will outnumber meat scientists in the future, so maybe this won’t be as much of a problem.

I believe that Phaedrus makes too much out of the PHP. Yes, it’s true that every discovery raises new questions, but I submit that it *answers* far more, such that the net result is an increase in understanding rather than a decrease.

If we hear a rustling in the bushes, there is a near-infinite set of questions we could ask ourselves: is it a human or an animal? If it’s an animal, is it a predator? If so, is it a bear? A wolf? An alligator? Is it hungry?

Let’s say we then hear the animal barking like a dog. Okay, this discovery makes us wonder about a few additional things: is this dog hungry? Does it belong to someone nearby? Is it friendly? Does it have all its shots?

Phaedrus sees this and says, “See! Science doesn’t settle a damn thing!”

But while our discovery that the animal is a dog generates new queries it simultaneously collapses vast regions of possible queries which we needn’t concern ourselves with.

We don’t have to ask if the animal is a bear; we know it isn’t. We don’t have to ask if it’s an alligator (or what in the world an alligator would be doing in the Rocky Mountains), because we know that it isn’t. For each of these animals we could have asked the same set of questions we ask about the dog: is it hungry, etc.

None of that need concern us now.

So our discovery raised ten questions, and obviated the need to ask literally thousands of others.

We have not, therefore, gotten more confused by gaining information.

___

In Chapter 19 the narrator begins to probe the (in)famous subject/object distinction, postulating that Quality might not only be a kind of bridge between them, but the actual phenomenon giving rise to separate perceptions of self and other in the first place.

But first he must resolve a dilemma. The two horns are: (I) if Quality is objective, then why is it that scientific instruments aren’t able to detect it? (II) if Quality is subjective, then how is it any different from being ‘just what the observer likes’?

After briefly treating (I) and failing to resolve it satisfactorily the narrator turns to (II): ‘if Quality is subjective, isn’t it just what you like?’ If we excise the word ‘just’ we are left with the question, ‘if Quality is subjective, isn’t it what you like?’, which isn’t as sinister.

The assumed problem is that your preferences emerge from a soup of irrational, contradictory impulses, which means they aren’t likely to be much of a guide to Quality in any useful sense.

This argument breaks down into two related ones, which the narrator dubs ‘scientific materialism’ and ‘classic formalism’. They are the claims that ‘what is real is whatever is made of matter and energy and is detectable’ and that ‘something isn’t understood unless it’s understood intellectually’, respectively. Scientific materialism is relatively easy to do away with: we can’t detect the concept ‘zero’, and yet it remains objective.

I think it’s possible to formulate a reply to this. ‘Concepts’ are real things, though they don’t exist out-in-the-world the way chairs do. Instead, they are abstractions running on a neural substrate. They have realness in the sense of having a causal impact on the world because, being housed in brains, they change the way agents like humans behave. They might even be measurable, in a way: there may come a time when brain imaging technology is so advanced we can see concepts as activations in neural circuits. (I’m being a little facetious here, but I think you see what I’m saying.)

Leaving this aside, we still have classic formalism, which is harder because it’s more forceful. All it really says is that we should not base our decisions upon our romantic surface impressions but should consider the larger context and the underlying, classical structures involved. This seems sensible enough, but it cleaves Quality in two. There is now a surface Quality which appears immediately and a deeper Quality which takes time to understand. People disagree about Quality precisely because they get this wrong: some people use their surface impressions in their evaluations of Quality and others use deeper ones, and therein lies fodder for argument.

Frankly, I don’t share the narrator’s consternation over this. I’m prepared to say that Quality just is this deeper appreciation; there are not two Qualities, only one, and people basing their Quality judgements on surface understanding are wrong.

But this requires a caveat: there are people with a tremendous amount of talent in a field like music or mathematics for whom surface impressions do seem to count as Quality detection, even though they may have little formal understanding of the classical structures below. We usually call these people ‘prodigies’, and not that much is known about how they function. For most of us, however, the relationship does hold.

With these notes in place the narrator goes on to formulate a position similar to one I’ve arrived at independently: Quality (though I didn’t call it that before) is really a phenomenon occurring at the interface between agent and world. We can illustrate this same principle with a different problematic term: Beauty (with a capital B).

Are some things Beautiful? Yes. Does the term ‘Beautiful’ resist definition? Yes. Is there enough broad agreement to suggest there is something objective underlying the concept? Yes.

How about this: if all sentient beings in the universe were to perish, would Beauty still exist? No. There would still be paint on canvasses, but Beauty presupposes an agent able to perceive the thing of Beauty. It makes no sense to speak of Beauty elsewise.

And I believe Quality is exactly the same.

Profiting From The Written Word

– Mentorbox is a new subscription-based service which sends customers a monthly box containing interesting books along with study sheets, detailed notes, summaries, and the like.

– Alain De Botton’s School of Life has a bibliotherapy service in which people are guided to penetrating works of literature that grapple with whatever problems they’re currently facing. Feeling depressed? — here is a list of ten of the greatest books talking about happiness/meaning/suicide/etc. Oh, and we’re eager to help you apply those messages to your unique situation for $100/hr.

– Bill Gates famously locks himself away for two weeks in an isolated cottage to read books which he believes will add value to his business.

– I once read an article (from The Economist, I think) which opined that businesses should forego generic team-building exercises in favor of having employees read and discuss books as a way of articulating a shared vision.

– Maria Popova famously makes a living reading awesome books and sharing their lessons on how to live well.

– There are entire college curricula geared toward the Great Books. For a long time this was the way of educating a society’s elite.

—-

Surely it should be possible to combine these business models in some way, right? You could have a monthly subscription service which sends you books and notes à la Mentorbox, but maybe there could be different ‘tracks’: instead of only receiving books about productivity, you might also opt to receive books about happiness, intentionality, adventure, etc. Each month you could switch your focus depending on how you’re feeling and what your needs are. For an additional fee you could get 1-on-1 coaching, maybe even with the author if they’re still alive.

Offer a special package to businesses interested in a company reading list. Work with the CEOs to devise a company worldview and then have your professional readers build a curriculum on that basis. Have your own space for businesses wanting to do retreats — and charge $10,000 for two weeks, with unlimited individual and group coaching.

 

I can’t think of a better job than ‘professional reader’.

Postmodernism

I just finished Christopher Butler’s “Postmodernism: A Very Short Introduction”, and my impression of the philosophy is still that it consists of a half-dozen genuinely useful insights inflated to completely absurd dimensions.

Yes, to a surprisingly large extent the things we take for granted are social and linguistic constructions; yes, the ‘discourse’ of mutually connected and intersecting concepts we deploy throughout our lives can form a gravity well that obnubilates as much as it elucidates.

But the opening chapters of just about any book on General Semantics could tell you *that*. It does not follow from this that we should torpedo the whole enterprise of objectively seeking the truth.

Imagine it’s 1991, in the barbaric days before Google Maps when people had to navigate through the arcane methods of looking around at stuff. Wanting to do some hiking, you ask a friend where you can acquire a good map of the local trails.

She replies:

“Can you not see the fact that maps are just another means of encoding bourgeois power structures and keeping the lumpenproletariat shackled to the notion that there exists a world outside the text?! NOTHING is outside the text!! A geologist and a hydrologist would both draw *different* maps of the same territory!! WE MUST RISE ABOVE THE MAPS OF OUR MASTERS AND MARCH TOWARDS A TRANSFORMATIVE HERMENEUTICS OF TOPOLOGICAL REPRESENTATION!!!”

while chasing you down the street and hurling copies of “Of Grammatology” at your head.

A geologist and a hydrologist would indeed pay attention to different facets of the same reality. What the hydrologist calls a ‘hill’ the geologist could better describe as a ‘kuppe’, and the geologist may not even notice the three separate estuaries lying along the coast.

But is there anyone who seriously believes that there isn’t an actual landscape out there, and that there aren’t better and worse ways of mapping its contours?

The sad answer is yes. Postmodernists have spent most of a century trying to convince us all of exactly that.

Duty and the Individual

Because I’m an individualist libertarian who cares deeply about the single greatest engine of human progress in the history of Earth: Western European Civilization, and its greatest modern expression: the United States of America, I’ve spent a fair bit of time thinking about how individualism intersects with duty.

On my view, Ayn Rand was correct in pointing out that when people begin chattering about ‘the common good’ and ‘social responsibilities’ they’re usually trying to trick you into forging the instruments of your own destruction [1]. On the other hand, I have come to believe that there are several legitimate ways of thinking about a generalized ‘duty’ to civilization.

The first is to conceive of civilization as an unearned and un-earnable endowment. Like a vast fortune built by your forebears, Western Civilization provided the spiritual, philosophical, scientific, and technological framework which lifted untold billions out of poverty and put footprints on the moon. I am a son and heir of that tradition, and as such I have the same duty to it as I would to a $1 billion deposit into my bank account on my eighteenth birthday: to become worthy of it.

That means: to cherish it as the priceless inheritance it is, to work to understand it, to exult in it, to defend it, and to improve it.

These last two dovetail into the second way of thinking about a responsibility to civilization. Duties are anchors tying us to the things we value. If you say you value your child’s life but are unwilling to work to keep her alive, then you’re either lying to me or lying to yourself. If you say you value knowledge but can’t be bothered to crack open a book, then you’re either lying to me or lying to yourself.

Having been born in the majesty and splendor of Europa, and being honest enough to see what she is worth, it is my personal, individual duty to defend her against the onslaughts of postmodernism, leftism, islamofascism, and the gradual decline that comes when a steadily-increasing fraction of her heirs become spoiled children unable to begin to conceive of what would happen if her light should go out.

But individualism and the right of each individual person to their own life are cornerstones of the Western European endowment. The key, then, is not to surrender individualism to a jack-booted right-wing collectivism, but to understand how the best representatives of a civilization keep it alive in their words and deeds. A civilization is like a God whose power waxes and wanes in direct proportion to the devotion of its followers. But a devotion born of force and fraud is a paltry thing indeed.

Let us speak honestly and without contradiction about individual rights and duties, secure in the knowledge that the *only* way to maintain freedom is to know the price that must be paid to sustain its foundation, and to know the far greater price to be paid for its neglect.

***

[1] This is not to say that kindness, compassion, and basic decency are unimportant.