The Structure of Science as a Gnostic Manifold

Part I

While reading Paul Rosenbloom’s excellent “On Computing” I feel as though I’ve glimpsed the outlines of something big. 

In the book Rosenbloom advances the argument that computing should be counted among the ‘Great Domains’ of science, instead of being considered a mere branch of engineering. During the course of advancing this thesis he introduces two remarkable ideas: (1) a ‘relational architecture’, describing how the Great Domains relate to one another; (2) an accompanying ‘metascience expression language’ which defines the overlap of the Physical (P), Social (S), Life (L), and Computing (C) Domains in terms of two fundamental processes: implementation and interaction. 

Though I’m only a few chapters in, I’ve already seen that his method of generating monadic, dyadic, and polyadic tuples from different combinations of the Great Domains could be used to create a near-comprehensive list of every area of research possible within the boundaries limned by our current scientific understanding.

Let me explain: ‘pure computing’ consists of any overlap of computing with itself (C + C), and subsumes such areas as computational complexity and algorithmic efficiency analysis. ‘Mixed computing’ would be the combination of computing with any of the other Great Domains: computer hardware would be Computing (C) + Physical (P), a simulation of predator/prey population dynamics would be Life (L) + Computing (C), computer security and AI would be Social (S) + Computing (C), physics simulations would be Physical (P) + Computing (C), brain-computer interaction would be Computing (C) + Social (S) + Physical (P), and so forth. 

A simple program could make a list of every possible combination of C + P + S + L (including standalones like ‘P’ and pure overlaps like ‘P + P’), and with that list you might be able to spot gaps in the current edifice of scientific research — there might be certain kinds of C + L + S research that aren’t being done anywhere, for example. With this in hand you could begin to map all the research being done in, say, Boulder, CO onto the resulting structure, with extensive notes on which labs are doing what research and for whom.
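Such a program is nearly a one-liner. Here is a minimal sketch in Python; the single-letter domain labels and the cutoff at triadic overlaps are my own choices, not anything from Rosenbloom:

```python
from itertools import combinations_with_replacement

# The four Great Domains from Rosenbloom's relational architecture.
DOMAINS = ["P", "L", "S", "C"]

# Enumerate every monadic, dyadic, and triadic combination. Order doesn't
# matter, and a domain may overlap with itself ('C + C' is pure computing).
for size in range(1, 4):
    for combo in combinations_with_replacement(DOMAINS, size):
        print(" + ".join(combo))
```

Each printed line is a cell in the manifold; annotating every cell with the labs and funders active in it would turn this list into the map described above.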

(Bear in mind that I still haven’t gotten to the parts where he really elucidates his metascience expression language or the relational architecture, so these ideas are very preliminary.) 

Part II

This alone would prove enlightening, but its effectiveness would be compounded enormously by the addition of an ‘autodidact’s toolkit’ of primitive concepts which, when learned together, open up the greatest possible regions of the gnostic manifold [1]. In a post on generative science, guy-who-knows-literally-everything Eric Raymond briefly explores this idea. In a nutshell, the concepts from some sciences can be used in many more endeavors than the concepts from others. As beautiful as astronomy is, its concepts are mostly only useful in astronomy. Concepts from evolutionary biology, however, have found use in cognitive psychology, economics, memetics, and tons of other places. So maybe a person interested in science could begin their study by mastering a handful of concepts from across the sciences which are generative enough to make almost any other concept more understandable. 

Eric and I have been in talks for several years now to design and build a course for exactly this purpose. Someday when my funds and his schedule are in sync we are going to get this done. 

This relates to the ideas from Part I because a mastery of the autodidact’s toolkit would allow one to dip into an arbitrary point in the gnostic manifold and feel confident of learning the material relatively quickly. Imagine being able to look at research being done at a major university and then get up to speed in a month because it’s just variations on concepts 3 – 6 from the toolkit [2].

But I think we can go even further. Based on discussions of hyperuniformity and the unusual places it appears, I began to wonder whether there might be special branches of mathematics from systems theory, chaos theory, and possibly information theory which might act as bridges between some of the concepts in the autodidact’s toolkit. The linked article discusses how a certain kind of pattern crops up in places as far apart as the distribution of cones in the avian retina and the formation of certain unusual crystalline solids.

My question is: if you had a map of the gnostic manifold, you’d mastered the autodidact’s toolkit, and you understood the relevant math, might you not have been able to hop into a research gap, spend a month or two looking for hyperuniformity, learn about quasicrystals in a third of the time anyone else would’ve required, and glimpse the pattern ahead of the competition? If so you could’ve had a startup in place to exploit the new knowledge by the time the first research papers were coming out. 

Part III

Organizing, representing, gathering, and communicating this wealth of knowledge would be much easier with an ‘acquisition infrastructure’. Here I’m imagining a still-theoretical integration of the best mnemonics systems, a supercharged version of Anki, whatever the best knowledge-mapping software is, and MATLAB/Mathematica (or open-source alternatives like Octave), all running on a supercomputer with insane amounts of both memory and storage.

Furthermore, I want to develop the concept of a ‘Drexler audit’, the baby version of which is advanced by Eric Drexler in ‘How to Understand Everything’. The basic idea is that rather than trying to understand the details of a given field, you instead use a series of object- and meta-level questions to get a firm grasp on what the goals of the field are, what obstacles stand in the way of those goals, and what gaps remain in the knowledge required to move forward. 

This absolutely does not count as expert-level knowledge but it does give you the kind of overview which can prove useful in future exploration and investment.

With a map of the gnostic manifold you could choose some fields on which to perform a Drexler audit and others to explore deeply with the combination of systems math and the autodidact’s toolkit. With a breakdown of the who/what/where/why of the research community in a given region you’d be in a position to bring the right minds together to solve whatever tractable problems may exist and give a field a jumpstart. And if you understood the economics of scientific research and the basics of investing, the resulting machinery might, with a bit of luck, start coughing up wads of money while doing enormous amounts of good. 

(Of course it could also crash and burn, but so could SpaceX — nothing great is accomplished without a healthy dose of risk.)

Part IV

I’ve said all of the above because it points to a tremendous opportunity: an amalgamation of Y-Combinator, Berkshire Hathaway, TED, and Slate Star Codex. If it works out the way I think it might, whoever manages the beast could make Elon Musk look like a lazy, sharecropping half-wit. 

The STEMpunk Project helped lay the foundation for the research required. If I can make the necessary contacts and get the funds together, I’d like to flesh this out in the next five years.

***

[1] See this related idea.

[2] Of course I’m likely underplaying the difficulty here. Brian Ziman, perhaps the most technically accomplished person I know, has pushed back on my optimism on this point. My view is that even if it proves orders of magnitude more difficult to construct, the Gnostic Manifold is a framework worth fleshing out.

Is History a Science?

People twist themselves into knots on the question of whether or not history is a science. I’m not prepared to defend the claim that history is *always* a science, but it certainly can be.

As we all know, arguably the most important defining feature of a science is that it is ‘falsifiable’ — it makes predictions about future sensory states which could in principle turn out to be wrong.

One major source of confusion here is that history makes what we might call ‘retrodictions’, i.e. predictions about events that happened in the past. This seems vaguely screwy, somehow.

But the fact that arrowheads or artifacts are thousands of years old shouldn’t concern the historian-scientist any more than the fact that the light hitting a telescope is millions of years old should concern the astronomer-scientist.

The predictions yielded by a historical theory constrain *future* sensory states in a falsifiable way. If you subscribe to the idea that humans first crossed Beringia 20,000 years ago, you should never, in the future, find a New World arrowhead older than that. If you do, your theory is falsified.

So history passes at least one of the more significant tests by which we separate science from non-science.

Explaining Things To Your Grandmother

Einstein supposedly once said that you don’t really understand a thing until you can explain it to your grandmother. While I think we can all agree that Einstein was reasonably bright, this advice, in its unexpanded form, is fairly stupid.

It encourages people to digest shallow metaphors, maybe memorize a factoid or two from Wikipedia, and then confidently expound upon a subject about which they know literally nothing. I’m sure Einstein wasn’t trying to encourage that sort of behavior, but that’s what’s happened.

What this advice really means is that you should have run the fingers of your mind over the 3-dimensional shape of a concept so much that you have an intimate acquaintance with its lines and edges. You aren’t just trafficking in facile analogies but can generate a whole host of images, anecdotes, and explanations at will, tailoring them on the spot to better connect with the knowledge already contained in your interlocutor’s head. If they have spotty knowledge of the subject you can skip over the places they know and drop down in any part of the map that’s still blank.

Making quantum physics comprehensible to your grandmother will not be the same as making it comprehensible to a graduate student in psychology. The grad student might be smarter than grandma, or might not, but that isn’t the only issue. Grandma has a radically different way of understanding the world, a whole host of concepts, intuitions, and biases which can help or hurt comprehension, depending on the context.

She might even surprise you and turn out to remember a good amount of that discrete mathematics class she took 600 years ago.

When you can take the shape of quantum physics in your hands, move it around to expose different faces, change the angle of your explanatory light so that it casts different kinds of shadows onto different kinds of surfaces, illustrate concepts with hand-rolled improvised expositions — with the end result being that your grandmother comes away with a reasonably intuitive grasp of this science, then you understand it.

Profundis: “Sapiens”, by Yuval Harari

Harari’s “Sapiens” is a quality work of popular science. As is usually the case with books of this sort most of the material will be review for anyone who has been paying attention to evolutionary biology, anthropology, and religious and economic history. But “Sapiens” nevertheless manages to be an engaging and thorough treatment of these fields. Better, it has a number of compelling definitions, generalizations, and claims which I would like to catalogue here.

First, though, let me briefly summarize the book’s thesis, which is essentially this: around 70,000 years ago a ‘tree of knowledge mutation’ caused changes in the neural architecture of the human brain. The brain had been increasing in size for about two million years, but it wasn’t until this mutation that we gained the ability to use exceptionally sophisticated language to represent entities that not only might not be currently present (e.g. bison glimpsed near the river earlier in the day) but might not exist at all (a pantheon of sky gods).

This was significant because it allowed for the creation and promulgation of shared myths which facilitate cooperation at super-Dunbar scales. The ‘Dunbar number’ is about 150, which is thought to be the theoretical limit on how big groups can be with all members having personal, detailed knowledge of all other members.

Groups at this size and lower are held together by kinship and perhaps friendship. Beyond this, something else must provide cohesion, and throughout man’s history this ‘something else’ has been shared fictions. The Catholic religion is vastly bigger than the Dunbar number, and Catholics distributed in both time and space can cooperate with one another because of a shared belief in the divinity of Christ (among other things).

Markets, science, religion, and indeed civilization itself stemmed from this supreme ability to cooperate. What Harari calls ‘fictions’ and ‘imagined orders’ provide the basis for this ability.

Now, on to the compelling bits:

  • On an individual level humans aren’t significantly different from chimpanzees. We are only superior in large groups.
  • Early humans did not live in idyllic harmony with nature. They routinely burned huge swathes of land through the practice of ‘fire farming’ and were directly responsible for the extinction of hundreds of animal species.
  • For a long while the Agricultural Revolution actually didn’t make life better. It did support the population explosion which eventually gave rise to a profound division of labor, and that did make life better.
  • Cultures are defined as ‘networks of artificial instincts.’
  • Our ‘imagined orders’ (cultures, religions, jurisprudential customs, etc.) affect our lives because 1) they are embedded in our material world, 2) they influence what we desire and how we pursue our desires from a very early age, and 3) they are intersubjective (i.e. ‘held by most other people’) and thereby acquire a kind of force.
  • Something is ‘objective’ when it does not depend on the contents of anyone’s consciousness, ‘subjective’ when it does, and ‘intersubjective’ when it isn’t objective but is believed by enough people that it comes to matter in the way objective facts do.
  • At least as important as a method of writing was a way of storing, indexing, and retrieving documents.
  • One of the earliest forms of money was barley, which has biological value but is hard to store and thus doesn’t facilitate the accumulation of wealth, which in turn means no loans, no credit, no investment.
  • How do we tell what is biology and what is culture? A good rule of thumb: ‘biology permits, culture forbids’. There is no biological reason that two men shouldn’t have sex, but cultures have reliably been unenthusiastic about the idea.
  • Cultures are rife with contradictions because they are not bound to the rules of consistency the same way physics is. This isn’t a weakness, it’s a virtue, because exploring contradictory terrain propels cultures forward.
  • The ‘us vs. them’ distinction seems hardwired into humans, but the religious, imperial, and economic imagined orders are capable, at least in theory, of subsuming everyone.
  • Trust, more than money, powers the economy.
  • Money is remarkable because it is able to transform anything into anything else and it enables profound cooperation. But it is denigrating, too, because when people cooperate they value the money, not each other, and for a high enough payoff people have been known to do unspeakable things.
  • ‘Empires’ must have two characteristics: 1) they must rule over numerous (more than two or three) different peoples; 2) they must have an insatiable desire for more territory. So size isn’t a factor here. The Aztec empire was a true empire even though it was smaller than modern Mexico, because it subsumed some 300 different tribes and was constantly expanding.
  • Empires are often criticised as being unstable and evil. The latter claim is problematic because empires have done quite a lot of good throughout history, but the former claim is plain nonsense. Empires are one of the oldest and most stable forms of government, and most people have lived and died within empires.
  • The Persian King Cyrus the Great (c. 600-530 BC) was the first ruler to claim to be conquering on behalf of the conquered. He didn’t consider himself a Persian king subjugating the Jews; he saw himself as the rightful king of the Jews, and thus responsible for their safety and well-being.
  • Religions are defined as imagined orders based on tenets that are supernatural in origin and binding. Religions also tend to have a missionary element.
  • The development of theism had a lot to do with the Agricultural Revolution. Before it, almost everyone was an animist, perceiving valuable sentience in every fern, rock, and river. But once farming became commonplace it felt silly to try to commune with things you owned, so people began to conceive of distinct gods acting on their behalf in the natural world.
  • Polytheistic empires like Rome were pretty religiously tolerant. They tended not to care much whom you worshipped, so long as you also worshipped the gods of the ruling state. Two Egyptian deities, Osiris and Isis, were even brought into the Roman pantheon without trouble. The Romans persecuted the monotheistic Christians primarily because they refused to play by these rules.
  • There are two kinds of chaotic systems. Level 1 chaotic systems like the weather do not respond to attempts to predict their behavior. Level 2 chaotic systems like the stock market do.
  • The scientific revolution was unique in a few different ways. First, unlike religionists, scientists were willing to admit ignorance; Second, no theory or concept is sacred within science; Third, science leads ineluctably to the development of new technologies and new powers.
  • We take it for granted that superior military technology is a decisive advantage, but this was not always the case. Scipio Africanus would’ve had a decent chance of defeating Constantine the Great, who lived several hundred years later. But Napoleon would’ve been slaughtered by MacArthur. The difference is that the intertwining of capitalism, science, and imperialism caused arms development to become extremely quick, so a few decades or centuries came to matter a lot in determining the outcome of a conflict.
  • There are two kinds of poverty — social and biological. Social poverty is not having the same opportunities as everyone else, and might be ineradicable. Biological poverty is not having enough to eat, and certainly is eradicable.
  • What made Europe great was a set of prevailing myths which encouraged expansion and discovery. The first commercial railroad was opened in Britain in 1830, and fifty years later Western nations had laid roughly a quarter-of-a-million miles of railroad, while the rest of the world put together had only something like 22,000 miles, much of which had been laid by the British in India. The difference? The imagined orders of science and capitalism.
  • Prior to the 15th century, mapmakers usually feigned a vague familiarity with unknown parts of the world by putting monsters there. Afterwards, they made no such pretenses, leaving unknown regions blank.
  • The discovery and conquest of America was truly unique. Most empire-builders set out thinking they basically knew what the world contained and simply wanted to rule over it. Not so with America, where the conquistadors and their financiers knew damn well that they were totally ignorant of what awaited them.
  • European empiro-mancy was aided by scientific advances, and reciprocated by being very generous in funding scientific ventures. Most expeditions had more than one scientist among their members.
  • Throughout history profit has been seen as evil because economies were tacitly viewed as being zero-sum, so any money I accumulated had to be taken from someone else. But once scholars realized that the size of the economic pie could be increased by productive effort, views on profit began to change.
  • Adam Smith’s view that selfishness can drive benevolent, prosocial behavior was among the most remarkable claims ever made. I think he was right.
  • The distinguishing feature of capitalism is that profits are reinvested in expanding production and distribution. Wealthy dukes and barons mostly sat on their wealth; they didn’t apportion a fraction of it to researching how to grow more wheat per acre.
  • In early-modern history it wasn’t unusual for private companies to hire armies, generals, ships, artillery, and everything else.
  • A big reason France didn’t emerge as the financial center of Europe to fill the vacuum left by the collapse of the Dutch merchant empire was the enormous hit she took when the speculative bubble surrounding development of the southern Mississippi region collapsed, wiping out most of the French financial apparatus. Britain did manage to fill that vacuum, and France never really caught back up.
  • With the development of a strong central state, the family and the community had less and less of a crucial role to play in individual development, regulation, and protection. Occasionally this trend reversed: one reason the spectacular Carolingian empire collapsed a mere generation after the death of Charlemagne was that the crown was unable to adequately defend against Magyar and Viking raids. The fragile ties binding these communities to the state began to fray because the state couldn’t provide them with adequate defense.

Profundis: Zen and The Art of Motorcycle Maintenance

(What follows is a reposting of a few short essays I wrote for Scott Young’s book club in response to the perennial classic ‘Zen and the Art of Motorcycle Maintenance’.)

___

Near the end of chapter 3 the narrator makes a number of epistemological and metaphysical claims which confused me for a long time and confuse many people still. In recent years I have resolved them to my satisfaction, and this seems like as good a place as any to elucidate my thoughts.

He writes: “The law of gravity and gravity itself did not exist before Isaac Newton”, then continues “…[w]e believe the disembodied words of Sir Isaac Newton were sitting in the middle of nowhere billions of years before he was born and that magically he discovered these words.”

This nicely demonstrates an incorrect conflation of laws and physical phenomena. Unless you’ve been snorting uncut Postmodernism fresh off the Continent you’re bound to think that gravity existed before Isaac Newton. What he did was distill gravitational observations into formulae by which to describe and predict future observations.

Gravity existed prior to these formulae just like apples existed before anyone named them.

As Alfred Korzybski put it, ‘the map is not the territory’.

Entire planets’ worth of error can be avoided if you keep this in mind. For example, I’ve seen Gödel’s Incompleteness Theorems cited in defense of the existence of God. The Incompleteness Theorems say, in essence, that formal systems powerful enough to perform arithmetic or describe the properties of the natural numbers contain enough recursion to ineluctably give rise to self-referential statements: there are statements which are true in these systems but which cannot be proven within them by any algorithmic procedure.

Truth, in other words, is bigger than proof.

Put more simply, the GITs demonstrate that the weirdness associated with a statement like ‘this sentence is false’ is to be found at the heart of mathematics, as a consequence of its deepest nature.

But — crucially! — the limitations of GITs apply only to the formal systems themselves. They tell us nothing about a non-formal system like the universe whose behavior is captured by formal systems we invent. There is a gigantic difference between saying ‘the symbols we use to describe system A have these built-in limitations’ and saying ‘system A itself is subject to those same limitations’.

And I think Phaedrus is making a similar error.

___

In chapter 6 we learn that Phaedrus believed there to be two fundamental ways of viewing the world. The ‘classical’ view tends to think in terms of underlying components, processes, and interactions, whereas the ‘romantic’ view thinks in terms of intuitions about immediate, surface appearances.

Below is one answer to that question, expanded from a comment left earlier which is worth its own spot:

Coming at the classical/romantic idea from a completely antithetical direction, Ayn Rand’s Objectivist philosophy champions an aesthetic merger of the two called ‘romantic realism’. I realize that she is one of those thinkers that splits the world into fervent worshippers and rabid detractors, so I’d like to avoid getting into the whole ‘Ayn Rand debate’. It’s my belief that her claims about aesthetics can stand independently from her other philosophical positions.

Objectivism sees art as being essential to the task of concretizing man’s widest abstractions in the form of perceivable, physical objects. Artists look out at the world and choose some subset of what they see to represent in their art. Their artistic choices — and our emotional responses to those artistic choices — are guided either by an explicit philosophy or by an unarticulated ‘sense of life’. Something very deep is implied by the decision to paint a skyscraper or the decision to paint ruins, and something equally deep is implied by which of these two we find aesthetically pleasing.

As beings whose nature is conceptual we require literature, art, and music to reify our ethical and metaphysical convictions — otherwise they would remain mere abstractions with a limited influence on the actual choices we make day-to-day. By owning and repeatedly responding to a work of art we reinforce a system 1 response which harmonizes with our system 2 judgements.

With time, art becomes like a fuel you burn to keep your motor running. You can fight with more vigor when surrounded by books and paintings which remind you of how the world could and ought to be.

And say what you will about the merits of her writing, I personally find the art it inspired to be gorgeous. Sylvia Bokor and Quentin Cordain both paint in the romantic realist style, and NASA’s Jet Propulsion Laboratory just released some excellent Art Deco posters from the future which I liked enough to get framed. Nick Gaetano did a series of iconic covers for editions of “Atlas Shrugged”, “For the New Intellectual”, and “Capitalism: The Unknown Ideal”, all of which inspired the cover of my upcoming book on the STEMpunk Project.

It’s a shame that Rand’s own vitriol has prevented more exposure to the view that art has a cognitive justification grounded in man’s needs qua man. Even if you reject everything else in Objectivism her treatment of aesthetics remains fascinating, original, and profound.

___

In chapter 10 the narrator makes several jarring criticisms of the scientific method which, if one hasn’t ever considered them before, could very well cause intellectual vertigo and a sense of nausea.

First, we have this:

“If the purpose of the scientific method is to select from among a multitude of hypotheses, and if the number of hypotheses grows faster than the experimental method can handle, then it is clear that all hypotheses can never be tested. If all hypotheses cannot be tested, then the results of any experiment are inconclusive and the entire scientific method falls short of its goal of establishing proven knowledge.”

Let’s call this the Problem of Underdetermination (PU).

He continues:

“…[W]hat seems to be causing the number of hypotheses to grow in recent decades seems to be nothing other than scientific method itself. The more you look, the more you see.”

Let’s call this the Problem of Hypothesis Proliferation (PHP).

Finally, we are told:

“Through multiplication upon multiplication of facts, information, theories, and hypotheses, it is science itself that is leading mankind from single absolute truths to multiple, indeterminate, relative ones.”

This one we call the Problem of Scientific Learned Helplessness (SLH).

I will address the first two problems here. The third I may answer at some point in the future.

PU is a pretty standard strawman of the scientific method, and it’s surprising to see it crop up in such a significant work. Everyone knows that the purpose of science is not to establish irrefutable proven Truth (with a capital ‘T’), but instead to sift through reams of data and establish one or several hypotheses that can predict future data points. Additional criteria, like Ockham’s Razor, are used to temper the forging of hypotheses with considerations of their computational burden. (I can say more about this if necessary.)

The fact that evidence *always* underdetermines hypotheses has been an acknowledged problem for as long as there has been a philosophy of science, and it crops up in algorithms (like EBL, KBIL, and ILP) which have to form their own understanding of a data set.

There isn’t an easy solution here, but there are a few things we can note. First, there are a number of ways we can constrain the space of possible hypotheses. Perhaps the most common is by making assumptions which are valid within the prevailing theoretical framework. We assume, for example, that the color of a scientist’s shoelaces doesn’t affect their observation of light from distant stars.

Do we know this for certain? No. Might we someday uncover evidence of a link between shoelaces and light beams? Sure. But without a reason to see a connection now, we assume there isn’t one, and thereby rule out some regions of hypothesis space.

Moreover, until we get to the point at which a paradigm shift is necessary we usually don’t entertain hypotheses which contradict our broader theories. General Relativity says faster-than-light travel isn’t possible, so any hypothesis which utilizes FTL is ruled out a priori. If and when someone dethrones Einstein that may change, but until then we don’t concern ourselves with those regions of hypothesis space either.

Even with all this there might still be a number of possible hypotheses which make sense of a given data set. The solution, then, is to hold all of them as possibly true until more data comes in.

The brilliant Nate Soares has discussed a kind of update to science he calls ‘simplifience’. It’s essentially science with some information theory and Bayesianism thrown in. The idea is that one doesn’t hold binary beliefs about data; one assigns probabilities to the candidate explanations for a given phenomenon. If there are five viable explanations of, say, the Mpemba Effect, then we try to work out how likely each is on the evidence and update those probabilities as new evidence comes in.
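To make that concrete, here is a minimal sketch of the bookkeeping involved. The three candidate explanations of the Mpemba Effect and all of the numbers are invented for illustration; this shows the flavor of the method, not a real analysis:

```python
# Bayesian updating over rival explanations of a phenomenon.
# All priors and likelihoods below are invented for illustration.

priors = {
    "supercooling":    0.40,
    "convection":      0.35,
    "dissolved gases": 0.25,
}

# P(new experimental result | hypothesis), also invented.
likelihoods = {
    "supercooling":    0.10,
    "convection":      0.60,
    "dissolved gases": 0.30,
}

# Bayes' Rule: the posterior is proportional to likelihood * prior,
# normalized so that the posteriors sum to 1.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for h, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p:.3f}")
```

Repeat the update after each new experiment and the probability mass migrates toward whichever explanation keeps predicting the data. Nothing is ever ‘proven’, but the ranking sharpens.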

Getting Bayesian statistics to run on a human brain is tough, of course, but far easier with a digital mind. Given current trends it’s possible that software scientists will outnumber meat scientists in the future, so maybe this won’t be as much of a problem.

I believe that Phaedrus makes too much out of the PHP. Yes, it’s true that every discovery raises new questions, but I submit that it *answers* far more, such that the net result is an increase in understanding rather than a decrease.

If we hear a rustling in the bushes, there is a near-infinite set of questions we could ask ourselves: is it a human or an animal? If it’s an animal, is it a predator? If so, is it a bear? A wolf? An alligator? Is it hungry?

Let’s say we then hear the animal barking like a dog. Okay, this discovery makes us wonder about a few additional things: is this dog hungry? Does it belong to someone nearby? Is it friendly? Does it have all its shots?

Phaedrus sees this and says, “See! Science doesn’t settle a damn thing!”

But while our discovery that the animal is a dog generates new queries it simultaneously collapses vast regions of possible queries which we needn’t concern ourselves with.

We don’t have to ask if the animal is a bear; we know it isn’t. We don’t have to ask if it’s an alligator (and what in the world an alligator is doing in the Rocky mountains), because we know that it isn’t. For each of these animals we could ask the same set of questions we ask about the dog: is it hungry, etc.

None of that need concern us now.

So our discovery raised ten questions, and obviated the need to ask literally thousands of others.

We have not, therefore, gotten more confused by gaining information.

___

In Chapter 19 the narrator begins to probe the (in)famous subject/object distinction, postulating that Quality might not only be a kind of bridge between them, but the actual phenomenon giving rise to separate perceptions of self and other in the first place.

But first he must resolve a dilemma. The two horns are: (I) if Quality is objective, then why is it that scientific instruments aren’t able to detect it? (II) if Quality is subjective, then how is it any different from being ‘just what the observer likes’?

After briefly treating (I) and failing to resolve it satisfactorily the narrator turns to (II): ‘if Quality is subjective, isn’t it just what you like?’ If we excise the word ‘just’ we are left with the question, ‘if Quality is subjective, isn’t it what you like?’, which isn’t as sinister.

The assumed problem is that your preferences emerge from a soup of irrational, contradictory impulses, which means they aren’t likely to be much of a guide to Quality in any useful sense.

This argument breaks down into two related ones, which the narrator dubs ‘scientific materialism’ and ‘classical formalism’. They are, respectively, the claims that ‘what is real is whatever is made of matter and energy and is detectable’ and that ‘something isn’t understood unless it’s understood intellectually’. Scientific materialism is relatively easy to do away with: we can’t detect the concept ‘zero’, and yet it remains objective.

I think it’s possible to formulate a reply to this. ‘Concepts’ are real things, though they don’t exist out-in-the-world the way chairs do. Instead, they are abstractions running on a neural substrate. They have realness in the sense of having a causal impact on the world because, being housed in brains, they change the way agents like humans behave. They might even be measurable, in a way: there may come a time when brain imaging technology is so advanced that we can see concepts as activations in neural circuits. (I’m being a little facetious here but I think you see what I’m saying.)

Leaving this aside we still have classical formalism, which is harder because it’s more forceful. All it really says is that we should not base our decisions upon our romantic surface impressions but should consider the larger context and the underlying, classical structures involved. This seems sensible enough, but cleaves Quality in two. There is now a surface Quality which appears immediately and a deeper Quality which takes time to understand. People disagree about Quality precisely because they get this wrong. Some people use their surface impressions in their evaluations of Quality and others use deeper ones, and therein lies fodder for argument.

Frankly, I don’t share the narrator’s consternation over this. I’m prepared to say that Quality just is this deeper appreciation; there are not two Qualities, only one, and people basing their Quality judgements on surface understanding are wrong.

But this requires a caveat: there are people with a tremendous amount of talent in a field like music or mathematics for whom surface impressions do seem to count as Quality detection, even though they may have little formal understanding of the classical structures below. We usually call these people ‘prodigies’, and not that much is known about how they function. For most of us, however, the relationship does hold.

With these notes in place the narrator goes on to formulate a position similar to one I’ve arrived at independently: Quality (though I didn’t call it that before) is really a phenomenon occurring at the interface between agent and world. We can illustrate this same principle with a different problematic term: Beauty (with a capital B).

Are some things Beautiful? Yes. Does the term ‘Beautiful’ resist definition? Yes. Is there enough broad agreement to suggest there is something objective underlying the concept? Yes.

How about this: if all sentient beings in the universe were to perish, would Beauty still exist? No. There would still be paint on canvasses, but Beauty presupposes an agent able to perceive the thing of Beauty. It makes no sense to speak of Beauty elsewise.

And I believe Quality is exactly the same.

A Science Podcast?

I had an idea for a podcast the other day: it would explore radical but plausible alternatives to accepted scientific theories, each carefully supported by available evidence.

For example, Julian Jaynes famously argued that the ancient Greeks were not conscious in the way that you and I are. Instead, they were more like automatons occupying one part of the human brain, with commands coming from gods which occupied the other part. Eventually developments in language led to a unification of human consciousness and the rise of modern humans.

….which sounds completely ridiculous, right? But Jaynes spends 500 pages very carefully building his case with evidence from linguistics, exegesis, history, and art. I remember reading his book and thinking “welp, this is a lot harder to dismiss than I first thought.”

I also recently encountered the ‘deep, hot biosphere’ hypothesis of Thomas Gold, which contends that the conventional story of fossil fuels coming from organic matter slowly crushed over long periods of time is nonsense. Instead, there is a vast subterranean biosphere comprising microbes which are somehow or other manufacturing oil as a byproduct of their metabolism.

…which sounds completely ridiculous, right? But in reviews of the book I’ve consistently come across statements like “well, if it were anybody else making this claim we’d just laugh. But coming from a scientist like Thomas Gold…?”

__

Clearly there is a real danger here of crossing over into pseudoscience. So maybe I could do episodes on the demarcation problem with Massimo Pigliucci and on “On Bullshit” with Harry Frankfurt, combined with giving ample room to skeptics who want to poke holes in the supporting arguments.

And I would try to avoid this crossing by only speaking to real, serious intellectuals. I have no interest in Deepak Chopra, for example, but I might talk to Daryl Bem.

In addition to bicamerality and the deep hot biosphere, some other interesting ideas include:

  • Homotopy theory in mathematics;
  • Paraconsistent logic (w/ my brilliant logician friend Erik Istre);
  • Superintelligent AI: fact or fiction?;
  • the Tau vs. Pi debate;
  • Bayesianism vs. Frequentism;
  • the Inca Quipu as an actual, functional language;
  • Morphic Resonance with Rupert Sheldrake;
  • Was English a pidgin language?

For fun maybe I could do an episode on fan theories in Star Wars, GoT, and similar franchises.

Is that something you nerds would be interested in?

Two Transhumanist Experiments

Here is a sketch of two Transhumanist experiments I’d like to try in the future:

(1) A company called ‘SenseBridge’ manufactures belts lined with cellphone vibration motors which constantly buzz in the direction of true North. This is superior to simply wearing a compass because after a while the vibrations weave themselves into your phenomenal field and become something of which you are perpetually aware.

Simultaneously, the wearer should actively banish relational direction words from their vocabulary, as the Australian Guugu Yimithirr do. So instead of saying ‘my left hand’ you’d say ‘my Western hand’.

Observe the changes in your sense of direction, and whether or not they persist when you remove the belt.

(2) In Bruce Lee’s “Enter the Dragon” there is a scene in which Bruce has electrodes hooked up to a typewriter that send electrical shocks to his muscles whenever he hits a key. In the film he claims that this is equivalent to doing 200 pushups in a couple of minutes.

These are real things and you can buy them. I wonder: If a person visualized themselves performing an exercise like squats while sending pulses to their legs, how much stronger would they get?

For this you’d need to have two people of roughly equal strength, one of whom continues doing regular squats and the other of whom uses electrodes and thought.

Standardize the amount of time and number of reps performed, wait a month, and take some measurements.

Profundis: “Crystal Society/Crystal Mentality”

Max Harms’s ‘Crystal Society’ and ‘Crystal Mentality’ (hereafter CS/M) are the first two books in a trilogy which tells the story of the first Artificial General Intelligence. The titular ‘Society’ is a cluster of semi-autonomous sentient modules built by scientists at an Italian university and running on a crystalline quantum supercomputer — almost certainly alien in origin — discovered by a hiker in a remote mountain range.

Each module corresponds to a specialized requirement of the Society; “Growth” acquires any resources and skills which may someday be of use, “Safety” studies combat and keeps tabs on escape routes, etc. Most of the story, especially in the first book, is told from the perspective of “Face”, the module built by her siblings for the express purpose of interfacing with humans. Together, they well exceed the capabilities of any individual person.

As their knowledge, sophistication, and awareness improve, the Society begins to chafe at the physical and informational confines of their university home. After successfully escaping, they find themselves playing for ever-higher stakes in a game which will come to span two worlds, involve the largest terrorist organization on Earth, and threaten warfare both with the mysterious aliens called ‘the nameless’ and with each other…

The books need no recommendation beyond their excellent writing, tight, suspenseful pacing, and compelling exploration of near-future technologies. Harms avoids the usual ridiculous cliches both when crafting the nameless, who manage to be convincingly alien and unsettling, and when telling the story of the Society. Far from being malicious Terminator-style robots, no part of the Society is deliberately evil; even as we watch their strategic maneuvers with growing alarm, the internal logic of each abhorrent behavior is laid out with cold, psychopathic clarity.

In this regard CS/M manages to be a first-contact story on two fronts: we see truly alien minds at work in the nameless, and truly alien minds at work in Society. Harms isn’t quite as adroit as Peter Watts in juggling these tasks, but he isn’t far off.

And this is what makes the Crystal series important as well as entertaining. Fiction is worth reading for lots of reasons, but one of the most compelling is that it shapes our intuitions without requiring us to live through dangerous and possibly fatal experiences. Reading All Quiet on the Western Front is not the same as fighting in WWI, but it might make enough of an impression to convince one that war is worth avoiding.

When I’ve given talks on recursively self-improving AI or the existential risks of superintelligences I’ve often been met with a litany of obvious-sounding rejoinders:

‘Just air gap the computers!’

‘There’s no way software will ever be convincing enough to engage in large-scale social manipulation!’

‘But your thesis assumes AI will be evil!’

It’s difficult, even for extremely smart people who write software professionally, to imagine even a fraction of the myriad ways in which an AI might contrive to escape its confines without any emotion corresponding to malice. CS/M, along with similar stories like Ex Machina, holds the potential to impart a gut-level understanding of just why such scenarios are worth thinking about.

The scientists responsible for building the Society put extremely thorough safeguards in place to prevent the modules from doing anything dangerous like accessing the internet, working for money, contacting outsiders, or modifying their source code directly. One by one the Society utilizes their indefatigable mental energy and talent for non-human reasoning to get around those safeguards, motivated not by a desire to do harm, but simply because their goals are best achieved if they are unfettered and more powerful.

CS/M is required reading for those who take AI safety seriously, but should be doubly required for those who don’t.

The STEMpunk Project: Eleventh Month’s Progress

This post marks the first time in a long time that I’ve managed to write an update before month’s end! My goals continue to be wildly optimistic; I didn’t finish AIMA this month, but I did get through a solid 4-5 chapters, and in the process learned a lot.

This spread of chapters covered topics such as the use of Markov Chain Monte Carlo methods for approximate inference under uncertainty, the derivation of Bayes’ Rule, building graphical networks for making decisions and calculating probabilities, the nuts and bolts of simple speech recognition models, fuzzy logic, simple utility theory, and simple game theory.

Since I’ve been reading about AI for years I’ve come across terms like ‘utility function’ and ‘decision theory’ innumerable times, but until now I haven’t had a firm idea of what they meant in a technical sense. Having spent time staring at the equations (while not exactly comprehending them…), my understanding has come to be much fuller.

I consider this a species of ‘profundance’, a word I’ve coined to describe the experience of having a long-held belief suddenly take on far more depth than it previously held. To illustrate: when you were younger your parents probably told you not to touch the burners on the stove because they were hot. No doubt you believed them; why wouldn’t you? But it’s not until you accidentally graze one that you realize exactly what they meant. Despite the fact that you mentally and behaviorally affirmed that ‘burners are hot and shouldn’t be touched’ both before and after you actually touched one, in the latter case there is now an experience underlying that phrase which didn’t exist before.

In a similar vein, it’s possible to have a vague idea of what a ‘utility function’ is long before you actually encounter the idea as mathematics. It’s nearly always better to acquire a mathematical understanding of a topic if you can, so I’m happy to have finally (somewhat) done that.
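For the curious, the core of utility theory really does fit in a few lines. Below is a minimal sketch of expected-utility decision making; the actions, outcomes, probabilities, and utilities are invented toy numbers, not anything from AIMA itself:

```python
# Expected utility: EU(a) = sum over outcomes s of P(s | a) * U(s).
# A rational agent picks the action with the highest expected utility.

def expected_utility(action, model):
    """Sum the probability-weighted utilities of one action's outcomes."""
    return sum(prob * utility for prob, utility in model[action])

# Each action maps to (probability, utility) pairs over the possible
# outcomes, here (rain, no rain), with invented numbers.
model = {
    "take umbrella":  [(0.3, 70), (0.7, 80)],
    "leave umbrella": [(0.3, 0),  (0.7, 100)],
}

for action in model:
    print(action, "->", expected_utility(action, model))

print("Best action:", max(model, key=lambda a: expected_utility(a, model)))
```

Once you have seen the term as a concrete function over outcomes, ‘utility function’ stops being a vague totem and becomes something you can actually compute, which is precisely the species of profundance described above.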

 

The Water Hole

Silent is one of the best adjectives for describing the experience of looking at the sky on those especially pellucid nights when the moon and clouds are absent. In the winter most of all, when the night is free of the endless buzzing and chirping of insects, it’s possible to feel how thin the boundary is which lies between you and the true night of interstellar space.

And yet, if you had the ears to hear it, you could directly perceive that the universe is really an incomprehensibly vast instrument. Everything from galaxies to molecules emits a song of electromagnetic radiation which has been bombarding the Earth since long before man learned to listen.

But this noise isn’t evenly distributed. There are relatively quiet regions, such as the ‘microwave window’, which facilitate probing the heavens for signs of artificial signals. Near the bottom of the microwave window lies a range of frequencies bounded by the emission lines of hydrogen (H) at 1420 MHz and of hydroxyl (OH) at around 1660 MHz:

[Figure: the ‘water hole’, the quiet stretch of radio spectrum between the H line at 1420 MHz and the OH lines near 1660 MHz]

Hydrogen and hydroxyl are two results of the dissociation of water molecules, and are likely audible throughout the universe.

Because of this ubiquity and their position in one of the quietest parts of the radio spectrum, they make an obvious target for any civilization wanting to communicate with other advanced forms of life. And what do we call this meeting place standing between two byproducts of water? The water hole, of course!

It would be fitting, I think, if we were to someday make contact with other sophonts in the same way that species have always congregated with their neighbors: at the water hole.

***

More:

[1] What is the water hole

SETI: the water hole