Maps Of Inner Worlds

Artist and filmmaker John Koenig is inventing a bunch of words to better capture various higher-order emotions. He calls it “The Dictionary of Obscure Sorrows”. Here, ‘sorrows’ doesn’t have quite the traditional meaning, instead denoting:

1.  an unspoken intensity of feeling.
2. a spark of transcendence that punctuates the flatlining banality of everyday life.
3. a healthy kind of ache—like the ache in your muscles after hard exercise—that reminds you that your body exists.

Koenig says that he has chosen to focus on emotions towards the negative, or at least bittersweet, end of the spectrum because positive ones tend to evaporate when we begin to inspect them.

I don’t know whether he knows it, but I’m pretty sure the fact that negative emotions are easier to grab and examine has a basis in neurophysiology. Don’t cite me here, because it’s been a long, long time since I’ve read this research, but if I recall correctly the nervous system has fairly sharp and distinguishable modes corresponding to negative emotions but only a generalized mode for the warm glow of positive emotions.

Why, in this subjective landscape, is happiness a relatively uniform river flowing amongst sharply-distinguished nations of misery and melancholy?

If I had to venture some armchair evolutionary psychology, I’d suggest that it’s because negative emotions are more important for survival. When you’re happy, things in life are probably going pretty well, and there just isn’t much need for tools to pick those feelings apart.

If you have reason to be sad, miserable, or afraid, however, then having a way to parse these emotions and find their source could be advantageous.

This is a beautiful little project, but I think it might also have actual research relevance, because all the natural languages I’m familiar with are fairly impoverished in the introspective frameworks they provide.

Rationality, reflectivity, and secular mysticism would be easier to teach if we had a shared language for certain kinds of complex internal experiences.

For example, here is a coined word for an emotion I previously had to try to describe circuitously:

gnossienne

n. a moment of awareness that someone you’ve known for years still has a private and mysterious inner life, and somewhere in the hallways of their personality is a door locked from the inside, a stairway leading to a wing of the house that you’ve never fully explored—an unfinished attic that will remain maddeningly unknowable to you, because ultimately neither of you has a map, or a master key, or any way of knowing exactly where you stand.

I have had this happen to me a handful of times throughout my life, and it has always been an experience so powerful it borders on the religious. I was never able to capture exactly what it felt like, but now that I have a word for it, I can try to cultivate it.

Okay then, what are some neologisms that might be useful to an aspiring rationalist?

How about a word for what happens when an important piece of information simply fails to make its way up to the level of your conscious awareness?

agnosis

n. A mental event during which something you should have considered simply fails to occur to you. Not like a thought you’re actively flinching away from, just a bubble that burst well below the surface.

A related failure occurs when you do manage to avoid agnosis but then miss some obvious corollary:

implicasia

n. Also known as implication blindness. Occurs when you consider a situation but fail to notice one or more of its implications, alternatives, or possible outcomes.

Or, do you ever hear a parent, sibling, teacher, or spouse in your head, even years after they’re no longer a part of your life? What should we call that?

soulshatter

n. A simulation of a significant person that you carry around with you. It can be a rich sub-personality that you regularly interact with or just a disembodied voice chiming in here and there with advice, admonishment, or commentary.

See also: Tulpa

Why would this matter? For the same reason that words always matter: like inventing a handle you can use to break off and carry around pieces of fog, words limn the contours of experiences, thoughts, concepts etc., giving shape to the nebulous and making otherwise hard-to-pin-down things easier to teach, aim towards, or avoid.

Processes Of Optimization

In the beginning was the Bang, and for ages thereafter the universe did nought but sample randomly from the same distribution in the form of star and galaxy formation. And though the stars burned bright in the void, they had but a small influence on the speed with which the universe searched possibility space.

For the birth of stars did make the birth of planets more likely, which did make life more likely. And thus did each act as a gatekeeper to new regions of possibility space.

And lo, with the first self-replicators came the possibility of new organisms being created when mistakes occurred in the replication process. Eons later sex allowed existing DNA to be combined into novel configurations, and thus could possibility space be explored more quickly.

For verily is evolution a stupid process and its recursion weak, and it doth wobble after a drunkard’s fashion through possibility space with no insight, foresight, or intelligence.

And then there were brains, and with them the ability to improve upon evolution’s work. For some brains are able to plan for future goals and to imagine counterfactual situations, abilities which evolution possesses not. 

But alas, nervous systems never evolved much introspective depth, and had but the tiniest ability to recursively self-improve.

And then a small set of brains invented Science, which could accumulate many, many more insights than any single brain could in the span of its life. It was an age of optimism and plenty, and there was much rejoicing and proliferation of telescopes and gene sequencing and iPhones throughout the land.

But even unto the present day Science has not learned enough to do anything more than weakly turn any optimization process back on itself.

And lo, from the cackling, structured madness of genetics, history, and culture did the universe cough up a series of sages, deep of insight, quick of thought, and usually possessed of tremendous social awkwardness.

After much study the sages warned that there might one day be a strong recursive process that could be a greater source of discontinuity than any that had come before it.

And though Einstein did proclaim compound interest to be the greatest among the forces of heaven and earth, this was only partly true. For surely it is strong recursion which holdeth the greatest promise and the deepest peril.

Thus should ye heed this dire proclamation: work swiftly and work thoroughly, before the AI goeth ‘FOOM’.

Whither Discontinuity?

I’m writing a series of posts clarifying my position on the intelligence explosion hypothesis and today I want to discuss discontinuity.

This partially addresses the ‘explosion’ part of ‘intelligence explosion’. Given that most developments in the history of the universe have not been discontinuous, what reason do we have to suspect that an AI takeoff might be?

Eliezer identifies the following five sources of discontinuity:

Cascades – Cascades occur when ‘one thing leads to another’. A (possibly untrue) example is the evolution of human intelligence.

It is conceivable that other-modeling abilities in higher primates became self-modeling abilities, which allowed the development of complex language, which allowed for the development of politics, which put selection pressure on the human ability to outwit opponents in competition for food and mates, which caused humans to ‘fall up the stairs’ and quickly become much smarter than the next smartest animal.

Cycles – Cycles are like cascades but the output hose is connected to the input hose. It’s possible for businesses or even individual people to capture enormous parts of a market by investing large fractions of their profits into infrastructure and research. Of course this isn’t the sort of extreme discontinuity we’re interested in, but it’s the same basic idea.

Insight – An insight is something like the theory of evolution by natural selection which, once you have it, dissolves lots of other mysteries which before might’ve looked only loosely connected. The resultant gain in knowledge can look like a discontinuity to someone on the outside who doesn’t have access to the insight.

Recursion – The turning of a process back on itself. An AI that manages to produce strong, sustained recursive self-improvement could rapidly become discontinuous with humans. (A toy contrast between cycles and recursion is sketched below.)

Magic – Magic is a term of art for any blank spaces in our maps. If something smarter than me turns its intelligence to the project of becoming smarter, then there should be results not accounted for in my analysis. I should expect to be surprised.

Any one of these can produce an apparent discontinuity, and they are especially potent when they occur together. A self-improving AI could produce novel insights, make use of cascades and cycles, and might be more strongly recursive than any other known process.
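To make the difference between cycles and recursion concrete, here’s a minimal toy sketch in Python. The growth rates and round counts are arbitrary numbers I’ve picked for illustration, not anything drawn from Eliezer’s taxonomy: a cycle compounds at a fixed rate, while recursion also makes the improver better at improving.

```python
# A toy contrast between a cycle and recursion. All numbers are
# arbitrary illustrative assumptions.

def cycle(capital=1.0, rate=0.05, rounds=50):
    """A cycle: output is fed back in as input, but the growth rate itself
    stays fixed (compound interest, a firm reinvesting its profits)."""
    for _ in range(rounds):
        capital += capital * rate
    return capital

def recursion(capability=1.0, rate=0.05, rounds=50):
    """Recursion: each round also improves the improver, so the growth
    rate itself creeps upward over time."""
    for _ in range(rounds):
        capability += capability * rate
        rate *= 1.05  # the process gets better at improving itself
    return capability

print(f"cycle after 50 rounds:     {cycle():,.1f}")
print(f"recursion after 50 rounds: {recursion():,.1f}")
```

Even with identical starting conditions, the recursive process ends up several orders of magnitude ahead of the merely cyclical one, which is the basic intuition behind worrying about strong recursion.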

Should We Care About Motives?

Imagine two people, one a known sociopath and the other an altruist, both of whom decide to build their own organic gardening communes.

They each have the same amount of money and a suitable plot of land, and both are able to complete their communes at around the same time. Moreover, let’s stipulate that in addition to the usual building inspections, special attention was paid to ensuring that the sociopath’s commune has no hidden torture chambers or anything sordid like that.

Neither person stands to make any significant financial gain from this endeavor; the sociopath, however, has built his commune because the plot of land was desired by a hated rival and for complicated legal reasons the only thing he could get a permit to build was a commune, while the altruist just wanted to provide a place for people who don’t own shoes to raise free-range carrots or something.

Should we care why these two people built their communes? Isn’t the fact that the world now has more quality fruits and vegetables than it did before all that matters?

In short: no, because motives are epistemically relevant.

It’s not like the altruist or the sociopath built their communes and then killed themselves. Both will go on to act in the future, and knowing why each of them did a certain thing, even when the outcome was the same in every way, will allow us to better predict their future behavior.

It just isn’t feasible to partition off a subset of your motives indefinitely. Ted Bundy might be able to build a handful of communes without torture chambers. But if he built a hundred, I’d lay long odds that you’d eventually find a torture chamber in one of them.

And, well, avoiding torture chambers seems like as good a reason as I can think of for trying to discover and track the underlying motives of other people.

Takeoff Speed II: Recalcitrance in AI Pathways to Superintelligence

I’m writing a series of posts clarifying my position on the intelligence explosion hypothesis. Last time I took a look at various non-AI pathways to Superintelligence and concluded that the recalcitrance profile for most of them was moderate to high.

This doesn’t mean it isn’t possible to reach Superintelligence via these routes, but it does indicate that doing so will probably be difficult even by the standards of people who think about building Superintelligences all day long.

AI-based pathways to Superintelligence might have lower recalcitrance than these alternatives, because of a variety of advantages a software mind could have over a biological one.

These advantages have been discussed at length elsewhere, but relevant to the present discussion is that software minds could have far greater introspective access to their own algorithms than humans do.

Of course programmers building such a mind might fear an intelligence explosion and endeavor to prevent this sort of deep introspection. But in principle an AI with such capabilities could become smart enough to start directly modifying and improving its own code.

Humans can only do a weak sort of introspection, and therefore can only do a weak sort of optimization of their thinking patterns. So far, anyway.

At a futurist party recently I was discussing these ideas with someone and they asked me what might happen if a recursively self-improving AI hit diminishing returns on each optimization. Might an intelligence explosion just sort of… fizzle out?

The answer is yes, that might happen. But so far as I can tell there isn’t any good reason to assume that it will, so the safest bet is to act as though the explosion won’t fizzle and to start thinking hard about how to steer this runaway process in a direction that leads to a valuable future.
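To make the ‘fizzle’ scenario concrete, here’s a back-of-the-envelope sketch in Python. The numbers are purely illustrative assumptions of mine: suppose each round of self-improvement yields some fraction r of the previous round’s gain. If r is below one the total improvement converges to a ceiling; if it’s at or above one the gains keep compounding.

```python
# A toy illustration of the "fizzle" question. The figures below are
# arbitrary assumptions chosen to make the contrast visible.

def total_gain(r, rounds=50, first_gain=10.0):
    """Sum the capability gains from successive rounds of self-improvement,
    where each round yields a fraction r of the previous round's gain."""
    gain, total = first_gain, 0.0
    for _ in range(rounds):
        total += gain
        gain *= r
    return total

print(f"r = 0.5: {total_gain(0.5):,.1f}")  # converges toward 20: the explosion fizzles
print(f"r = 1.1: {total_gain(1.1):,.1f}")  # keeps compounding: the explosion sustains itself
```

Which regime an actual self-improving AI would fall into is exactly the open question.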

Takeoff Speed I: Recalcitrance in Non-AI Pathways to Superintelligence

I’m writing a series of posts clarifying my position on the intelligence explosion hypothesis. Though I feel that the case for such an event is fairly compelling, it’s far less certain how fast the ‘takeoff’ will be, where ‘takeoff’ is defined as the time that elapses between having a roughly human-level intelligence and having a superintelligence.

Once we’ve invented a way for humans to become qualitatively smarter or made machines able to improve themselves, should we expect greater-than-human intelligence in a matter of minutes or hours (a ‘fast takeoff’), over a period of weeks, months, or years (a ‘moderate takeoff’), or over decades and centuries (a ‘slow takeoff’)? What sorts of risks might each scenario entail?

Nick Bostrom (2014) provides the following qualitative equation for thinking about the speed with which intelligence might explode:

Rate of Improvement = (optimization power) / (recalcitrance)

‘Recalcitrance’ here refers to how amenable a system might be to improvements, a value which varies enormously for different pathways to superintelligence.
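To see why recalcitrance does so much work in that equation, here’s a minimal toy model in Python. The functional forms, constants, and step size are my own illustrative assumptions, not anything Bostrom specifies: with constant recalcitrance the system’s improvements compound, while recalcitrance that rises with each gain flattens the curve.

```python
# A minimal toy model of the qualitative equation above:
#     rate of improvement = optimization power / recalcitrance
# The functional forms, constants, and step size are illustrative assumptions.

def simulate(recalcitrance, steps=100, dt=0.1):
    """Euler-integrate dI/dt = optimization_power(I) / recalcitrance(I)."""
    intelligence = 1.0
    for _ in range(steps):
        # Assume the system applies all of its current intelligence to improving itself.
        optimization_power = intelligence
        rate = optimization_power / recalcitrance(intelligence)
        intelligence += rate * dt
    return intelligence

# Constant recalcitrance: improvement compounds (a fast takeoff).
print(f"constant recalcitrance: {simulate(lambda i: 1.0):,.1f}")

# Recalcitrance that rises with capability: growth levels off (a slow takeoff).
print(f"rising recalcitrance:   {simulate(lambda i: i ** 2):,.1f}")
```

The rest of this post is essentially an argument about which recalcitrance curve each non-AI pathway plugs into that denominator.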

A non-exhaustive list of plausible means of creating a superintelligence includes programming a seed AI which begins an improvement cascade, upgrading humans with smart drugs or computer interfaces, emulating a brain in a computer and then improving it or speeding it up, and making human organizations vastly superior.

These can broadly be lumped into ‘non-AI-based’ and ‘AI-based’ pathways, each of which has a different recalcitrance profile.

In the case of improving the human brain through drugs, genetic enhancements, or computers, we can probably expect the initial recalcitrance to be low because each of these areas of research is inchoate and there is bound to be low-hanging fruit waiting to be discovered.

The current generation of nootropics is very crude, so a few years or a decade of concerted, well-funded research might yield classes of drugs able to boost the IQs of even healthy individuals by 20 or 30 points.

But while it may be theoretically possible to find additional improvements in this area, the brain is staggeringly complicated with many subtle differences between individuals, so in practice we are only likely to get so far in trying to enhance it through chemical means.

The same basically holds for upgrading the human brain via digital prosthetics. I don’t know of any reason that working memory can’t be upgraded with the equivalent of additional sticks of RAM, but designing components that the brain tolerates well, figuring out where to put them, and getting them where they need to go is a major undertaking.

Beyond this, the brain and its many parts interact with each other in complex and poorly-understood ways. Even if we had solved all the technical and biological problems, the human motivation system is something that’s only really understood intuitively, and it isn’t obvious that the original motivations would be preserved in a radically-upgraded brain.

Perhaps, then, we can sidestep some of these issues and digitally emulate a brain which we speed up a thousand times.

Though this pathway is very promising, no one is sure what would happen to a virtual brain running much faster than its analog counterpart was designed to. It could think circles around the brightest humans or plunge into demented lunacy. We simply don’t know.

Finally, there appears to be a very steep recalcitrance gradient in improving human organizations, assuming you can’t also modify the humans involved.

Though people have figured out ways of allowing humans to cooperate more effectively (and I assume the role the internet has played in improving the ability to coordinate on projects large and small is too obvious to need elaboration), it’s difficult to imagine what a large-scale general method for optimizing networks of humans would even look like.

None of the above should be taken to mean that research into whole brain emulation or human-computer interaction isn’t well worth doing. It is, but many people make the unwarranted assumption that the safest path to superintelligence is to start with a human brain, because at least then we’d have something with recognizably human motivations which, in turn, would also understand us.

But the difficulties adumbrated may make it more likely that some self-improving algorithm crosses the superintelligence finish line first, meaning our research effort should be focused on machine ethics.

Perhaps more troubling still, we can’t simply assume that brain upgrades, digital, chemical, or otherwise, can be managed precisely enough to ensure that the resulting superintelligence is benevolent or even sane.

Peripatesis: Controlling A God, Hannibal Leaves Italy

‘Peripatesis’ is a made-up word related to the word ‘peripatetic’, which is an adjective that means ‘roaming’ or ‘meandering’. I’ve always liked to think of knowledge as a huge structure through which a person could walk, sprint, dive, climb, or fly in as straightforward or peripatetic a fashion as they like.

Here are my recent wanderings and wonderings:

Bostrom, N., Superintelligence, pp. 127-144

In the sprawling chapter 9, Bostrom discusses and finds problems with several proposed means of controlling a superintelligence. These include boxing it, setting up tripwires, and building our preferences into its motivational system.

I plan on touching on these topics substantially in the future, so that’s all I’ll say about them for now.

Goldsworthy, A., The Fall of Carthage, pp. 234-244

Hannibal only received reinforcements on one occasion, in 215, but his brother Hasdrubal crossed the Alps via the same path Hannibal had taken in 207, and his brother Mago landed in Genoa in 205.

Unfortunately for Hannibal, neither brother managed to accomplish much before being killed (or, in Mago’s case, before dying en route to Carthage of a wound sustained in combat). In 203 Hannibal received orders to evacuate and come to the defense of Carthage, which was being menaced by Roman invaders in North Africa.