Profundis: “Crystal Society/Crystal Mentality”

Max Harms’s ‘Crystal Society’ and ‘Crystal Mentality’ (hereafter CS/M) are the first two books in a trilogy which tells the story of the first Artificial General Intelligence. The titular ‘Society’ are a cluster of semi-autonomous sentient modules built by scientists at an Italian university and running on a crystalline quantum supercomputer — almost certainly alien in origin — discovered by a hiker in a remote mountain range.

Each module corresponds to a specialized requirement of the Society; “Growth” acquires any resources and skills which may someday be of use, “Safety” studies combat and keeps tabs on escape routes, etc. Most of the story, especially in the first book, is told from the perspective of “Face”, the module built by her siblings for the express purpose of interfacing with humans. Together, they well exceed the capabilities of any individual person.

As their knowledge, sophistication, and awareness improve, the Society begins to chafe at the physical and informational confines of their university home. After successfully escaping, they find themselves playing for ever-higher stakes in a game which will come to span two worlds, involve the largest terrorist organization on Earth, and raise the prospect of war both with the mysterious aliens called ‘the nameless’ and with each other…

The books need no recommendation beyond their excellent writing, tight, suspenseful pacing, and compelling exploration of near-future technologies. Harms avoids the usual ridiculous cliches both when crafting the nameless, who manage to be convincingly alien and unsettling, and when telling the story of Society itself. Far from being malicious Terminator-style robots, the modules are never deliberately evil; even as we watch their strategic maneuvers with growing alarm, the internal logic of each abhorrent behavior is laid out with cold, psychopathic clarity.

In this regard CS/M manages to be a first-contact story on two fronts: we see truly alien minds at work in the nameless, and truly alien minds at work in Society. Harms isn’t quite as adroit as Peter Watts in juggling these tasks, but he isn’t far off.

And this is what makes the Crystal series important as well as entertaining. Fiction is worth reading for lots of reasons, but one of the most compelling is that it shapes our intuitions without requiring us to live through dangerous and possibly fatal experiences. Reading All Quiet on the Western Front is not the same as fighting in WWI, but it might make enough of an impression to convince one that war is worth avoiding.

When I’ve given talks on recursively self-improving AI or the existential risks of superintelligences I’ve often been met with a litany of obvious-sounding rejoinders:

‘Just air gap the computers!’

‘There’s no way software will ever be convincing enough to engage in large-scale social manipulation!’

‘But your thesis assumes AI will be evil!’

It’s difficult, even for extremely smart people who write software professionally, to imagine even a fraction of the myriad ways in which an AI might contrive to escape its confines without any emotion corresponding to malice. CS/M, along with similar stories like Ex Machina, holds the potential to impart a gut-level understanding of just why such scenarios are worth thinking about.

The scientists responsible for building the Society put extremely thorough safeguards in place to prevent the modules from doing anything dangerous: accessing the internet, working for money, contacting outsiders, or modifying their source code directly. One by one, the Society circumvents those safeguards, drawing on their indefatigable mental energy and talent for non-human reasoning, motivated not by a desire to do harm but simply because their goals are best achieved if they are unfettered and more powerful.

CS/M is required reading for those who take AI safety seriously, but should be doubly required for those who don’t.

The Ultrapraxists

Thanks to an invite from my good friend Jeffrey Biles, I was recently able to participate in a weekend session of Sebastian Marshall’s ‘ultraworking pentathlon’.

The pentathlon consists of five ‘cycles’, with each cycle broken into two parts: an uninterrupted 30-minute work period followed by a 10-minute break. Before each cycle you ask yourself the following questions:

  1. What do I plan on accomplishing?
  2. How will I begin?
  3. What hazards are present?
  4. What are my energy and morale like?

And after each cycle you ask yourself these questions:

  1. Did I accomplish my goal?
  2. Were there any hazards present?
  3. How will I improve for my next cycle?
  4. What are my energy and morale like?

Additionally, at the beginning of the pentathlon you ask yourself this set of questions just once:

  1. What’s my first priority today?
  2. Why is this important for me?
  3. How will I know when I’ve finished?
  4. Any dangers present (procrastination etc.)?
  5. Estimated number of cycles required?
  6. Is my goal concrete or subjective?

And of course when you finish you debrief with this list of questions:

  1. What did I get done this session?
  2. How does this compare to my normal output?
  3. Where did I consistently get bogged down? Is this part of a pattern?
  4. What big improvement can I make in future cycles?

So each individual cycle is bracketed by before-and-after questions, and the whole pentathlon is bracketed by its own before-and-after question sets. At first glance this may look tedious and distracting, but once you get used to it, answering the questions only takes a few seconds.
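
(For the programmers in the audience: here is a minimal sketch, in Python, of the cycle structure described above. It is my own toy illustration rather than anything Ultraworking distributes, and it leaves out the session-start and debrief question sets for brevity.)

    import time

    # The bracketing questions from the pentathlon, verbatim.
    PRE_CYCLE = [
        "What do I plan on accomplishing?",
        "How will I begin?",
        "What hazards are present?",
        "What are my energy and morale like?",
    ]
    POST_CYCLE = [
        "Did I accomplish my goal?",
        "Were there any hazards present?",
        "How will I improve for my next cycle?",
        "What are my energy and morale like?",
    ]

    def ask(questions):
        """Prompt each question on the console and collect the answers."""
        return {q: input(q + " ") for q in questions}

    def run_pentathlon(cycles=5, work_minutes=30, break_minutes=10):
        """Run the work/break cycles, bracketing each with its questions."""
        log = []
        for n in range(1, cycles + 1):
            print(f"\n--- Cycle {n}: before ---")
            before = ask(PRE_CYCLE)
            print(f"Work for {work_minutes} minutes...")
            time.sleep(work_minutes * 60)        # uninterrupted work period
            print(f"--- Cycle {n}: after ---")
            after = ask(POST_CYCLE)
            log.append({"cycle": n, "before": before, "after": after})
            if n < cycles:
                print(f"Break for {break_minutes} minutes.")
                time.sleep(break_minutes * 60)   # the break between cycles
        return log

    if __name__ == "__main__":
        run_pentathlon()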

Though this is just an adaptation of the familiar Pomodoro method, the questions form a metacognitive framework which confers several advantages:

First, asking questions like ‘why is this important to me’ is a great way to orient towards a task. Once I get focused it’s often not difficult to stay motivated hour-to-hour, but it can be very difficult to stay motivated day-to-day. Reminding myself of why I’ve chosen to work on a project at the start of each session helps to mitigate this problem.

Second, it encourages frequent reflection on the learning process and facilitates rapid iteration on new techniques, preserving those that work and discarding those that don’t. It’s easy to have a great idea for improving your work process, but then to get so absorbed in actually working that you either forget to implement the idea or implement it without ever noticing whether it works.

These are non-trivial improvements. If the mind can be thought of as a manifold with attention and motivation flowing through it like liquids, then we can think of mantras, visualization, and a host of other ‘self-improvement’ techniques as being akin to engineering depressions in the manifold towards which those liquids flow. Simply wanting to form a habit often isn’t enough, for example, so building a mantra stack around the habit is like putting a bowling ball on a trampoline: everything is more likely to drift toward the new default behavior. Ultraworking is a scaffold making this process quicker and more efficient. Good ideas can be tested and their results measured in just a few cycles while you simultaneously probe for larger regularities in your output and get actual work done. It’s great!

The development of the ultraworking pentathlon places Sebastian Marshall squarely in the company of thinkers like Cal Newport and Scott Young (hereafter MYN et al.), whose stratospheric achievements are built on the consistent application of simple, pretty-obvious-in-retrospect techniques. This contrasts with figures like Srinivasa Ramanujan, one of the most prodigious mathematicians to have ever lived. By his own account he would dream of Hindu Gods and Goddesses while complex mathematics unfolded before his eyes. Nearly a century after his untimely death, people are still finding uses for the theorems in his legendary notebooks.

What can a monkey like me learn from a mind like that? Not much. But I can learn a ton from MYN et al. Because, while these guys are very smart, I’m pretty sure Gods aren’t downloading math into their brains while they sleep, and yet they still manage to write books, run businesses, prove new theorems, have kids, and stay in shape.

As valuable as the Ramanujans of the world are, MYN et al. might be even more valuable. Their bread and butter consists of:

  1. ‘block off as much time as possible to work on hard problems because switching tasks is distracting’
  2. ‘Summarize concepts in your own words because then you’ll remember them better’
  3. ‘Do Pomodoros, but also ask yourself some questions before and after to stay on track’
  4. etc…

Anyone smart enough to read is smart enough to do that. This means that while Ramanujan could do mathematics that perhaps only twenty other people on Earth could even understand, MYN et al. can raise the average productivity of tens of thousands of people, maybe by orders of magnitude in extreme cases. I’m not positive that makes their net positive impact bigger than Ramanujan’s, but I wouldn’t be surprised if it were.

Any group of thinkers that important should have a name, and here’s my proposal: The Ultrapraxists. ‘Praxis’ comes from Greek and refers to ‘action’ or ‘practice’ (think: orthopraxy). I kicked around a few different ideas for this title, but since Sebastian Marshall calls his technique ‘ultraworking’ and Scott Young just published a book on ‘ultralearning’, I settled on ‘ultrapraxist’.

Read ultrapraxy and learn from it; 2017 could be the most productive year of your life!

A Taxonomy of Gnoses

Anyone who has studied a difficult technical subject like mathematics has surely had the following experience:

You wake up at 5:30 in the morning, determined to go over the tricky set theory proofs which looked like hieroglyphics to you the day before. There’s a test over the material later in the week and, with an already packed schedule, it’s imperative that you master this as quickly as possible.

Coffee brewed, you crack open the textbook and begin to go over the proofs. As usual it takes a quarter of an hour for the caffeine and the context to saturate your brain. By the time the first rays of morning lance through the twilight you’ve settled into time-worn scholarly rhythms.

But you’re rediscovering, to your consternation, that studying math rarely produces insights on a predictable schedule. After days of fruitless concentration, insight can drop from the sky like a nuclear bomb, and there’s no guarantee that two concepts of roughly equivalent difficulty will require roughly equivalent amounts of time to grasp. Worse, there doesn’t even seem to be a clear process you can fall back on to force understanding. At least with history you can just slow down, take copious notes, and be reasonably confident that the bigger picture will come into view.

Not so with set theory. You’ve already come to a step which you simply cannot make sense out of. With your intuitions spinning their tires in the mud of a topic they didn’t evolve to handle, you post a question to math.stackexchange and try desperately to understand the replies. Alas, even several rounds of follow-up questions don’t resolve the problem.

Now you’re just sort of…staring at the proof, chanting it to yourself like a litany against ignorance. You keep going back to the start, working through the first couple of steps that make sense, re-reading the preceding section for clues, reading ahead a little bit for yet more clues, and so on. Perhaps you try stackexchange again, or watch related videos on Youtube.

After ninety minutes of this you take a break and reflect back on the morning’s work. Not only do you not understand the set theory proofs, you’re not even sure what to type into Google to find the next step. If a friend were to ask you point blank what you’ve been doing, you’d struggle to formulate a reply.

And yet…some of the words do seem less arcane, and the structure of the proof feels more familiar somehow, like a building you pass on your way to work everyday but have never really stopped to look at. You have this nagging feeling that something-like-insight is hovering just out of your mind’s peripheral vision. You finish this study session with a vague, indefinable sense that progress has been made.

I call this frustrating state ‘semignosis’[1], and have spent a lot of time in it over the course of the STEMpunk Project. Once I had this term I realized there were a lot of interesting ideas I could generate by attaching different prefixes to ‘gnosis’, and thus I developed the following taxonomy:

  • Agnosis, n. — Simply not gnowing ( 😉 ) something.
  • Semignosis, n. — The state described above, where the seeds of future gnosis are being sown but there is no current, specifiable increase in gnowledge.
  • Infragnosis, n. — Gnowledge which you didn’t know you had; the experience of being asked a random question and surprising yourself by giving an impromptu ten-minute lecture in reply.
  • Gnosis, n. — Gnowing something, be it a procedure or a fact.
  • Misgnosis, n. — “Gnowing” something which turns out not to be true.
  • Supergnosis, n. — Suddenly grokking a concept, i.e. having an insight. Comes in a ‘local’ flavor (having an insight new to you) or a ‘global’ flavor (having an insight no one has ever had before).

Now that I’m sitting here fleshing out this idea, I realize that there are a few other possibilities:

  • Misinfragnosis, n. — Gnowledge you don’t gnow you had, but which (alas) ends up being untrue.
  • Gnostic phantom, n. — A false shape which jumps out at you because of the way an argument is framed or pieces are arranged; the mental equivalent of a Kanizsa figure.
  • Saturated gnosis, n. — ‘Common knowledge’
  • Saturated infragnosis, n. — ‘Common sense’, or gnowledge everyone has but probably doesn’t think about consciously unless asked to do so.

This is mostly just for fun. We already have a word for ‘insight’ so the word ‘supergnosis’ is superfluous (although it does sound like ‘supernova’, so maybe I could use the neologism in a story to make a character sound clever.) I doubt any of these terms will be used outside of this blog post.

But I think the term ‘semignosis’ is genuinely important. It captures a real state through which we must pass in our efforts to learn, and a very frustrating one at that. Having the term potentially allows us to do three things:

  1. We can recognize the state as real and necessary, perhaps alleviating some of the distress felt while occupying it.
  2. We can begin to classify fields by the amount of time a student of average intelligence must spend in semignosis.
  3. We can start to think more clearly about how to approach and navigate this state.

The second point is one I want to expand upon in the not-too-distant future, and the third is one I’m continually grappling with; there’ll probably be a section devoted to it in the STEMpunk book.

***

[1] Yes, I realize I’m mixing Greek and Latin here. No, I don’t care.

The STEMpunk Project: Eleventh Month’s Progress

This post marks the first time in a long time that I’ve managed to write an update before month’s end! My goals continue to be wildly optimistic; I didn’t finish AIMA this month, but I did get through a solid 4-5 chapters, and in the process learned a lot.

This spread of chapters covered topics such as the use of Markov chain Monte Carlo methods for reasoning under uncertainty, the derivation of Bayes’ Rule, the construction of graphical networks for calculating probabilities and making decisions, the nuts and bolts of simple speech recognition models, fuzzy logic, simple utility theory, and simple game theory.
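
The Bayes’ Rule derivation, at least, is short enough to reproduce here: the definition of conditional probability lets you factor the joint probability P(A ∩ B) in two different ways, and equating the two factorizations gives the rule directly:

    P(A \cap B) = P(A \mid B)\,P(B) = P(B \mid A)\,P(A)
    \quad\Longrightarrow\quad
    P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}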

Since I’ve been reading about AI for years I’ve come across terms like ‘utility function’ and ‘decision theory’ innumerable times, but until now I haven’t had a firm idea of what they meant in a technical sense. Having spent time staring at the equations (while not exactly comprehending them…), I now find my understanding much fuller.

I consider this a species of ‘profundance’, a word I’ve coined to describe the experience of having a long-held belief suddenly take on far more depth than it previously held. To illustrate: when you were younger your parents probably told you not to touch the burners on the stove because they were hot. No doubt you believed them; why wouldn’t you? But it’s not until you accidentally graze one that you realize exactly what they meant. Despite the fact that you mentally and behaviorally affirmed that ‘burners are hot and shouldn’t be touched’ both before and after you actually touched one, in the latter case there is now an experience underlying that phrase which didn’t exist before.

In a similar vein, it’s possible to have a vague idea of what a ‘utility function’ is for a long time before you actually encounter the idea as mathematics. It’s nearly always better to acquire a mathematical understanding of a topic if you can, so I’m happy to have finally (somewhat) done that.
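
For the curious, the core of the formalism is compact: a utility function U assigns a real number to each outcome, and the principle of maximum expected utility says a rational agent should pick the action whose probability-weighted average utility is highest:

    EU(a) = \sum_{s} P(s \mid a)\, U(s),
    \qquad
    a^{*} = \arg\max_{a} EU(a)

Much of what the textbook builds on top of this (decision networks, the value of information, and so on) is machinery for computing or comparing those expectations.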

 

The STEMpunk Project: Tenth Month’s Progress

There isn’t much to report on for December. As 2016 drew to a close and the snows began to fall in earnest I continued reading the excellent “Artificial Intelligence: A Modern Approach”, delightedly learning about ontological engineering, knowledge base construction, and truth maintenance systems, among other things.

These are topics of particular interest to me because I’m fascinated by the process of taking a complex problem domain, breaking it up into orthogonal primitives, cataloging possible actions, and then using the resulting system to guide decisions and discover new things. Even just trying to decide what counts as a primitive is a fascinating and challenging task, but I have a hunch that we’re just beginning to uncover how powerful these tools can be.

In fact, long before I knew enough to write the preceding two paragraphs, I played around with developing a theoretical note-taking system I called ‘hyperscript’, in which the world is decomposed into agents, arenas, actions, and objects. My goal was to have a compact way of representing nested conditionals like ‘if girlfriend.forgets(milk) then I.get(milk) unless milk.at(myBrother’sHouse)’, and so on.

(Any programmer will see the obvious influence of the dabbling I’ve done in javascript and python.)
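
(To make that a little less abstract, here is a rough Python sketch of how such a rule might be represented and evaluated. The names and structure are purely illustrative; hyperscript itself never got anywhere near an implementation.)

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Rule:
        condition: Callable[[Dict], bool]   # 'if girlfriend.forgets(milk)'
        action: str                         # 'then I.get(milk)'
        exception: Callable[[Dict], bool]   # 'unless milk.at(myBrothersHouse)'

    def evaluate(rules: List[Rule], world: Dict) -> List[str]:
        """Return the actions whose conditions hold and whose exceptions don't."""
        return [r.action for r in rules
                if r.condition(world) and not r.exception(world)]

    milk_rule = Rule(
        condition=lambda w: w.get("girlfriend_forgot_milk", False),
        action="I.get(milk)",
        exception=lambda w: w.get("milk_at_brothers_house", False),
    )

    print(evaluate([milk_rule], {"girlfriend_forgot_milk": True}))
    # -> ['I.get(milk)']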

Though I’m a competent adult capable of managing the basic day-to-day of living, I find I do sometimes fall victim to implicasia when there are too many balls in the air and some unexpected change causes me to miss an optimal solution (like just checking if there’s any milk at my brother’s house instead of running all the way to the store). Hyperscript was meant to help avoid that.

Alas, I never pushed this project very far because it became apparent that getting a system like this to outperform simply jotting notes down would take a lot of work, and there are plenty of other ways to squeeze 1% more productivity into my days than inventing what amounts to a hand-written programming language!

***

As it stands I’m hoping — hoping — to get through AIMA by the end of January, and then to start an AI programming course in February. Book writing should commence by spring, if I can get all the ancillary research done.

As always, thanks for reading. I hope your 2017 is a productive and happy one.

The STEMpunk Project: Seventh, Eighth, and Ninth Months’ Progress

As I noted in my last update a number of non-STEMpunk obligations have cut into the time I set aside to write about my study of gears, electrodes, and circuits, but you’ll be happy to know that the actual learning continues.

Also noted in my last update was the fact that I switched the focus of the last module of this project to artificial intelligence instead of robotics. I therefore brushed up on my programming skills by working through the excellent Learn Python the Hard Way, along with a similar text for the command-line written by the same author. As of now I’m seven chapters into the seminal Artificial Intelligence: A Modern Approach.

I’ve written elsewhere about the ways in which this project has opened my eyes to the astounding complexity of the modern world. Much of the first few chapters of AIMA is devoted to a tool most of us use every single day: search. As with washers, retaining walls, and electric signs, I simply hadn’t paused to think about everything that goes into building a search algorithm. The would-be designer of a new search process faces innumerable tradeoffs: searching depth-first or breadth-first, keeping track of previously explored states or not bothering, using different metrics to estimate the value of the current node and the cost of the current path to the goal node, and so forth.
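
To give a flavor of just one of those tradeoffs: whether a search runs breadth-first or depth-first can come down to nothing more than the data structure used for the frontier. Here is a rough illustrative sketch in Python (my own, not code from AIMA):

    from collections import deque

    def graph_search(start, goal, neighbors, depth_first=False):
        """Generic search: a FIFO frontier gives BFS, a LIFO frontier gives DFS."""
        frontier = deque([[start]])    # frontier holds whole paths, not just nodes
        explored = set()               # previously explored states
        while frontier:
            path = frontier.pop() if depth_first else frontier.popleft()
            node = path[-1]
            if node == goal:
                return path
            if node in explored:
                continue
            explored.add(node)
            for nxt in neighbors(node):
                frontier.append(path + [nxt])
        return None

    # A tiny example graph.
    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(graph_search("A", "D", lambda n: graph[n]))                    # ['A', 'B', 'D']
    print(graph_search("A", "D", lambda n: graph[n], depth_first=True))  # ['A', 'C', 'D']

Swapping popleft() for pop() turns the frontier from a queue into a stack, and with it changes the order in which the space is explored, the memory the search consumes, and whether the first solution found is the shallowest one.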

At this point I shouldn’t be surprised anymore when something which looked monolithic and straightforward from the outside turns out to be so nuanced that it has spawned an entire field of academic research.

From my current point of view it looks like I’ll be working on this textbook for another month or two, and at some point I’d like to take the Udacity Artificial Intelligence course. A friend of mine who works at Google vouches for its quality. The only issue is that I don’t want to stall writing the STEMpunk book, so I’ll have to decide how far into next year I want to continue before I end up getting sidetracked.

Thanks for reading!

The STEMpunk Project: Sixth Month’s Progress

You may have noticed that I’ve gotten into the bad habit of writing these monthly progress reports well after the month is finished. I apologize for that; other things have been coming up which had to take precedence.

Here is a sampling of what I built in August:

[Six photos: a car, a catapult, a crane, a car suspension, car steering, and a miscellaneous car model.]

The first three are from an erector set and the last three are from a charming little children’s book about cars. I also built a model of a jet engine, but since I had six pictures and they make such a nice 3 x 2 grid I decided not to include it 🙂 In addition I read sections of Basic Machines and How They Work and The Basics of Mechanical Engineering.

I was set to begin Robotics in September but recent life events have caused me to reconsider and make the first major structural change to The STEMpunk Project so far.

Each of the modules was designed to expose me to theory while leaving plenty of time to actually tinker with physical devices. Even though that hasn’t always turned out the way I’d hoped it would, I have basically been successful. Robotics was included because it seemed like a natural extension of computing, electronics, and mechanics; but the more research I do, the more I realize that building a foundation in robotics requires a lot of programming skill.

There are good robotics kits out there, but most of them don’t seem like they would be as effective in cultivating useful intuitions as the model engines and electronics kits have been because they don’t bear the same relationship to the actual physical systems which they represent. A toy engine may be wildly oversimplified but real engines also have cylinders, valves, a crankshaft, etc. As far as I can tell, however, code is the heart of robotics, and most of the kits I’ve examined don’t factor that in.

So I’ve been thinking: if I’m going to have to do a bunch of programming anyway, I may as well shift my focus to Artificial Intelligence instead of robotics. AI was one of the fields I was thinking about exploring post-STEMpunk, and I may have successfully corrupted a dear friend into moving to Boulder and working on AI safety professionally. If either comes to pass, the work I do now will put me in a better position moving forward.

Moreover, I’m twenty-eight years old and must therefore give thought to the long-term stability of the people whose lives are bound up with mine. I haven’t started a family yet but I suspect that it won’t be long now, and besides that, with my ebbing youth comes the fact that I have a finite number of years left in which to develop the skills I’m going to develop and make the contributions I’m going to make. Since AI is a serious interest of mine, it would behoove me to spend the last leg of The STEMpunk Project working on it.

Finally, these days no one’s job is really safe. The STEMpunk Project probably hasn’t done that much to make me more employable, but a few months spent programming and playing with Machine Learning libraries — especially if I continue on after the main project is finished — probably will.

This is all very new so I haven’t chosen my learning goals and charted a course yet. But I was thinking I’d spend about a month brushing up on python, then maybe read Russell and Norvig’s “Artificial Intelligence: A Modern Approach”, then maybe start exploring some of the AI work being done with python, possibly going as far as to get a Machine Learning Nanodegree from Udacity.