Peripatesis: Chess after Deep Blue; Tarot and Randomness; The Sequences in Lojban; Sovereign Startups

Max Gladstone, author of The Craft Sequence, says:

“to what extent is a computer capable of placing the correct moves in a Go game, or a chess game, actually performing the activity humans reflexively describe as “playing go”? A professional chess player develops patience, mental endurance, and profound mental habits required to bend her omnivore-scavenger brain to the profoundly non-omnivore-scavenger activity of staring at a game board for several hours at a time, oblivious to any potential predators creeping up behind. These are additional “rules” to the game as played by humans—or at least, they’re constraints to which human players are subjected. “Learning to play chess,” for a human, is really “learning how to navigate human embodied cognition in such a way as to win a chess game.” Is a hydraulic car-moving robot stronger than a champion weightlifter? On paper it can move more weight. But I suspect we use the word “strong” to mean different things in different contexts.”

Of course it’s a moot point whether or not machines are using processes comparable to the ones humans use if the humans are consistently losing. But it does lead to another insight:

Though it’s probably safe to say that a human will never again be the best chess player on Earth, that doesn’t mean learning to play chess is pointless, any more than exercise became pointless after we invented cranes. Likewise, learning a foreign language will still be an edifying experience long after some machine learning startup solves the problem of automated translation.

In truth, as machines take over progressively greater segments of the economy it’ll probably become more important for people to keep playing chess, learning languages, and lifting weights, because we’ll be less and less able to look to our jobs to stimulate our minds and our bodies.

***

An enterprising writer by the name of Vivian Caethe has launched a Kickstarter for her “writer’s block Tarot” deck. As she tells it, doing a Tarot reading whenever she came to a sticky point in a story did a lot to help her move the plot and the characters along. Eventually it occurred to her that she could reshape the entire Tarot deck with the writer’s craft in mind, and thus did “The Fool” in the traditional deck become “The Protagonist” in the writer’s deck.

Tarot decks are usually divided into the Major Arcana, which deals with large-scale concepts and themes, and the Minor Arcana, which handles smaller ones. This division is mimicked in the writer’s block Tarot deck as well: cards in the Major Arcana focus on sweeping aspects of a hero’s quest, while the Minor Arcana are concerned with the details of a hero’s life.

This reminded me of a Slate Star Codex post on the value of random noise. While a lot of people accept that shaking up your usual routine might be a good way of having more creative insights, Scott also posits that it might help one be more correct. It’s easy, when you’re stuck in a cognitive rut, to simply fail to think of strong arguments against a position you hold. Or, upon hearing genuinely good arguments, to round them off to their nearest cartoon version without realizing you’re doing so. By surrounding yourself with extremely bright contrarians, having Markov chains randomly rearrange sentences, or doing Tarot readings for a story you’re writing, you leverage “noise” to increase the chances that you’ll successfully break out of the rut you’re in.
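
As a toy illustration of the Markov chain trick, here is a minimal sketch in Python (the three-sentence corpus is made up; in practice you would feed in your own notes or drafts):

```python
import random
from collections import defaultdict

# Toy corpus; in practice you might feed in your own notes or drafts.
corpus = (
    "the protagonist meets a stranger at the crossroads "
    "the stranger offers the protagonist a map "
    "the map leads the protagonist into the forest"
).split()

# Build a simple bigram model: each word maps to the words that follow it.
model = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    model[current_word].append(next_word)

def babble(seed, length=12):
    """Generate a semi-random word chain starting from `seed`."""
    words = [seed]
    for _ in range(length - 1):
        followers = model.get(words[-1])
        if not followers:  # dead end: restart from a random word
            followers = list(model.keys())
        words.append(random.choice(followers))
    return " ".join(words)

print(babble("the"))
```

The output is nonsense, of course, but it is nonsense built from your own material, which is exactly the kind of noise that can jolt you out of a rut.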

***

There’s an anecdote in Cal Newport’s “Deep Work” wherein a bright, Ivy League-educated Jewish man named Adam Marlin starts taking his faith seriously, rising at dawn every day to decipher as many pages as possible from the Talmud. He notices that not only does his ability to focus improve dramatically, but that he is soundly beaten in contests of intellect by those who began this training earlier in life, even when they aren’t particularly well educated.

Since I don’t have any particular interest in taking my study of ancient religious texts this far, I thought it’d be interesting to replicate this exercise with secular materials. An ideal source would be something voluminous that still imparts valuable lessons. How about a condensed version of Yudkowsky’s The Sequences?

But that still leaves the problem of choosing the language to translate the materials into. As the point is to develop profound skill in the art of concentration, anyone who already speaks the language into which we translate The Sequences will be at a linguistic advantage and thus won’t reap as much reward from the exercise.

This means we need a language that has no native speakers, a consistent, logical grammar, and ideally some following in the rationalist community.

The obvious choice would be Lojban, of course!

Hopefully you’ll one day be able to join me at sunrise in a daily ritual of reading The Sequences in Lojban as we both develop superfocus.

***

The Montreal-based company Sui Generis is trying to make it possible to open a new country like you might open a new company.  While the vision of the founder Guillaume Dumas comes across as a little breathless and naive — “we’re going to build corporate socialist states based on FUN!” — I think the underlying idea is interesting and potentially fruitful.

There are many examples of highly successful microstates like Singapore and Hong Kong, and I’ve long thought that someone should try to design a mechanism whereby interested parties can carve out small amounts of territory in which to found their own nation-states.

This is at once a nifty metapolitical solution to the problem of good governance and a means of applying the innovation-generating powers of the market to those problems. Instead of multiple sides presenting interminable arguments for and against communism, libertarianism, monarchism, monetarism, Austrianism, etc., we can just let entrepreneurs configure their states however they want and then compete with each other for a tax base.

The STEMpunk Project: Designing a Computer System

(Note: my knowledge of computers and computer construction is increasing rapidly. The following shouldn’t be taken as anything other than provisional.)

As part of the Computing section of The STEMpunk Project I wanted to design and build my next PC. Unfortunately, because I did fairly well this past year my annual and completely voluntary donation to Uncle Sam is going to be a bit larger than I expected, so I’m going to defer the actual building part until later in the year when I can afford to buy the components.

I’m still going to design the system though, as I believe this to be a useful exercise for the aspiring techie.

To get a feel for how this process works I did two things: first, I read a couple of books on DIY PC building, making note of the components the authors chose for various “budget”, “mainstream”, and “extreme” systems; second, I tried to analyze the makeup of a few systems with which I am familiar.

The results are summarized in this chart, which adumbrates two systems from Robert and Barbara Thompson’s excellent book “Building the Perfect PC”, the MacBook Pro upon which I’m writing this post, and “The Great Beast”, a system built for Eric Raymond by the good folks at TekSyndicate:

PDF of chart (which I’ll embed directly as soon as I figure out how to do that).

So then, how will I build my own system?

Case:

There are a billion different options for PC cases, with all sorts of stylistic variations. You have your steampunk cases, which range from fairly minimalist to exuberantly Baroque:

[images: steampunk PC cases]

There are gorgeous cases made out of wood:

[images: wooden PC cases]

Cases with insane paint jobs:

[images: PC cases with custom paint jobs]

And all manner of custom-built oddities in the shape of musical instruments, anime characters, spacecraft, and so on.

I have this vision of a pure glass case, custom built in the shape of a pyramid and etched with runes or other cool, arcane-looking symbols. Maybe someday I’ll be able to afford it, but for now I’ll probably just use an NZXT H630, the same model that cages The Great Beast (though Raymond’s was black and I prefer the glossy white version):

[image: NZXT H630 case in glossy white]

Internals:

I’d like to go ahead and build an “extreme” system based on the $1500 gaming build described in this blog post. The way I see it, my needs are reasonably similar to the ones that motivated the author’s choice of components, and while I plan on using my system more for design, editing, and visualization than gaming, I’d like to leave the option open.

The motherboard he chose is an ASUS Z170A, upon which are mounted a formidable Intel Core i5-6600K CPU and a CM Hyper 212 EVO cooler. The whole apparatus is powered by an EVGA SuperNOVA G2 750 Watt power supply.

“Kingston” was a name that came up repeatedly in RAM manufacturing, and this system will utilize their HyperX Fury 16 GB offering. I don’t plan on using a RAID configuration for storage, and I like feeling like I have plenty of room to expand into, so I’ll probably install a few terabytes’ worth of Seagate Barracuda XT2 hard drives.

In the graphics card department the Gigabyte GTX 980 Ti should be able to handle anything I’m likely to throw at it, and the motherboard’s Crystal Sound 3 integrated audio is more than enough for my purposes. I will probably spring for some decent 2.1 speakers from Logitech, like their Z623 model.
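
As a sanity check on that 750 watt power supply, a back-of-the-envelope power budget using the nominal TDP figures for these parts (not measured draw, and the “everything else” line is just my own loose estimate) comes out something like this:

```python
# Rough power budget in watts. CPU and GPU figures are nominal TDP ratings;
# the "everything else" entry is a loose estimate, not a spec.
components = {
    "Intel Core i5-6600K": 91,
    "Gigabyte GTX 980 Ti": 250,
    "motherboard, RAM, drives, fans (estimate)": 100,
}

total = sum(components.values())  # roughly 441 W under full load
print(f"Estimated load: {total} W")
print(f"750 W PSU headroom: {750 / total:.1f}x")
```

Call it 440-ish watts under load, so the 750 watt unit leaves plenty of headroom for extra drives or a second graphics card down the road.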

Using brand-name mice, keyboards, and displays doesn’t matter all that much to me. The ergonomic USB keyboard I’ve been using for a year should suffice, but I have been thinking about giving optical trackballs a try, as they reduce wrist strain and extraneous motion. I don’t know the first thing about trackballs, but because I have seen Logitech products endorsed all over the place I might as well try out their Trackman Marble trackball mouse.

And while one monitor is much the same as another, I hate toggling between windows on a crowded screen, so I am willing to buy some extra screen real estate. This means that whatever n00b pwnage or data visualizations I might happen to be involved in will reach my eyeballs via two (or possibly three) 20-inch monitors.

There you have it, an outline of what will hopefully be the machine powering my future endeavors.

Profundis: Steve Jobs

Walter Isaacson’s biography of Apple’s late CEO is a powerful, unflinching look into the life and mind of one of the great technological visionaries of modern history.

In 2016 it can be hard to remember that Macintosh and Microsoft computers are just machines for acting on and storing bits, but that’s because Jobs successfully made his devices a fashion statement, all while doing remarkable work in the markets for portable music players, digital music, smart phones, and tablet computing.

And something I didn’t know before reading Isaacson’s book was that Jobs was also the CEO of Pixar for a while, overseeing the development of revolutionary films like Toy Story.

But the above shouldn’t be taken to mean that Isaacson shies away from Jobs’ dark side. Despite being an adopted child himself and struggling with the concomitant abandonment issues, Jobs also had an illegitimate daughter whom he spent years neglecting. There’s compelling evidence that he deliberately swindled Apple’s cofounder Steve Wozniak in the very early days of their joint venture, and he gained some notoriety for his habit of blatantly taking credit for other people’s ideas. His manners were atrocious, he had body odor because he believed (incredibly!) that a diet of fruits and vegetables obviated the need to bathe, and though we’ll never know for sure whether he would have survived if he had listened to his doctors from the outset, there is a real chance that his stubborn insistence on trying to treat his cancer through diet contributed to his early demise.

Nevertheless, he stood a Titan at the intersection of the humanities and technology, and cast a shadow in which anyone who wants to work in the same space will be forced to stand.

When I read biographies like this it’s with an eye toward traits and habits that I can use to cultivate greatness in myself. There are a few things about Jobs that stand out as especially worthy of emulation:

  1. He had an astonishing devotion to craftsmanship and cared about every detail of Apple products down to the screws used in the cases and the layouts of circuits on the internal components.
  2. His attention to detail was almost preternatural. Isaacson relays a story wherein Jobs was reviewing an advertisement that was nearly ready to ship, and he noticed that the agency in charge of production had removed two frames from the ad, which caused the changing images on the screen to be ever so slightly out of time with the accompanying music. He ordered the frames reinserted, and the commercial was better for it.
  3. He understood that computers are devices meant to be used by humans, and not just programmer humans. The haptic feedback of a touchscreen, the sleek aesthetics of the Apple product line, the ruthless simplicity, the aggressively intuitive user interfaces, and everything else that makes Apple distinctive exist in no small part because of his grasp of this fact.
  4. He cared passionately about art, artists, and design. Just because a computer is a tool doesn’t mean it should be quotidian, ugly, or poorly made. The Jobs family spent weeks agonizing over the decision to buy a common household appliance because for Steve it was a priority to be surrounded by things he could admire. A dishwasher may not be a sculpture, but it’s not an entirely different thing, either. Since reading this biography I have come to better appreciate the fact that there is a continuum, not a barrier, between things that are meant to be contemplated or admired and things that are meant to be used.
  5. Much of the success he had as Apple’s CEO was due to his ability to spot the highest-return opportunities available and to narrow the company’s focus down to them. Shortly after returning to Apple as CEO, he gutted huge swathes of the product line and re-oriented the company towards building just a handful of great products for specific niches.

Like many captivating personalities Jobs was a bundle of contradictions. He was a Zen enthusiast who obsessed over the tiniest aspects of the products made by his company — the most valuable on Earth as of this writing. His callous willingness to hurt and even betray the people closest to him was legendary, but so was his profound understanding of what consumers wanted and needed when they interacted with their devices.

Jobs set out to build a truly great company that “made a dent in the universe”. There’s simply no denying that he was a man of many flaws; but for those of us who are still alive in a world that feels as though it’s lost its swagger and its sense of the possible, Jobs’ life is a testament to what can be accomplished through focus, drive, and a fanatical devotion to excellence.

The STEMpunk Project: Structure

In trying to decide how best to structure this project I ran into some problems that weren’t as salient in the other large-scale learning endeavors with which I am familiar. Describing these problems and my strategies for solving them might prove useful to others wanting to plan out their own projects.

To start with, I had no idea in what order I should learn various concepts or perform smaller hands-on projects. Though I tried reaching out to a number of engineering professors and field experts for advice, the only response I ever received was “I’m sorry, how do I know you?”

Further, I wanted to make sure that I didn’t spend all my time simply reading theory. A lot of what The STEMpunk Project is about is getting better at making and doing stuff; I’m already reasonably good at thinking.

The solution I eventually settled on was iterating between theory and practice in a specific way. All four sections of the project have the following cadence: first, there’s an applied component centered primarily on toy projects like building model engines. This is then followed by a theoretical component that involves reading books and watching lectures. Finally comes a more serious set of applied projects, like wiring an electrical panel or rebuilding a lawn mower engine.

The only minor break in this pattern is the computing module I’m currently working on, because I’m not starting with toy projects. Instead, the first component is designing and building a real system*, the second is making a virtual computer from the NAND gate up (thus digesting large amounts of theory), and the last is studying computer repair, networking, and security. That still roughly corresponds to applied, theoretical, applied.

Another major problem is that the lack of guidance means there is a lot of uncertainty over the life of the project. How, after all, is a novice supposed to calculate the length of time required to learn basic electrical theory if he has no frame of reference?

I coped by deliberately marking out where the uncertain places are and trying to leave myself plenty of time to complete them. I only have a vague idea, for example, of how the robotics portion is going to unfold, and in particular how long it’ll take me to learn whatever programming is involved.

Another thing I might have done is plan out a few different alternatives for each module, perhaps with rankings like “easy”, “moderate”, and “difficult”. Or I could’ve set the modules up with branches that forked depending on whether or not I’d made a certain amount of progress, so that if I had made sufficient progress I would do B, and if not I would do C.

In the end I decided against this approach, based on the knowledge that I have a proclivity towards getting too caught up in the planning stage. I can’t recommend that everyone follow my lead, but it made more sense for me to pick an arbitrary date (March 1st) and just start the damn thing, even if that means I wind up biting off a little more than I can chew.

As of yet I haven’t decided what to do if I find that I am running out of time. Would it be wiser to expand the time frame for the module or call it quits and move on? More than likely it’ll hinge on whether or not I think continuing will be a productive use of time. If I can tell that I’m up against a subject I don’t understand at all and another week isn’t going to make much of a difference, I’ll probably just end the module there and move on.

Conclusion

So, that’s how I dealt with: 1) ignorance with regards to the optimal learning order; 2) the need to balance theory with practice; 3) the inherent uncertainty of a beginner trying to allot a reasonable amount of time to accomplish a big goal.

I suppose it’s still an open question as to whether or not this approach will prove adequate. You’ll have to follow along to find out!

___

*For financial reasons I’m not going to be building the system until later in the year, so the computing module actually is a bit of a departure from the cadence of the other three modules. But because the focus is on actual hardware, not theory, and I’m planning on building the system later, I’m still counting this module as lining up with the structure of the others.

Critique My Russian!

One of my ancillary goals for 2016 is to learn some Russian. I haven’t set any specific objectives because, next to my day job and The STEMpunk Project, Russian is a very minor pursuit. It would be cool, however, to record a five-minute conversation with a native speaker near the end of the year and maybe take a standardized test aligned with the Common European Framework of Reference for Languages.

Anyway, I finally got around to recording and subtitling a video of myself speaking almost all the Russian sentences I’ve learned up to this point, and would love any feedback from knowledgeable Russian speakers.

The STEMpunk Project: Performing A Failure Autopsy

Background:

What follows is an edited version of an exercise I performed about a month ago following an embarrassing error cascade. I call it a ‘failure autopsy’, and on one level it’s basically the same thing as an NFL player taping his games and analyzing them later, looking for places to improve.

But the aspiring rationalist wishing to do something similar faces a more difficult problem, for a few reasons:

First, the movements of a mind can’t be seen in the same way the movements of a body can, meaning a different approach must be taken when doing granular analysis of mistaken cognition.

Second, learning to control the mind is simply much harder than learning to control the body.

And third, to my knowledge, nobody has really even tried to develop a framework for doing with rationality what an NFL player does with football, so someone like me has to pretty much invent the technique from scratch on the fly.  

I took a stab at doing that, and I think the result provides some tantalizing hints at what a more mature, more powerful version of this technique might look like. Further, I think it illustrates the need for what I’ve been calling a “Dictionary of Internal Events”, or a better vocabulary for describing what happens between your ears.

Process:

Performing a failure autopsy involves the following operations:

  1. List out the bare steps of whatever it was you were doing, mistakes and successes alike.
  2. Identify the points at which mistakes were made.
  3. Categorize the nature of those mistakes.
  4. Repeatedly visualize yourself making the correct judgment, at the actual location, if possible.
  5. (Optional) Explicitly try to either analogize this context to others where the same mistake may occur, or develop toy models of the error cascade which you can use as templates for possible future contexts.

In my case, I was troubleshooting an air conditioner failure[1].

The garage I was working at has two five-ton air conditioning units sitting outside the building, with two wall-mounted thermostats on the inside of the building.

Here is a list of the steps my employee and I went through in our troubleshooting efforts:

  a) Notice that the right thermostat is malfunctioning.
  b) Decide to turn both AC units off[2] at the breaker[3] instead of at the thermostat.
  c) Decide to change the batteries in both thermostats.
  d) Take both thermostats off the wall at the same time, in order to change their batteries.
  e) Instruct employee to carry both thermostats to the house where the batteries are stored. This involves going outside into the cold.

The only non-mistakes were a) and c), with every other step involving an error of some sort. Here is my breakdown:

*b1) We didn’t first check to see if the actual unit was working; we just noticed the thermostat was malfunctioning and skipped straight to taking action. I don’t have a nice term for this, but it’s something like Grounding Failure.

*b2) We decided to turn both units off at the breaker, but it never occurred to us that abruptly cutting off power might stress some of the internal components of the air conditioner. Call this “implication blindness” or Implicasia.

*b3) Turning both units off at the same time, instead of doing one and then the other, introduced extra variables that made downstream diagnostic efforts muddier and harder to perform. Call this Increasing Causal Opacity (ICO).

*d) We took both thermostats off the wall at the same time. It never occurred to us that thermostat position might matter, i.e. that putting the right thermostat in the slot where the left used to go or vice versa might be problematic, so this is Implicasia. Further, taking both down at the same time is ICO.

*e) Taking warm thermostats outside on a frigid night might cause water to condense on the inside, damaging the electrical components. This possibility didn’t occur to me (Implicasia).

Interventions:

So far all this amounts to is a tedious analysis of an unfolding disaster. What I did after I got this down on paper was try to re-live each step, visualizing myself performing the correct mental action.

So it begins with noticing that the thermostat is malfunctioning. In my simulation I’m looking at the thermostat with my employee, we see the failure, and the first thought that pops into my simulated head is to have him go outside and determine whether or not the AC unit is working.

I repeat this step a few times, performing repetitions the same way you might in the gym.

Next, in my simulation I assume that the unit was not working (remember that in real life we never checked and don’t know), and so I simulate having two consecutive thoughts: “let’s shut down just the one unit, so as not to ICO” and “but we’ll start at the thermostat instead of at the breaker, so that the unit shuts down slowly before we cut power altogether. I don’t want to fall victim to Implicasia and assume an abrupt shut-down won’t mess something up”.

The second part of the second thought is important. I don’t know that turning an AC off at the breaker will hurt anything, but the point is that I don’t know that it won’t, which means I should proceed with caution.

As before, I repeat this visualization five times or so.

Finally, I perform this operation with both *d) and *e), in each case imagining myself having the kinds of thoughts that would have resulted in success rather than failure.

Broader Considerations:

The way I see it, this error cascade resulted from impoverished system models and from a failure to invoke appropriate rationalist protocols. I would be willing to bet that lots of error cascades stem from the same deficiencies.

Building better models of the systems relevant to your work is an ongoing task that combines learning from books and tinkering with the actual devices and objects involved.

But consistently invoking the correct rationalist protocols is a tougher problem. The world is still in the process of figuring out what those protocols should be, to say nothing of actually getting people to use them in real time. Exercises like this one will hopefully contribute something to the former effort, and a combination of mantras or visualization exercises is bound to help with the latter.

This failure autopsy also provides some clarity on the STEMpunk project: the object level goals of the project correspond to building richer system models while the meta level goals will help me develop and invoke the protocols required to reason about the problems I’m likely to encounter.

Future Research:

While this took the better part of 90 minutes to perform, spread out over two days, I’m sure it’s like the first plodding efforts of a novice chess player analyzing bad games. Eventually it will become second nature and I’ll be doing it on the fly in my head without even trying.

But that’s a ways off.

I think that if one built up a large enough catalog of failure autopsies they’d eventually be able to collate the results into something like a cognitive troubleshooting flowchart.

You could also develop a toy model of the problem (e.g. troubleshooting a circuit that lights up two LEDs, reasoning deliberately to avoid Implicasia and changing one thing at a time to avoid ICO).

Or, you could try to identify a handful of the causal systems around you where error cascades like this one might crop up, and try to preemptively reason about them.
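
To give a flavor of what that catalog might look like in practice, here is a minimal sketch in Python. The field names and structure are my own invention, not an established format; the idea is just that recording autopsies as data makes the error categories easy to tally across incidents later:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Mistake:
    step: str        # what was actually done
    category: str    # e.g. "Grounding Failure", "Implicasia", "ICO"
    correction: str  # the judgment to visualize instead

@dataclass
class FailureAutopsy:
    incident: str
    mistakes: list = field(default_factory=list)

# One entry, paraphrasing the air conditioner episode above.
ac_incident = FailureAutopsy(
    incident="AC troubleshooting",
    mistakes=[
        Mistake("Skipped checking the unit itself", "Grounding Failure",
                "Confirm the unit is actually down before acting"),
        Mistake("Cut power abruptly at the breaker", "Implicasia",
                "Shut down at the thermostat first"),
        Mistake("Changed both thermostats at once", "ICO",
                "Change one variable at a time"),
    ],
)

def category_counts(catalog):
    """Tally error categories across a catalog of autopsies."""
    return Counter(m.category for a in catalog for m in a.mistakes)

print(category_counts([ac_incident]))
```

Once enough entries accumulate, the category counts would tell you which protocols you most need to drill.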

I plan on exploring all this more in the future.

Notes:

[1] I’m not an HVAC technician, but I have worked with one and so I know enough to solve some very basic problems.

[2] Why even consider turning off a functioning AC? The interior of the garage has a lot of heavy machinery in it and thus gets pretty warm, especially on hot days, and if the ACs run continuously the freon lines will eventually frost over and the unit will shut down. So, if you know the units have been working hard all day it’s often wise to manually shut one or both down for ten minutes to make sure the lines have a chance to defrost, and then turn them back on.

[3] Why even consider shutting off an AC at the breaker instead of at the thermostat? For the same reason that you sometimes have to shut an entire computer down and turn it back on when troubleshooting: sometimes you have no idea what’s wrong, so a restart is the only reasonable next step.

Peripatesis: The Kasparov Window; A Taxonomy Of AI Systems

The world is abuzz with discussion of Google’s AlphaGo program beating Korean Go player Lee Sedol, considered by many to be one of the best players on Earth, in the first three games they played. Lee was able to take the fourth game, however, by deliberately playing lower-probability moves that managed to confuse the AI.

In a Facebook post Eliezer Yudkowsky coined the term “Kasparov Window” to describe a range of systems with superhuman abilities that nevertheless have flaws which human players can discover and exploit. Pondering this concept, I had a different idea:

Say you had a way of measuring how “unintuitive” a given move is for a human player. That is, if a move is minimally unintuitive it can reasonably be assumed that even a novice player would make it in the same situation, and if a move is maximally unintuitive it can reasonably be assumed that not even an expert player would make it.

Using this measure, might it be possible to calibrate AI systems to gradually introduce more and more unintuitive moves into a game? If so, it seems like you might be able to train the best human players to become even better by getting them to think way outside the box.
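
To make “calibrate” a little more concrete, here is a rough Python sketch. It assumes you already have some model that assigns each legal move a probability of being chosen by a strong human player, which is of course the hard part; the function and the numbers below are purely illustrative:

```python
def pick_training_move(move_probs, unintuitiveness):
    """Pick a move for a training game.

    move_probs: assumed to map each legal move to the probability that a
    strong human would play it (supplied by some external model).
    unintuitiveness: 0.0 plays the most "human" move; values closer to
    1.0 skip progressively more of the intuitive options.
    """
    ranked = sorted(move_probs, key=move_probs.get, reverse=True)
    # Discard the top fraction of intuitive moves, then play the
    # strongest-looking move that remains.
    cutoff = min(int(len(ranked) * unintuitiveness), len(ranked) - 1)
    return ranked[cutoff]

# Toy example: four legal moves with made-up human-play probabilities.
moves = {"A": 0.55, "B": 0.30, "C": 0.10, "D": 0.05}
for dial in (0.0, 0.5, 0.9):
    print(dial, pick_training_move(moves, dial))
```

A training regimen might start the dial near zero and nudge it upward as the human player adapts.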

And if you used a similar technique with something like an automated theorem prover, you might also be able to get skilled human mathematicians to produce proofs that a human normally wouldn’t be able to produce because such a proof simply wouldn’t occur to them.

One problem with these scenarios is that it may be feasible to train humans in this way but it may just not be possible to extend the range of what counts as “intuitive for a human” very far. So, Lee Sedol learns to make some unintuitive moves but the quality of his gameplay only increases very slightly.

Another problem is that the training may turn out to be feasible, but AI technology simply progresses so rapidly that there just isn’t any point.

***

(NOTE: the following is all highly speculative and not researched very well.)

In a recent blog post on domain-specific programming languages, author Eric Raymond made a distinction between the kinds of problems best solved through raw automation and the kinds of problems best solved by making a human perform better.

This gave me an idea for a 4-quadrant graph that could serve as a taxonomy of various current and future AI systems. Here’s the setup: the horizontal axis runs Expert <–> Crowd and the vertical axis runs Judgment Enhancement <–> Automation.

Quadrant one (Q1) would contain quintessential human judgment amplifiers, like the kinds of programs talked about by Shyam Sankar in his TED talk or the fascinating-but-unproven-as-far-as-I-know “Chernoff faces”.

In Q2 we have mechanisms for improving the judgments of crowds. The only example I could really think of was prediction markets, though I bet you could make a case for market prices working as exactly this sort of mechanism.

In Q3 we have automated experts, the obvious example of which would be an expert system or possibly a strong artificial general intelligence.

And in Q4 we have something like a swarm of genetic algorithms evolved by making random or pseudo-random changes to a seed code and then judging the results against some fitness function.
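
To make the grid concrete, here is a small sketch that files the examples above into their quadrants (the labels are mine, and nothing about this is standardized):

```python
# Horizontal axis: expert vs. crowd. Vertical axis: enhance vs. automate.
TAXONOMY = {
    ("expert", "enhance"):  ["judgment amplifiers", "Chernoff faces"],               # Q1
    ("crowd", "enhance"):   ["prediction markets", "market prices"],                 # Q2
    ("expert", "automate"): ["expert systems", "artificial general intelligence"],   # Q3
    ("crowd", "automate"):  ["genetic algorithm swarms"],                            # Q4
}

def examples(axis_horizontal, axis_vertical):
    """Look up the example systems sitting in one quadrant."""
    return TAXONOMY[(axis_horizontal, axis_vertical)]

print(examples("crowd", "enhance"))  # -> ['prediction markets', 'market prices']
```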

Now, how should we match these systems with different problem domains?

It seems to me like Q1 systems would be better at solving problems that either a) have finite amounts of information that can be gathered by a single computer-aided human, or b) humans are uniquely suited to solve, like intuiting and interpreting the emotional states of other humans.

Chernoff faces, if we ever get them working right, are an especially interesting Q1 system because what they are supposed to do is take statistical information, which humans are notoriously dreadful at working with, and transform it into a “facial” format, which humans have enormously powerful built-in software for working with.

Q2 systems should be used to solve problems that require more information than a single human can work with. Prediction markets are meant to use a profit motive to incentivize human experts to incorporate as much information as they can, as honestly as they can, and over a span of time there are enough rounds of updates that the system as a whole produces a price which contains the aggregate wisdom of the individuals making up the system (at least I think that’s how they work).

Why can’t we have a prediction market that performs heart surgery? Because so much of the relevant information is “organic”, i.e. muscle memory built up over dozens and eventually hundreds of similar procedures. This information isn’t written down anywhere and thus can’t be aggregated and incorporated into a “bet” by a human non-surgeon.

Based on some cursory research, expert systems, my example of a Q3 system, appear to be subdivided into knowledge bases and inference engines. I’d venture to guess that they are suitable wherever knowledge can be gathered and encoded in a way that allows computers to perform inferences and logical calculations on it. Wikipedia’s article contains a chart detailing some areas where expert systems have been used, and also points out that one drawback to expert systems is that they are unable to acquire new knowledge.

That’s a pretty serious handicap, and places further limits on what types of problem a Q3 system could solve.
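
For a toy-sized picture of the knowledge base / inference engine split, here is a minimal forward-chaining sketch in Python (hand-written rules, not the syntax of any real expert-system shell):

```python
# Knowledge base: facts we start with plus if-then rules.
facts = {"thermostat blank", "batteries old"}
rules = [
    ({"thermostat blank", "batteries old"}, "replace batteries"),
    ({"replace batteries"}, "recheck thermostat"),
]

# Inference engine: keep firing rules until no new facts appear.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes the derived conclusions
```

The engine itself never changes; new expertise enters only by adding facts and rules, which is exactly why such a system cannot pick up knowledge nobody thought to encode.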

Finally, Q4 systems are probably the strangest entities we’ve discussed so far, and the only examples I’m familiar with come from the field of evolvable hardware. IIRC, using evolutionary algorithms to evolve circuits yields workable results that no human engineer would’ve thought of. That has to be useful somewhere, if only when trying to solve an exotic problem that’s stymied every attempt at a solution, right?
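
For completeness, here is what a Q4 system looks like in miniature: a bare-bones genetic algorithm in Python, with a made-up fitness function standing in for whatever circuit-level test an evolvable-hardware rig would actually use:

```python
import random

GENOME_LENGTH = 16  # number of "genes" per candidate solution

def fitness(bits):
    """Made-up objective: reward candidates with many 1s, a stand-in for
    'the evolved circuit actually works'."""
    return sum(bits)

def mutate(bits, rate=0.1):
    """Flip each bit with a small probability."""
    return [b ^ 1 if random.random() < rate else b for b in bits]

# Seed population of random bitstrings.
population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(20)]

for generation in range(50):
    # Score everyone, keep the best half, refill with mutated survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(fitness(best), best)
```

Nothing in the loop cares how the fitness function works, which is why the same recipe that optimizes this toy bitstring can also turn up circuit designs no human would have sketched.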