What is a Simulation?

While reading Paul Rosenbloom’s outstanding book “On Computing” I came across an interesting question: what is a simulation, and how is it different from an implementation? I posed this question on Facebook and, thanks to the superlative quality of my HiveMind, I had a productive back-and-forth which helped me nail down a tentative answer. Here it is:

‘Simulation’ is a weaker designation than ‘implementation’. Things without moving parts (like rocks) can be simulated but not implemented. Engines, planets, and minds can be either simulated or implemented.

A simulation needs to lie within a certain band of verisimilitude, being minimally convincing at the lower end but not-quite-an-implementation at the upper. An implementation amounts to a preservation of the components, their interactions, and the higher-level processes (in other words: the structure), but in a different medium. Further, implementation is neither process- nor medium-agnostic; not every system can rise to the level of an implementation in any arbitrarily-chosen medium.

A few examples will make this clearer.

Mario is neither a simulation nor an implementation of an Italian plumber. If we ran him on the Sunway TaihuLight supercomputer and he could pass a casual version of the Turing test, I’d be prepared to say that he is a simulation of a human, but not an implementation. Were he vastly upgraded, run on a quantum computer, and able to pass as a human indefinitely, I’d say that counts as an implementation, so long as the architecture of his mind was isomorphic to that of an actual human. If it wasn’t, he would be an implementation of a human-level intelligence but not of a human per se.

A digital vehicle counts as a simulation if it behaves like a real vehicle within the approximate physics of the virtual environment. But it can never be an implementation of a vehicle because vehicles must bear a certain kind of relationship to physical reality. There has to be actual rubber on an actual road, metaphorically speaking. But a steam-powered Porsche might count as an implementation of a Porsche if it could be driven like one.

An art auction in the Sims universe would only be a simulation of an art auction, and not a very convincing one. But if the agents involved were at human-level intelligence, that would be an implementation of an art auction. Any replicated art within the virtual world wouldn’t even count as a simulation, and would just be a copy. Original art pieces within a virtual world might count as real art, however, because art doesn’t have the same requirement of being physically instantiated as a vehicle does.

Though we might have simulated rocks in video games, I’m not prepared to say a rock can ever be implemented. There just doesn’t seem to be anything to implement. Building an implementation implies that there is a process which can be transmogrified into a different medium, and, well, rocks just don’t do that much. But you could implement geological processes.

Conway’s Game of Life is only barely a simulation of life; it would probably be more accurate to say it’s the minimum viable system exhibiting life-like properties. But with the addition of a few more rules and more computing power it could become a simulation. It would take vastly more of both for it to ever be an implementation, however.
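
For concreteness, here is a minimal Python sketch of the Game of Life update rule. The set-of-live-cells representation and the function names are my own choices for illustration, not anything from Conway’s original formulation:

# Minimal Game of Life step: a live cell survives with 2 or 3 live neighbors,
# and a dead cell comes alive with exactly 3. The grid is a set of (x, y)
# coordinates of live cells; every other cell is dead.

def neighbors(cell):
    x, y = cell
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def step(live):
    # Count live neighbors for every cell adjacent to at least one live cell.
    counts = {}
    for cell in live:
        for n in neighbors(cell):
            counts[n] = counts.get(n, 0) + 1
    return {cell for cell, c in counts.items()
            if c == 3 or (c == 2 and cell in live)}

# A "glider" travels diagonally forever under these few rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)

That the whole system fits in a couple of dozen lines is part of the point: it exhibits life-like behavior, but nothing remotely approaching the richness of the thing it gestures at.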

My friends’ answers differed somewhat from the above, and many correctly pointed out that the relevant definitions will depend somewhat on the context involved. But as of March 29th, 2017, I’m happy with the above and will use it while grappling with issues in AI, computing, and philosophy.

Machine Ethics is Still a Hard Problem

I have now read Digital Wisdom’s essay “Yes, Virginia, We *DO* Have a Deep Understanding of Morality and Ethics” twice, and I am unable to find even one place where the authors do justice to the claims they are criticizing.

With respect to selfishness they write:
“Followers of Ayn Rand (as well as most so-called “rationalists”) try to conflate the distinction between the necessary and healthy self-interest and the sociopathic selfish.”
This is simply untrue. The heroes of Atlas Shrugged work together to bring down a corrupt and parasitic system, John Galt refuses to be made an economic dictator even though doing so would allow him limitless power, and in The Fountainhead Howard Roark financially supports his friend, a sculptor, who otherwise would be homeless and starving.

Nothing — nothing — within Objectivism, Libertarianism, or anarcho-capitalism rules out cooperation. A person’s left and right hand may voluntarily work together to wield an axe, people may voluntarily work together to construct a house, and a coalition of multi-national corporations may voluntarily work together to establish a colony on the moon. Individuals uniting in the pursuit of a goal which is too large to be attempted by any of them acting alone is wonderful, so long as no one is being forced to act against their will. The fact that people are still misunderstanding this point must be attributed to outright dishonesty.

Things do not improve from here. AI researcher Steven Omohundro’s claim that without explicit instructions to do otherwise an AI system would behave in ways reminiscent of a human psychopath is rebutted with a simple question: “What happens when everyone behaves this way?” Moreover, the AI alarmists — a demimonde of which I count myself a member — “totally miss that what makes sense in micro-economics frequently does not make sense when scaled up to macro-economics (c.f. independent actions vs. cartels in the tragedy of the commons).”

I simply have no idea what the authors think they’re demonstrating by pointing this out. Are we supposed to assume that recursively self-improving AI systems of the kind described by Omohundro in his seminal “The Basic AI Drives” will only converge on subgoals which would make sense if scaled up to a full macroeconomic system? Evidently anyone who fails to see that an AI will be Kantian is a fear-mongering Luddite.

To make the moral turpitude of the “value-alignment crowd” all the more stark, we are informed that “…speaking of slavery – note that such short-sighted and unsound methods are exactly how AI alarmists are proposing to “solve” the “AI problem”.”

Again, this is just plain false. Coherent Extrapolated Volition and Value Alignment are not about slavery; they’re about trying to write computer code which, after going through billions of rewrites by an increasingly powerful recursive system, still results in a goal architecture which can be safely implemented by a superintelligence.

And therein lies the rub. Given the title of the essay, what exactly does our “deep understanding of morality and ethics” consist of? Prepare yourself, because after you read the next sentence your life will never be the same:

“At essence, morality is trivially simple – make it so that we can live together.”
I know, I know. Please feel free to take a moment to regain your sense of balance and staunch the blood loss that inevitably results from having such a railroad spike of thermonuclear insight driven into your brain.

In the name of all the gods Olde, New, and Forgotten, can someone please show me where in the voluminous Less Wrong archives anyone says that there won’t be short natural-language sentences which encapsulate human morality?

Proponents of the thesis that human values are complex and fragile are not saying that morality can’t be summarized in a way that is comprehensible to humans. They’re saying that those summaries prove inadequate when you start trying to parse them into conceptual units which are comprehensible to machines.

To see why, let’s descend from the rarefied terrain of ethics and discuss a more trivial problem: writing code which produces the Fibonacci sequence. Any bright ten-year-old could accomplish this task with a simple set of instructions: “Start with the numbers 0 and 1. Each additional number is the sum of the two numbers that precede it. So the sequence goes 0, 1, 1, 2, 3, 5, 8…”

But pull up a command-line interface and try typing in those instructions. Computers, you see, are really rather stupid. Each and every little detail has to be accounted for when telling them which instructions to execute and in which order. Here is one Python script which produces the Fibonacci sequence:

def fib(n):
    # Return the first n Fibonacci numbers, starting from 0 and 1
    # to match the natural-language description above.
    a, b = 0, 1
    fib_list = []
    for i in range(n):
        fib_list.append(a)
        a, b = b, a + b
    return fib_list

You must explicitly store the initial values in two variables or the program won’t even start. You must build some kind of loop to iterate the right number of times or the program won’t do anything at all. The values have to be updated and stored one at a time, in the right order, or intermediate values will be overwritten and lost. And if you mess something up, the program might start throwing errors, or worse, it may output a number sequence that looks correct but isn’t.
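
To make that last point concrete (this buggy variant is my own illustration, not something from the original essay): update a and b in two separate statements instead of one simultaneous assignment, and the program runs without complaint while quietly producing the wrong numbers.

def fib_buggy(n):
    a, b = 0, 1
    fib_list = []
    for i in range(n):
        fib_list.append(a)
        a = b        # a is overwritten before it can contribute to b...
        b = a + b    # ...so this line doubles b instead of summing the pair.
    return fib_list

# fib_buggy(8) -> [0, 1, 2, 4, 8, 16, 32, 64], not [0, 1, 1, 2, 3, 5, 8, 13]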

And really, this isn’t even that great of an example because the code isn’t that much longer than the natural language version and the Fibonacci sequence is pretty easy to identify. The difficulties become clearer when trying to get a car to navigate city traffic, read facial expressions, or abide by the golden rule. These are all things that can be explained to a human in five minutes because humans filter the instructions through cognitive machinery which would have to be rebuilt in an AI.

Digital Wisdom ends the article by saying that detailed rebuttals of Yudkowsky and Stuart Russell as well as a design specification for ethical agents will be published in the future. Perhaps those will be better. Based on what I’ve seen so far, I’m not particularly hopeful.

Peripatesis: E-Governance; Lighting Up The Dark; Regulating Superintelligences.

Nestled in the cold reaches of Northern Europe, Estonia is doing some very interesting things with the concept of ‘e-governance’. Their small population, short modern history, and smattering of relatively young government officials make experimenting with sovereignty easier than it would be in, say, the United States. The process of starting a business and paying taxes in Estonia has been streamlined, for example, leading to the predictable influx of ‘e-residents’ wanting to run their internet-based businesses from Estonia.

***

There are some truly fascinating advancements happening at the cutting edge of farming and horticulture. Some enterprising researchers have discovered a way to channel natural light into unlit places, and there is talk of using this technology to set up a public garden in the abandoned Williamsburg Bridge Trolley Terminal beneath New York City. It’s not entirely clear from the linked article whether all of this light is natural or whether it’s a mix of natural and artificial light, but it’s still interesting.

I would love to see a variant of this technology utilized far and wide to foster localized farming and the greening of urban centers. Plenty of buildings have rooftop gardens now, but with a means of gathering and arbitrarily distributing sunlight it would be possible to have, say, one floor in ten of a big skyscraper devoted to a small orchard or garden space. Advanced greenhouses could be both heavily insulated and capable of showering their interior with photons, making farming at high altitudes and in colder climates more straightforward.

***

The BBC has a piece on ‘anti-languages’, forms of slang developed by insular groups such as thieves and prison inmates to make their communication indecipherable to outsiders. They share the grammar of their parent language but use a plethora of new terms in place of old ones to achieve something akin to encryption.

These new terms — such as ‘bawdy basket’, which meant ‘thief’ in the English anti-language used among Elizabethan criminals — are generated through all sorts of techniques, including things like metaphor and reversing the spelling or meaning of terms from the parent language.
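
As a toy illustration of the mechanism (my own sketch; apart from ‘bawdy basket’, which the article attests, the vocabulary below is invented), a word-for-word substitution table leaves the grammar intact while making the surface text opaque to outsiders:

# A tiny "anti-language": the grammar stays English, but key words are swapped
# out or spelled backwards, so an outsider can parse the sentence structure
# without understanding what is actually being said.
lexicon = {
    "thief": "bawdy basket",   # attested Elizabethan term from the article
    "steal": "laets",          # invented here: spelling reversed
    "market": "tekram",        # invented here: spelling reversed
}

def encode(sentence):
    return " ".join(lexicon.get(word, word) for word in sentence.split())

print(encode("the thief will steal at the market"))
# -> "the bawdy basket will laets at the tekram"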

***

An essay by Marc McAllister at The Babel Singularity argues that laws enforcing human control over superintelligences are tantamount to slavery, and won’t be of much use anyway because these beings will have moral concepts which we baseline humans simply can’t fathom with our outdated brains.

He seems to be missing the point of the arguments made by groups like MIRI and the Future of Life Institute. To the best of my knowledge no one is advocating that humans remain strictly in control of advanced AIs indefinitely. In fact, the opposite is true: the point of building a superintelligence is to eventually put it in charge of solving really hard problems on behalf of humanity. In other words, ceding control to it.

To that end, the efforts made by people who think about these issues professionally seem to be aimed at understanding human values, intelligence, and recursively improving algorithms well enough to: 1) encode those values into an AI; 2) predict with an acceptably strict level of confidence that this human-compatible goal architecture will remain intact as the software rewrites itself; 3) reason, however dimly, about the resulting superintelligence. These are by no means trivial tasks. Human values are the messy, opaque result of millennia of evolution, and neither intelligence nor recursion is well understood.
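
As a cartoon of what the second task is asking for (purely my own toy sketch, not how MIRI or anyone else actually proposes to do it): before a system adopts a rewrite of itself, it needs some way to check that the new version still ranks outcomes the way the old one did.

# Toy sketch: accept a self-rewrite only if it preserves the ranking the
# current utility function assigns to a battery of test scenarios. Real
# goal-stability guarantees would need proofs, not spot checks like this.

def ranks_same(old_utility, new_utility, scenarios):
    return sorted(scenarios, key=old_utility) == sorted(scenarios, key=new_utility)

def maybe_adopt(current, candidate, scenarios):
    return candidate if ranks_same(current, candidate, scenarios) else current

scenarios = ["humans flourish", "humans survive", "humans extinct"]
u_old = {"humans flourish": 2, "humans survive": 1, "humans extinct": 0}.get
u_new = {"humans flourish": 3, "humans survive": 2, "humans extinct": -10}.get
assert maybe_adopt(u_old, u_new, scenarios) is u_new  # same ranking, so adopt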

But if we succeed in making a “Friendly” AI then control, in a ‘coercive sense’, won’t be necessary because its values will be aligned with our own.

***

Somewhat related: Big Think has published a very brief history of Artificial Intelligence. With the increasing sophistication and visibility of advancements in the field, understanding its roots becomes ever more important.

***

Vector Space Systems is a new player in an arena long dominated by Blue Origin, SpaceX, and Virgin Galactic. Their goal: to be to spaceflight what taxis are to terrestrial modes of transport. According to their website they have been quietly working on a microsatellite launch vehicle designed to carry payloads in the 5 – 50 kg range into orbit.

If they succeed this will allow companies wanting to develop new space technologies to launch more frequently and less expensively, driving faster growth in space commerce, exploration, and tourism.

Is Evolution Stoopid?

In a recent post I made the claim that evolution is a blind, stupid process that does what it does by brute-forcing through adjacent regions of possibility space with a total lack of foresight. When I said this during a talk I gave on superintelligence, it met with some resistance along the lines of ‘calling evolution stupid is a mistake because sometimes there are design features in an evolved organism or process which are valuable even if human engineers are not sure why’.

This is true, but it doesn’t conflict with the characterization of evolution as stupid, because by ‘stupid’ I just meant that evolution is incapable of the sort of planning and self-reflection that a human is capable of.

This is very different from saying that it’s trivial for a human engineer to out-think evolution on any arbitrary problem. So far as I know, nobody has figured out how to make replicators as good as RNA or how to make things that can heal themselves, both problems evolution has solved.

The difference is not unlike the difference between intelligence, which is something like processing speed, and wisdom, which is something like intelligence applied to experience.

You can be a math prodigy at the age of 7, but you must accrue significant experience before you can be a wisdom prodigy, and that has to happen at the rate of a human life. If one person is much smarter than another they may become wiser faster, but there’s still a hard limit to how fast you can become wise.

I’ve personally found myself in situations where I’ve been out-thought by someone who I’m sure isn’t smarter than me, simply because that other person has seen so many more things than I have.

Evolution is at one limit of the wisdom/intelligence distinction. Even zero intelligence can produce amazing results given a head start of multiple billions of years, and thus we can know ourselves to be smarter than evolution while humbly admitting that its designs are still superior to our own in many ways.

Processes Of Optimization.

In the beginning was the Bang, and for ages thereafter the universe did nought but sample randomly from the same distribution in the form of star and galaxy formation. And though the stars burned bright in the void, they had but a small influence on the speed with which the universe searched possibility space.

For the birth of stars did make the birth of planets more likely, which did make life more likely. And thus did each act as a gatekeeper to new regions of possibility space.

And lo, with the first self-replicators came the possibility of new organisms being created when mistakes occurred in the replication process. Eons later sex allowed existing DNA to be combined into novel configurations, and thus could possibility space be explored more quickly.

For verily is evolution a stupid process and its recursion weak, and it doth wobble after a drunkard’s fashion through possibility space with no insight, foresight, or intelligence.

And then there were brains, and with them the ability to improve upon evolution’s work. For some brains are able to plan for future goals and to imagine counterfactual situations, abilities which evolution possesses not. 

But alas, nervous systems never evolved much introspective depth, and had but the tiniest ability to recursively self-improve.

And then a small set of brains invented Science, which could accumulate many, many more insights than any brain could in the span of its life. It was an age of optimism and plenty, and there was much rejoicing and proliferation of telescopes and gene sequencing and iPhones throughout the land.

But even unto the present day Science has not learned enough to do anything more than weakly turn any optimization process back on itself.

And lo, from the cackling, structured madness of genetics, history, and culture did the universe cough up a series of sages, deep of insight, quick of thought, and usually possessed of tremendous social awkwardness.

After much study the sages warned that there might one day be a strong recursive process that could be a greater source of discontinuity than any that had come before it.

And though Einstein did proclaim compound interest to be the greatest among the forces of heaven and earth, this was only partly true. For surely it is strong recursion which holdeth the greatest promise and the deepest peril.

Thus should ye heed this dire proclamation: work swiftly and work thoroughly, before the AI goeth ‘FOOM’.

Whither Discontinuity?

I’m writing a series of posts clarifying my position on the intelligence explosion hypothesis and today I want to discuss discontinuity.

This partially addresses the ‘explosion’ part of ‘intelligence explosion’. Given the fact that most developments in the history of the universe have not been discontinuous, what reason do we have to suspect that an AI takeoff might be?

Eliezer Yudkowsky identifies the following five sources of discontinuity:

Cascades – Cascades occur when ‘one thing leads to another’. A (possibly untrue) example is the evolution of human intelligence.

It is conceivable that other-modeling abilities in higher primates became self-modeling abilities, which allowed the development of complex language, which allowed for the development of politics, which put selection pressure on the human ability to outwit opponents in competition for food and mates, which caused humans to ‘fall up the stairs’ and quickly become much smarter than the next smartest animal.

Cycles – Cycles are like cascades but the output hose is connected to the input hose. It’s possible for businesses or even individual people to capture enormous parts of a market by investing large fractions of their profits into infrastructure and research. Of course this isn’t the sort of extreme discontinuity we’re interested in, but it’s the same basic idea.

Insight – An insight is something like the theory of evolution by natural selection which, once you have it, dissolves lots of other mysteries which before might’ve looked only loosely connected. The resultant gain in knowledge can look like a discontinuity to someone on the outside who doesn’t have access to the insight.

Recursion – Recursion is the turning of a process back on itself. An AI that manages to produce strong, sustained recursive self-improvements could rapidly become discontinuous with humans.

Magic – Magic is a term of art for any blank spaces in our maps. If something smarter than me turns its intelligence to the project of becoming smarter, then there should be results not accounted for in my analysis. I should expect to be surprised.

Any one of these things can produce apparent discontinuities, especially if they occur together. A self-improving AI could produce novel insights, make use of cascades and cycles, and might be more strongly recursive than any other known process.

Takeoff Speed II: Recalcitrance in AI pathways to Superintelligence.

I’m writing a series of posts clarifying my position on the intelligence explosion hypothesis. Last time I took a look at various non-AI pathways to Superintelligence and concluded that the recalcitrance profile for most of them was moderate to high.

This doesn’t mean it isn’t possible to reach Superintelligence via these routes, but it does indicate that doing so will probably be difficult even by the standards of people who think about building Superintelligences all day long.

AI-based pathways to Superintelligence might have lower recalcitrance than these alternatives, because of a variety of advantages a software mind could have over a biological one.

These advantages have been discussed at length elsewhere, but relevant to the present discussion is that software minds could have far greater introspective access to their own algorithms than humans do.

Of course programmers building such a mind might fear an intelligence explosion and endeavor to prevent this sort of deep introspection. But in principle an AI with such capabilities could become smart enough to start directly modifying and improving its own code.

Humans can only do a weak sort of introspection, and therefore can only do a weak sort of optimization to their thinking patterns. So far, anyway.

At a futurist party recently I was discussing these ideas with someone and they asked me what might happen if a recursively self-improving AI hit diminishing returns on each optimization. Might an intelligence explosion just sort of… fizzle out?

The answer is yes, that might happen. But so far as I can tell there isn’t any good reason to assume that it will, and thus the safest bet is to act as though an intelligence explosion probably will happen and to start thinking hard about how to steer this runaway process in a direction that leads to a valuable future.
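
To make the question concrete, here is a toy model of the difference (entirely my own invention, with arbitrary parameters): whether a self-improver FOOMs or fizzles hinges on whether each optimization pass buys more improvement than the last or less.

# Toy model of recursive self-improvement. Each pass improves capability by an
# amount that scales with capability ** exponent. An exponent of 1 or more means
# compounding returns (runaway growth); an exponent below 1 means each round
# buys less than the last, and growth eventually slows to a crawl.

def self_improve(capability, exponent, rate=0.1, passes=50):
    history = [capability]
    for _ in range(passes):
        capability += rate * capability ** exponent
        history.append(capability)
    return history

explosive = self_improve(1.0, exponent=1.1)   # compounding returns
fizzling = self_improve(1.0, exponent=0.5)    # diminishing returns
print(explosive[-1], fizzling[-1])

Nothing in this little model tells us which regime an actual self-improving AI would be in; that is precisely the open question. It just shows why the answer matters so much.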