With respect to selfishness they write:
“Followers of Ayn Rand (as well as most so-called “rationalists”) try to conflate the distinction between the necessary and healthy self-interest and the sociopathic selfish.”
This is simply untrue. The heroes of Atlas Shrugged work together to bring down a corrupt and parasitic system; John Galt refuses to be made an economic dictator even though accepting would grant him limitless power; and in The Fountainhead Howard Roark financially supports his friend, a sculptor, who would otherwise be homeless and starving.
Nothing — nothing — within Objectivism, Libertarianism, or anarcho-capitalism rules out cooperation. A person’s left and right hand may voluntarily work together to wield an axe, people may voluntarily work together to construct a house, and a coalition of multi-national corporations may voluntarily work together to establish a colony on the moon. Individuals uniting in the pursuit of a goal which is too large to be attempted by any of them acting alone is wonderful, so long as no one is being forced to act against their will. The fact that people are still misunderstanding this point must be attributed to outright dishonesty.
Things do not improve from here. AI researcher Steven Omohundro’s claim that without explicit instructions to do otherwise an AI system would behave in ways reminiscent of a human psychopath is rebutted with a simple question: “What happens when everyone behaves this way?” Moreover, the AI alarmists — a demimonde of which I count myself a member — “totally miss that what makes sense in micro-economics frequently does not make sense when scaled up to macro-economics (c.f. independent actions vs. cartels in the tragedy of the commons).”
I simply have no idea what the authors think they’re demonstrating by pointing this out. Are we supposed to assume that recursively self-improving AI systems of the kind described by Omohundro in his seminal “The Basic AI Drives” will only converge on subgoals which would make sense if scaled up to a full macroeconomic system? Evidently anyone who fails to see that an AI will be Kantian is a fear-mongering Luddite.
To make the moral turpitude of the “value-alignment crowd” all the more stark, we are informed that “…speaking of slavery – note that such short-sighted and unsound methods are exactly how AI alarmists are proposing to “solve” the “AI problem”.”
Again, this is just plain false. Coherent Extrapolated Volition and Value Alignment are not about slavery; they are about trying to write computer code which, after going through billions of rewrites by an increasingly powerful recursive system, still results in a goal architecture that can be safely implemented by a superintelligence.
And therein lies the rub. Given the title of the essay, what exactly does our “deep understanding of morality and ethics” consist of? Prepare yourself, because after you read the next sentence your life will never be the same:
“At essence, morality is trivially simple – make it so that we can live together.”
I know, I know. Please feel free to take a moment to regain your sense of balance and mop up the blood that inevitably results from having such a railroad spike of thermonuclear insight driven into your brain.
In the name of all the gods Olde, New, and Forgotten, can someone please show me where in the voluminous Less Wrong archives anyone claims that there are no short natural-language sentences which encapsulate human morality?
Proponents of the thesis that human values are complex and fragile are not saying that morality can’t be summarized in a way that is comprehensible to humans. They’re saying that those summaries prove inadequate when you start trying to parse them into conceptual units which are comprehensible to machines.
To see why, let’s descend from the rarefied terrain of ethics and discuss a more trivial problem: writing code which produces the Fibonacci sequence. Any bright ten-year-old could describe the task with a simple set of instructions: “Start with the numbers 0 and 1. Each additional number is the sum of the two numbers that precede it. So the sequence goes 0, 1, 1, 2, 3, 5, 8…”
But pull up a command-line interface and try typing in those instructions. Computers, you see, are really rather stupid. Each and every little detail has to be accounted for when telling them which instructions to execute and in what order. Here is one Python script which prints the first n terms of the Fibonacci sequence:
n = 10               # how many terms to generate
a, b = 0, 1          # the starting values must be stored explicitly
fib_list = []        # the list must exist before anything can go into it
for i in range(n):
    fib_list.append(a)
    a, b = b, a + b  # both values must be updated simultaneously
print(fib_list)
You must explicitly store the initial values in two variables or the program won’t even start. You must build some kind of iterating structure or the program won’t do anything at all. Each value has to be appended to the list as it is computed, or it will be overwritten on the next pass through the loop and lost. And if you mess something up, the program might start throwing errors, or worse, it may output a number sequence that looks correct but isn’t.
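To make that last failure mode concrete, here is a hypothetical near-miss (a deliberately broken variant, not code from the original article): replacing the simultaneous tuple assignment with two sequential assignments runs without a single complaint, yet quietly produces powers of two instead of Fibonacci numbers.

```python
# Buggy variant: sequential assignment instead of simultaneous.
a, b = 0, 1
wrong = []
for i in range(8):
    wrong.append(a)
    a = b
    b = a + b  # bug: 'a' has already been overwritten, so b doubles
print(wrong)   # → [0, 1, 2, 4, 8, 16, 32, 64]
```

The output even begins 0, 1, 2, 4, which a careless glance might accept as Fibonacci-like; nothing about the program announces that it is computing the wrong sequence.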
And really, this isn’t even that great of an example because the code isn’t that much longer than the natural language version and the Fibonacci sequence is pretty easy to identify. The difficulties become clearer when trying to get a car to navigate city traffic, read facial expressions, or abide by the golden rule. These are all things that can be explained to a human in five minutes because humans filter the instructions through cognitive machinery which would have to be rebuilt in an AI.
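As an illustrative sketch of that last point (entirely hypothetical; none of these function names come from any real system or from the literature), watch what happens the moment you try to transcribe the golden rule into code: every clause of the one-line English sentence becomes a stub for an unsolved research problem.

```python
def model_own_preferences(agent):
    # "...as you would have them do unto you" presupposes a formal
    # model of the agent's own preferences -- an open problem.
    raise NotImplementedError("requires a theory of preferences")

def simulate_other_perspective(other, situation):
    # "Do unto others..." presupposes predicting how an act lands on
    # someone else, i.e. machine theory of mind.
    raise NotImplementedError("requires a theory of mind")

def choose_action(preferences, perspective):
    # Turning those two models into a concrete act is itself an
    # unsolved planning problem.
    raise NotImplementedError("requires a planner over moral outcomes")

def golden_rule(agent, other, situation):
    """'Do unto others as you would have them do unto you.'"""
    wanted = model_own_preferences(agent)
    their_view = simulate_other_perspective(other, situation)
    return choose_action(wanted, their_view)
```

A human hears the sentence and fills in all three stubs instantly and unconsciously; a machine has nothing to fill them with.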
Digital Wisdom ends the article by saying that detailed rebuttals of Yudkowsky and Stuart Russell as well as a design specification for ethical agents will be published in the future. Perhaps those will be better. Based on what I’ve seen so far, I’m not particularly hopeful.