Profundis: “Crystal Society/Crystal Mentality”

Max Harms’s ‘Crystal Society’ and ‘Crystal Mentality’ (hereafter CS/M) are the first two books in a trilogy which tells the story of the first Artificial General Intelligence. The titular ‘Society’ are a cluster of semi-autonomous sentient modules built by scientists at an Italian university and running on a crystalline quantum supercomputer — almost certainly alien […]

Peripatesis: E-Governance; Lighting Up The Dark; Regulating Superintelligences.

Nestled in the cold reaches of Northern Europe, Estonia is doing some very interesting things with the concept of ‘e-governance’. Its small population, short modern history, and smattering of relatively young government officials make experimenting with Sovereignty easier than it would be in, say, the United States. The process of starting a business and paying taxes in […]

Is Evolution Stoopid?

In a recent post I made the claim that evolution is a blind, stupid process that does what it does by brute-forcing through adjacent regions of possibility space with a total lack of foresight. When I said this during a talk I gave on superintelligence, I was met with some resistance along the lines of ‘calling evolution stupid is a mistake […]
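As a toy illustration of the kind of blind, foresight-free search described above, here is a minimal sketch in the spirit of Dawkins’ ‘weasel’ program. The target string, alphabet, fitness function, and mutation rate are all invented for illustration and are not drawn from the post itself:

```python
import random

# Toy sketch of blind variation plus selection: no model of the landscape,
# no foresight, just random perturbation of the current genome and keeping
# whatever happens to score at least as well.

TARGET = "methinks it is like a weasel"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(genome: str) -> int:
    """Count positions that match the target (the 'environment')."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome: str, rate: float = 0.05) -> str:
    """Blindly perturb each position with small probability."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else ch
        for ch in genome
    )

genome = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
for generation in range(100_000):
    candidate = mutate(genome)
    # Selection is the only filter: keep the candidate if it scores at least as well.
    if fitness(candidate) >= fitness(genome):
        genome = candidate
    if genome == TARGET:
        print(f"Reached the target string at generation {generation}")
        break
```

The point of the sketch is that nothing in the loop ‘knows’ where it is going; it only ever samples adjacent variants and retains the ones the environment rewards.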

Whither Discontinuity?

I’m writing a series of posts clarifying my position on the intelligence explosion hypothesis, and today I want to discuss discontinuity. This partially addresses the ‘explosion’ part of ‘intelligence explosion’. Given that most developments in the history of the universe have not been discontinuous, what reason do we have to suspect that an AI takeoff […]

Takeoff Speed II: Recalcitrance in AI Pathways to Superintelligence

I’m writing a series of posts clarifying my position on the intelligence explosion hypothesis. Last time I took a look at various non-AI pathways to Superintelligence and concluded that the recalcitrance profile for most of them was moderate to high. This doesn’t mean it isn’t possible to reach Superintelligence via these routes, but it does […]
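For readers new to the term: ‘recalcitrance’ here presumably follows Bostrom’s framing in *Superintelligence*, where the rate at which a system improves is sketched as the optimization power applied to it divided by its recalcitrance:

```latex
\[
  \text{Rate of change in intelligence} \;=\; \frac{\text{Optimization power}}{\text{Recalcitrance}}
\]
```

On that framing, higher recalcitrance means more optimization effort is needed per unit of improvement, and hence a slower takeoff, all else being equal.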

Takeoff Speed I: Recalcitrance in Non-AI Pathways to Superintelligence

I’m writing a series of posts clarifying my position on the intelligence explosion hypothesis. Though I find the case for such an event fairly compelling, it’s far less certain how fast the ‘takeoff’ will be, where ‘takeoff’ is defined as the elapsed time from reaching roughly human-level intelligence to reaching superintelligence. Once […]