Music As Communicative Medium

While listening to an audiobook on music history, I was exposed to the idea that the Romantic artists believed music was the highest form of aesthetic expression because it acted directly on the mind, without the intermediary steps of words or images.

This is an idea I have been struggling to formulate myself, and I think it could use a little fleshing out.

Here’s where I’m at:

(1) Considered within the space of possible general communicative media (think: language, mathematics, the visual arts[1]), music is very high BANDWIDTH. It can download emotions like despair or exultation, which are surely many gigabytes in size, directly into your consciousness.

(2) But music is a very low RESOLUTION medium for all that[1]. It can only make large transfers as complete wholes. It can’t do things like proofs[2], and even something like Berlioz’s Symphonie fantastique, which tells a story, can only do so in broad strokes, and only with the help of words to communicate the narrative.

___

[1] I took pains to point out that my analysis compares music to all other information-transfer media. A friend pointed out that if it is considered solely as an emotional medium, then it is both high bandwidth and high resolution.

[2] Though there is actually at least one attempt to do just that; see David Stutz.

Peripatesis: The Kasparov Window; A Taxonomy Of AI Systems

The world is abuzz with discussion of Google’s AlphaGo program beating the Korean Go player Lee Sedol, considered by many to be one of the best players on Earth, in the first three games they played. Lee was able to take the fourth game, however, by deliberately playing lower-probability moves that managed to confuse the AI.

In a Facebook post, Eliezer Yudkowsky coined the term “Kasparov Window” to describe a range of systems with superhuman abilities that nevertheless have flaws that human players can discover and exploit. Pondering this concept, I had a different idea:

Say you had a way of measuring how “unintuitive” a given move is for a human player. That is, if a move is minimally unintuitive, it can reasonably be assumed that even a novice player would make it in the same situation; if a move is maximally unintuitive, it can reasonably be assumed that not even an expert player would make it in the same situation.

Using this measure, might it be possible to calibrate AI systems to gradually introduce more and more unintuitive moves into a game? If so, it seems like you might be able to train the best human players to become even better by getting them to think way outside the box.
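To make that concrete, here’s a minimal sketch of the selection step, in Python. Everything in it is an assumption on my part: `p_human` would have to come from something like a policy model trained on human games, and the value tolerance and threshold schedule are made-up knobs.

```python
import math

def unintuitiveness(p_human: float) -> float:
    """Surprisal of a move under a model of human play: near zero for
    a move even a novice would make, large for a move not even an
    expert would consider. p_human is the model's estimate of the
    probability that a human plays this move in this position."""
    return -math.log(max(p_human, 1e-12))

def pick_training_move(candidates, threshold):
    """From the engine's near-optimal moves, play the most unintuitive
    one whose surprisal still falls below the current threshold.

    candidates: list of (move, engine_value, p_human) tuples.
    threshold:  surprisal cap, raised gradually over a training run so
                the student sees stranger and stranger strong moves."""
    best_value = max(value for _, value, _ in candidates)
    strong = [c for c in candidates if c[1] >= best_value - 0.02]
    eligible = [c for c in strong if unintuitiveness(c[2]) <= threshold]
    if not eligible:  # every strong move is above the cap: fall back to the best move
        return max(strong, key=lambda c: c[1])[0]
    return max(eligible, key=lambda c: unintuitiveness(c[2]))[0]
```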

And if you used a similar technique with something like an automated theorem prover, you might also be able to get skilled human mathematicians to produce proofs that a human normally wouldn’t be able to produce because such a proof simply wouldn’t occur to them.

One problem with these scenarios is that it may be feasible to train humans in this way, but it may simply not be possible to extend the range of what counts as “intuitive for a human” very far. So Lee Sedol learns to make some unintuitive moves, but the quality of his gameplay increases only slightly.

Another problem is that the training may turn out to be feasible, but AI technology simply progresses so rapidly that there just isn’t any point.

***

(NOTE: the following is all highly speculative and not researched very well.)

In a recent blog post on domain-specific programming languages, author Eric Raymond made a distinction between the kinds of problems best solved through raw automation and the kinds best solved by making a human perform better.

This gave me an idea for a 4-quadrant graph that could serve as a taxonomy of various current and future AI systems. Here’s the setup: the horizontal axis runs Expert <–> Crowd and the vertical axis runs Judgment Enhancement <–> Automation.

Quadrant one (Q1) would contain quintessential human judgment amplifiers, like the kinds of programs talked about by Shyam Sankar in his TED talk or the fascinating-but-unproven-as-far-as-I-know “Chernoff faces”.

In Q2 we have mechanisms for improving the judgments of crowds. The only example I could really think of was prediction markets, though I bet you could make a case for market prices working as exactly this sort of mechanism.

In Q3 we have automated experts, the obvious example of which would be an expert system or possibly a strong artificial general intelligence.

And in Q4 we have something like a swarm of genetic algorithms evolved by making random or pseudo-random changes to a seed code and then judging the results against some fitness function.
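For concreteness, here’s the whole grid as a toy Python data structure. The axis and system names are just my own labels, not anyone’s established terminology.

```python
from enum import Enum

class Source(Enum):               # horizontal axis
    EXPERT = "expert"
    CROWD = "crowd"

class Mode(Enum):                 # vertical axis
    ENHANCEMENT = "judgment enhancement"
    AUTOMATION = "automation"

QUADRANTS = {
    (Source.EXPERT, Mode.ENHANCEMENT): "Q1",
    (Source.CROWD,  Mode.ENHANCEMENT): "Q2",
    (Source.EXPERT, Mode.AUTOMATION):  "Q3",
    (Source.CROWD,  Mode.AUTOMATION):  "Q4",
}

# The examples discussed here, placed on the grid.
SYSTEMS = {
    "Chernoff faces":          (Source.EXPERT, Mode.ENHANCEMENT),
    "prediction market":       (Source.CROWD,  Mode.ENHANCEMENT),
    "expert system":           (Source.EXPERT, Mode.AUTOMATION),
    "genetic-algorithm swarm": (Source.CROWD,  Mode.AUTOMATION),
}

for name, axes in SYSTEMS.items():
    print(QUADRANTS[axes], name)
```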

Now, how should we match these systems with different problem domains?

It seems to me like Q1 systems would be better at solving problems that either a) have finite amounts of information that can be gathered by a single computer-aided human, or b) are problems humans are uniquely suited to solve, like intuiting and interpreting the emotional states of other humans.

Chernoff faces, if we ever get them working right, are an especially interesting Q1 system because what they are supposed to do is take statistical information, which humans are notoriously dreadful at working with, and transform it into a “facial” format, which humans have enormously powerful built-in software for working with.
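It’s easier to see what that transformation amounts to with a toy version. This sketch assumes each variable has already been normalized to [0, 1], and the mapping of variables to features (face width, eye size, smile, brow slant) is arbitrary, which as I understand it is one of the unsolved design questions with Chernoff faces.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Arc, Ellipse

def chernoff_face(ax, row):
    """Draw one face from a row of four values in [0, 1]:
    face width, eye size, smile (0 = frown, 1 = grin), brow slant."""
    face_w, eye_size, smile, brow = row
    ax.add_patch(Ellipse((0.5, 0.5), 0.5 + 0.4 * face_w, 0.9, fill=False))
    for x in (0.38, 0.62):                        # left and right eye
        r = 0.05 + 0.08 * eye_size
        ax.add_patch(Ellipse((x, 0.6), r, r, fill=False))
        ax.plot([x - 0.06, x + 0.06],             # brow above each eye
                [0.72 - 0.04 * (brow - 0.5), 0.72 + 0.04 * (brow - 0.5)], "k-")
    # Mouth: bottom arc of an ellipse for a smile, top arc for a frown.
    t1, t2 = (200, 340) if smile >= 0.5 else (20, 160)
    ax.add_patch(Arc((0.5, 0.3), 0.3, 0.05 + 0.3 * abs(smile - 0.5),
                     theta1=t1, theta2=t2))
    ax.set_xlim(0, 1); ax.set_ylim(0, 1)
    ax.set_aspect("equal"); ax.axis("off")

rows = np.random.rand(4, 4)   # e.g. four patients, four normalized statistics each
fig, axes = plt.subplots(1, 4, figsize=(8, 2))
for ax, row in zip(axes, rows):
    chernoff_face(ax, row)
plt.show()
```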

Q2 systems should be used to solve problems that require more information than any single human can work with. Prediction markets are meant to use the profit motive to incentivize human experts to incorporate as much information as they can, as honestly as they can; over time there are enough rounds of updates that the system as a whole produces a price containing the aggregate wisdom of the individuals who make it up (at least, I think that’s how they work).
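The aggregation step is easier to see in code. Here’s a minimal sketch of one standard market-maker design, Hanson’s logarithmic market scoring rule (LMSR); the liquidity parameter and the trades below are invented for illustration.

```python
import math

class LMSRMarket:
    """Binary prediction market run by an automated market maker using
    Hanson's logarithmic market scoring rule. The instantaneous price
    of the YES share doubles as the market's aggregate probability."""

    def __init__(self, b=100.0):
        self.b = b         # liquidity: higher b means prices move more slowly
        self.q_yes = 0.0   # outstanding YES shares
        self.q_no = 0.0    # outstanding NO shares

    def _cost(self):
        return self.b * math.log(math.exp(self.q_yes / self.b)
                                 + math.exp(self.q_no / self.b))

    def price_yes(self):
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares):
        """Charge a trader the cost difference for `shares` YES shares
        (negative = selling). Buying pushes the consensus probability up."""
        before = self._cost()
        self.q_yes += shares
        return self._cost() - before

market = LMSRMarket(b=100.0)
for trade in (30, 10, -15, 25):   # each trade encodes one trader's private info
    market.buy_yes(trade)
    print(f"price after trade: {market.price_yes():.3f}")
```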

Why can’t we have a prediction market that performs heart surgery? Because a huge amount of the relevant information is “organic”, i.e. muscle memory built up over dozens and eventually hundreds of similar procedures. This information isn’t written down anywhere, and thus can’t be aggregated and incorporated into a “bet” by a human non-surgeon.

Based on some cursory research, my example of a Q3 system, the expert system, appears to be subdivided into a knowledge base and an inference engine. I’d venture to guess that expert systems are suitable wherever knowledge can be gathered and encoded in a way that allows computers to perform inferences and logical calculations on it. Wikipedia’s article contains a chart detailing some areas where expert systems have been used, and also points out that one drawback of expert systems is that they are unable to acquire new knowledge.

That’s a pretty serious handicap, and it places further limits on the types of problems a Q3 system could solve.
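Still, the basic knowledge-base-plus-inference-engine split needs surprisingly little machinery. Here’s a toy forward-chaining engine; the medical “rules” are invented for illustration, and note that nothing in the loop ever adds a new rule, which is exactly the drawback above.

```python
# Knowledge base: a set of known facts plus if-then rules.
facts = {"fever", "cough"}
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known, adding
    its conclusion to the fact set, until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))   # e.g. {'cough', 'fever', 'flu_suspected'}
```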

Finally, Q4 systems are probably the strangest entities we’ve discussed so far, and the only examples I’m familiar with come from the field of evolvable hardware. IIRC, using evolutionary algorithms to evolve circuits yields workable designs that no human engineer would’ve thought of. That has to be useful somewhere, if only when trying to solve an exotic problem that’s stymied every attempt at a solution, right?
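I can’t reproduce the evolved-circuit results here, but the skeleton of the technique fits in a page. This toy version evolves a small gate network until it computes XOR; the genome encoding and the (1 + λ) loop are just one simple choice among many.

```python
import random

OPS = {"AND":  lambda a, b: a & b,
       "OR":   lambda a, b: a | b,
       "NAND": lambda a, b: 1 - (a & b)}

def evaluate(genome, x0, x1):
    """genome = list of (op, src1, src2) gates; sources index into the
    two inputs followed by earlier gate outputs. The last gate is the
    circuit's output."""
    signals = [x0, x1]
    for op, s1, s2 in genome:
        signals.append(OPS[op](signals[s1], signals[s2]))
    return signals[-1]

def fitness(genome):
    # Fraction of input rows where the circuit matches XOR.
    return sum(evaluate(genome, a, b) == (a ^ b)
               for a in (0, 1) for b in (0, 1)) / 4.0

def random_gate(position):
    n_sources = 2 + position   # the two inputs plus all earlier gates
    return (random.choice(list(OPS)),
            random.randrange(n_sources), random.randrange(n_sources))

def mutate(genome):
    child = list(genome)
    i = random.randrange(len(child))
    child[i] = random_gate(i)   # rewire one randomly chosen gate
    return child

# (1 + λ) evolution: keep the fittest of the parent and its mutants.
best = [random_gate(i) for i in range(4)]
while fitness(best) < 1.0:
    best = max([mutate(best) for _ in range(20)] + [best], key=fitness)
print(best)   # a gate list computing XOR, often an odd-looking one
```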

Chords and Colors

I’ve been fascinated by synesthesia ever since it occurred to me that not everyone’s senses are as promiscuous as mine. It’s always been very visual for me. Sounds, letters, ideas, and tastes often present themselves with colors, textures, shapes, and sizes when I think about them.

I don’t experience synesthesia out-in-the-world, only in my head. In other words, when I read a book the letters on the page don’t have any color, but if I think about a letter, or listen to a song, very often it’ll have a color or a size. This happens most strongly with music, generally speaking.

But I also remember that I almost got in trouble around the age of 3 or so because I was explaining to an older girl what size and shape various swear words were.

This also seems to be rooted pretty deeply in my thought process, and has fueled my poetry and music (more on that later). It’s very difficult to watch your own mind think, but I’ve noticed that a small percentage of my thought is a stream of incomplete sentences and a bigger part is images related to those sentences. Another significant chunk, however, is just weird shapes moving and changing and banging into each other. I might be trying to think about something like “justice” or “anarchy” or “rationality”, and what I see is a strange chalky prism morphing into a sphere and then shooting off to the right, leaving a colorful trail of liquid smoke. It doesn’t make any sense, but I just somehow know that what I’m seeing is a thought related to justice.

It’s like the machine language of my mind is a polychromatic rainbow, and there is a synesthetic compiler in my unconscious which has to paint speech and thoughts before my brain can do anything with them.

This is kind of crazy when I think about it. My sentences come out orderly, but between my ears it looks like Jimi Hendrix, Walt Disney, and Wassily Kandinsky are locked in a room with painting supplies and the collected works of Euclid, trying to make a video game together.