This post marks the first time in a long while that I’ve managed to write an update before month’s end! My goals remain wildly optimistic; I didn’t finish AIMA this month, but I did get through a solid four or five chapters, and learned a lot in the process.
This spread of chapters covered topics such as the use of Markov chain Monte Carlo methods to reason under uncertainty, the derivation of Bayes’ Rule, building graphical networks for calculating probabilities and making decisions, the nuts and bolts of simple speech-recognition models, fuzzy logic, basic utility theory, and basic game theory.
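To give a flavor of the material, Bayes’ Rule itself fits in a few lines of code. This is just a sketch with invented numbers (the classic rare-condition-and-imperfect-test setup), not anything taken from the book’s exercises:

```python
def posterior(prior, likelihood, false_positive_rate):
    """P(hypothesis | evidence) via Bayes' Rule.

    P(H|E) = P(E|H) * P(H) / P(E), where P(E) is expanded by the law
    of total probability over H and not-H.
    """
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: a 1% prior, a test that is 90% sensitive
# with a 5% false-positive rate. The posterior is only about 15% --
# the standard illustration of why base rates matter.
p = posterior(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
print(round(p, 3))  # 0.154
```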
Since I’ve been reading about AI for years, I’ve come across terms like ‘utility function’ and ‘decision theory’ innumerable times, but until now I never had a firm idea of what they meant in a technical sense. Having spent time staring at the equations (without exactly comprehending them…), my understanding is now much fuller.
I consider this a species of ‘profundance’, a word I’ve coined to describe the experience of having a long-held belief suddenly take on far more depth than it previously held. To illustrate: when you were younger, your parents probably told you not to touch the burners on the stove because they were hot. No doubt you believed them; why wouldn’t you? But it’s not until you accidentally graze one that you realize exactly what they meant. Although you mentally and behaviorally affirmed that ‘burners are hot and shouldn’t be touched’ both before and after you actually touched one, in the latter case there is now an experience underlying that phrase which didn’t exist before.
In a similar vein, it’s possible to have a vague idea of what a ‘utility function’ is for a long time before you actually encounter the idea as mathematics. It’s nearly always better to acquire a mathematical understanding of a topic if you can, so I’m happy to have finally (somewhat) done that.