I notice that I am unmotivated.
I dabble in rationality, sometimes being very excited, confident, and ambitious about learning new skills. But I also go through long stretches of grinding Diablo III instead. I was recently referred to Agenty Duck's blog, and was once again (probably temporarily) invigorated with the autodidactic spirit. Normally in these instances I just jot a few notes down on a digital note pad, and then delete them once I get tired of seeing them every day. This time, I wanted to jot them somewhere more permanent.
The specific post was this one, which attempts to project rationality skills on two coupled axes: easy/difficult, and quick/slow. I never met a two-axis projection I didn't like, and I like this one. What caught my attention in particular was Brianne's comments about Bayesian reasoning, which she sees as something very difficult to acquire. Strangely, this is the only rationality skill I feel I have down.
I approached rationality through a fundamental understanding of Bayes' theorem. I was idly discussing a simple experiment with a colleague. Suppose we have a system which can be in State A or State B, which are exhaustive and mutually exclusive. The chance of being found in State A is \gamma (I use LaTeX markup because it's easy for me to read, sorry). By observing the system *once* and finding it to be in State A, what is our estimate of \gamma? I noticed that I was confused. There seemed to be no obvious way to do this according to what I knew about statistics, and yet nothing *magical* happens as we transition from few observations to many, for which the estimation is straightforward. My graduate advisor at the time just laughed and said "good luck."
It turns out that this calculation is trivially easy using Bayes' rule (with the usual caveat that one must determine a prior). I borrowed and eventually bought Jaynes' book (I was familiar with his statistical mechanics work), and read it cover to cover. That was in 2010, and from there I found Less Wrong. From there and elsewhere, I began fitting everything I learned into a Bayesian worldview. I was uninterested at first in cognitive bias, and I still only know the basics of rational decision theory. My interest has always been in toy problems that we can solve exactly, since this is what my physics training taught me to do. It's the way I understand the world: break it up into easy pieces. I am confident in claiming that Bayes has transformed my worldview entirely.
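To make the toy problem concrete: the post leaves the prior open, but if we assume (my choice, not the original's) a uniform Beta(1, 1) prior on \gamma, the conjugate update is a one-liner. One observation of State A takes Beta(1, 1) to Beta(2, 1), whose mean is 2/3 — a minimal sketch, not a definitive treatment:

```python
from fractions import Fraction

def posterior_params(a, b, successes, trials):
    """Conjugate Beta-Bernoulli update: Beta(a, b) prior plus
    `successes` A-observations in `trials` total observations
    yields a Beta(a + successes, b + trials - successes) posterior."""
    return a + successes, b + (trials - successes)

# Uniform Beta(1, 1) prior (an assumption), one observation, found in State A.
a, b = posterior_params(1, 1, successes=1, trials=1)
posterior_mean = Fraction(a, a + b)
print(a, b, posterior_mean)  # Beta(2, 1); posterior mean 2/3
```

The same update handles the many-observation limit smoothly — with large counts the posterior mean approaches the frequency successes/trials, which is why nothing magical happens in the transition.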
What comes to me with great difficulty is implementation of high-level rationality, and understanding it in the context of everyday experience. I see these problems as far too complicated to solve, like trying to find the wave function of the electrons in some complicated lattice. In physics, statistical mechanics comes to the rescue, and we find that we can talk about macroscopic variables to describe a microscopic system. I'm not sure there's a parallel in rationality, and I end up ignoring the problem and moving on to an easier one.
Agenty Duck's blog post was encouraging, then. It seems that at least some people find it relatively easy to make headway in these areas. I want to find some exercises that will lead to easy successes at first, so I'll be encouraged to continue.
If I continue this blog, it will at least force me to write down what I'm thinking. It's possible that writing will encourage me to continue learning. As a more distant possibility, I might try collecting some of the ideas written here for other people's benefit, and making a new, more organized rationality blog.
PS: Seriously, check out Agenty Duck. In a half hour of reading, I found at least three ideas that I can immediately implement. The writing is extremely lucid and concise. Bravo.