The rationalist community started with the idea of rationality as a martial art – a set of skills you could train in and get better at. Later the metaphor switched to a craft. Art or craft, parts of it did get developed: I remain very impressed with Eliezer’s work on how to change your mind and everything presaging Tetlock on prediction.
But there’s a widespread feeling in the rationalist community these days that this is the area where we’ve made the least progress. AI alignment has grown into a developing scientific field. Effective altruism is big, professionalized, and cash-rich. It’s just the art of rationality itself that remains (outside the usual cognitive scientists who have nothing to do with us and are working on a slightly different project) a couple of people writing blog posts.
Part of this is that the low-hanging fruit has been picked. But I think another part was a shift in emphasis.
Martial arts does involve theory – for example, beginning fencers have to learn the classical parries – but it’s a little bit of theory and a lot of practice. Most of becoming a good fencer involves either practicing the same lunge a thousand times in ideal conditions until you could do it in your sleep, or fighting people on the strip.
I’ve been thinking about what role this blog plays in the rationalist project. One possible answer is “none” – I’m not enough of a mathematician to talk much about the decision theory and machine learning work that’s really important, and I rarely touch upon the nuts and bolts of the epistemic rationality craft. I freely admit that (like many people) I tend to get distracted by the latest Outrageous Controversy, and so spend way too much time discussing things like Piketty’s theory of inequality which get more attention from the chattering classes but are maybe less important to the very-long-run future of the world.
Any argument in my own defense is entirely post hoc. But if I can advance such an argument anyway, it would be that this kind of thing is the endless drudgery of rationality training, the equivalent of fighting a thousand bouts and honing your reflexes. Controversial things are, at least, hard problems. There’s a lot of misinformation and conflicting interpretations and differing heuristics and compelling arguments on both sides. Figuring out what’s going on with Piketty is good practice for figuring out what’s going on with deworming, and so on.
Looking back on the Piketty discussion, people brought up questions like “How much should you discount a compelling-sounding theory based on the bias of its inventor?” And “How much does someone being a famous expert count in their favor?” And “How concerned should we be if a theory seems to violate efficient market assumptions?” And “How do we balance arguments based on what rationally has to be true, vs. someone’s empirical but fallible data sets?”
And in the end, I think we made a lot of progress on those questions. With the help of some very expert commenters, I resolved a lot of my confusions and changed some of my conclusions. That not only gives me a different view of Piketty, but – I hope – long-term trains my thought processes to better understand which heuristics and generators-of-heuristics are reliable in which situations.
Last year, I had a conversation with a friend about how we should think about the latest round of scientific results. I said that over the past few years I’d learned to trust science more; he said he’d learned to trust science less. We argued about it for a while, and in the end I think we basically had the same insights and perspectives – there are certain situations where science is very definitely trustworthy, and others where it is very definitely untrustworthy. Although I could provide heuristics about which is which, they would be preliminary and much worse than the intuitions that generated them. I live in fear of someone asking something like “So, since all the prominent scientists were wrong about social priming, isn’t it plausible that all the prominent scientists are wrong about homeopathy?” I can come up with some reasons this isn’t the right way to look at things, but my real answer would have to sound more like “After years of looking into this kind of thing, I think I have some pretty-good-though-illegible intuitions about when science can be wrong, and homeopathy isn’t one of those times.”
I think that by looking at a lot of complicated cases, and checking back on them after they’re solved (which sometimes happens! Just look at the Fermi Paradox paper from earlier this week!), we can refine those intuitions and get a better idea of how to use the explicit textbook rationality techniques. If this blog still has value to the rationalist project, it’s as a dojo where we do this a couple of times a week and absorb the relevant results.
This is one reason I’m so grateful for everyone’s comments. I only post a Comments Highlights thread every so often, but I’m constantly making updates based on things I read there and getting a chance to double-check which of the things I think are right or wrong. This isn’t just good individual rationality practice, it’s also community rationality practice, and so far I’m pretty happy with how it’s going.