Some Clarifications On Rationalist Blogging

1. According to the survey, only 13% of SSC commenters identify as rationalists. Almost none of the rationalists I know IRL comment on SSC. Saying “rationalist community” when you mean “SSC comments section” or vice versa will leave everybody pretty confused.

2. Not every blog by a Christian is “a Christian blog”, and not every blog by a rationalist is “a rationalist blog”. I would hope blogs by Christians don’t go around praising Baal, and I try to have some minimum standards too, but I don’t want to claim this blog is doing any kind of special “rationality” work beyond showing people interesting problems.

3. Or consider the difference between a church picnic and a monastery. Both have their uses, and the church picnic will hopefully avoid praising Baal, but there’s a limit to how Christian!virtuous it can get without any structure or barriers to entry. A monastery can do much better by being more selective and carefully planned. Insofar as SSC makes any pretensions to being “rationalist”, it’s a rationalist picnic and not a rationalist monastery.

4. Everything above applies to SSC’s engagement with effective altruism too, except 100x more.

5. I’ve been consistently skeptical of claims that rationality has much practical utility if you’re already pretty smart and have good intuitions and domain-specific knowledge. There might be exceptions for some domains too new or weird to have evolved good specific knowledge, or where the incentives are so skewed that the specific knowledge will optimize for signaling rather than truly good work (and maybe 99% of value is in domains like this, so maybe I’m not saying much). In any case, if rationality has much practical utility for your everyday life, you won’t find that practical utility here.

6. I’m even skeptical of claims that rationality can do things that ought to be trivial, like searching through the self-help corpus to figure out what works, and then exploiting those things to get a practical advantage consistently and at scale. I agree this sounds easy, but I’ve seen it founder too many times. In any case, if rationality can do this, you won’t find that here either.

7. If you want places that try harder and make bigger claims, your best starting points for rationality are Center for Applied Rationality, Less Wrong, and various AI safety stuff. For effective altruism, effectivealtruism.org, 80,000 Hours, Giving What We Can, and the EA Forum. There’s also your local meetup group, and various smaller or more private efforts who are welcome to leave their pitch in the comments if they want. Obviously not everyone who tries hard succeeds, and not everyone who makes big claims can back them up.

8. I’m not renouncing or trying to distance myself from the rationalist community. I still think they’re great. This post is just an attempt to clear up some common concerns I’ve run into – both about whether SSC actually is these other things, and about whether it’s falsely claiming to be these other things in a way that might mislead people. It isn’t. Sorry for any confusion this might have caused.

9. Every time I say anything more sophisticated than a fourth-grader’s strawman of an evil robot who doesn’t understand love, someone on social media interprets this as “SSC finally officially forsakes rationalism, admits rationality can never work!” If I ever want to officially forsake rationalism, or admit that being rational can never work, I promise I’ll say so in so many words. If I haven’t said this, consider that you might not know what you’re talking about.


204 Responses to Some Clarifications On Rationalist Blogging

  1. Watchman says:

    Thanks for this, which is a useful summary of your thinking.

    One thing though: a church picnic also functions as an outreach/mission device if done inclusively. Since you’re clearly inclusive here (how many churches would happily gently praise Baal in return for 87% non-community-participating attendance at their picnic?), have you not considered that you are still, deliberately or not, preaching a message that promotes rationalism, if only through your examples? I know some of your writings have made me question my thinking, and as a result I am much friendlier towards the idea of rationalism than I might have been. I doubt I’m the only one here.

  2. Bugmaster says:

    If you want places that try harder and make bigger claims, your best starting points for rationality are Center for Applied Rationality, Less Wrong, and various AI safety stuff. For effective altruism…

    You know, this is the biggest problem I have with organized movements of all kinds, from atheism to various religions to gay advocacy groups — and that includes the Rationality ™ movement. None of these groups ever stop at promoting their core mission; they all acquire a set of very specific dogmas (dogmae ?) and specific stances on specific issues that one must subscribe to in order to be a member of the movement. In case of Rationality, you can’t really be Rational ™ unless you also believe in AI risk, Effective Altruism, and possibly Cryonics — otherwise, you’re being a bad rationalist, obviously. It’s not enough to just develop and promote improved (allegedly) ways of thinking; no, one must also support specific conclusions, which are assumed to be so obviously correct as to be virtually above scrutiny.

    • Murphy says:

      unless you also believe in AI risk, Effective Altruism, and possibly Cryonics

      Some of the harshest coherent arguments I’ve seen against all of those have been ones from within the rationalsphere communities.

      (Ok I’ve not heard much coherent argument against the general *concept* of EA … but against specific goals of the various EA subgroups, absolutely.)

      • Tarpitz says:

        Surely the arguments against the concept of EA are pretty much identical to (or at any rate a superset of) arguments against consequentialism?

        • Dacyn says:

          No, EAs do not have to be consequentialists. They only have to care about outcomes, and almost all (all?) moral theories care about outcomes, even if they don’t take them to have the final say.

        • Bugmaster says:

          Personally, I do agree that if one were to donate one’s money, then one should do so in ways that are likely to have a strong impact on one’s chosen cause. However, the EA movement comes with lots of additional baggage that I can’t really endorse.

          • robirahman says:

            What’s the baggage and why do you not endorse it?

          • Tarpitz says:

            I’m not sure I can get behind “agree that one should” but I’m happy to go along with “prefer if people do”. I’m sympathetic to the basic idea of EA, but that’s a far cry from seeing it as in any ultimate sense correct.

            Many hardcore EAs incidentally have preferences I don’t share, but that’s a separate matter.

          • brad says:

            What’s the baggage and why do you not endorse it?

            The fact that the AI and to a lesser extent animal stuff is put on the same stages as malaria nets and the like is off-putting.

            Re: the AI stuff in particular, without trying to actually make a truth claim but rather conveying an impression, the confluence of raising money to save the world from a millenarian threat, unconventional sexual practices, and doctrines that are at least cousins of The Elect rings loud warning bells for me, and I assume for others.

          • Edward Scizorhands says:

            I have to concur with brad. When you are trying to save the world from AI threat, that is basically using up all your weirdness points at once.

      • FeepingCreature says:

        There’s a Tumblr post where somebody described rationalist blogging as a set of topics rather than opinions.

        > So if you post something like “I hate Eliezer, the Singularity is the rapture for nerds, and effective altruism is imperialism and we should have full communism instead”, rationalist Tumblr as a whole will conclude that you are a rationalist and trying to be friends, and will behave accordingly.

        https://thingofthings.wordpress.com/2016/04/14/useful-notes-on-tumblr-rationalist-culture/

      • peterispaikens says:

        Off the top of my head (I’ll admit that I haven’t read up on these issues) here’s an argument against the general concept of EA:

        EA sort of axiomatically assumes inherent “true altruism”, i.e. the notion of helping others the most as a terminal goal – while it could be easily argued that altruism as such is for most people inherently an instrumental goal that facilitates things like social cohesion and relationships within your community, reputation, self-image, etc, etc.

        In this regard EA is a way of optimizing for X while people actually want to optimize for Y. There’s a lot of overlap between “altruism-as-done-naturally” and “true-EA-altruism”, but once you attempt to really maximize EA-altruism you rapidly leave that overlapping area, and shifting from “natural” altruism to “more effective” altruism erodes those other results. If you switch from giving to your local community to giving to a remote poor community, you’ll get more impact but reduce the cohesion in the only community that affects you. If you take away resources from your children to save lives from malaria, you’ll save lives – and if the expenditure is trivial, that’ll be treated as a good thing from all perspectives – but if you optimize for lives saved and increase the “diversion of resources”, at some point you’ll be perceived as a bad parent by pretty much everyone.

        To summarize, EA makes sense if you actually want to optimize for the “amount of good done” and treat all the other things as mere side effects. However, since (I’ll assert without attempting to argue it) for the vast majority of people these other things are as important as (if not more important than) the actual amount of good done, all these people should not pursue EA, as it goes against their goals; their goals would be better achieved by ordinary “non-effective” altruism.

        Or, possibly, by a modification of EA that uses the same principles but attempts to maximize a *different* value function than purely the amount of “good done” or lives saved – one that takes into account things like the fact that people may want to value lives and good done differently (possibly by orders of magnitude) depending on their location and/or “tribal membership” (e.g. my family vs other families, my ethnic/religious/national/political/generation/hobby-interest/online-forum-community/whatever group vs others – IMHO it’s clear that it’d be a lie to assume that most people don’t assign major value to such differences; even if it’s currently politically expedient to assert that you value everyone equally when it’s not really true, “tribal” behavior is not only popular but AFAIK innate) and that doesn’t discount the “appearance of doing good”/PR/reputation effects to near zero. There are people who would genuinely like to do more effective charity, but under these constraints; and even for a “true EA” activist it would make sense to help “not-true-EA” people do that, as it’ll still drive “more effective” altruism while not attempting to do “most effective” altruism.

        • eh says:

          I agree with this.

          EA sometimes feels like virtue mugging, to coin a phrase, where you’re forced to choose between being maximally “effective” to the detriment of your quality of life, or thinking of yourself as a hypocritical and cynical virtue signaller.

          I pretty much stopped giving to charity a year after discovering EA, because being confronted by that choice was making me feel absolutely awful, and avoiding ineffective charitable giving is the quickest way to avoid feeling guilty. I also stopped caring about the entire continent of Africa because of the cognitive dissonance of caring but doing nothing, and in general became a less empathic and worse person.

          • Scott Alexander says:

            I’ve also dealt with these issues, and came pretty close to going the same direction you did, and wrote https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/ about some of where I ended up. Curious what you think about it.

          • eh says:

            CW: this is a pretty negative look at charity and guilt.

            The Gervais Principle introduced me to the concept of status illegibility. I haven’t read Seeing Like a State yet, so perhaps I’m slightly off the mark here, but EA seems to be about making charity legible. Most people have a vague idea that giving money to e.g. World Vision will help prevent some amount of death or suffering, but GiveWell can tell you exactly why giving your money to them isn’t very effective. With a little effort they could give you an exact lives-per-dollar comparison between World Vision and the Against Malaria Foundation.

            This has really good consequences when you’re choosing a charity, and really good consequences for kids in Africa with worms and no mosquito nets, but legibility allows you to compare absolutely anything else with charity: your plane tickets to Spain become 50% of a life each, a new car costs 5 lives, the decision to have beer and pizza after work is 0.5% of a life, paying for your brother’s rehab is 2 lives minimum, keeping grandma in her nursing home is 3 lives a month. Worse, there’s no possible way to pay enough money to save everyone from every problem in the world, even if you think you’re being a maximally effective altruist, because you’re a mere mortal and don’t have infinite time, perfect judgement, or all the information you need.

            There’s a debate to be had about what you’re morally expected to do, but it’s covered better elsewhere, so I’m going to put it aside and talk about the emotional impact. How you deal with the existence of a bottomless pit of suffering in the backyard is a pretty personal reaction, but a lot of possible reactions involve deciding you don’t care about the pit, limiting your understanding of the scope of the pit, deciding that the suffering doesn’t count, or in some other way denying what’s happening.

            The key point here is that the emotional reaction for me was a result of the legibility of charity. If you think you’ve got a moderately shallow pit of unquantified suffering, it’s easy to chuck $100 in and feel good about yourself, whereas a very very deep pit of suffering with a fixed dollar amount per QALY makes your $100 look like a complete joke – if you threw another 29 $100 notes in, then you would have saved a single person out of millions. It’s easier to change your values to allow you to ignore the pit than it is to confront the problem – and hey, you’re paying 3 lives for grandma every month, so how much more is she worth? 10 times? 20? 100? Is she infinitely more valuable to you than the lives that giving to the AMF would save? Does that mean those lives were valueless in the first place?

            It’s not about the placement of the bar at 100% of your income, or 50%, or 10%, or 2%. It’s that fleeing from the problem is far more effective at making you feel good than actually fixing it.

          • xaneth says:

            I think you might be using The Copenhagen Interpretation of Ethics, which I don’t think most people (especially rat-adjacent people) would endorse, even though many of them may subscribe to some portion of it in practice.

            PC Hodgell said “That which can be destroyed by the truth should be”. While your emotional reaction urges you to look away from the finitely-sized pit of despair in your backyard, in a way that it doesn’t with the bottomless version, it is in opposition to the way one should actually act in each of those cases (in fact, I think maybe they should be flipped – despair is a perfectly reasonable reaction to a bottomless pain pit, but a finitely-sized one is at least solvable, eventually). I realize that acknowledging your emotional reactions to situations and properly dealing with them is just as important a part of rationality as anything else, but I think it’s better to actually fix the bad ones in the end.

          • LesHapablap says:

            eh,

            That reminds me a bit of the bias in paying off individual debt: it is always a better idea to pay off the portion of your debt with the highest interest rate first. However, almost everyone, including people with experience in finance, will pay off small debts first just to clear them off the board, even if those small debts have a lower interest rate.
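
            (A small illustrative sketch of this point, in Python – the balances, rates and monthly payment are made-up numbers chosen only for the example: paying the highest-rate debt first, the “avalanche”, ends up costing less in interest than clearing the smallest balance first, the “snowball”, even though the snowball feels better.)

```python
def total_interest(debts, payment, order_key):
    """Simulate monthly payments; debts is a list of (balance, annual_rate) pairs."""
    debts = [list(d) for d in debts]
    interest_paid = 0.0
    while any(balance > 0 for balance, _ in debts):
        for d in debts:                               # accrue one month of interest
            charge = d[0] * d[1] / 12
            d[0] += charge
            interest_paid += charge
        budget = payment                              # then pay, in the chosen order
        for d in sorted(debts, key=order_key):
            pay = min(budget, d[0])
            d[0] -= pay
            budget -= pay
    return interest_paid

debts = [(1_000, 0.05), (8_000, 0.20)]                # hypothetical (balance, annual rate)
payment = 500                                         # hypothetical total paid per month

avalanche = total_interest(debts, payment, lambda d: -d[1])   # highest rate first
snowball = total_interest(debts, payment, lambda d: d[0])     # smallest balance first
print(f"avalanche interest: ${avalanche:.0f}   snowball interest: ${snowball:.0f}")
```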

          • Frederic Mari says:

            I would go further and push back on the very idea that altruism is good.

            Some cynics I know point out that charities are mostly scams (spending an inordinate % of funds raised on themselves/on further fundraising) and that charity workers can be fairly selfish/exploitative/very into virtue signalling.

            But even more brutal – is helping poor people in Africa a good idea? If you save a child from malaria but haven’t got a solution to give him an education or a job, what have you done but create another person who will never be autonomous, and thus quite susceptible to all kinds of bad politics, from tribalism to fascism to terrorism?

            Which reverses Scott’s opinion – Altruism may save more people than politics in the short run but saving the greatest quantity of people is a bizarre objective when there’s no shortage of people (and indeed I could easily argue we’re suffering from overpopulation) while politics, by trying to address the quality of life of whoever happens to be around, is the most important subject to tackle.

          • Ozy Frantz says:

            I’m pretty sure if it were your kid that was dying you would suddenly have way fewer concerns about whether saving their life is ~truly a good thing~.

          • Frederic Mari says:

            Ozy Frantz : Yes, of course. And, if my kid was dying, I would not even care where the help came from or what motivated it. I would just weep with gratefulness. Assuming I wasn’t emotionally drained and destroyed by my fears for my kid’s life.

            But that’s a personal, emotional reaction (not to mention that I feel, rightly or wrongly, pretty responsible for whatever happens to my kids. Though not as much as their mother… 🙂 )

            But that’s not a rational, God’s-eye level of policy setting. At that level, even if we don’t have to ignore them, we cannot let emotions dictate our conduct. At that level, human beings are just numbers, to be managed as best we can – I call that maximizing utility within constraints (including things like fundamental rights – you cannot reduce people to feeding paste, regardless of how utility-maximizing that might be).

          • Aapje says:

            @Ozy Frantz

            Appealing to ingroup bias for the outgroup in the way you do requires a belief that the ingroup-outgroup distinction is bad and that ingroup bias should be extended to the outgroup, rather than the ingroup being treated more like the outgroup.

            What you are doing seems to be the basic system of virtue mugging: appealing to a standard of care that is much more manageable when people restrict its scope*.

            * Perhaps with good reason, as care for people who are close is more often reciprocated and allows for easier prevention of exploitation.

          • brianmcbee says:

            Although I understand that feeling, wouldn’t the same apply to doing *any* research about how effective a charity is? The US has Charity Navigator, for instance, in which you can see what percentage of their money goes to fundraising, administration and the like. It seems like perfect ignorance is the only solution to this conundrum.

            EA ought (!) to be about helping you figure out how to give in connection with your personal values, not somebody else’s values.

    • deciusbrutus says:

      Also, if you’re rude to the wrong people, you get accused of Horrible Things until it’s easier to formally expel you than to explain once more that having a bad breakup with someone does not make them literally Hitler.

      • Jack Lecter says:

        Is this the Vassar thing, or is there another I don’t know about?

        Come to think of it, I don’t know that much about *that*. Maybe this isn’t the place to discuss it, but I can’t think where would be.

        (Aside from the usual mix of curiosity and concern, I need to know if I should start adding a disclaimer before I quote him.)

    • Jon Gunnarsson says:

      dogmas (dogmae ?)

      English plural is dogmas. If you want to go for the Latin/Greek version, it’s dogmata.

      • Bugmaster says:

        Ah, good point, thanks! Stigma/Stigmata, Dogma/Dogmata, I should’ve figured that out.

    • meh says:

      This probably stems from the rationalist belief that a rationalist cannot agree to disagree (https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem).

      • FeepingCreature says:

        A rational agent cannot agree to disagree with another rational agent. Rationalists are ofc very far from rational agents.

        • RC-cola-and-a-moon-pie says:

          Well, don’t you also have to know the same facts, and have full knowledge of the other’s beliefs? That seems to be a much more significant limiting factor than the rationality of the thinkers. People typically have learned different facts. Obviously if it were possible in practice to sort through each other’s entire inventory of assumed factual predicates then you could eventually reach a point where Aumann could come into play but no sane person would think that could ever be possible between two fully rational disagreers in the course of any single conversation. To the contrary, what happens in reality is that a rational thinker’s current beliefs channel his inquiries into areas different from the guy who started with differing beliefs, with the result that the chasm widens rather than narrows. And that is true even if the two are entirely rational.

      • kybernetikos says:

        Aumann’s agreement theorem assumes common priors, whereas I find that disagreements are usually due to differences in priors; and disentangling currently held priors may easily take more time and effort than it is rational to spend on ensuring you agree on a low-importance topic.

        Rational agents with the same priors are essentially the same rational agents.

        • Even if you start with the same priors, you may get different data thereafter. You can tell me what you saw–but I may not believe you, whereas you know whether you are telling the truth.

      • peterispaikens says:

        Whenever “let’s just agree to disagree” is said, it generally means one of two things:

        1) An implied notion that at least one of the parties is arguing in bad faith (very much unlike a rational agent), and refusal to participate in that;
        2) Asserting that the expected cost of establishing common knowledge of each other’s beliefs (the requirement for Aumann’s theorem) exceeds the expected benefit of obtaining the agreement. This is a reasonable decision in many cases both for real rationalists and theoretical perfect rational agents – the fact that they *could* come to a common understanding doesn’t mean that it’s worthwhile to actually do so, especially if the “distance” (i.e. the disjoint knowledge and models of reality) is large enough and the shared agreement isn’t actually *that* important to any of them.
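
        (For concreteness, a toy sketch of the agreement mechanism in Python – not anyone’s formulation from the thread, and the coin biases and prior are made-up numbers. Two agents share a prior over a coin’s bias and each privately observes one flip; in this simple setup announcing a posterior fully reveals the private signal, so one round of exchange is enough for the posteriors to coincide.)

```python
def posterior(prior_high, flips, p_high=0.7, p_low=0.3):
    """P(bias = p_high | flips), starting from prior_high = P(bias = p_high)."""
    like_high = like_low = 1.0
    for f in flips:
        like_high *= p_high if f == 'H' else (1 - p_high)
        like_low *= p_low if f == 'H' else (1 - p_low)
    num = prior_high * like_high
    return num / (num + (1 - prior_high) * like_low)

prior = 0.5                          # common prior: P(bias = 0.7) = 0.5
a_signal, b_signal = 'H', 'T'        # each agent's private observation

p_a = posterior(prior, [a_signal])   # 0.70 -- announcing this reveals the 'H'
p_b = posterior(prior, [b_signal])   # 0.30 -- announcing this reveals the 'T'
print("initial posteriors:", p_a, p_b)

# After the announcements each agent can back out the other's signal and
# condition on both flips, so the posteriors coincide.
p_a = posterior(prior, [a_signal, b_signal])
p_b = posterior(prior, [b_signal, a_signal])
print("after exchange:", p_a, p_b)   # both 0.50 -- they agree
```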

        • J Mann says:

          (2) is IMHO both very important and often underappreciated.

          Empirically, since rationalists can be observed disagreeing all the time, I’m pretty confident in at least one of:

          – Aumann’s whatsit does not mean rationalists can’t agree to disagree;

          – Aumann’s whatsit is wrong; and/or

          – At least one of the concepts in the phrase “(a) rationalists (b) cannot (c) agree to disagree” is understood in a way so narrow as to make the statement useless.

          I blame EY’s stupid “Bayesian judo” story. (Unless I’ve misunderstood it, and it’s meant to be a parable about how the narrator misunderstands Aumann’s whatsit).

          • thevoiceofthevoid says:

            As far as I could ever tell, the moral of that story is “Eulering people is even more fun when you’re actually right!”

          • Dack says:

            That story didn’t sit well with me. Yeah, he totally Eulered that guy… and was so proud of it:

            I only torture them to make them stronger.

            It gives me a rather low opinion of EY.

      • pjs says:

        The theorem only holds under technical assumptions, and – so far as I know – even then it concerns agreeing about what is true rather than about what should be done (i.e. it doesn’t imply agreement about utility functions). Surely, many cases of “agreeing to disagree” involve oughts (even if sometimes a bit hidden).

      • holomanga says:

        This seems like a bit of extra complexity when the same thing is true of basically every other Tribe.

    • ShemTealeaf says:

      I think there are a wide range of “acceptable” positions on these issues, even ones that directly contradict the dogma. For instance, I think someone could be considered part of Rationality ™ even if they think that MIRI is doing more harm than good and cryonics is likely to end in horrible torture. However, if your position is more like “LOL AI is just science fiction”, it’s hard to see how your way of thinking could be compatible with Rationality ™. It’s more about your position demonstrating that you’ve thought about the issue rationally rather than that you’ve reached some particular conclusion.

      • Jack Lecter says:

        It’s an “Ideology Is Not The Movement ” thing. At least here it is, but I think on LW and the other rat/rat-adjacent blogs, too:

        “All the atheists go over by the rallying flag and get very excited about meeting each other. It starts with “Wow, you hate church too?”, moves on to “Really, you also like science fiction?”, and ends up at “Wow, you have the same undefinable habits of thought that I do!”

        They [we?] share certain undefinable (or, at any rate, undefined) habits of thought. This is probably multivariate – that is, not everyone has all the qualities in question. This place probably selects for a less concentrated mix of those qualities, but I still feel like it’s targeted at a particular set of traits.

        (The one I notice myself missing is something like ‘a burning drive to change the world’. I would really, really like to understand it- both by revealed preference and introspection- but all the things that actually fill me with revolutionary fire are too widespread to be useful targets for any of the current coalitions, and all the things that fill me with revolutionary compassion just kind of make me feel really sad and empty, and I leave to do something else.)

        [I hadn’t noticed any social pressure towards MIRI or Cryonics, and only pretty minimal, kind of unavoidable pressure towards EA. Maybe that just means I’ve been hanging out at the picnic and not the monastery, but I think I see enough of the monastery to judge that unlikely. Maybe it’s a meatspace/online thing? Or things are different in Berkeley, or New York, or wherever Bugmaster is?]

    • RC-cola-and-a-moon-pie says:

      I know I tend to be overly critical of the idea of rationalism in the sense of a “rationalist community,” but take this in a friendly, joking spirit. The thing about it is that it seems to take as its defining adjective a property that virtually everyone shares. Let’s take a poll and see who prefers to be irrational. I suppose there’s a fringe of people who reject the very idea of rational thought but among society in, say, the United States today there is not a significant group of people who don’t already think that their beliefs are rational to hold. It’s almost definitional in our society that we think our own beliefs are more rational than their opposite; otherwise we wouldn’t be holding them. It’s almost like deciding to form a “truth community,” where we all agree to believe things that are true and not to believe things that are false! Well, glad we got squared away on that important issue I guess. There’s a sense in which it could still make some sense to say that a group can call itself rationalist relative to other people insofar as the group tries to isolate and develop techniques to avoid bias and the like, but most of the things I’ve seen are trivial, the sort of thing you get just by being a decently educated person. Certainly nothing that would be adequate to entail any of the substantive claims that seem most associated with the “movement” as it is observed in the wild. I guess my sensitivity probably stems from the fact that I disagree with a number (not all, of course) of those typically “rationalist” views, but the suggestion that those like me are simply ignorant of principles fostering rationality seems kind of unfair, even if it is not an explicit dogma of the rationalist community’s stated principles. I worry that occasionally it seems like the tenor of the discussions resembles that of a group of bright, optimistic undergraduates who think they have it all figured out but haven’t been adequately exposed to deeper ideas that undermine their worldview. But now I’m the one admittedly over-generalizing. Scott Alexander, for one, doesn’t fall into that mold, as exemplified by this very post, and the discussion here is high quality.

      • soreff says:

        > Let’s take a poll and see who prefers to be irrational.

        Well… My view, after looking at all of the cognitive biases the mind is heir to, is that trying to compensate for all of them is extraordinarily difficult and time-consuming. Only the highest-stakes decisions warrant it.

      • blacktrance says:

        Let’s take a poll and see who prefers to be irrational.

        Few would prefer to be irrational 100% of the time, but people are often put off by rationality applied to some areas of life (e.g. personal or romantic relationships). Regardless of the conclusion, the weighing of costs and benefits is called calculating, soulless, or autistic.

        Also, more generally, irrationality is the default, so even if you vaguely endorse rationality, you’ll still be irrational unless you practice with the rationalist toolbox until it becomes a habit.

    • blacktrance says:

      I’m agnostic about AI risk, not signed up for cryonics, and have mixed feelings about EA, and I’ve never been made to feel that I’m a bad rationalist for it – indeed, I feel like more of a rationalist than some people who subscribe to all three of those conclusions wholeheartedly.

    • suntzuanime says:

      I really don’t think that’s the case. Yudkowsky talked like that sometimes, but the community didn’t really follow him on that. Even atheism doesn’t seem to be a dogma: there are Catholic rationalists, and people express surprise at them rather than demanding they be drummed out. In particular, “above scrutiny” doesn’t really seem to be a thing for the rationality community, except when leftists are holding a gun to their head about specific leftist dogma.

      Now, there are specific beliefs that lots of members of the rationalist community have, if you go to a rationalist meetup there will be a lot of people there that assume you’re into EA and want to talk to you about it. And that can be disappointing as a non-EA rationalist. But it’s more a matter of “you won’t fit in” than “they’ll throw you out”.

    • Scott Alexander says:

      I think this is just a hard problem.

      I think it’s impossible to have a general goal without also developing some specifics. For example, suppose a rationalist starts telling you about Bayes’ theorem, and you walk away, saying “I thought this was just about rationality! I didn’t sign up for the specific belief that Bayes theorem can make you more rational!”

      And then someone starts talking about avoiding cognitive biases, and you say “I thought this was just about rationality! I didn’t sign up for the specific belief that learning about cognitive biases is an important part of becoming more rational!”

      Or a different problem: suppose you find out most rationalists reject young-earth creationism. You say “I thought this was just about rationality, not about promoting evolution!” But hopefully any successful rationalist movement will be correlated with being more pro-evolution.

      I realize that what you’re complaining about is several steps beyond that, and that my “it’s all on a spectrum” excuse probably doesn’t help if you feel like we’re on the wrong part of that spectrum. A month or so ago I listed this as among the most important open problems in rationality: how can we combine a focus on process with the fact that the process will produce results and those are important both as a way of judging if the process is working, and for their own sake?

      In reality, I think the answer is that the ideology is never the movement, we should accept an awkward mix of process-based and results-based ideas, while trying to be very tolerant of good rationalists who arrive at unusual results. I think we’re doing well at that given how many Christians, philosophical idealists, far-leftists, far-rightists, cishumanists, and other nonstandard people are in the community without feeling too persecuted.

      • Bugmaster says:

        I understand what you are saying, but I agree with your characterization of me:

        what you’re complaining about is several steps beyond that, and that my “it’s all on a spectrum” excuse probably doesn’t help if you feel like we’re on the wrong part of that spectrum.

        You say,

        how can we combine a focus on process with the fact that the process will produce results…

        That’s fine, but I think that the current Rationality ™ community cares too little about the process, and cares too much about evangelizing the conclusions. I do agree that Rationalists are better at this than, say, Christians — but that’s not exactly a high bar to clear, what with all the holy wars throughout the ages.

        BTW, you list the Bayes Theorem as your totally inoffensive example; but actually, reading Yudkowsky’s articles on it is what caused me to lose a significant amount of respect for him personally, and his followers tangentially. His attitude seems to be “anyone who isn’t using the Bayes Theorem for everything is obviously an idiot, especially if he’s a so-called scientist, neener neener”. Yes, I understand that there are good reasons to use the Bayes Theorem, and that it’s very important; but it’s hard not to visualize Chesterton and Einstein tag-teaming Yudkowsky out of the ring, after reading that. Once again, it’s the difference between lowercase-r rationality and evangelism — though I understand that the spectrum is wide.

        • Scott Alexander says:

          “That’s fine, but I think that the current Rationality ™ community cares too little about the process, and cares too much about evangelizing the conclusions.”

          I think there is (and should be) an AI risk community that evangelizes its conclusions. I think (contingently and unfortunately) they’re mostly the same people as the rationalists, and (necessarily and unfortunately) AI risk is both much more interesting and much easier to evangelize than rationality.

          The best solution I can think of is for AI risk people to make it very clear when they’re speaking with their AI-risk-community hat on vs. their rationalist-hat, but it’s hard to see what kind of hat someone is wearing over text on the Internet.

          I think part of the problem is that rationality is not just a process rather than a result, but a goal rather than a process. If you evangelize the goal, you sound trite (obviously we should all be more rational). If you evangelize the process, you end up sounding reductionist and dumb, because there is no single process, more of just a mindset. And if you evangelize the result, you end up, uh, where we’ve already ended up.

          My solution is just to try to apply the process to a lot of different questions very publicly, and hope people can triangulate goal, process, and result from all of that, but it’s not the sort of thing you can fit on a pamphlet.

          • Bugmaster says:

            I think (contingently and unfortunately) they’re mostly the same people as the rationalists

            This is currently the case, but it doesn’t have to be; just because one is thinking rationally, does not imply that AI risk is a foregone conclusion (since people’s priors are wildly different). Of course, if someone believes that AI risk is a foregone conclusion a priori, then it might be easy to conclude that everyone who doesn’t believe in it is irrational…

            AI risk is both much more interesting and much easier to evangelize than rationality.

            I disagree. The first part is purely subjective, and the second part is a bit esoteric. One good thing about rational thought is that it yields — or, at least it should yield — immediate benefits. If one is thinking rationally, one can — allegedly — spend one’s money more wisely, achieve better health, etc. This is not the case with AI risk, because there’s no short-term feedback that tells you, “cool, I’m now 3% closer to stopping the Singularity”.

        • Garrett says:

          I suspect that part of the problem is the way Bayes Theorem is used/referred to isn’t related to the way it should be used.

          Bayes Theorem is a formal way of reasoning about calculable probabilities under uncertainty. Eg. what is the best estimate of the probability of “heads” coming up for a given coin if I don’t know if it’s fair or not, given a sequence of coin tosses?

          To step away for a moment, let’s consider another area of math – static analysis used for constructing buildings, etc. The static analysis course I did could be summed up as “doing calculus until you believe that I-beams work”. Now – there’s a bunch of formal math behind everything. It’s nifty. And it leaves you a formal way to produce answers for practical problems. But if you present a drawing to a qualified civil engineer (and I am not), they will have an intuitive sense for whether a selected I-beam is large enough. If you ask, they’ll be able to give you a good guess. And if that isn’t rigorous enough, be able to calculate from first principles the answer. (Yes, in-practice there are well-defined tables for this sort of thing, but that’s because there are calculable answers and mass-produced products are only available in finite discrete sizes). In this case, the expressed intuitions are approximately equal to the calculable answer.

          Stepping back in: far too many of the problems to which “rationality” is applied or talked about are incalculable. If you are at a restaurant and given the option of chocolate cake or strawberry pie, how do you properly apply Bayes Theorem to that? Do people carry around a spreadsheet of their pairwise probability preferences for each dessert combination they might encounter? And if so, how do they modify the probabilities to take into account the particular restaurant or the recommendations they are getting for the specific desserts? That is, how should they modify their own priors given that they are being recommended the chocolate cake by their friend who likes chocolate cake, but they are at The Strawberry Emporium? Which choice will they most likely prefer? This problem is incalculable! And yet people will have some sort of “gut instinct”. But instead of calling it a guess, they claim to apply formal terms like Bayes Theorem to it. Which dilutes the public perception of the value of Bayes Theorem and turns it from a limited-purpose, rigorous mathematical tool into a general-purpose intuitive religion.
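
          (The calculable coin example above is easy to make concrete. A minimal sketch in Python, assuming a uniform Beta(1, 1) prior over the coin’s heads-probability – the prior and the toss sequence are choices made purely for illustration.)

```python
from fractions import Fraction

def beta_binomial_update(alpha, beta, tosses):
    """Posterior Beta parameters for the coin's heads-probability after the tosses."""
    return alpha + tosses.count('H'), beta + tosses.count('T')

alpha, beta = 1, 1                     # uniform prior over the bias
tosses = "HHTHHHTH"                    # 6 heads, 2 tails
alpha, beta = beta_binomial_update(alpha, beta, tosses)

posterior_mean = Fraction(alpha, alpha + beta)
print(f"posterior: Beta({alpha}, {beta}), mean = {float(posterior_mean):.2f}")
# -> posterior: Beta(7, 3), mean = 0.70
```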

          • Joseph Greenwood says:

            I think it’s worse than this. I’ve read most of Savage’s “The Foundations of Statistics,” where he outlines his (beautiful) axioms for expected utility theory. I’ve gone through the same proof presented more succinctly in Peter Fishburn’s “Utility Theory” (I think that’s the book where he reproduced the proof, anyway). I’ve read about Dutch-booking, and robust satisficing, and possibility theory, and Bayes’ Theorem. I’ve tried to read the Sequences a few times, and I enjoyed reading Harry Potter and the Methods of Rationality as it came out. And at the end of it all, I think that expected utility is a useful heuristic, and Bayes’ Theorem is a useful heuristic that can help people think about certain problems better–but that’s it. I think the whole apparatus of probability theory is too simplistic for a normatively rational agent, and worse, that a rational agent is not obliged to have weakly ordered preferences between decisions that he/she/it/ze/they/whatever could make.
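
            (As an aside, the “Dutch-booking” idea mentioned above is easy to show concretely; a tiny sketch with made-up prices: if someone’s probabilities for an event and its complement sum to more than 1, selling them both bets locks in a profit whatever happens.)

```python
def dutch_book_profit(p_event, p_complement, payout=1.0):
    """
    The agent will pay up to p * payout for a ticket that pays `payout` if its
    proposition comes true. Selling them both tickets yields a guaranteed
    profit whenever p_event + p_complement > 1.
    """
    price_collected = (p_event + p_complement) * payout
    amount_paid_out = payout           # exactly one of the two tickets pays off
    return price_collected - amount_paid_out

# incoherent beliefs: P(rain) = 0.6 and P(no rain) = 0.6
print(dutch_book_profit(0.6, 0.6))     # 0.2 guaranteed, rain or shine
```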

    • robirahman says:

      Can you describe the scrutiny you think rationalists are disregarding? I agree with the median rationalist on most of their positions, but the good thing about talking to people in this context is that they (or at least I, if not everyone) will change their minds if you provide a better case for a different position.

  3. Chris Brav says:

    any pretensions to rationality at all, it’s a rationalist picnic and not a rationalist monastery.

    At least part of the confusion for everyman is the juxtaposition of the commonly used and understood word “rationality” with the very different and more specific word “rationalist”.

  4. I’m going to disagree with you: rationality can be used to get better at something, and that something is playing video games.

    Knowing about human biases allows one to look at video games from a more objective point of view.

    For example, does Spirit of the Shark suck in Hearthstone? (I truly apologize if you don’t play Hearthstone.)

    There are those who argue that Spirit of the Shark is massively overrated. Some others may say that the data is flawed, or something else. However, reading the Sequences/other rationality material will tell you that human biases are the problem, that Spirit of the Shark actually just sucks, and that the people at Vicious Syndicate and HSReplay are right.

    I find this is true for a lot of different video games, and perception of what is going on rarely matches what is actually going on.

    I apologize if video games aren’t a good subdomain and if you disagree that rationality is causing me to think like this. But incredibly smart and gifted card game players are making a basic mistake that is easy to catch with simple rationality training.

    Other specific behaviors caused by rationality that may or may not have made my life better

    1. I now offer to bet money on most disagreements, frequently causing the other person to back down.

    2. I stopped following the news, instead, I watch anime and hentai.

    3. I’ve shifted my viewpoint on a lot of issues from “I don’t have an opinion nor is having one of any use”

    4. i am studying computer science as a result of needing to get a job

    • Bugmaster says:

      But incredibly smart and gifted card game players are making a basic mistake that is easy to catch with simple rationality training.

      So you say, but I don’t really play Hearthstone, so I have no opinion of my own. However, “I’ve read the Sequences and therefore Spirit of the Shark sucks” is not a valid argument. You link to some articles listing a bunch of stats, but AFAICT none of them are making an argument regarding this specific card, one way or the other. So, right now, it’s just your word against that of the “smart and gifted card game players”; and if they have a good win/loss record, then I’m more inclined to trust them over you.

      Fair, here’s what I should have said.

        “Group 1, who is looking at 100s of thousands of games of legend-ranked players, is showing that putting Spirit of the Shark in your deck decreases your win % by over 4% compared to playing a sharkless build of the same deck.”

        The Vicious Syndicate article is making the argument pretty explicitly, and the HSReplay data, I think, stands on its own.

        Group 2 is experts at the game who draw upon their own experiences. (or really their memory of said experiences)

        • deciusbrutus says:

          That is indeed what you said, but you used so many levels of jargon that roughly nobody else in the world could understand it as such – only rationalist Hearthstone players who know how your sources get their numbers would understand it.

          And the people who disagree with the numbers even have a real reason: they are asserting that the win rate is being brought down because there are lots of people who are playing SotS suboptimally, and by extension asserting that a noteworthy fraction (~5-10%) of high-level ranked games played with SotS were lost by the player making suboptimal choices.

          There could exist a card that was so hard to play that only a few people at the top ranks could do so effectively… in theory. In practice, however, there are so few potentially optimal actions in Hearthstone that at least a tenth of a percent of players will notice and adopt a working strategy, and if playing SotS correctly (as opposed to playing it as well as the average player in high-level ranked play!) made a 5% difference in win rate, someone would have figured it out, and everyone in top ranked play would be imitating them.

          Since all the evidence explains both scenarios (SotS sucks as much as the data says it does; SotS is just very hard to play), now we have to form priors: what’s our prior that professional Hearthstone commenters understand the game and the meta better than the audience they are explaining it to?
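
          (A rough sketch of the statistical side of this, in Python. The game counts and win rates below are assumptions for illustration, not the actual HSReplay or Vicious Syndicate numbers; the point is that with samples in the hundreds of thousands a 4-point win-rate gap is far outside sampling noise – though, as noted above, no significance test can distinguish “the card is weak” from “most people play it badly”.)

```python
from math import sqrt, erf

def two_proportion_z(wins1, n1, wins2, n2):
    """z statistic for the difference between two observed win rates."""
    p1, p2 = wins1 / n1, wins2 / n2
    p_pool = (wins1 + wins2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# hypothetical: 100k games without the card at a 52% win rate,
#               100k games with the card at a 48% win rate
z = two_proportion_z(52_000, 100_000, 48_000, 100_000)
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided normal approximation
print(f"z = {z:.1f}, p = {p_value:.3g}")                 # z is about 17.9, p effectively 0
```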

    • Bugmaster says:

      It looks like you’d edited your comment after I posted my reply, so I’ve found something else to disagree with:

      i am studying computer science as a result of needing to get a job

      There’s nothing wrong with studying computer science (or anything else for that matter); and obviously having a job is essential (for anyone who’s not independently wealthy, that is). However, I’ve had a long career as a programmer, and in my personal — and, admittedly, completely anecdotal — experience, the people who see it as just a job rarely become very good at it. They can work reasonably well with sufficient supervision, but usually lack the ability (or the inclination) to come up with creative solutions on their own. The problem with programming, though, is that creative ideas are pretty much the core of the job. True, there’s always a need for some repetitive tasks and rote solutions; but computers are really good at stuff like that, and the whole point of programming is to automate such things away as quickly as possible.

      So, if you enjoy CS and/or programming, then that’s fantastic; but if you see it as just the means to a paycheck, I fear you might be disappointed.

      • Lancelot says:

        People who see any job as just a job rarely become very good at it.

      • I like programming and puzzle solving quite a lot. I also like lots of weird creative solutions to problems.

        Programming has been a lot of fun for me, and, given my family history, is a good fit for me.

      • deciusbrutus says:

        There’s a ton of programming work that is “implement this specification”.

        CNC milling machines, for example.

        • Bugmaster says:

          I’ve never programmed a CNC machine, so I’ll have to take your word for it.

          That said, I have no problem with “implement the specification” as a job… for entry-level programmers, or possibly interns. Mid-range to senior-level programmers (i.e., the well-paid ones) should be able to solve problems without someone else laying out detailed specifications for them. The really good programmers should do what the customer actually needs, not necessarily what he says he wants.

      • johan_larson says:

        The problem with programming, though, is that creative ideas are pretty much the core of the job.

        I would call this an overstatement. Yes, creativity does matter, and I could believe it matters a bit more than in other types of white collar work. Sometimes the standard solutions just won’t work, and something novel is required.

        But there is plenty of work in software development that doesn’t require much in the way of creativity. Standard solutions are often just fine. And the profession is broad enough that other virtues — tenacity, curiosity, self-discipline, sociability — have plenty of room to run.

        • Bugmaster says:

          But there is plenty of work in software development that doesn’t require much in the way of creativity.

          While this is true, such work is rapidly shrinking, as new frameworks/libraries/code-gen tools/CMSs/etc. are built to automate it away. You want to be the guy writing the tools, not the guy clicking “next” on some wizard UI, because clicking “next” doesn’t pay well at all.

          • brad says:

            Ideally yes. But for the near to medium term future there’s still a decent living to be made doing an endless series of wordpress sites. Not work I would want to do, but I’m not everyone.

          • a real dog says:

            I wholeheartedly disagree. There are certainly new tools, but between leaky abstractions, knowing which tools to use and how to glue them together, and being able to dive deep into the internals once something inevitably breaks, there is no room for the guy clicking “next”. Even junior devs need to know a lot to be able to contribute on any serious project.

            What changes is the scope. A system that would take 30 people a year in 2000 will now take 3 people a few months tops, just because there are so many well-maintained components you can stack on top of each other instead of reinventing the wheel. Unfortunately, your boss is also aware of it.

          • eric23 says:

            there are so many well-maintained components you can stack on top of each other instead of reinventing the wheel

            This has always been the case. The components used to be called “registers” or “functions”. Now they are called “libraries” or “frameworks”. What were once hard problems become easy problems, but what were once unattainable problems now become hard problems. And just like once upon a time you had to have absolute clarity regarding what the registers and pointers in your program were doing, in the future you’ll need absolute clarity regarding what the complex libraries you put together are doing. If you lack clarity on any “minor” point, it will result in a bug somewhere that you’ll probably have to figure out in order to fix. In short, the job of programming will remain about what it is now.

          • Bugmaster says:

            @eric23:

            In short, the job of programming will remain about what it is now.

            While I am somewhat convinced that this is true, I’d like to underscore the point that what programming is now (at least, outside of the entry-level tasks) is a creative endeavour that requires the programmer not merely to understand how all the building blocks work, but how to apply them in order to solve novel problems. Quite often, doing so will result in the creation of new building blocks.

        • Faza (TCM) says:

          I took Bugmaster’s statement to imply something along the lines of what I do on the programming side of my job: business, for want of a better word, comes to me with a request to “have a thing that does X” and it is my responsibility to translate this into a concept that makes sense as a program and subsequently execute it. Doing so effectively and – the perennial requirement – fast requires knowledge of the environment and the domain, plus a bit of creative thinking.

          As the projects evolve, it also entails a fair bit of architectural work, to make subsequent changes faster and easier to implement, and to keep things consistent (which is why I spend a fair bit of time on the plumbing, as it were).

          Nothing I do is remotely in the realm of computer science. It is very much computer craft. Nevertheless, I would argue that it does require the ability to visualize the solution – as a whole – before you type any code whatsoever, which to me is the very definition of creativity.

          If I didn’t like this kind of problem solving (allowing me to get good at it), I wouldn’t be able to do the job I do.

    • martinw says:

      4. i am studying computer science as a result of needing to get a job

      What’s this doing in the list? “You need a job in order to make a living” and “computer science is a lucrative career for those who have the aptitude for it” are both well-known and uncontroversial, you don’t need to discover them from first principles using special rationalist techniques.

    • Murphy says:

      I now offer to bet money on most disagreements, frequently causing the other person to back down.

      I don’t think getting the other person to back down is the best end-goal but I do admit I’ve gone the route sometimes of

      “ok, so you say you’re absolutely 100% (or so close it makes no difference) sure that X is going to happen… ok so it looks like paddypower bookmakers are offering 7/1 odds on the issue, if you’re that sure it’s basically free money. You could make a lot of money.

      So how sure are you really? would you take those odds betting some of your savings?”

      which does tend to switch people to “ok… maybe not that sure”

      I don’t so much see it as a way of winning arguments as a way of getting people to think of things as less than certainties.
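
      (A small sketch of the arithmetic behind the “free money” move, in Python. The 7/1 odds come from the comment above; the $100 stake and the claimed 99% confidence are assumptions chosen for illustration. If you really were that sure, backing the outcome at those odds would have a large expected value, so reluctance to bet is evidence you are not really at 99%.)

```python
def fractional_odds_to_prob(numerator, denominator):
    """Implied probability of an outcome quoted at numerator/denominator odds against."""
    return denominator / (numerator + denominator)

def expected_value(stake, numerator, denominator, prob_win):
    """EV of backing the outcome at fractional odds, given your own win probability."""
    return prob_win * stake * numerator / denominator - (1 - prob_win) * stake

implied = fractional_odds_to_prob(7, 1)    # 7/1 against X ~ a 12.5% implied chance of X
print(f"bookmaker's implied probability: {implied:.3f}")

ev = expected_value(stake=100, numerator=7, denominator=1, prob_win=0.99)
print(f"EV of a $100 bet at your stated confidence: ${ev:.2f}")   # about +$692
```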

    • Tatterdemalion says:

      I’ve shifted my viewpoint on a lot of issues from “I don’t have an opinion nor is having one of any use”.

      Interesting – I think the most valuable rationality insight I’ve gained has been to shift my viewpoint on a lot of issues to, or at least towards, that.

      Being right about complicated things is really hard. I don’t think I’ve gotten much better at that as a result of reading SSC, but I think I have gotten much better at not being confidently wrong, by the simple expedient of being less confident in my own judgement in areas where I’m not an expert.

      • Whoops I meant TO not FROM ugh too late to change it.

        Yes, I agree with you that going from having an opinion to going “why bother” is among the most valuable rationality insights.

    • Jiro says:

      The bet thing I consider a failure of rationality. People have reasons for avoiding bets other than irrationality. (For one thing, bets signal confidence, but they don’t signal justified confidence. For another, not all people are equally capable of absorbing the loss from a bet, so considering bets acceptable gives another advantage to the rich. And betting also signals an inability to manage money properly, given the way a lot of people in the real world bet.)

      Not to mention that bets have the same problem as “I would change my opinion if…”–since you’re not perfect, it’s possible you could lose the bet because of loopholes in the bet’s phrasing rather than because of something happening that substantially relates to the proposition you’re arguing about. Eliminating loopholes in advance is hard, and just the risk of them may be enough to make a bet a bad idea.

      Virtually all bets are made for a trivial sum of money that is <$20; the money exchanging hands is little more than a formal ritual that says "I was wrong and you were right".

      You are right in that defining an arbitration method beforehand is really important. I would say making things legible is over 50% of the value of betting. Most bets are of the form "DEFINED BY XYZ ORG ON TIME foo UST WILL VARIABLE bar BE ABOVE quex?"

        making things legible is over 50% of the value of betting.

          Closely related to something I realized early in my career as an economist. My first econ article was rejected by the JPE on the grounds that it contained no empirical test of its theory. I found ways of testing the theory, resubmitted, and it was accepted.

          The main thing I learned was not that my theory was true—the results of my tests were positive but not all that strong—but that figuring out how to test the theory forced me to think much more clearly about exactly what the theory said.

      • thevoiceofthevoid says:

        Also, I think there’s probably a widespread “anyone offering you a bet is likely a scam artist trying to part fools from their money” heuristic. Which can be quite adaptive at carnivals and busker-frequented city squares.

    • Jack Lecter says:

      I think ‘not following the news’ is a really good life choice. That alone probably puts you in the black.

      (I think it’s true that Extreme Rationality probably won’t make you a superhero, and there were some people who thought it would and need (needed?) to hear that it won’t. Similarly, if you’re looking for the most efficient way to become a superhero, it’s probably not worth the time and effort. On the other hand, if the question is just ‘does this really fun thing I do occasionally help me make slightly better decisions,’ I think the answer is ‘yes’.)

      • Freddie deBoer says:

        I haven’t read anything where the word “Trump” appears in the headline since mid 2017.

        • Jack Lecter says:

          Good for you! I can’t actually say the same, and I wish I could. Bit like going on a (good) diet.

          👍

        • Watchman says:

          You’re seriously behind in the latest developments in the game of bridge then?

    • brianmcbee says:

      I’m going to agree with you to an extent, because for the average joe, learning about biases is probably the most useful of all the stuff the rat folks talk about. Becoming aware of in-built biases and heuristics is just going to make you a better decision maker.

      AI risk? I’m glad smart people are thinking about it, and I understand and agree with their reasoning, but is that a risk for tomorrow or 100 years from now? It’s like a subduction zone earthquake, or large asteroid strike.

      Bayesian probability? Yes, terrific, but not that useful in my day to day life.

  5. ckrf says:

    What is this in response to?

    • theredsheep says:

      I assume some moron read the post(s) about The Secret of Our Success and went to town on Owning the Rats on Twitter. Something like that.

      • Reasoner says:

        If you aren’t self-critical: “The community is an ideology powered by groupthink”

        If you are self-critical: “Finally the community realizes how dumb they are”

        There’s no way to win with Twitter. Does someone know how to turn this thing off?

    • JulieK says:

      As @theredsheep says, some of Scott’s recent posts have been about the limits of rationality, with examples of situations when it would be a mistake to abandon a seemingly “irrational” tradition.
      I guess some critics have read these posts and accused Scott of totally abandoning rationality.

    • Scott Alexander says:

      Partly what Red Sheep says, partly some rationalists complaining that I’m not being as rigorous as I could be (with different people having different specific complaints) and this could dilute the rationalist brand unless I point something like this out.

      • Watchman says:

        Is making rationalism a brand actually a rational thing to do though? At the heart of rationalism is learning to think better; at the heart of a brand is a requirement to promote only thinking that is positive about the brand. If rationalists are demanding you defend a brand rather than seek to learn to overcome biases, such as adherence to a brand, then are they not abandoning rationalism in favour of a similar belief system which limits what biases you can challenge, contrary to the point of rationalism?

        • OriginalSeeing says:

          Opening nitpick: The term for the thing is rationality, not rationalism. Rationalism is its own thing and a common straw man for rationality.

          Rationalists already thought about what you describe back when the community was first starting up, and they/we have looked at it again repeatedly over time. The desire to improve, and never to diminish the practical value of rationality in order to gain side benefits, is strong among people who actually care.

          Hopefully rationality will remain resistant to threats like the one you mention by being a practical art (even if it is hard to measure) and by methodologies such as Rationality is Systematized Winning.

          Most of the rationality branding and advertising is essentially dead now anyway. Tiny amounts are directed at specific groups or individuals, but overall it isn’t really advertised at all (from what I’ve seen; maybe I’m out of the loop).

  6. Frederic Mari says:

    Maybe my experience will be relevant to this conversation.

    I kinda stumbled upon this site, and I can’t quite recall how. I’ve followed Marginal Revolution for a while (though I disagree with its authors on quite a bit of economics), I tend to like YouTube channels like “Rationality Rules” or “Genetically Modified Skeptic” and certainly I think Sam Harris, Hitchens and Dawkins are great guys and interesting to listen to (though, again, I’ll disagree on specifics with any and all of them).

    I’ve always been an intellectual/drawn to intellectual subjects and less socially adept than I’d like. I get annoyed with obviously bad reasoning/hypocrisy/nonsense too much for my own good.

    But I still don’t understand what the rationality community is. Pretty much everyone tries to be rational or thinks they are. And everybody would like to think that their beliefs are in line with the world-as-it-is. Even religious people. Indeed, especially religious people, even when they use [faith] as an argument. It’s a recognition that they can no longer argue logically, yet they feel their beliefs match the world/their experience better than faithlessness/atheism.

    So yeah. I’d like to think I’m going about thinking in a logical fashion. I enjoy back-and-forth on pretty much any topic.

    One of the articles that hooked me on this site was one of the first articles I read, the “Albion’s Seed” review. God knows how I came across it, just about a year or a year and a half ago.

    Bottom line – I’m not sure there’s something like a rationalist community that is fundamentally distinct from “people trying to reason correctly in all kinds of endeavours”. I think that what you’ve got is a couple of personalities – LessWrong, Scott Alexander etc. – drawing like-minded people into little corners of the internet/IRL and that’s that. Definitely more a picnic than a monastery.

    • Ninety-Three says:

      But I still don’t understand what the rationality community is. Pretty much everyone tries to be rational or thinks they are.

      A decade ago, Eliezer Yudkowsky wrote a popular Harry Potter fanfic which contained a link advertising LessWrong, which I will paraphrase from memory as “If you want to be as smart and insightful as Harry, read my blog posts!” This led to the Sequences, a series of blog posts which varied between implicitly and explicitly claiming that if you read and thought hard enough about cognitive biases, you would ascend into some kind of intellectual superman.

      Insofar as there is a “rationality community” rather than just a “rationality picnic”, it’s the people who still believe some of that original promise that there is a particular way to think really hard about topics which will enable you to outperform most people who think really hard about topics.

      • Frederic Mari says:

        Thanks for the background explanations. Things make more sense now.

        “people who still believe […] that there is a particular way to think really hard about topics which will enable you to outperform most people who think really hard about topics”. … seems suspiciously like an irrational belief, does it not? 🙂

        Unless they got data to back this up?

      • Jack says:

        Beautiful summary.

      • The_Grey_Rook says:

        This chronology is confused. HPMOR was finished in 2015, not a decade ago, and only began in 2010, four years after Yudkowsky started the posts that would become the sequences on overcomingbias.com. Yudkowsky’s, Bostrom’s, and Hanson’s ideas attracted substantial interest in the 2000s, and lesswrong.com was formed in 2009, becoming a popular hub for discussion of Yudkowskian topics. The loose cluster of people and ideas that form the rationalist community already existed at that point, before HPMOR even started, much less gained a substantial readership.

        HPMOR brought more readers to lesswrong, but the extent is I think often overstated. Almost all of lesswrong’s important contributors already knew about it before HPMOR, and in a hypothetical world where HPMOR is never written, I doubt the history of this corner of the internet changes much.

        • Jack says:

          HPMOR brought more readers to lesswrong, but the extent is I think often overstated.

          It strikes me that this is a question with an answer. Anyone have any data either way?

          • Ninety-Three says:

            Based on Lesswrong surveys, about a third of the users there had read the whole thing and another sixth had read some of it, so at most that much.

            But I admit I could’ve been clearer: I was using HPMOR because I think it’s a good way of illustrating the rationalist community’s pitch, rather than trying to paint it as the core of the community.

          • Enkidum says:

            Anecdata: I read quite a bit of lesswrong I’d assume around 2010ish, after it was well established and active, but have never read HPMOR, mostly because the whole idea sounds inherently cheesy to me.

          • Garrett says:

            For n=1, I wasn’t familiar with the rationality movement until I encountered HPMOR. I thought it was great until about the 11th chapter, when it started getting into the idea that you could predict the actions of another person you didn’t know very well by mirroring their cognition. I mean, yes, but also no. Yes, in the sense of “I wouldn’t like someone who intentionally hit me in the head with large rocks repeatedly”. No, in the sense of “I know what to say/do to get someone to take the actions I want them to with minimal negative consequences for myself”.

            At which point it went from science-applied to fanfic into “somebody’s way too full of themselves”.

    • Bugmaster says:

      I would say that the capital-R Rationality community is different from the “people trying to think rationally” quasi-community in at least two aspects:

      1). Heavy reliance on concepts and jargon from Eliezer Yudkowsky’s Sequences. I wouldn’t go so far as to say “venerating Yudkowsky personally”, although it can feel that way sometimes.
      2). Strong concern about AI Risk, due to the belief that some form of Singularity is imminent.

      • Frederic Mari says:

        Again, thanks for the background info, it’s useful.

        2) Strong concern about AI risk.

        And you’re not the only one mentioning a couple of beliefs dearly held by that specific community.

        It’s a bit of a weird emphasis. I’m not dismissing AI risk out of hand. But it seems to me that we will be smart enough to put some safeguards in place, i.e. that’s the whole Asimov point about the Frankenstein complex vs. the laws of robotics. While I’m sure one can find examples of times when humanity neglected to take the necessary precautions, it seems to me that climate change is a more realistic risk than an AGI going rogue/taking over/whatever…

        • FeepingCreature says:

          I can’t think of a single instance of a scientific endeavour where we put adequate safeguards in place before hitting a failure mode.

        • Dacyn says:

          The concern is that a superhuman AI will be fundamentally different in many respects from a subhuman AI / an AI which is only superhuman in some respects: it may be able to self-improve rapidly to become more powerful than we could ever imagine; it may deceive us into thinking that it wants to satisfy our goals in order to satisfy its own goals (which are likely to be incompatible with ours). The most detailed arguments are given in Nick Bostrom’s book “Superintelligence”.

          FWIW I am fairly skeptical of these sorts of AI ideas, though I still consider myself a rationalist.

        • While I’m sure one can find examples of times when humanity neglected to take the necessary precautions, it seems to me that climate change is a more realistic risk than an AGI going rogue/taking over/whatever…

          Climate change is more likely than an AGI taking over. But the potential downside of an AGI taking over is much worse than any plausible outcome of global warming, given the geological record.

          • Watchman says:

            But as we don’t have a geological record for AI takeover then isn’t this actually comparing actual evidence with an assumption? Wouldn’t the climate-change comparator for the potential downside of a dominant AI as projected by some rationalists be the extreme warming scenarios postulated by some environmentalists? Both are modelled, and both are modelled seemingly on the basis of a preferred set of assumptions. That we have evidence to downplay the likelihood of one of these things should not mean that we have to assume the other is a more pressing risk.

          • sty_silver says:

            Also, even if the risks were comparable, you would have much better chances to help with X-risk from AI because far fewer people are working on it.

          • Bugmaster says:

            Well, the downsides of a demonic invasion from Hell itself (most likely through the surface of Phobos) are much worse than even those of AGI; should we start stockpiling shotgun ammo and armor shards, or what?

          • theredsheep says:

            What Bugmaster said. This vaguely reminds me of Pascal’s Wager.

        • Jaskologist says:

          The concern about AI Risk flows directly from the reliance on Yudkowsky’s Sequences. He originally wrote those precisely in order to convince people to worry about AI.

          • Bugmaster says:

            And that’s kind of my problem with the Sequences, and by extension the Rationalist movement. They claim to be all about developing habits of rational thought and eradicating mental biases… but their real goal is to get you to subscribe to a specific conclusion (which is deemed to be obviously true and thus above criticism). I don’t let Christianity get away with stuff like that, either.

        • phi says:

          Yeah, sure people will be smart enough to *want* to put safeguards in. The issue is that they may not be smart enough that they can actually *build* those safeguards. To take just one example, an “off button” would appear to be a fairly important safeguard to have for an AGI. However, designing such an off button is a surprisingly difficult task. Indeed, as far as I know, no one has ever thought of a completely satisfactory solution.

          (You can’t just use the power plug of the computer the AGI is running on as your off switch, since the AGI can send its code over the internet to be run on a different computer. See here for a few more difficulties: https://en.wikipedia.org/wiki/AI_control_problem#Kill_switch)

      • FeepingCreature says:

        Note that “imminent” seems to mean “some time between 2030 and 2100”.

        • Bugmaster says:

          That sounds pretty imminent to me; there’s a good chance I myself will be alive in 2030 (though not 2100, sadly).

    • imoimo says:

      I’ll add that for me and I suspect many people, “joining the rationality community” felt from the inside like “realizing I’ve been a lot less truth-seeking and more tribal than I thought.”

      Pointing out that everyone thinks their beliefs are in line with the world-as-it-is doesn’t contradict there being a rationalist community because rationalists will agree with you. But they’ll next point out how many people confuse the map and the territory. The refreshing feeling of noticing this distinction is a big part of the rationalist experience (at least for me).

    • Aapje says:

      @Frederic Mari

      Pretty much everyone tries to be rational or think they are.

      Sure, but many people have definitions of ‘rational’ that are not rational in the Rationalist sense or in my sense of the word.

      If someone declares that sitting on their couch and watching a sports program is exercise, are they then actually exercising or are they just wrong?

      It’s a recognition they can no longer argue logically yet they feel their beliefs match the world/their experience better than faithlessness/atheism.

      The failure to distinguish between ‘this is helpful for me to believe’ and ‘this is actually true’ is a failure of rationality.

  7. brad says:

    9. Every time I say anything more sophisticated than a fourth-grader’s strawman of an evil robot who doesn’t understand love, someone on social media interprets this as “SSC finally officially forsakes rationalism, admits rationality can never work!” If I ever want to officially forsake rationalism, or admit that being rational can never work, I promise I’ll say so in so many words. If I haven’t said this, consider that you might not know what you’re talking about.

    By way of background: I have facebook installed on my phone and every once in a while I go through and like some pictures of friends and relatives and their respective offspring. I don’t go to twitter. (What else is meant by social media — instagram I guess? Snapchat?)

    I’m not one of those people that can’t understand why people like things I don’t like. I think fishing is insanely boring but if you love it, more power to you. But if you pretty clearly hate fishing, then I don’t understand why you keep going fishing.

    Is it different if you are famous? Do you feel like you need to manage your brand on there?

    • imoimo says:

      I read that as Scott just feeling misunderstood, rather than trying to force his values on others.

  8. theredsheep says:

    I assume part of the problem with #9 is that “rational” is such a strong positive adjective that the use of it as a descriptor for one specific community comes across as insulting and arrogant–“are you saying everyone else is irrational?” I know, rational vs. /ist. But it’s like “freethinker,” a term I refuse to use because it seems to imply everyone else is a moron. People who aren’t reading closely just see some d-bag touting how clever he is and watch for him to stumble or express doubt the way dogs watch for dropped food.

    I don’t identify as a rationalist because I honestly don’t know where the boundaries around the concept lie, if there are any. I’m not an atheist, am not worried about AI, don’t think cryonics is a good idea, and find transhumanism deeply unpleasant, so I assume it’s not me. But the ideas that people should be humble and charitable in their reasoning, and that the Enlightenment produced some pretty good norms and practices which should be retained and defended, sound good to me.

    “Ratfic” seems to have a similar problem.

    • Randy M says:

      “are you saying everyone else is irrational?”

      The rationalist site name is “Less Wrong.” They are in fact saying everyone else is irrational, but at least it’s in the same way that the Christian says everyone else is a sinner–everyone else, and them too. EY started out blogging alongside Robin Hanson at a blog called “Overcoming Bias,” before Less Wrong was spun off from it. Basically their message is “people all behave in irrational ways. Let’s try and identify them and stop doing that.” Or at least that’s the starting point.
      Sometimes it can come off as Ayn Rand positing that her preferences were the objective truth.

      • brad says:

        I think that’s the steelman version, and on its face it’s pretty unobjectionable. Why not learn about cognitive biases and try to work to avoid them in your own thinking? But in practice there’s a whole lot of baggage of the form of “we’ve already done this, come to these conclusions, and while we vigorously debate the details we aren’t very interested in re-litigating the whole baggage train over and over again.” I’m thinking here particularly of the AI stuff but there are other things as well.

        That’s fine, their community to do with what makes them happy. But if you tell me that Christianity is all about loving your neighbor as you love yourself, I’m going to think you are full of it. Because that’s surely a part of the message but it is far from the only thing that goes into the day to day experience of being Christian.

        • Jack Lecter says:

          and while we vigorously debate the details we aren’t very interested in re-litigating the whole baggage train over and over again.

          I think some people are, some people aren’t. I also think- it’s an oversimplification, but, to go ahead and oversimplify- the point of debate is to make progress. To learn new true things and unlearn old false things and hopefully improve your model of the world. If that wasn’t happening- if everyone was in the same place, epistemically, as they were when the Sequences were first written- that would be kind of a red flag that not a lot of epistemic or instrumental progress was happening.

          And- at the risk of further conflating SSC and LW- I don’t think I’ve ever found a community that punished dissent less. In my experience, even groups that pride themselves on being Really Really Tolerant turn out to have some beliefs you just Do Not Question. I don’t see that much, here or in other rat spaces. Sometimes I see boredom, or non-engagement, if someone brings up an old point people think was settled long ago- and I can see how that would suck if you had a question you needed answered. But the good news is, most of those old questions were settled (or “settled”) online, in text, preserved for posterity in The Sequences. And I know ‘Read The Sequences’ isn’t terribly helpful advice if you’re looking for something specific, but I think often people would be willing to offer more specific pointers if you asked.

          And… honestly, I kind of have the feeling some of the replies in this thread might not just be about epistemic overconfidence. I think a lot of Rationalists, Yudkowsky in particular, rub a lot of people the wrong way for what are essentially fuzzy interpersonal reasons. Which is obviously a legitimate preference to have, but it’s not one I share, and I wish they’d stop using terms like ‘arrogant’ and ‘overconfident’ to express it, because it’s really easy to interpret these terms as meant to reflect on the quality of the ideas.

          And… if, by some chance, they are meant to do that, I wish people would be more specific. Criticism is good and healthy and necessary and sometimes very useful and often correct, but vague clouds of negative affect with no intellectual content aren’t very truth-oriented or useful, and so I wish people would make them more explicit so I could just ignore them instead of trying to steelman them for a while and then giving up.

          • brad says:

            So fair cop, many Rationalists, certainly not excluding Yudkowsky, rub me the wrong way. However, I don’t think that’s an adequate response to my critique.

            Even without the “unwilling to relitigate” part we are still left with a community that has certain dominant topics of conversation that are not exclusively cognitive biases and how to avoid them. I disagree about the likely future shape of AI, at least in the near to medium term, and don’t especially want to spend a hundred hours debating the questions.

            As I said in my first response, fair enough. But it means that the Rationalist community is separate and distinct from rationalism. At best it’s that plus these other aspects. If the plus happens to not appeal, then the capital-R community isn’t going to be of interest even if the whole “cognitive biases and how to avoid them” part is. So it’s more than a bit misleading to describe Rationalism as being about that and leave it there.

    • imoimo says:

      I agree “rationalist” very easily comes across as arrogant branding. “Aspiring rationalist” seems only marginally less likely to get a facial response similar to smelling dog urine. In person I prefer these descriptors:
      – the Less Wrong community
      – the SSC community
      – people who read the same set of blogs (my go-to when the person is totally unacquainted)

      If recommending HPMOR or other ratfic I’ll usually just hype the story and mention a couple unique plot points, and wholly avoid reference to the associated community.

  9. tmk says:

    This is the most well-known blog in the rationalist community, so the content is going to reflect on the community whether you like it or not. Just like how Donald Trump’s Twitter account reflects on the Republican party.

    • Jaskologist says:

      Donald Trump had to go through an election to represent the Republican party, though. Nobody elected Scott, and while this is the most well-known blog, I’m not sure how many of the people here actually consider themselves part of the rationalist community, so it’s not necessarily fair to count all of the readership as equivalent to “rationalist votes.”

      • sty_silver says:

        I’m not sure how many of the people here actually consider themselves part of the rationalist community

        13%. It’s in the blog post.

        • Dan L says:

          By the public survey data, it’s 13(.6)% of the total readership – among commenters it increases to 20%, but among frequent commenters it drops back down to 8%. “What percentage of comments are made by self-described rationalists?” would be an interesting question, but isn’t easily teased out without an extra layer of assumptions (rough illustration below).

          Data
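          To illustrate the kind of assumption that would be needed (the weights here are invented, not survey data): if frequent commenters wrote, say, 70% of all comments and occasional commenters the other 30%, the comment-weighted share of self-described rationalists would be roughly 0.7 × 8% + 0.3 × 20% ≈ 12%. But the survey doesn’t say how comment volume is actually split, so those weights are pure guesswork.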

      • Jaskologist says:

        Sheesh. Right in the first sentence, too.

        In which case, I think it’s fair to consider this the most prominent blog by a rationalist, which does not mean that it is reflective of the rationalist community, in the same way that Neil DeGrasse Tyson being the most prominent scientist doesn’t make him representative of scientists as a whole.

  10. I gather you are using “rationalist” to identify an intellectual movement associated with Less Wrong and Yudkowsky, not

    One who follows reason and not authority in thought or speculation; a believer in the supremacy of reason over prescription or precedent.

    which is the first dictionary definition I found.

    A long time ago, my parents asked me if it would have been better for them to have brought me up in their parents’ religion, which was Judaism, orthodox since they were immigrants from Eastern Europe. My reply was that I preferred to have been brought up in my parents’ religion, which I took to be 18th century rationalism, the view of David Hume and Adam Smith, the idea that reason was relevant to understanding the world.

    In that sense I am a rationalist. But I have no connection with Less Wrong, have spoken to Yudkowsky once at a party, read only the first few pages of his Harry Potter fanfic.

    And I don’t remember how I answered the question on the survey.

    On your general point of in what sense this is or isn’t a rationalist blog…

    I am a libertarian. I have published two novels, with a third almost done. But I am not a libertarian novelist in the sense in which L. Neil Smith is, at least in The Probability Broach, or that Rand was an Objectivist novelist. The fact that I am a libertarian affects my fiction just as the fact that I am an economist does, but it isn’t what my fiction is about.

    • Bugmaster says:

      On a side-note, I’ve read the webcomic version of The Probability Broach, and I still can’t tell whether it’s meant to be a parody or not…

      • I don’t know the webcomic version, but Smith is a libertarian and I took the novel to be a somewhat heavyhanded argument for a libertarian view.

  11. johan_larson says:

    Is there some other school of rationalism than Rationalism as founded by EY? I mean some other movement that is all about trying very hard to be right in a general sense, and trying to avoid rules of thumb, common practice, deference to authority, and all sorts of other fallacies that people tend to trip over on the way to getting it right? No one wants to be wrong, sure, but for most fields being right is a tool, not the focus of study. Who else is trying to popularize how to be right? (And don’t say “science” or “philosophy”; I’m looking for something more specific than that.)

    • Jack says:

      Two that immediately come to mind:
      1) Paul Geisert’s “Brights”. They have a very specific notion of how to best be right: be a naturalist.
      2) …Methodology in all of academia. Of course the study of methodology has all the problems you mention (as does capital-R Rationalism). But there are people in every field who focus on methodology and try to keep each other honest.

    • Scott Alexander says:

      We’re most often compared/contrasted with street epistemology and the skeptic movement.

      • Bugmaster says:

        Can you recommend a good introductory article on street epistemology? From what I’ve seen, it just looks like atheist preaching to me, but I could be wrong.

    • Jaskologist says:

      How can you just toss out “science” or “philosophy” as not counting? This has been a primary focus of philosophy and the subbranch of it that became science for millennia. Was Francis Bacon not trying to do this? The Logical Positivists would also fit your criteria, although most now consider that a failure.

      And then there’s the complication that a lot of people who “try very hard to be right” eventually come to the conclusion that all the things you listed as fallacies are actually the best strategies to use most of the time.

  12. DinoNerd says:

    I looked at Less Wrong some years ago, and if it had still been a live community, inviting to newcomers, I might have gotten involved. I liked the whole idea of trying to be less wrong. What I didn’t like was the “now go read a great heap of stuff with no interaction, to learn all the shared context and terminology, before we’ll talk to you at all”. I liked it even less because of expectations formed by interactions with other groups with similar entry barriers, and basically found other ways to spend my time.

    In making this post I took a look at the new site, and found https://www.lesswrong.com/posts/bJ2haLkcGeLtTWaD5/welcome-to-lesswrong-1 That has the same problems of “go read a lot” and unexplained terminology (“sequences”?!) but it may be as close as that community can get to the effort-post I was going to request, explaining what this is all about without requiring me to learn all the terminology.

    I’m particularly interested in understanding what AI risk or cryonics has to do with improving reasoning and decision making ;-( That juxtaposition has me expecting (fearing?) that the community has significant shibboleths, aka social landmines, that would probably eventually get me expelled for violating some unstated expectation or the other.

    • sty_silver says:

      If you looked at LessWrong a few years ago, it was probably the abandoned ruin that had resulted from all the original people leaving. As Scott once said, these people aren’t representative of anything.

      Unless “a few” is like “12” or something.

      Nowadays it’s representative again (there was a big relaunch), but I certainly don’t think getting into it is easy. I don’t think you’ll be treated in an unfriendly way, but it might be very difficult to have contributions that the community appreciates.

      I’m particularly interested in understanding what AI risk or cryonics has to do with improving reasoning and decision making

      There is no causal arrow from these to rational decision making. There might be a causal arrow going the other way.

      Look at it like this. The rationalist guys came together, did all sorts of reasoning about what good rationality looks like, then applied it to the real world and concluded that AI alignment is currently by far the area where any one person can have the biggest positive impact. So they focused on that. It’s a rationalist output. If they concluded the above and didn’t focus on that in response, that would be kind of weird. It would basically mean they’re not taking their own methods seriously and/or aren’t willing to act on them.

      You can of course claim that their conclusion is wrong, as many here do.

      I personally consider cryonics to be far less central, but maybe that’s not representative, idk.

      • jaimeastorga2000 says:

        Look at it like this. The rationalist guys came together, did all sorts of reasoning about what good rationality looks like, then applied it to the real world and concluded that AI alignment is currently by far the area where any one person can have the biggest positive impact. So they focused on that.

        …that’s almost completely backwards from what actually happened. The real history of the rationalist community is that Eliezer Yudkowsky kept trying to talk to people about his high-level transhumanist ideas (most notably AI risk, but also uploading, cryonics, etc.) but they kept getting stuck on the same basic points of epistemology and rationality (“of course an AI would be friendly, who would design an AI if it might destroy the world?” “you DO realize that an upload of you is just a copy and the real you is dead, right?” etc.) so eventually he decided to bridge the inferential gap once and for all with 2 years of nonstop blogging. Those blog posts became The Sequences, and the rationalist community grew around them (the initial population was heavily seeded with members of the Extropians and SL4 mailing lists). Hence why Eliezer is considered the father of the rationalist community.

        Frankly, I find it insulting when, after all that effort, newcomers react to being pointed at the book which answers their questions with the equivalent of “TL;DR”.

        • sty_silver says:

          Interesting. I apologize for being inaccurate. Thanks for correcting me (I only came to LW after the sequences were all done and EY had already left).

          Though, the point that I was trying to make is that the decision to focus on AI is an output of applying rationality to the real world. That point is only made stronger by your correction, isn’t it?

        • Bugmaster says:

          Personally I think the Sequences are not the best introduction to thinking rationally — exactly for the reasons you list. The Sequences are not a book about epistemology; they are a book about the evangelism of AI Risk/Cryonics/Many-Worlds Interpretation (for some reason), and all the epistemology and rationality is just a means to that end.

          I have no problem with evangelism, and no problem with rationality; but IMO Eliezer combines the two in a way that is (intentionally or not) deceptive. His message seems to be not, “here are the things I believe, and here’s why I believe them”, but rather, “if you don’t believe the same things that I do, then obviously you’re just irrational”.

          • brianmcbee says:

            I read quite a bit of the sequences back in the day, and found a lot to be enjoyable.

            I tried again sometime in the last few years, and just gave up. It’s a ridiculous amount of text. I’m retired, and I have neither the time nor the attention span for that. I can’t imagine anybody with a job or school does either.

            So if somebody with the knowledge and interest could condense that stuff down to an executive summary, that might be a useful thing. With some embedded “here’s where to learn more” links, preferably. Or maybe that’s already been done, I haven’t looked recently.

    • whereamigoing says:

      This seems a bit similar to the experience people have with StackExchange — they are put off by how difficult social interaction or asking fun or soft questions is on that website. And while I sometimes agree that StackExchange filters subjective questions too aggressively, in general it makes sense — the ultimate goal is not community, but getting good answers to questions.

      Similarly, the goal of LessWrong, at least originally, was supposed to be producing good writing on how to avoid bias, not community for its own sake. So there’s an urge to tell newcomers, “With all due respect, we’d like you to be more knowledgeable before posting so you don’t clutter the website.”

      Maybe one of the main differences between LessWrong and the general population is that LW heavily selects for people who are willing to read a lot before trying to write something new themselves. Personally I only wrote about 5 comments in the first 5-6 years of reading LW and SSC, but many more recently.

      • DinoNerd says:

        Interesting. I arrived at StackExchange already fully acculturated into the founding community – computer nerds with at least some passing familiarity with open source norms – and had a very positive experience.

        I began by simply reading answers I’d found via google, then one day I had a question without an answer, so I posted it. That worked out well, so I started looking at other people’s questions and posting answers.

        Someone edited my first question to follow formatting norms I hadn’t noticed, and someone else suggested “less context; more code” and I was off to the races. (Well, I did also go on a bit of a badge hunt for a while, which led to learning more about community norms, and helping out more.)

        OTOH, I’ve seen people who freaked out specifically about people editing their questions, and found the lack of social banter (which is in fact present, just muted) completely intolerable. And I’m sure it helped that I was somewhat of an expert in some relevant areas when I started, not asking for help with something generally regarded as trivial.

    • Scott Alexander says:

      I feel like that’s kind of how everything is, right? If you want to learn economics, you’ve got to know stuff like what supply and demand are before the economists will even talk to you, and that requires reading a lot of stuff.

      I guess schools/universities are a way around this problem, but we don’t have one.

      • Bugmaster says:

        To be fair, economics is a discipline with practical applications (i.e., jobs) as well as ongoing research projects; it’s not just a community.

        • Life is a discipline with practical applications too. If learning what the rationalist community teaches results in your doing a better job of living your life and making better decisions, that’s a reason to put time and energy into learning it.

          • Bugmaster says:

            What I meant was, economics has lots of real-life feedback built into it. If you’re good at your job, you’ll get promoted; if not, you’ll get fired. If you’re good at research, you’ll publish peer-reviewed papers and will be cited; if not, you won’t. If economics in general is a worthwhile pursuit, there would be lots of companies and individuals competing in the market for their services; if not, you might expect to see some hobbyists but not much else. Non-practical communities have their own gates, of course, but nothing quite as rigorous.

          • thevoiceofthevoid says:

            @Bugmaster
            I think the issue isn’t that rationality doesn’t have any real-life feedback. The issue is that rationality’s feedback is every outcome in your life. Figuring out which of those are due to which of your decisions and/or factors beyond your control can be a bit of a chicken-and-egg problem, though fortunately not a completely intractable one.

          • Bugmaster says:

            @thevoiceofthevoid:
            Ok, so hypothetically speaking, how would I know that my rationality skills are yielding tangible benefits? A factor that potentially affects every outcome in your life is indistinguishable from a factor that does nothing, because you have nothing to compare it to. Don’t get me wrong, it’d be really nice if rationality was a mental superpower that can make my life better in literally every way… but I’ve heard such grand claims before, from lots of other people with huge flowing beards whose eyes shone with unquenchable devotion — and they didn’t have much in the way of evidence, either.

  13. blacktrance says:

    To be fair, stuff like A Parable on Obsolete Ideologies is central to the spirit of rationality, but you’ve said you’re embarrassed by it. As I see it, the core of rationality is the combination of New Atheism and economic reasoning, taken seriously and with their insights applied to a wide variety of areas (including one’s personal life), and stuff like Chesterton’s Fence is contrary to it.

    • and stuff like Chesterton’s Fence is contrary to it

      I don’t think so. We apply our reason with very imperfect information. The fact that an institution exists is data, even if we don’t yet know why it exists, and should be taken into account in our decisions.

      • blacktrance says:

        It’s data, but it mostly tells us that (1) at some point someone thought it was a good idea, and (2) it hasn’t been so deleterious that it led to the practicing population’s extinction, which is a low bar. It doesn’t mean your life is better for following it, or that the tradition was ever useful at all (because it can survive as a group-distinguishing marker). In effect, Chesterton’s Fence is status quo bias.

        (I’ve read Scott’s recent sequence on tradition, and while I don’t dispute the individual facts, I think the conclusion is wrong.)

        • It doesn’t mean your life is better for following it, or that the tradition was ever useful at all

          It doesn’t mean those things but it suggests that they might well be true, so is a reason to try to figure out why the tradition exists and to be reluctant to abolish it before you have done so.

        • brianmcbee says:

          At some point it was better than what it replaced for *somebody*, at least. Thinking about who that was, and when, and why, is probably a good idea. Even though circumstances might have changed enough that all of that is irrelevant now.

  14. HeelBearCub says:

    5. I’ve been consistently skeptical of claims that rationality has much practical utility if you’re already pretty smart and have good intuitions and domain-specific knowledge.

    Does this strike anyone else as particularly problematic?

    The idea that formalized Bayesian reasoning somehow is more easily/effectively applicable by those who are less smart and intuitive seems very incorrect to me. If you want to “raise the sanity waterline” of those who are least rational, the rationalist movement doesn’t seem to be the way to do it.

    • Scott Alexander says:

      I think I’m thinking here of something like “If you’re an anti-vaxxer, probably learning a lot of stuff about how reason works can give you some real world benefits (like getting vaccines); if you’re already at the cutting edge of knowledge, it probably won’t.”

      I’m making a big and questionable leap in thinking that rationality works well enough that it will make an anti-vaxxer get vaccinated, rather than give them a bunch of newer more sophisticated excuses for being an anti-vaxxer. But I think that’s reasonable to hope for in a way that some other things aren’t.

      • Bugmaster says:

        This statement is quite hopeful, but sounds completely counter-intuitive to me. Is there any data to suggest that rationality training (when not confounded by other factors, e.g. emotional impact of real-world events) can convert anti-vaxxers to the pro side?

        • Scott Alexander says:

          I know it converted a couple of people from strong evangelical religion to atheism, which seems similar to me. Can’t prove it was the ideas and not the social context, but the people involved say it was. I don’t think there are enough anti-vaxxers looking into it for this to be testable.

        • cuke says:

          This isn’t data exactly, but I was just listening to a recent Freakonomics podcast on behavioral change research. It talked about a study in which people changed their minds more when they were asked to explain their understanding of how something works than when they were asked why they believed it was so.

          The research indicated that people are generally over-confident about how well they understand something. They cited a few examples from how zippers and ballpoint pens work to how global warming works.

          The researchers were saying that if you ask someone “why do you believe [or not believe] climate change is an urgent threat?” that people will get more and more dug in on their list of “why” reasons, but if you ask them how global warming works, in the course of trying to explain, many people realize they don’t understand as much as they think they do, and that experience of seeing their own ignorance is humbling in a way that seems to lead people to be more likely to shift their views. I could be summarizing this wrong, but that’s what I remember.

          I think this may be no more complicated than saying that if you ask someone to defend their position, they get dug in. But if you express interest in understanding how whatever it is works from their point of view, and they have to explain it to you as an outsider to their experience, it seems to call on more objectivity and perspective (rationality?) and that can be supportive to change. People tend to say things like “I’ve never thought about it that way before,” and you get the sense that something that was settled in their mind about an issue suddenly becomes a little unsettled, which is needed for change.

          • Back when I was arguing climate stuff on FB, I concluded that almost nobody in the argument, on either side, understood the greenhouse effect. I don’t remember if I succeeded in persuading anyone that his view of it (CO2 as an insulator) didn’t make sense, but it would be interesting to see if doing that reduced confidence in related views.

      • The point isn’t that rationality isn’t useful to smart people, it’s that they already know the relevant parts, so have little use for “rationality” in the sense of “what the rationalist community teaches.”

      • rather than give them a bunch of newer more sophisticated excuses for being an anti-vaxxer.

        Dan Kahan’s research on issues where belief has become a marker of group identity suggests that it will. More intellectually sophisticated people are more likely to agree with their group, whether that means believing in evolution or not believing in evolution.

      • HeelBearCub says:

        “If you’re an anti-vaxxer, probably learning a lot of stuff about how reason works can give you some real world benefits (like getting vaccines); if you’re already at the cutting edge of knowledge, it probably won’t.”

        What is your model of the average anti-vaxxer? I’m questioning whether it lines up with mine. That or your “pretty smart” means something different than I initially thought. Or your “good intuitions” means something tautological.

        The anti-vaccine movement, AFAIK, is dominated by upper middle class, successful, college educated people. I think there are also some religious communities that are anti-vaccine, but I don’t think the rationalist community is well positioned to affect them.

        • Scott Alexander says:

          I think anti-vaxxers are less rational than average, for reasonable meanings of “rational”.

          • HeelBearCub says:

            But they aren’t lacking in above average smarts, right? That’s the disconnect I am having. They tend to be more successful than average.

            I think what you really mean is that there isn’t some truly higher level of rationality. There is no black belt. If you already grasp the idea that our heuristics can lead us astray, into logical fallacy, that’s, like, 90% of it. The rest is just practice.

          • I think anti-vaxxers are less rational than average

            I don’t know about the average, but my guess is that most anti-vaxxers hold their belief for the same reason that most pro-vaxxers do—because the people they trust tell them it is true. Similarly for most creationists/evolutionists etc.

          • theredsheep says:

            I don’t have a good feel for what constitutes average rationality, but people who oppose vaccines do have studies to point to, and complex theories as to why the debunkers of those studies are wrong.

          • sty_silver says:

            having complex theories that support your ideas is definitely -not- a sign of rationality

          • cuke says:

            For various reasons I have a bit of a front row seat to a lot of middle/upper class educated folks who have chosen not to vaccinate their kids.

            I think there are some pretty odd bedfellows in this movement, including not well educated, rural, conservative, anti-government folks; religious minority folks of varied class/education backgrounds; conspiracy theory folks; counterculture aging hippy folks; alternative health movement people; affluent highly educated people; civil libertarians, etc. There’s some overlap among some of these groups, but there are some pretty big political and class differences in there.

            The affluent educated folks who are not vaccinating their kids are the ones I know best and in my experience, the majority of them are dealing with poorly-understood chronic illnesses in their family. Mainstream medicine has failed them, they are living with multiple chronic conditions, their kids are reactive to many things, and no one knows what the hell is going on. So they have become wary of many things as a result of their lived experience.

            Twenty years ago you could have taken the core of this group to be well-off parents of autistic kids, but now this group includes families who are dealing with all kinds of mystery chronic illnesses. In the course of trying to get medical care and being repeatedly disappointed, they have become skeptical of mainstream medical claims. For many of these folks, vaccines are just one of many perceived threats that also include pesticides, hormones, household chemicals, mold, various categories of food, and other environmental exposures.

            I think this subgroup has some kind of semi-trauma history around an environmental exposure of one kind or another, medical intervention gone bad, genetic vulnerability to chronic illness, and maybe an extra dose of anxiety. They may also have lived through — or their parents did — some kind of fundamental betrayal by a government that makes them particularly untrusting of authority claims.

            And because of those experiences, rationality is not the missing ingredient that would convince them to get their kids vaccinated. These are people who by and large feel that they or their kids are the vulnerable ones who should be spared further immune insult just like the kids who are undergoing chemo for cancer, only the medical establishment doesn’t recognize their chronic illness as a legitimate thing in the same way cancer is. A huge number of people who don’t want to be forced to vaccinate their kids get that vaccines are a really good thing and they understand herd immunity is essential. They just want to be able to opt out because of the chronic illness their family is dealing with.

            A large portion of the parents who are opting for philosophical or religious exemptions would otherwise go get medical exemptions if the other two categories were eliminated. That makes the movement seem like a ton of people are “philosophically” opposed to vaccines when it’s more that one or more of their kids are sick.

  15. NikEfimov says:

    This is the first comment I’ve left in more than two years of reading this blog!

    I just wanted to comment on the capacity of either reading this blog or learning about rationality for improving your everyday life, because it’s greatly improved mine. The improvements haven’t been so much due to applying rationality to a specific problem, but more from the psychological health and clarity that came from a change in perspective. Until a couple of years ago I routinely misunderstood social situations and contexts; I was fundamentally confused by politics and social media trends, and while I noticed this confusion in myself, none of the books or blogs I was reading at the time resolved the underlying feeling. I noticed after reading any article on SSC that my confusion level went down by some amount. Now after reading most of this blog, the Less Wrong sequences, and plenty of other blogs and books on psychology, economics and cognition, I find I can always at least find a model that applies to the situations that I used to find hopelessly incomprehensible.

    I hope that one of the things that SSC does in general is provide a certain pathway for a certain kind of person, like myself. Different things work for different people. When I try to explain to other people in person how rationality has helped me, I usually have a really hard time. My problems aren’t all that common. Many people either don’t need the change in perspective I experienced because they like their perspective just fine, or they find my new point of view so obvious that they have a hard time seeing how it took me so long to get there. But for the people who are outside of rationalism and would benefit from being inside of it, SSC plays the critical role of helping people rehearse thinking patterns that they can learn how to apply independently later. I think that’s the largest benefit of Scott’s writing style, the way it draws you in and helps you empathize with his point of view. Of course some people are tempted to sacralize rationalism and its rituals and public figures – I think a minimum amount of this is unavoidable, and I have to constantly check myself in this regard, but without the sort of content that produces this I likely never would have ended up here and would have been much worse off in my own estimation.

    • whereamigoing says:

      I had a similar experience, though maybe less drastic. One could almost say that rationalists are the sort of people that need rationalism to get by in their everyday lives 😛

    • Scott Alexander says:

      I think this comment is probably the closest thing to my view of rationalism and how it might be helpful. If you go looking for One Weird Trick, you probably won’t find it. If you absorb a lot of stuff and let it change some of your intuitions, that could potentially be helpful (or lead you up a blind alley, or…)

      • HeelBearCub says:

        If rationalism is primarily a way to help with “routinely misunderstood social situations and contexts” that seems very different from the description “on the tin”.

        This description seems like, functionally, using rationalism as a way to adapt to autism. If that is deep in the roots of the movement, it could explain some things.

        • NikEfimov says:

          I didn’t say understanding social situations was a primary or advertised benefit of rationalism, although that’s compatible with it. Rationalism is pretty hard to define, I don’t think I’m up to that task.

          Specifically for me, I found that my social instincts were badly calibrated and produced bad results. I’m neurotypical and have never suspected otherwise (nor has anyone else as far as I know), I just somehow ended up without the kind of social programming that allows most people to just get along easily. So I had to learn those skills consciously. Reading about biases and hidden motives helped me a lot, and learning rationalist habits of mind helped as well. I’m still not very good socially but I can deal with that for extended periods, at work or at parties, with no issues at all.

          But rationalist thinking patterns have also helped me with diet and exercise, managing money, and ingesting news without getting incensed or deranged. I’m more fully aware of my values and principles, and why I might have those and not others. These are very unimportant things compared to existential risks or effective altruism, but they mean everything to me and I’m eternally grateful to Scott for helping me along this path.

          • HeelBearCub says:

            What you are describing seems to be the typical (and stereotypical, but not really inaccurate) nerd experience. I’m nearly 50 now, and social dynamics have become much clearer throughout my life. I had friends, but we were always the bunch of weirdos. I had a girlfriend or two in HS, but they essentially chose me, not that I wasn’t interested. I was devastated to find out when I got to college that it wasn’t the academic utopia of eager learners I thought it would be. Same old social BS that I could never really make sense of.

            Whether that’s somewhere on the far end of the “spectrum”, I’ll leave to the psychiatrists. I certainly don’t think it comes close to fitting the current DSM criteria, AFAIK, but it seems like it’s at the very least adjacent to it in some way.

            In any case, the rationalist project sure as heck seems to claim to be about far, far more than explaining social behaviors to those who are mystified by them. So if Scott is endorsing this viewpoint, it seems pretty interesting.

          • NikEfimov says:

            Oh, now I see how what I said might have been unclear: it’s not quite that reading rationalist publications directly taught me how to understand social stuff, it’s that reading rationalism gave me the tools, models, and habits of thinking that allowed me to identify my problems on my own and figure out how to solve them. By recalibrating my basic thinking, I was able to have subtle effects on lots of domains personal to me.

            I don’t think there’s an agreed-upon “rationalist project” implied by the practice of rationality (though rationality must have a purpose other than itself). There are just various projects that appeal to various rationalists, and those could change in any way at any time as circumstances demand.

  16. Freddie deBoer says:

    Some people are really into intermittent fasting these days. They make a lot of claims about its effects. I’m not qualified to judge. What I do know, though, is that there are a lot of people who skip breakfast and call it intermittent fasting. And irrespective of the larger efficacy of intermittent fasting, I feel skipping breakfast is just, you know, skipping breakfast. The thing is that there’s something about contemporary life that makes people feel it’s necessary to rebrand what are essentially timeworn techniques, perspectives, and ideas, to slap a new coat of paint on them and use new language – not because they’re useless but because of the cultural pressure to make things newer and fancier, and the market value of a new idea.

    I spend a lot of time these days wondering what radical beliefs and behaviors I embrace that are, at the end of the day, just skipping breakfast.

    • Bugmaster says:

      I very rarely eat breakfast, not because of some sort of rational or ideological stance, but simply because I rarely get hungry before noon. I’ve never heard of this intermittent fasting thing (it actually sounds like an oxymoron), but that aside, is there any data (outside of cereal commercials) to suggest that I should force myself to eat earlier?

      • Freddie deBoer says:

        I doubt it!

      • HeelBearCub says:

        Intermittent fasting isn’t really just “skipping breakfast”. It’s going without food long enough to use up the blood sugar that came from your gut, forcing the body to make more on its own. Basically that means going something like 14 hours or so without food. Every hour after that is time your body spends in a different metabolic state. At least as I understand it.

    • FormerRanger says:

      I rarely eat breakfast, just because I rarely eat breakfast. Further, I had heard that skipping it was good for you, in some nebulous way. However, a little bit of googling reveals that at least recently there have been a lot of warnings, possibly based on Actual Studies, that there is a significantly higher rate of cardiac issues among people who “always” skip breakfast. Not sure it’s an actionable finding, but it’s there.

      From WebMD: “Skipping Breakfast a Bad Move for Your Heart?”

  17. nadbor says:

    Geez…

    Every time the rationality community comes up, people in this comment section trip over themselves to denounce it, Less Wrong, EA, and Eliezer Yudkowsky especially. I went to a LW/SSC meetup and even there it was like this. Everyone’s like ‘how dare they call themselves rationalists, they think they’re better than us’. No. We have met the enemy and he is us.

    ‘No one is trying to be irrational, so a rationality community is redundant.’ What kind of argument is that? No one is trying to be fat, and yet Weight Watchers is a thing, and no one is saying ‘how dare you try to not be fat, you snobs!’

    I find SSC, and occasionally Less Wrong and Shtetl-Optimized, highly entertaining. I liked HPMOR. I find the study of human cognitive biases and all the ways we form our beliefs (and how those can go wrong) fascinating. I went to a few meetups and I’ve had great conversations there (with people who all swore they are not part of the community). I enjoy thinking about nerdy shit like FAI. I lurk and occasionally participate in the SSC comments section.

    Based on the above I identified myself in the survey as a member of the rationality community/aspiring rationalist (don’t remember the exact phrasing). I’m not going to apologise.

  18. HowardHolmes says:

    What is the point of saying “I am a rationalist?”
    What is the point of saying “I am a Rationalist?”
    What is the Point of saying “I am a member of the Rationalist Community?
    What is the point of labeling myself?
    Is it not all about distinguishing ourselves, differentiating ourselves, making ourselves better? Than who? And why?
    Who cares? If we care, why do we care?

    • Scott Alexander says:

      What is the point of writing this comment?
      What is the point of line breaks between each sentence?
      What is the point of capitalizing Point in the third line, but not the others?
      Is it about distinguishing yourself from the people who foolishly apply labels to themselves? About feeling like you’re better than them?

      • HeelBearCub says:

        Scott, I think Howard is well into a nihilist world view, IIRC. His comments make more sense when viewed through that lens.

      • HowardHolmes says:

        What is the point of writing this comment?

        To signal

        What is the point of capitalizing Point in the third line, but not the others?

        Random error

        Is it about distinguishing yourself from the people who foolishly apply labels to themselves?

        Yes

        About feeling like you’re better than them?

        Yes

    • theredsheep says:

      “Does anybody really know what time it is?
      Does anybody really care?
      And if so I can’t imagine why …”

    • Viliam says:

      What is the point of saying “I am a rationalist?”

      If you want to be rational alone, there is no point. But if you want to meet similarly thinking people, you need to somehow explain what it is you are looking for.

      I am interested in meeting similarly thinking people, because they bring a lot of value to my life.

  19. Your reluctance makes you the perfect messiah! Half of the fun of reading SSC is that you reliably break our hearts with accuracy. I guess I shouldn’t be surprised you would write statements distancing yourself from the Rationalist Movement, given how common the “reluctant leader” paradigm is in the history of subcultures.

    In all seriousness, I use the expression “Rationalist Movement” as a way to succinctly explain why I take the time to go to these meetups or post comments. I understand that you’re not purposefully selling rational thinking, but I think a big part of why we all read SSC is that you reveal what seems like a highly rational thought process, and it inspires us to think better. This is also why I read Nate Silver.

    Maybe reading about rational thinking doesn’t make us think better, but at least we feel like it does, or perhaps we want to signal that we value it. In person, you mentioned to me that this was at least one positive takeaway from the Rationalist Movement: the promotion of rational thinking, for better or worse.

  20. Peter Gerdes says:

    I’d argue the big win the rationalist community provides is an audience in which one gains status and approval to the extent that one’s remarks make rational arguments, rather than to the extent that they endorse the kinds of outcomes people in the community like or carry the right emotional valence.

  21. Yosarian2 says:

    5. I’ve been consistently skeptical of claims that rationality has much practical utility if you’re already pretty smart and have good intuitions and domain-specific knowledge

    For what it’s worth that has not been my experience. Reading rationalist writing helped motivate me to approach my own life in a more rational way and start trying to find actual things I could do to make it better, and while not all of them paid off, the ones that did totally transformed my life in a very positive way.

  22. Eponymous says:

    I mean, I don’t think you can really dissociate this blog from Rationalism (referring to the specific community), barring specific repudiation. You and your writings are pretty integral to it. You’re a high tribal elder in that community, and directly responsible for writing some of its core texts (many as posts on this very blog!).

    If Saint Paul starts a blog, he might have trouble getting people to buy the “not every blog by a Christian is a Christian blog” line.