Yes, We Have Noticed The Skulls

[Related: Tyler Cowen on rationalists, Noah Smith on rationalists, Will Wilkinson on rationalists, etc]

If I were an actor in an improv show, and my prompt was “annoying person who’s never read any economics, criticizing economists”, I think I could nail it. I’d say something like:

Economists think that they can figure out everything by sitting in their armchairs and coming up with ‘models’ based on ideas like ‘the only motivation is greed’ or ‘everyone behaves perfectly rationally’. But they didn’t predict the housing bubble, they didn’t predict the subprime mortgage crisis, and they didn’t predict Lehman Brothers. All they ever do is talk about how capitalism is perfect and government regulation never works, then act shocked when the real world doesn’t conform to their theories.

This criticism’s very clichedness should make it suspect. It would be very strange if there were a standard set of criticisms of economists, which practically everyone knew about and agreed with, and the only people who hadn’t gotten the message yet were economists themselves. If any moron on a street corner could correctly point out the errors being made by bigshot PhDs, why would the PhDs never consider changing?

A few of these are completely made up and based on radical misunderstandings of what economists are even trying to do. As for the rest, my impression is that economists not only know about these criticisms, but invented them. During the last few paradigm shifts in economics, the new guard levied these complaints against the old guard, mostly won, and their arguments percolated down into the culture as The Correct Arguments To Use Against Economics. Now the new guard is doing their own thing – behavioral economics, experimental economics, economics of effective government intervention. The new paradigm probably has a lot of problems too, but it’s a pretty good bet that random people you stop on the street aren’t going to know about them.

As a psychiatrist, I constantly get told that my field is about “blaming everything on your mother” or thinks “everything is serotonin deficiency”. The first accusation is about forty years out of date, the second one a misrepresentation of ideas that are themselves fifteen years out of date. Even worse is when people talk about how psychiatrists ‘electroshock people into submission’ – modern electroconvulsive therapy is safe, painless, and extremely effective, but very rarely performed precisely because of the (obsolete) stereotype that it’s barbaric and overused. The criticism is the exact opposite of reality, because reality is formed by everybody hearing the criticism all the time and over-reacting to it.

If I were an actor in an improv show, and my prompt was “annoying person who’s never read anything about rationality, criticizing rationalists”, it would go something like:

Nobody is perfectly rational, and so-called rationalists obviously don’t realize this. They think they can get the right answer to everything just by thinking about it, but in reality intelligent thought requires not just brute-force application of IQ but also domain expertise, hard-to-define intuition, trial-and-error, and a humble openness to criticism and debate. That’s why you can’t just completely reject the existing academic system and become a self-taught autodidact like rationalists want to do. Remember, lots of Communist-style attempts to remake society along seemingly ‘rational’ lines have failed disastrously; you shouldn’t just throw out the work of everyone who has come before because they’re not rational enough for you. Heck, being “rational” is kind of like a religion, isn’t it: you’ve got ‘faith’ that rational thought always works, and trying to be rational is your ‘ritual’. Anyway, rationality isn’t everything – instead of pretending to be Spock, people should remain open to things like emotions, art, and relationships. Instead of just trying to be right all the time, people should want to help others and change the world.

Like the economics example, these combine basic mistakes with legitimate criticisms levied by rationalists themselves against previous rationalist paradigms or flaws in the movement. Like the electroconvulsive therapy example, they’re necessarily the opposite of reality because they take the things rationalists are most worried about and dub them “the things rationalists never consider”.

There have been past paradigms for which some of these criticisms are pretty fair. I think especially of the late-19th/early-20th century Progressive movement. Sidney and Beatrice Webb, Le Corbusier, George Bernard Shaw, Marx and the Soviets, the Behaviorists, and all the rest. Even the early days of our own movement on Overcoming Bias and Less Wrong had a lot of this.

But notice how many of those names are blue. Each of those links goes to book reviews, by me, of books studying those people and how they went wrong. So consider the possibility that the rationalist community has a plan somewhat more interesting than just “remain blissfully unaware of past failures and continue to repeat them again and again”.

Modern rationalists don’t think they’ve achieved perfect rationality; they keep trying to get people to call them “aspiring rationalists” only to be frustrated by the phrase being too long (my compromise proposal to shorten it to “aspies” was inexplicably rejected). They try to focus on doubting themselves instead of criticizing others. They don’t pooh-pooh academia and domain expertise – in the last survey, about 20% of people above age 30 had PhDs. They don’t reject criticism and self-correction; many have admonymous accounts and public lists of past mistakes. They don’t want to blithely destroy all existing institutions – this is the only community I know where interjecting with “Chesterton’s fence!” is a universally understood counterargument which shifts the burden of proof back on the proponent. They’re not a “religion” any more than everything else is. They have said approximately one zillion times that they don’t like Spock and think he’s a bad role model. They include painters, poets, dancers, photographers, and novelists. They…well…”they never have romantic relationships” seems like maybe the opposite of the criticism that somebody familiar with the community might apply. They are among the strongest proponents of the effective altruist movement, encourage each other to give various percents of their income to charity, and founded or lead various charitable organizations.

Look. I’m the last person who’s going to deny that the road we’re on is littered with the skulls of the people who tried to do this before us. But we’ve noticed the skulls. We’ve looked at the creepy skull pyramids and thought “huh, better try to do the opposite of what those guys did”. Just as the best doctors are humbled by the history of murderous blood-letting, the best leftists are humbled by the history of Soviet authoritarianism, and the best generals are humbled by the history of Vietnam and Iraq and Libya and all the others – in exactly this way, the rationalist movement hasn’t missed the concerns that everybody who thinks of the idea of a “rationalist movement” for five seconds has come up with. If you have this sort of concern, and you want to accuse us of it, please do a quick Google search to make sure that everybody hasn’t been condemning it and promising not to do it since the beginning.

We’re almost certainly still making horrendous mistakes that people thirty years from now will rightly criticize us for. But they’re new mistakes. They’re original and exciting mistakes which are not the same mistakes everybody who hears the word “rational” immediately knows to check for and try to avoid. Or at worst, they’re the sort of Hofstadter’s Law-esque mistakes that are impossible to avoid by knowing about and compensating for them.

And I hope that maybe having a community dedicated to carefully checking its own thought processes and trying to minimize error in every way possible will make us have slightly fewer horrendous mistakes than people who don’t do that. I hope that constant vigilance has given us at least a tiny bit of a leg up, in the determining-what-is-true field, compared to people who think this is unnecessary and truth-seeking is a waste of time.


617 Responses to Yes, We Have Noticed The Skulls

  1. Ashley Yakeley says:

You set up a straw anti-rationalist, you actually admit that that is what you are doing, and then you knock it down.

    • Scott Alexander says:

      I’m saying that this is the kind of criticism we actually get. There’s a big debate thing going on now on social media; the links on the top of the post should be good starts.

      • Ashley Yakeley says:

        Your straw anti-rationalist does not summarise the views of Will Wilkinson (just to pick the quickest read). Why not address what he says directly?

        As I understand him, the issue is not so much “you are doing the wrong things in your attempt to get the right answer to everything”, but “getting the right answer to everything is typically the wrong goal” (and he doesn’t mention “wanting to help others and change the world”).

        • AnonYEmous says:

          Hey there friendly neighborhood commenter, it is I, here to use you to make a somewhat related point:

The problem that people seem to have with what Scott has done here is that it seems like a way to knock down some weak arguments, basically a strawman. But if people really do make those weak arguments, then it’s not necessarily a strawman; maybe it is if he extends it to the entire movement, as seems alleged here, but even then it would depend on how much of the movement seriously makes use of these weak arguments. In other words, “strawman” isn’t the right word here.

          This community would encourage Scott to “steelman” these arguments; maybe it would be a good idea to say that currently, Scott is “Tin-Manning” these arguments.

          — well, here I am trying to put two different terms into the vernacular in the same comment thread. Life takes you to crazy places I guess. (Either way though, this isn’t much of a strawman, so take that to heart if anything.)

          • Ashley Yakeley says:

            Scott is addressing his own contrived criticisms of rationalists, and not the actual, stronger, criticisms of rationalists. At best, there is some overlap. I feel like straw is an appropriate characterisation.

          • Aapje says:

            The most objectionable part of straw manning is the claim that a specific person or group has a certain (poorly thought out) belief, without any solid evidence that this is the case.

I think that it has a lot more merit to claim that some people hold a belief and then to address the problems with that belief, where you ask people to check whether they hold those beliefs and to consider your criticisms if that is the case.

          • AnonYEmous says:

            Ashley: But he claims that he is addressing real criticism coming from real people; namely, tin men. So now the discussion moves to: are those tin men real? Are they numerous? Obviously you could prove that they aren’t real or that they are few in number. But that’s what you gotta do at this point.

        • nelshoy says:

          I think there needs to be a distinction drawn between “stuff rationalists do” and “stuff most rationalists have discussed and are aware of”. We are flawed human beings and make mistakes, but we also know a lot about how we make mistakes.

          The implicit point of criticism is to make known particular problems with the object of criticism that aren’t common knowledge, but a huuuuge share of criticisms discussed are not only unoriginal but should count as “rationalist common knowledge”. I think that’s Scott’s issue.

          Now, we can’t really expect every critic to have spent time reading all the self-criticism on LW and elsewhere, but I think the least that could be done when you engage with us is provide us with examples of what you’re talking about.

        • Scott Alexander says:

          I did address him directly. My response is on that Twitter thread. But in the same thread, other threads on his Twitter, and various other places, there are also a lot of commenters saying the sorts of things I’m talking about above. Just to pick the clearest example, see if you can find the literal picture of Spock.

        • Markus Ramikin says:

          Why is one only allowed to address the strongest arguments out there? If there’s weak /but common/ crap out there, must one ignore it?

          • HeelBearCub says:

            Because steel-manning?

          • Randy M says:

Isn’t steel-manning about taking a weak argument and addressing its strongest possible form? Not about letting weak arguments look as if they went unanswered.
Sometimes others will think the weak arguments don’t have answers; sometimes you’ll realize a weak argument wasn’t so easily dismissed as it seemed at first.

            (Arguing only the meta-point here, right? I don’t know all the arguments Scott may have missed etc.)

          • Nornagest says:

            Steel-manning is something you do, on your own, to improve your understanding of an idea. Trying to do it in a live argument with somebody else usually comes off frustrating at best and rude at worst, because the frame you think is strongest usually isn’t the frame they do.

          • carvenvisage says:

            @Nornagest that’s really well put

Scott’s response is good enough. He addresses how Rationalists are open to the possibility of being wrong.

      • murbard says:

If that’s the case, it’s only because people suck at making cogent criticisms. I suspect the real reason “rationalists” are called a religion is probably that they have been known to wear robes and chant ritual songs in an explicit bid to adopt religious-like practices.

        • murbard says:

I mean, maybe the critics don’t actually *know* this, and maybe they are indeed wrong based on the information they have. But once you do X, you ought to forfeit the right to complain that people are unjustifiably accusing you of doing X.

          • sketerpot says:

            The critics don’t usually mention this. But if they did it would be a poor criticism, implying that because some rationalists occasionally do amusing rituals (like a religion) they are therefore epistemically bad (like a religion). In general, arguments that X-is-like-a-religion seem to usually be the worst argument in the world unless the person making the argument is really exceptionally careful.

          • FeepingCreature says:

Eh, wouldn’t it be a Gettier criticism? True, and believed for a particular reason, but if the reason is a bad one then addressing the criticism by explaining the background will do nothing.

          • ignition says:

            Most people accusing rationalism of being a religion are not objecting to anything on the basis of it being epistemically bad. Most non-rationalists do not care about anything being epistemically bad, unless it causes clear real-world problems.

            The “rationalism is a religion” objection might be rephrased “rationalism centers around a strong and cohesive subculture, therefore any weird claims it makes are probably based on quirky cultural customs rather than universal reason”.

            I don’t know whether this is true, but I completely understand why people would use that rule of thumb, especially when rationalism-associated ideas make a lot of broad universal claims and counterintuitive moral demands.

        • semicyte says:

          I don’t know if you’re kidding about the robes and chants, but if you’re not I’d love to learn more about them.

          It may be unfair to pull in the LessWrong days as evidence of rationalism as religion, but screw it, living under the same roof to develop stronger bonds in the shared belief, calling the community’s exalted text The Sequences, having an obsession with a vanishingly unlikely AI apocalypse, and (this is by far the pettiest objection) following a guy named Eliezer Yudkowsky, which is as cult-leader as names get, all suggest rationalism as a religion.

          Some of the above is serious and some is not. If Scott wants to strike down a strawman, by God I’ll give him one. But if he’s mad about the perception of the rationalist community as a religion by drive-by commentators, the rationalists have made it incredibly easy to distort themselves.

          • tk17studios says:

            Assumptions in this critique that I think are unfounded (or at least need to have their justifications spelled out):

            – living under the same roof to develop stronger bonds in the shared belief (I don’t think any of the ~10 group houses I know of exist for this reason?)
            – calling the community’s exalted text The Sequences (those words are yours, and ‘The Sequences’ is a completely neutral title based on the fact that there were several sequences of posts that had a chronological order and were grouped by theme or purpose)
            – an obsession with a vanishingly unlikely AI apocalypse (citation needed—plenty of impartial, intelligent people have publicly weighed in on the side of “this is worth considering,” and put significant sound reasoning into the spotlight)

            … the bit about rationalists having made it easy to distort themselves seems true, but in addition, comments like the one above add distortions that they had nothing to do with paving the way for, and which people just … like to invent, I guess?

          • The Nybbler says:

            ‘The Sequences’ is a completely neutral title based on the fact that there were several sequences of posts that had a chronological order and were grouped by theme or purpose

            This might count as a point in the other direction. “Pentateuch” just means “five scrolls”, and “Bible” derives from a word meaning “books”.

          • Ilya Shpitser says:

            The Sequences (all caps) and the way it’s used — “read The Sequences” — does _not_ pass as neutral language to outsiders, fyi.

          • bbeck310 says:

This prompts an interesting question, because this story sounds so familiar – is there some reason in human nature that turns wildly disparate versions of “rationalism” into cults (or is it just that human nature turns everything into cults?)

            Progressive technocrats, Ayn Rand, and Yudkowsky all promoted something they call “rationalism,” all defined primarily by rejecting spiritual views of human nature (the progressives were pure materialists; Rand’s version of “rationalism” claims that all principles of philosophy can be reached through deductive reasoning; and Yudkowsky reverses Rand by promoting more or less pure empiricism, but none had much use for spirituality or tradition, except to some extent Yudkowsky would recognize the Chesterton’s fence concept). And all of them look to outsiders a lot like religions, if not cults of personality (Rand was probably the worst here).

            Or maybe it’s just that totalizing philosophies end up looking a lot like religions when you try to practice them? You don’t see a lot of Burkean conservative traditionalists becoming cult members.

This prompts an interesting question, because this story sounds so familiar – is there some reason in human nature that turns wildly disparate versions of “rationalism” into cults (or is it just that human nature turns everything into cults?)

Rationalists should believe that religion is an attractor, since they need a way of explaining its prevalence without its being true. Rationalists should not casually regard themselves as exempt. Rationalists should notice that they may even have a particular susceptibility, the one Ilya mentioned, where rationalists tend to be lacking in social contact, and may therefore be tempted to start exchanging adherence for acceptance.

          • Viliam says:

            none had much use for spirituality or tradition … And all of them look to outsiders a lot like religions

            Oh… so it’s the lack of spirituality that always makes the outsiders compare us to an organized religion!

            😀

            Okay, trying to steelman this, because I feel there is actually a good point…

            How do typical people treat religions? With lukewarm respect. (Even those who don’t like it, usually say something like: “I believe in god, but not in the church” or “religion is a great idea, but unfortunately many spiritual and political leaders abuse it for their selfish purposes”.)

How do cult leaders and cult members treat (the other) religions? They consider it important to explain why those religions are false. As a pretext for introducing their own solution.

That is probably a factor in why atheists and similar groups, ironically, often give off cultish vibes. Because dismissing (existing) religions is exactly what one would do when trying to recruit people into their own.

          • The original Mr. X says:

            Progressive technocrats, Ayn Rand, and Yudkowsky all promoted something they call “rationalism,” all defined primarily by rejecting spiritual views of human nature … And all of them look to outsiders a lot like religions, if not cults of personality.

            “When men stop believing in God, they don’t believe in nothing, they believe in anything.”

          • bbeck310 says:

            Viliam,

That is probably a factor in why atheists and similar groups, ironically, often give off cultish vibes. Because dismissing (existing) religions is exactly what one would do when trying to recruit people into their own.

I don’t think that’s it – skeptic types (Penn Jillette, the Amazing Randi, Richard Feynman) and even vocal anti-religion atheists (Christopher Hitchens, Sam Harris, Bill Maher) don’t send cultish vibes, but they all stop at “unfalsifiable beliefs are bad and shouldn’t be followed.” But none of them follow up “dismiss all religions” with “and adopt my totalizing philosophy instead;” their promoted value systems are pretty much all “just be nice to each other, OK?”

            There’s some point where a movement like effective altruism goes from “hey, maybe we should make sure our charity gets the most bang for its buck” to “follow these rules that read like something out of Leviticus to be a good person.” I’m not sure where that shift happens, or what causes it. Haidt’s idea that humans are coded for hive behavior seems as good an explanation for this phenomenon as any.

          • Ilya Shpitser says:

            “Oh… so it’s the lack of spirituality that always makes the outsiders compare us to an organized religion!”

No, Viliam, what makes people compare you to organized religion is that Eliezer wants to be the pope, and you (the community) want to let him be the pope. If both parties want it to happen, it’s going to happen.

            Have you ever noticed that Scott doesn’t want to be the pope? I hold Scott in much higher esteem than Eliezer for many reasons, but this is a big one.

          • Nornagest says:

            Eliezer wants to be the pope, and you (the community) want to let him be the pope.

            I think this would have been much more accurate circa 2010-12.

          • Ilya Shpitser says:

            Which part changed? When? Why?

          • Viliam says:

            Eliezer wants to be the pope, and you (the community) want to let him be the pope

Just to make sure… you mean the guy who doesn’t even post on LW anymore, right? Yeah, that behavior reminds me of Pope Benedict XVI.

          • Ilya Shpitser says:

            What does posting or not posting on LW have to do with anything? It’s about the person, and the social dynamics, not about the specific medium.

The straw that broke the camel’s back for me was him openly trolling for sex on Facebook (since deleted). I am sure this type of iffy stuff is now done informally via the social network in the Bay Area.

            I mean, I don’t really want to spend time digging up a long trail of disappointing shit EY said/did over the years. It’s also kind of a discussion-quality-lowering exercise for SSC. But if you want, we can go through it together.

            “The pope” is a figure of speech. I mean a guru. A guy who writes epistles to the NYC community. A guy who officiates marriages. A guy who writes parables. etc. etc. etc.

            What the heck kind of look is that?

            As I mentioned before in another context, I am into virtue ethics (learning a model of a person), not into explicitly drawing lines in the sand (looking for rule/norm violations). The advantage of that view is you quickly get a read on a person even if no specific thing they did on its own was a particularly strong signal.

          • Besserwisser says:

Just to make sure… you mean the guy who doesn’t even post on LW anymore, right? Yeah, that behavior reminds me of Pope Benedict XVI.

            Exactly. I’ve never seen the pope post on LW.

          • Nornagest says:

            Which part changed? When? Why?

            The short version is “the community splintered”. Before 2013 or so, rationalism was basically synonymous with the Sequences and the Less Wrong blog, but then a bunch of stuff happened over the next couple of years: Eliezer largely stopped producing new rationality content (this overlaps with HPMoR, but there was a long hiatus even on that) and committed a bunch of embarrassing administrative and social gaffes; most of the other major LW contributors left to start their own blogs (like this one!) or to focus on work for CFAR or SingInst; the Bay Area meetups hit their Eternal September moments (Berkeley first, South Bay later). Rationalist Tumblr and Facebook became significant.

            There wasn’t a sole turning point, but in 2012 it would still have been meaningful to talk about a single rationality community with a more-or-less unified agenda. By 2015, on the other hand, it was more of a scene or a movement: a collection of social circles and small institutions with different priorities, that just all happened to be pointed in roughly the same direction. And as far as I can tell, Rationalist Facebook and some personal social circles are the only ones that Eliezer still owns.

          • I think this would have been much more accurate circa 2010-12.

            So we can agree that rationalism was a religion circa 2010-2012 😛

          • Winter Shaker says:

            The Original Mr X:

            “When men stop believing in God, they don’t believe in nothing, they believe in anything.”

            That meme might well be true on a technicality, but it’s still monstrously arrogant and I wish it would go away. It boils down to ‘When men stop believing in [implausible and far-fetched belief A] they believe in [implausible and far-fetched belief B, or C, or…]’.

            That is, the sanity waterline might, sadly, be fixed; people simply need to gravitate towards shared belief in almost-certainly-factually-mistaken claims in order to have nice things, but phrasing it like that looks like it is trying to smuggle in the assumption that ‘God’ is less unreasonable than ‘whatever people come to believe as an alternative’ – without doing any of the work of demonstrating that assumption.

          • without doing any of the work of demonstrating that assumption.

            I’m pretty sure the quote is from Chesterton, who did a good deal of the work of demonstrating that Christianity was not an implausible and far-fetched belief, whether or not successfully. You are taking one sentence and complaining that it doesn’t, by itself, do the entire job of justifying the author’s position.

          • Nornagest says:

            So we can agree that rationalism was a religion circa 2010-2012

            I don’t know if I’d ever have called it a religion. I do think it had a lot more religion-y flavor then than it does now.

I suggested this was a bad idea in 2011-ish… it turned out I was the problem.

        • Cerebral Paul Z. says:

          If rationalists recite litanies, I desire to believe that rationalists recite litanies. If rationalists do not recite litanies, I desire to believe that rationalists do not recite litanies.

        • caethan says:

          Soooo, I remember a moment a couple of years ago that kinda crystallized the “This feels creepily like a cult” thing to me. There was a note on Scott’s Tumblr about how Ozy was very angry about some criticism of the rationalist movement, and so decided to go cam (i.e., do sexual stuff on camera for money) in order to donate to the rationalist movement. This was while the two of them were dating, and Scott posted this approvingly on his Tumblr.

          And I can just barely see from an inside view how this might seem reasonable. My immediate reaction, though, was along the lines of “Holy crap, this movement has got people pimping themselves out for money to donate to it, and it’s brainwashed not just them but their romantic partners into thinking this is reasonable.” I’ve walked back from that a little bit, but I still think it was creepy as hell.

          Then there was the bit about a year ago about the young lady who was basically the open mistress of some bigshot rationalist guy, got pregnant, and despite massive pressure from said guy and all his other bigshot rationalist friends to abort the baby, kept it and found herself somewhat ostracized.

          All this sexual stuff sure looks a lot like powerful folks systematically taking advantage of less powerful folks to an outsider. Which is like a huge red flag for “STAY AWAY! THIS IS A CULT!”

          • Zodiac says:

If I hadn’t been reading this blog for a year now, I would probably be very strongly considering whether I should delete the bookmark.

          • Loquat says:

            RE: your second link, I’d forgotten how she actually came on here and commented about how guilty she felt, like she’d committed a serious offense against her ex-lover by not aborting their child.

            Any community that encourages that sort of thinking, and regards the man as having acted properly in both requiring the initial promise of abortion and refusing child support later, deserves all the RED ALERT THIS IS A CULT RED ALERT it gets.

          • caethan says:

            @Loquat

            Oh, if you did any further reading on it, it’s worse than that. Can’t find the links any more (I think the blogs where I read it at the time got made private/comments got deleted from the thread) but he drove her to the abortion clinic with several other “friends” as “moral support” and then drove her to tears and collapse inside the abortion clinic when she still refused to abort the baby. And, as you say, she was still commenting about how guilty she felt about not doing what everyone wanted her to do.

            I think a community that can happily supply several people willing to try and browbeat a terrified crying woman into an abortion is one that has some problems.

          • Viliam says:

Some critical parts of the story are missing… such as the fact that the biological father of the baby already had a wife (or at least a primary partner), and the lady was aware of that, and they had an agreement that this was going to be sex without making babies.

            It’s still a bad story, I am not denying that. 🙁

          • Eponymous says:

            Horrifying. Didn’t know about this. Makes me less likely to interact with the rationalist community in meatspace.

          • Loquat says:

            @Viliam

            To a lot of non-Rationalists, that makes him look even worse. The guy had a wife and kids, but felt the need to risk it all by screwing around on the side with a mistress.

            It also makes Rationalism look even worse if this guy’s actions are totally okay under prevailing Rationalist sexual ethics, and this is exactly the kind of situation that leads to arguments like this one posted in the Determining Consent comments the other day, that “everyone involved consented to it” is actually not sufficient to prove an act was ethical.

          • tmk says:

            Wow, that thread is horrible. If your “decision theory” tells you to abandon your new born child, you deserve a good smacking and a lifetime ban on reasoning anything from first principles.

          • FeepingCreature says:

What? Okay, this thread can maybe use a bit less grandstanding and moralizing about other people’s private lives, okay??

That said, it is my personal impression that a bunch of people come to rationality because they are not mentally healthy and thus excluded from other social avenues. I’m fairly sure the increase in mental issues from that should not be counted against; furthermore, having children in the current legal climate is a potentially horribly toxic topic and if you think I can’t make a good case the other way, while knowing nothing about the particulars in this case, you have a lack of imagination that should immediately disqualify you from criticizing other people’s lives by itself.

          • Matt M says:

            That said, it is my personal impression that a bunch of people come to rationality because they are not mentally healthy and thus excluded from other social avenues. I’m fairly sure the increase in mental issues from that should not be counted against;

            As a random aside, this reminds me a lot of the Insane Clown Posse’s defense of Juggalo culture…

          • nimim.k.m. says:

Yes, this Weird Sex Stuff with Poly-relationships is one of the reasons I want to keep a certain distance from the Rationalist sphere with big R, and consider myself a member of the “I write and read comments on some blogs” sphere.

It seems to play out exactly like all the stereotypical “free love” experiments at any point in history where they have been prominent. (Polyamory is not a new idea, folks.) In practice, it seems to fuel exactly the kind of power dynamics that are the most damning evidence of cult-like behavior.

edit. Reading it now, that particular post of Vaniver’s and the following comment thread scream “outrageous”. I’m feeling slightly ill reading the poster’s lines about feeling “incredible guilt and suffering” because of not aborting a baby. This is much more damning than any joke about Roko’s basilisk.

          • Nick T says:

            donate to the rationalist movement

            You made this up. The linked post just says “donate”. (Knowing Ozy, the donation was probably to AMF or something like that, but you don’t have to know Ozy to not make things up.)

          • Richard Kennaway says:

            I see that Greg Egan’s maxim, “it all adds up to normality,” has wider application than recondite philosophy.

          • Ilya Shpitser says:

            “a bit less grandstanding and moralizing about other people’s private lives okay??”

            I am staying out of the victim’s life here (or of this thread generally). I wish her and her child the best in life.

            But let’s get one thing straight. Those people who drove her to the hospital and caused her to break down and cry? They are _abusers_. That is abusive behavior.

            Please do not run interference for abusive behavior.

          • caethan says:

            @FeepingCreature:

            Look, I didn’t comment on those two incidents at the time because, as you say, they’re about other people’s personal lives and not my business. (Albeit once you post personal things on your public blogs you have a limited scope to complain about people discussing your personal life.)

This post, however, is all about whining about how unfair the treatment of the rationalist movement is, and how sorely it is misjudged. Since one of the major public concerns is “Wow, these guys are a little creepily cult-like”, I figured some pointed links to things that struck me as particularly cultish would be helpful in crystallizing that for others.

            For the record, I don’t actually think the rationality movement is a cult. I think it had a reasonable chance of becoming one early on, but it didn’t turn out that way. (In large part because it seems apparent that Eliezer rather desperately wanted to be a cult leader.) What I do think is true now is that at best, rationalists don’t care about looking like a cult and quite possibly are deliberately appropriating various cultish things.

            @Nick T:

            If my reading of that post was wrong and Ozy was donating to general charitable causes rather than to the rationalist movement then I wholeheartedly apologize. I did not deliberately make anything up, but I may well have misread it. I hope you can see that my (potential) misreading is a reasonable one to make for someone not broadly affiliated with the movement.

If true, that makes me feel somewhat less creeped out by it, but not to the level of not creeped out at all.

          • Viliam says:

            Maybe I just have a really bad model of people and sexual relationships, but imagine that in a completely unrelated debate, someone tells you the following:

            “I know a guy who has a wife, and also a mistress. Yesterday, the mistress told him she was pregnant with him…”

            Now, imagine that this is all you know. It’s just a random American guy. And his mistress got pregnant. And he has a wife. How would you expect this story to continue? Which endings would make you feel “yeah, I totally expected this”, and which endings would make you feel “wow, this is totally shocking; I can’t imagine anyone in my neighborhood doing that”?

I can’t speak for you, and I admit I am not an expert on relationships, but I would consider the following three outcomes, in random order, to be all within the “normal” range (i.e. this is what I would expect people around me are generally doing in such a situation, without necessarily approving of any of that behavior) —

            a) the guy tells the mistress to get an abortion;
            b) the guy leaves his wife, and marries the mistress;
            c) the guy makes a deal with the mistress that she will raise the baby, and he will secretly support her financially.

            Somewhat less likely, but still plausible:

            d) the guy tells his wife, trying to keep both women in his life, but the wife divorces him;
            e) the guy kills the mistress — okay, maybe I am watching detective stories too much, and this option shouldn’t really make it into the top 5.

            My point is, some of these options create a better impression of the guy than other ones, but none of them makes me go “that’s unpossible… there must be some hidden reason for all this weird behavior… the guy must be a secret agent, or a cult member, or an alien from Omicron Persei 8″. They all seem to me like an everyday human behavior.

            Now let’s turn it around. Suppose I tell you with 100% certainty that the guy is a cult member, and again, I leave the story unfinished. Which endings would you consider likely, and which unlikely? I may be obtuse here, but again, all five endings mentioned above seem like “yeah, that could happen”. (Actually, for a cult leader, I would also give a decent probability to an outcome where he keeps both women successfully, because he convinces his wife that god told him to do this.)

            Also, correct me if I am wrong, but in America abortion is considered a more essential human right than having food or education (at least judging by the revealed preferences, because many people are stupid or starving, but mere accusations that someone hypothetically could make abortions less convenient are used as a weapon during the elections), so let’s not act shocked that someone actually considered that option. Imagine a gender-reversed version: a woman has a husband, and a boyfriend. One day she finds out her boyfriend made her pregnant. Despite her boyfriend crying and begging her not to do it, she goes and gets an abortion. The End. Such a non-story, right?

            tl;dr — what I see here is a normal human behavior (note: I didn’t say “nice”); without being primed, I think most people wouldn’t feel a need to look for unusual explanations, such as cults

          • Jiro says:

            Imagine a gender-reversed version: a woman has a husband, and a boyfriend. One day she finds out her boyfriend made her pregnant. Despite her boyfriend crying and begging her not to do it, she goes and gets an abortion. The End. Such a non-story, right?

            That isn’t parallel because in the original, the mistress is having the abortion, and the man has power over the mistress, not vice versa.

            Also, the cultishness of the scenario changes when you add in polygamy.

          • Brad says:

            @Viliam
In what people around here call blue tribe culture (i.e. upper middle / upper class, urban, professional, U.S. coastal, center-left to left) it is considered uncouth, if not outright immoral, to pressure a woman to either get an abortion or to not get an abortion. For the more Christian parts of the country it would certainly be considered immoral to pressure a woman to get an abortion. So for that part you have pretty broad agreement across American cultures.

            That said, I agree it isn’t shocking that a man would pressure his mistress to get an abortion, the shocking part is that he would have the support of his social group in doing so.

          • Loquat says:

            To expand on what Brad said, it’s totally the social group. One guy on his own keeping a mistress, pressuring her to get an abortion, and then refusing to take any responsibility for the baby is a simple cad with no broader implications. A social group that approves of and defends all of those actions – and if you read the linked original thread you’ll see multiple people arguing that it is immoral to require a man pay child support when he’d explicitly asked his mistress to promise him consequence-free sex, and even one person suggesting that society would be fairer if women could be forced to have abortions in such scenarios – that’s a social group that’s going to raise some eyebrows among outsiders, to say the least.

          • Kaj Sotala says:

            Any community that encourages that sort of thinking…

            … the shocking part is that he would have the support of his social group in doing so.

            Look, I totally agree that what happened was terrible and that defending or supporting that kind of thing is just awful. But could we not use that event as the yardstick by which to measure the whole community? To my understanding this was an event involving just a few people; though I don’t know for sure, because although I count myself as a part of the rationality community, I don’t live where these people live and only heard about the whole story around the same time that it blew up and everyone else heard of it, too.

            The rationalist community, by this point, is relatively big. Big enough that it’s going to have all kinds of fucked up episodes, because it contains a lot of people and in any community with enough people you’re going to get some pretty fucked up episodes sooner or later.

            Scott actually wrote about this before: https://slatestarcodex.com/2015/09/16/cardiologists-and-chinese-robbers/

            (As for the linked thread, it was in an SSC open thread where everyone can comment. Hopefully not much more needs to be said.)

          • Ozy Frantz says:

            I am simply fascinated to discover that apparently monogamous people never experience unintended pregnancy where the father and the mother disagree about what ought to be done about the fetus, and that no monogamous person has ever done unconscionable things due to being desperate and in an awful situation. How have you achieved this remarkable feat?

            Nick T: IIRC, GiveDirectly, which is of course a rationalist front organization intended to bribe poor Africans into reading the Sequences.

            caethan: You did misread it, and it seems to me that this sort of misreading can be easily avoided through not using strangers’ Tumblr posts to pearl-clutch about their personal lives.

          • anonymousskimmer says:

            @Viliam

            “Also, correct me if I am wrong, but in America abortion is considered a more essential human right than having food or education”

You’re wrong. Access to food is subsidized at a greater level than all of family planning medicine*. Access to a K-12 education is subsidized entirely, which is the typical expectation of the broad populace (not to mention state funding for community colleges and public universities, which drastically reduces the price tag).

If any politician on the right or the left dared state we should remove all state funding for SNAP or K-12 education, or even community colleges, they would be kicked out of their party. The most you ever see is proposed means testing and drug testing, or vouchers in the case of K-12. You can be a Republican, and even a Democrat in some states, and be totally against abortion.

            * – SNAP alone is a legally obligated entitlement which costs the federal government ~$70 billion / year versus less than $600 million for Planned Parenthood. https://en.wikipedia.org/wiki/Supplemental_Nutrition_Assistance_Program

          • Jaskologist says:

            It’s not just how the participants reacted to the situation. The situation itself was created by the community through mores they were pushing.

            Like somebody said upthread, polyamory is not a new idea. If you didn’t even notice that skull, why should we believe you’ve noticed the others?

          • Ozy Frantz says:

            Jaskologist: What about this situation is caused by polyamory? Monogamous people often have unintended pregnancies. If you are using the term “polyamory” to mean “people having sex when they don’t want to have children with each other”, this is a very unusual usage of the phrase, and your criticisms apply identically to e.g. the average college campus.

          • reasoned argumentation says:

            “Those skulls were there when we got here”.

Talk about skull-unawareness.

          • Cerebral Paul Z. says:

            The argument isn’t “Those skulls were there when we got here,” it’s “That other road you want us to take instead has those same skulls.”

          • Jaskologist says:

IIRC, the case involved a married woman having sex with a married man who was not her husband. It wouldn’t have taken deep wisdom to expect that to turn out badly.

          • Ozy Frantz says:

reasoned argumentation: Most people – monogamous or polyamorous – have sex with people when they don’t particularly want to have children. For instance, a married couple may wish to delay children until they are further in their careers, or may wish to have no more children than they currently have. It is my understanding that even monogamous couples are not generally celibate in these situations, and therefore run a risk of one partner wishing to abort the fetus while the other partner wishes to raise it. I suppose one could become Quiverfull, but complaining about rationalists’ skulls as a Quiverfull person seems a bit like tossing stones from a glass house.

            Jaskologist: It seems like your prediction is that polyamorous relationships will end poorly (whether or not there is an unintended pregnancy) while monogamous relationships will be happier (whether or not there is an unintended pregnancy). My prediction is that relationships (monogamous or polyamorous) in which there is a pregnancy and one partner wishes to keep the baby and the other wishes to abort it will end poorly. Can you explain why you think the former is more plausible? This seems quite strange to me.

          • reasoned argumentation says:

            Ozy –

            You are literally incapable of even describing the problem being pointed out – all you can do is read what Jaskologist wrote, think “crimethink – must stop considering it” then spit out an entirely different argument to dismiss. This isn’t even failing to see the skull pile – this is having a mental block that requires you to see skull piles as rosebushes.

          • Ozy Frantz says:

            reasoned argumentation: Yes, I admit I’m very confused! In my defense, no one in this thread appears to have provided any sort of justification, instead saying “well, it would obviously end poorly.” I hope if it is so obvious then it will be easy for you to explain your causal model!

If it helps, I can explain my model of what went wrong here! The failures are as follows: the couple failed to use adequate contraception; Katie did not successfully predict her response to becoming pregnant, and made a promise she couldn’t keep while preserving her mental health; the child’s father attempted to coerce Katie into an abortion; the child’s father failed in his ethical duty as a father to play a role in his child’s life. None of these are poly-related.

          • random832 says:

            None of these are poly-related.

            In fact, “a man arranges with another woman, who is not a party to any kind of relationship with his (other?) wife, to have “consequence-free sex” seems so not-poly-related that I question whether it should be called polyamory at all. At least to my understanding the ‘mainstream’ polyamory movement tries to distinguish itself from old-school exploitative polygamy (and from cheating / open relationships) by only admitting relationships where everyone has more or less equal status.

            This is of course orthogonal to whether it reflects well on the rationalist movement in general, or the rationalist movement’s “flavor” of polyamory in particular.

          • quanta413 says:

            I am simply fascinated to discover that apparently monogamous people never experience unintended pregnancy where the father and the mother disagree about what ought to be done about the fetus, and that no monogamous person has ever done unconscionable things due to being desperate and in an awful situation. How have you achieved this remarkable feat?

            The big problem is not that they disagree or that he was polyamorous. If the father and mother were monogamous, most American communities would certainly expect the father to pay up and support the child. And it would be considered morally despicable for the community to put pressure on the mother to abort whether the father was monogamous or not.

I imagine the reason people are seeing added problems with polyamory is that, because the father wasn’t her husband and she found herself unable to fulfill her original commitment to abort, it shattered her relationships and left her even worse off than you would expect this sort of thing to turn out on average.

            Polyamory has upsides, but it also has downsides.

          • Ozy Frantz says:

            quanta: Katie has deliberately chosen not to pursue child support because she thinks it’s wrong to force him to support a child he did not consent to. She is following her moral beliefs at a significant personal cost. These are Katie’s personal beliefs, and I do not think that the rationalist community as a whole has any consensus on child support.

            I do not think a disagreement about an unintended pregnancy is any less likely to shatter a relationship if the relationship is monogamous. Indeed, it seems very beneficial to me that her spouse was a different person than the person who was trying to coerce her into having an abortion. Score one for polyamory!

            random: That’s not what polyamory is.

          • reasoned argumentation says:

            Indeed, it seems very beneficial to me that her spouse was a different person than the person who was trying to coerce her into having an abortion. Score one for polyamory!

            Good thing that skull pile was actually a rosebush!

          • quanta413 says:

            Katie has deliberately chosen not to pursue child support because she thinks it’s wrong to force him to support a child he did not consent to. She is following her moral beliefs at a significant personal cost. These are Katie’s personal beliefs, and I do not think that the rationalist community as a whole has any consensus on child support.

I am aware that it was Katie’s choice; but her choice was influenced by her choice of polyamory. I think it’s fair to argue that more money for the child = good, and thus it can be beneficial to hold the belief that the father should pay his share even if it was an accident and you promised otherwise. The fact that Katie doesn’t hold that belief may be an argument against the law interfering against her will, but I don’t think it’s a good argument against the general principle that fathers have a duty to support their children.

            And most people aren’t terribly libertarian about these things and would prefer that people have beliefs they view as more likely to lead to a child who is well provided for.

            I do not think a disagreement about an unintended pregnancy is any less likely to shatter a relationship if the relationship is monogamous. Indeed, it seems very beneficial to me that her spouse was a different person than the person who was trying to coerce her into having an abortion. Score one for polyamory!

            Unless I misunderstood the original story, she and her spouse separated as well. It’s hard for me to imagine any evidence short of someone’s death (in which case I am very sorry I dragged this around even more here, it’s already somewhat unkind of me) or a god-like view into someone’s else’s past that could convince me that this wasn’t influenced by the pregnancy.

And my personal view was that she lost two relationships instead of one, which is worse. There’s only so much time to spend with people, so I figure you’re either splitting time that would normally go to one spouse across multiple partners, or you’re spending time that would otherwise be spent with platonic friends or alone on having more sexual relationships. But thanks to what you say, it now seems to me that how bad this sort of thing feels and how it… scales? (I guess that’s the word) is really pretty contingent on your own psyche, so I can see how this could go either way. And of course, if you lose only one out of two relationships this may be an upside compared to the monogamous model.

But my other point (and I think most normal people would share my intuition) would be that you are more likely to end up with unbridgeable issues between parties in accidental pregnancies in a polyamorous setting, because you can go from having only two parties involved to three. And maybe other people feel differently about this, but I find the odds of an acceptable if unpleasant compromise or renegotiation drop very sharply as the number of parties goes from 2 to 3.

          • brentdax says:

            @Viliam

            Also, correct me if I am wrong, but in America abortion is considered a more essential human right than having food or education (at least judging by the revealed preferences, because many people are stupid or starving, but mere accusations that someone hypothetically could make abortions less convenient are used as a weapon during the elections)

            Access to food or education is not brought up in elections because it’s relatively uncontroversial. Access to abortion is highly controversial because some people consider allowing it to be a moral evil, while others consider forbidding it to be a moral evil.

            It’s like saying that the fact that there’s a lot of Second Amendment law but very little Third Amendment law means that Americans care more about owning guns than not having soldiers in their homes. No, it’s just that agreement on “don’t put soldiers in people’s homes” is so universal that nobody disputes it and the government doesn’t even try, while people have major differences of opinion about guns and the government sometimes does try to ban or restrict them.

          • The original Mr. X says:

            @ Ozy:

            What about this situation is caused by polyamory? Monogamous people often have unintended pregnancies.

            Unintended pregnancies are easier to handle when it’s your wife getting unexpectedly pregnant than when it’s your mistress.

          • Jiro says:

            I am simply fascinated to discover that apparently monogamous people never experience unintended pregnancy where the father and the mother disagree about what ought to be done about the fetus, and that no monogamous person has ever done unconscionable things due to being desperate and in an awful situation.

            The polygamous person here who has done awful things is not the woman who became pregnant; it’s the other two, who put pressure on the woman. They’re not “desperate and in an awful situation”.

            And polygamy is relevant because power imbalances are one of the problems people suspect about polygamy in the first place. It isn’t a defense to “polygamy encourages this” to point out that polygamy isn’t a necessary condition.

          • Kaj Sotala says:

            It’s not just how the participants reacted to the situation. The situation itself was created by the community through mores they were pushing.

            Like somebody said upthread, polyamory is not a new idea. If you didn’t even notice that skull, why should we believe you’ve noticed the others?

            Polyamory comes with its own set of problems, yes. But so does monogamy.

            If you want to conclude that the community is failing terribly by being accepting of polyamory (or for that matter any other set of social norms), it’s not enough to point out a single disaster to which polyamory arguably contributed. That’s like finding a single case in which Western-style mixed markets do worse than Soviet central planning did, and concluding on this basis that any community which doesn’t reject Western-style mixed markets in favor of Soviet-style central planning is failing horribly. You need to do a much more comprehensive analysis of the merits and drawbacks of both.

          • MicaiahC says:

            @quanta

            Unless I misunderstood the original story, she and her spouse separated as well. It’s hard for me to imagine any evidence short of someone’s death (in which case I am very sorry I dragged this around even more here; it’s already somewhat unkind of me) or a god-like view into someone else’s past that could convince me that this wasn’t influenced by the pregnancy.

            As someone who has firsthand knowledge of this story (i.e. not just what’s online about this), this is actually the exact opposite of the situation; the spouse left for entirely different reasons, and the pregnancy, if it did anything, caused the spouse to be more supportive of Katie.

            (In addition, when I said someone thought the entire situation “was because of polyamory” I received a chuckle back from the spouse)

          • John Schilling says:

            In fact, “a man arranges with another woman, who is not a party to any kind of relationship with his (other?) wife, to have ‘consequence-free sex’” seems so not-poly-related that I question whether it should be called polyamory at all. At least to my understanding, the ‘mainstream’ polyamory movement tries to distinguish itself from old-school exploitative polygamy (and from cheating / open relationships) by only admitting relationships where everyone has more or less equal status.

            I am exceedingly skeptical about how equal that status really is.

            I’m old enough to remember when this was called “free love”, and perhaps in hindsight it’s good that I wasn’t old enough to have enjoyed it at the time because I did eventually see the damage it caused then. Mostly to young women, and what I’m hearing now sounds like a really bad flashback.

            “Old-school exploitative polygamy” at least had rules to mitigate the damage. The high-status man whose mistress shows up pregnant may quietly ask her to have an abortion (at his expense), but if she says no he doesn’t push it, and he does pay child support or suffer severe legal and social consequences. The mistress’s peers, if no one else, ought to be supportive.

            The old rules were based on a sound understanding of how real people are actually wired. People do unpredictably bond with embryos they didn’t plan to create, to the point of seeing at least that one potential abortion as murder. People do unpredictably fall in love with their casual sex partners, to the point of sometimes suicidal despair when they find that, no, the partner still only sees them as a fuck buddy. And, yes, people get jealous, also unpredictably and sometimes lethally. Also, status equality in sexual relationships is impossibly difficult to pin down. Most of us are pretty good at coming up with rules to let people sometimes have sex and make babies while mitigating the harm caused by all of this.

            What the free love folks had then, and what you all seem to be reinventing now, is not any sort of enlightenment or improvement, but a new set of rules for maximizing the number of orgasms experienced by high-status people, based on the assumption that all the very real problems can be willed away by Pure Applied Reason. And now, when someone points out that no, they’re still being hurt even though everybody is playing by the ingroup’s rules, the response is a policy of trying to shame them into silence and send them off to the closest thing your society has to a convent.

            Bay Area Rationalist Polyamory isn’t big enough or old enough to have amassed the pile of skulls that Free Love did in its day, but it’s more than just one woman. And I’m seeing the same arrogant unwillingness to acknowledge the harm now as there was then.

          • Kaj Sotala says:

            What the free love folks had then, and what you all seem to be reinventing now, is not any sort of enlightenment or improvement, but a new set of rules for maximizing the number of orgasms experienced by high-status people, based on the assumption that all the very real problems can be willed away by Pure Applied Reason. And now, when someone points out that no, they’re still being hurt even though everybody is playing by the ingroup’s rules, the response is a policy of trying to shame them into silence and send them off to the closest thing your society has to a convent.

            ? Everyone in this discussion that I’ve seen has very clearly acknowledged that there was clear harm done and that what happened was bad. Your comment doesn’t seem to describe the discussion so far at all.

            That’s actually also the general feeling that I get from reading many of these comments: that they seem to be describing an entirely different reality from the one that I, or anyone I know who does polyamory, actually lives in. These always make it sound like there’s this top cabal of (mostly if not entirely male) “high-status people” who go around having sex with everyone, leaving everyone lower-status feeling left out and used.

            Whereas my experience is much closer to Scott’s: that polyamory is so unremarkable as to be boring. There are just totally ordinary people who happen to have a few more relationships going on at once than usual. This experience is echoed by e.g. a couple I know, who after observing their polyamorous friends for several years figured that “well, if poly is this ordinary then we guess that we could do it as well” and opened their relationship, with no bad results that I’d have heard of.

            As for the part about “high-status men and their mistresses”, the ordinary situation is that it’s the women who have more partners than men do. This is actually a relatively well-known problem in poly circles: that if you’re a man, opening your relationship may suddenly mean that your girlfriend is getting into a lot of relationships while you aren’t. (or as Ferrett charmingly put it: So these dudes open up their relationship, expecting to be drowned in sex, and then are astonished when they’re left dry on a beach and their girlfriend is out swimming in seas of strange dick.)

            And then there’s the other side of this, which is that this can be great for low-status men. I’m saying this because I spent a long time being one (maybe still am? dunno), and up to age 28 or so, all of my romantic relationships had been ones where the woman already had a boyfriend/husband and dating her was only an option because of polyamory. In other words, if poly wasn’t a thing, my first relationship would have come about ten years later than it actually did. And while I admit that it wasn’t always so great to be the secondary, it was still a hell of a lot better than not having any relationship at all.

            My interpretation of this is that poly is great for low-status people because poly makes dating them feel less risky to people who are already in committed relationships. If you were in a monogamous culture where you could only have one partner, you’d have much more of an incentive to make sure that they were as good as possible, because you can only have one. Whereas with poly, if you’re a woman (or man) who’s already in a relationship, why not date someone low-status if they’re otherwise nice?

            …assuming that you want to look at relationships and dating through a status lens in the first place. While I agree that status definitely does affect these things a lot, there’s also a lot that it doesn’t affect, and it seems like a common mistake to take an overly status-centric view of relationships. Many relationships basically form because two people feel good in each other’s presence – some of that good feeling may come from status issues, but there are also other sources. Depending on the personalities involved, status differences may even inhibit those feelings of goodness, if the people would prefer to feel like they were on equal terms.

            (Incidentally, I always feel a little bit weird reading these comments that characterize polyamorists as naive people who don’t really understand human nature and think everything can be solved by Pure Reason, and that then try to argue for this by making up simplistic models of human relationships in which everything seems to reduce to status…)

          • Galle says:

            Indeed, it seems very beneficial to me that her spouse was a different person than the person who was trying to coerce her into having an abortion. Score one for polyamory!

            Good thing that skull pile was actually a rosebush!

            Okay, I’m very confused. Either I’ve massively misunderstood this entire metaphor, or you’re saying that it’s a bad thing that this woman’s spouse did not try to coerce her into having an abortion. In which case, with all due respect, you’re basically saying that the path with all the rosebushes is actually full of skull piles, and we should instead take a safe detour by ascending Skull Mountain.

        • Deiseach says:

          You guys have robes? I knew about the T-shirts and Solstice meetings, but real actual robes?

          Now I’m ~~jealous~~ envious! 🙂

        • Skeeve says:

          If that’s the case, it’s only because people suck at making cogent criticisms. I suspect the real reason “rationalists” are called a religion is that they have been known to wear robes and chant ritual songs in an explicit bid to adopt religious-like practices.

          More charitably, I would say it’s more likely that people are conflating rationalists-as-“people who try to advance the art/philosophy of thinking correctly” with Rationalists-as-“members of the technocratic subculture that think Bayes’ Theorem is great”. After that it’s just a matter of pattern matching.

          And when you consider that this pattern matching turns up:

          A “prophet/messiah” (Eliezer Yudkowsky), a “bible” (The Sequences), “Burial customs” (Cryonics), a “god” (superintelligent FAI), an “afterlife” (the Singularity), and a “holy land” (the Bay Area)

          It’s really not surprising at all that the only ideas that most people can think of that match this pattern are either “religion” or “cult”.

    • tk17studios says:

      My interpretation here is that Scott is saying “Among the many criticisms we receive are a frustrating number that are actually straw. It’d be cool if we could get fewer of those, and more of the good ones.” Yeah, he didn’t emphasize the good ones in this particular post (though he did go out of his way to point out that there’s no assumption of rationalists having all the right answers, and that there are almost certainly new mistakes being made), but I think it’s okay for a single post to have a single ask.

      In this case, that ask was “Please, more of the criticisms that actually land, and less of the noise that drowns out the useful critical signal.”

      Y’know, if I’m going to put words in his mouth, and all. I could be wrong.

      Edit: By the way, I’m genuinely curious about your response to my reading, Ashley, if you have the time and are willing to spare it.

      • HeelBearCub says:

        though he did go out of his way to point out that there’s no assumption of rationalists having all the right answers, and that there are almost certainly new mistakes being made

        Well, that’s pretty close to being obviously wrong, simply because it is hyperbolic.

        But given that EY spawned a certain kind of modern rationalism, it’s also wrong in another way.

    • MugaSofer says:

      No, it’s a weak man argument.

      • Ashley Yakeley says:

        OK, this is a more accurate term.

      • bbeck310 says:

        The weak man argument would be “these bad arguments against rationality are wrong; therefore rationality is right.” I don’t think that’s what Scott is saying. The fact that he makes the same point about economics and psychology should be telling–does anyone seriously believe that Scott thinks current economists and psychologists have it all figured out?

        This just seems to be “People, please stop making terrible and outdated arguments so we can talk about much more interesting criticisms?”

        • Jiro says:

          The weak man criticism is “these bad arguments against rationality are wrong; therefore the arguments against rationality are wrong”. You don’t actually need a “therefore rationality is right” in there; the weakman is being used to attack one’s critics.

          • Jliw says:

            That doesn’t seem right to me. Addressing a weak argument only means the weak argument is wrong, and if people are making it, it should be addressed.

            The condition suggested by bbeck310 — that it must be stated or implied that the weak argument(s) are the only ones — is reasonable; if someone explicitly acknowledges that the weak argument isn’t the best one out there, as Scott has here, the accusation of weakmanning seems obviously wrong.

        • “People, please stop making terrible and outdated arguments

          Maybe the rationality movement could take a lead…

  2. Nathan Taylor (praxtime) says:

    Thanks for posting this. I’ve been reading the back and forth posts, and had been thinking about the Spock analogy as a good frame. But now I see the (embarrassingly obvious in retrospect, of course) point that the rationalist people have thought about it 10x or 50x more than I had.

    There’s a softer version of this, where, as Caplan argued, it’s about an aesthetic, along with certain kinds of personality types that are drawn to the movement. But no doubt there are many posts on this as well. So let me dig into that a bit more.

  3. sov says:

    I’m excited to read the soon-to-be-printed articles from the people who say “the name rationalism implies that rationalists think they’re perfectly rational” when they find out there’s a charity called Cops For Cancer, or that Microsoft “””Windows””” is actually an operating system and not a literal window.

  4. dave35 says:

    Glad you’ve come around on Libya! Maybe add that one to your short list of Mistakes? (Unless I missed a follow-up piece on your old Livejournal.)

    But this was pleasantly encouraging. Thanks as always.

    • michaelblume says:

      He wrote a thing somewhere about updating against interventionism in general because Libya seemed like a good idea going in and a mistake in hindsight — I was actually wishing today I knew where he’d written that so I could post it.

      ETA: Aha, Google likes me today

    • nelshoy says:

      Is Libya now a universally acknowledged mistake?

      From what I know about it currently, it seems fractured, but mostly peaceful, kind of like Somalia.

      “Mostly peaceful” being in comparison to Syria. Since the rebels in the actual Libyan Civil War were able to depose Gaddafi so quickly, is it reasonable to assume they would have probably been able to carry out a lengthy campaign anyway without US intervention, just on a longer timescale with more bloodshed for an equally dismal final result?

      • Enkidum says:

        I think “this country is currently better than Syria” is a pretty appalling standard to use. At a minimum, I think the standard to judge military interventionism should be “this country is not orders of magnitude worse than pre-intervention days”. By which standard, every single country that we know the US has successfully intervened in over the past few decades fails (I guess the last case where this is not true would be the Serbian intervention, though I’m sure many would disagree with me about that example).

        I’m under no illusions about, e.g., Assad, Hussein, Ghaddafi, the Taliban, etc.; they are/were monsters. But we’ve done the quite amazing trick of making things so much worse in their countries that it is destabilizing the entire world order. Interventionism is not looking good.

        • Zodiac says:

          I think a better standard would be “the country is not worse than it would be if the intervention had not happened”. Of course, that is not something that can be meaningfully assessed.

          • Enkidum says:

            That is a better standard, and I don’t think it’s impossible to assess. It’s not possible to measure as cleanly as my previous standard (which obviously isn’t perfectly clean either), but someone with a decent knowledge of the situation in a given region can make informed arguments for what might have happened without intervention. And to the extent that I’ve heard from those people I trust, US intervention fails according to that standard almost universally.

            The sole exceptions over the past century that I can think of: WWII and Korea (which are obviously big ones), and… uh… Serbia, I guess?

            I don’t want to argue for blanket anti-interventionism. Aside from the examples I just gave, I think the Tanzanian invasion of Uganda to depose Idi Amin and the Vietnamese invasion of Cambodia to depose the Khmer Rouge were entirely justified and made the world a substantially better place. But US interventionism has a very, very bad history.

            (I suppose things like placing massive numbers of troops in friendly countries is a kind of interventionism I mostly approve of, but it’s not what we usually mean.)

        • nelshoy says:

          >I think “this country is currently better than Syria” is a pretty appalling standard to use.

          Why? As far as I can see that’s the most likely alternative. It’s another mostly Arab Middle Eastern country that entered civil war in an attempt to depose their dictator at about the same time. Seems like a pretty decent point of comparison. The Syrian civil war has lasted many years; Gaddafi was deposed in eight months. Without US intervention, this presumably would have taken longer, and that’s a bad thing? The only way intervention is worse is if otherwise Gaddafi wins, but I find that kind of unlikely given the brevity of the war.

          As for other recent interventions being bad, I think Kuwait is pretty glad they aren’t Iraqi right now. There are also cases where noninterventionism looks to have led to very bad outcomes for those involved; see the Rwandan Genocide.

          • Enkidum says:

            Syria is possibly the single worst country to live in on earth right now; if not the worst, it’s certainly in contention. Saying “our intervention led to this country not being the single worst place on earth” is kind of a low bar, I would have thought.

            Libya is currently far worse than it was under Gaddafi. Iraq is far worse than it was under Hussein. Syria is far worse than it was pre-civil war. Afghanistan, or at least large chunks of it, is as bad as and probably worse than it was under the Taliban. These specific countries are literally destabilizing the entire world right now, and I think that American interventionism in each of them bears a great deal of the responsibility for this.

            I’m not an anti-interventionist by any means. As you say, Rwanda is an example of a situation where there should have been a strong military operation, and I gave another few examples above. But by and large, the US and other western powers seem to be supremely incompetent at it.

          • HeelBearCub says:

            @Enkidum:
            “Libya under Gaddafi” maps to “Syria under Assad”.

            Syria is still under Assad.

            What we don’t have direct access to is the counter-factual where the civil war continues.

            Now, a compelling argument can be made that the civil war would have been brutally efficient and ended quickly. But we don’t know that, and I haven’t seen anyone really try to fisk the idea that the Libyan civil war would have ended very quickly (although I think that was probably likely).

          • Enkidum says:

            I’m saying that both the Syrian and Libyan civil wars are the result of Western intervention. Said intervention is clearly not the sole cause of these wars, but I think it is pretty clear that they would have gone very differently, or not started at all, without our meddling. In both cases, the end result is worse than the start.

          • cassander says:

            >Why? As far as I can see that’s the most likely alternative.

            The most likely alternative was that Qaddafi won the civil war he was about to win, and Libya ends up looking like pre-civil war Syria, which, frankly, is not all that different from pre-civil war Libya.

            >The only way intervention is worse is if otherwise Qaddafi wins, but I find that kind of unlikely given the brevity of the war.

            It was extremely likely. He had penned up most of the people opposed to him in Benghazi and was about to invade the city to root them out. At the time of the intervention, the pro-intervention people stressed the need to intervene quickly, before it was all over.

          • nelshoy says:

            @enkidum

            Syria is the worst place on earth because of a drawn-out civil war. If US intervention in Libya only served to shorten the civil war, that’s a Good Thing. I’m trying not to treat American interventionism as a discrete thing. Let’s try to take each case on its merits, temporarily ignoring what the American government did previously. Iraq wasn’t in a state of rebellion, and I think everyone here’s in agreement that it was a mistake. Egypt also had a rebellion and deposed their dictator all by themselves. Would you have supported US intervention to keep Mubarak in power?

            @Cassander

            I don’t know a lot of the nitty gritty details, which is why I was asking. I kind of had a low prior for Qaddafi easily winning the war without US help, since he was deposed relatively quickly and it’s hard to believe some airstrikes could topple him so easily when historically the US has struggled hard to take out regimes through traditional military means. Libya just seemed like a nudge through a terrible transition state that quickened a process that was going to happen anyway. But I guess there were some pretty crucial decision points? In that case I admit that intervention was a bad idea, at least in execution.

          • Enkidum says:

            @Nelshoy

            Agreed about treating each intervention at least somewhat distinctly. But I don’t think we can (or should) treat, say, every intervention in the Middle East since WWII as independent – they’re part of an ongoing and largely disastrously misguided policy that really could end up toppling the Western world in the long run.

            I don’t know much about the tactical situation vis-a-vis Gaddafi winning or losing without airstrikes, but I do know that life in Libya is far worse than it was under him, and I strongly suspect that, as cassander says, the outcome would have been much better for the people of Libya without intervention.

            I definitely would not have supported intervention to keep Mubarak in power. I might have supported intervention to keep Morsi in power, but honestly I think it was such a clusterfuck that I don’t know that there’s anything we could have done at that point. I do think that our decades of support for Mubarak have totally screwed over Egypt (cf virtually all dictators and their countries in the Middle East).

            Somewhat tangentially…
            I think that after 9/11 there was a grand historical moment that could have been seized by a competent American government. There was massive support internationally and within Afghanistan for a large-scale occupation and rebuilding of the country. With the right kind of rhetoric, we could have gone in with an explicit decades-long commitment and a Marshall Plan of sorts, and, critically, not invaded Iraq. But instead we wasted our time burning poppy fields, paying off warlords, pissing off every major player in the region, and essentially ignoring the needs of the actual people. Bunch of goddam amateurs.

          • Deiseach says:

            we could have gone in with an explicit decades-long commitment

            Problem right there. The American administration and public weren’t in any mood for that kind of long-term commitment; it was “go in fast, hit ’em hard, wipe out the bad guys (just like the movies about how we won the Second World War)”.

            The idea that no, you’re here for the next thirty-forty years overseeing a complete rebuilding from the ground up? Nobody wanted to commit to that kind of money or manpower or, to be frank, occupation (a lot of that is based on “but we’re not a colonial power, we’re the plucky rebel underdogs who beat the big colonial power” image in popular history). Also learn from history! What do you think Britain and Russia were doing in Afghanistan playing The Great Game all that time and getting not very far in the end?

            Back when the liberation of Iraq was being pushed forward, I was blue in the face posting everywhere that this would not be a Second Vietnam (as a lot of gloomy prognostication was forecasting); it would be America’s Ulster, because if you go in like that, you have to be prepared for the long haul or else you leave it worse than you found it. Fast and cheap solutions only work in the movies.

          • John Schilling says:

            If US intervention in Libya only served to shorten civil war, that’s a Good Thing.

            US intervention in Libya only served to lengthen the civil war, by about five years and counting. We’ve been through this before. The war was almost over, at what now looks like a laughably small death toll, when France and the US decided to intervene.

            Our intervention only served to make sure the Guy Everybody Hates didn’t win. Yay us.

          • Enkidum says:

            @Deiseach:

            The government certainly wasn’t in that kind of a mood, because it was composed of fools. A decent statesman could have made the argument, I think. But they are/were in vanishingly short supply.

            I am probably (definitely) being overly naive here, but I think there were differences between this possibility and the Great Game, namely the support of a large number of the populace. But you’re probably right, in which case this is further grist for the anti-interventionist mill.

          • nelshoy says:

            @ John Schilling

            Technically, US intervention has prolonged the Korean civil war for 60+ years, but I don’t see anyone complaining about that. Libya is divided but there isn’t a lot of active fighting going on. I’m definitely of the opinion Qaddafi > current situation > active civil war a la Syria.

            @ enkidum

            It just seems like a lot of important American foreign policy questions have no good answers. When the US decides on a lesser of two evils, there’s still evil left over, but now it’s America’s fault. I think most interventions are probably wrong-headed, but some, like South Korea, have been pretty unequivocally good. I just can’t stand this attitude where America supports a bad dictator and we’re helping him oppress people, America removes a bad dictator and we’re destabilizing the region. Why are you so convinced that supporting Mubarak would have been a bad idea? Did you have enough information to be sure it wouldn’t dissolve into a chaotic mess afterwards that leaves most people worse off? I sure didn’t.

          • Enkidum says:

            I just can’t stand this attitude where America supports a bad dictator and we’re helping him oppress people, America removes a bad dictator and we’re destabilizing the region.

            There’s a very obvious third option that you’re not bringing up, namely neither supporting nor removing the dictator. This is, in general, the right choice IMHO. I’d say that intervention is only justified in the presence of immediate peril to the intervenor, or massive human rights violations in the country that can clearly be made better by intervention. There are very few cases where this holds.

            Why are you so convinced that supporting Mubarak would have been a bad idea? Did you have enough information to be sure it wouldn’t dissolve into a chaotic mess afterwards that leaves most people worse off? I sure didn’t.

            We did support Mubarak, to the tune of billions of dollars, right up until 2012. We are now about to start supporting Sisi in the same way. Both work the way a lid on a pressure cooker works. But at some point it’s going to blow up (and it did).

          • cassander says:

            @nelshoy

            Most US interventions aren’t against countries with active civil wars. When we have intervened in one, as in Libya and Afghanistan, we’ve been very effective at toppling regimes. And John Schilling is right: the U.S. intervention extended the war, it didn’t shorten it.

            @Enkidum

            If you’re going to spend a ton of money on a huge rebuilding effort, Iraq was a far better target for that effort than Afghanistan was. We know this because we did eventually launch huge reconstruction efforts in both places, spent remarkably similar amounts of money in both, and achieved far more in Iraq than in Afghanistan.

      • akarlin says:

        Since the rebels in the actual Libyan Civil War were able to depose Gaddafi so quickly, is it reasonable to assume they would have probably been able to carry out a lengthy campaign anyway without US intervention, just on a longer timescale with more bloodshed for an equally dismal final result?

        Gaddafi was winning by the time the no-fly zone was imposed. He’d likely have wrapped things up in a few more months.

      • Nornagest says:

        is it reasonable to assume they would have probably been able to carry out a lengthy campaign anyway without US intervention, just on a longer timescale with more bloodshed for an equally dismal final result?

        The rebels made a strong initial showing, but most of it dissolved by the time NATO airpower came through; by May 2, the start of the NATO intervention, they’d lost Misrata and been pushed back to the suburbs of Benghazi. It’s not impossible that they could have persisted as a guerrilla force, but in terms of conventional warfare I’d definitely have bet against them.

  5. AnonYEmous says:

    aspiring rationalists

    how about:

    “aspirators”

  6. Unsure says:

    On a minor and related note, does rationalist opinion consider HPMOR to be amongst the list of mistakes? Because there is plenty of bad writing in that work which makes rationalism look terrible. Not exactly the most sophisticated thing to attack, but it has likely reached a wide audience by now.

    • Scott Alexander says:

      I enjoyed reading it. I think it may have been a mistake in the sense that now anyone who dislikes anything associated with rationality says “Oh, you’re the group that’s entirely about writing Harry Potter fanfics and thinks it’s the most important thing, right?”. But it also attracted a lot of neat people, so maybe it was worth it. From my perspective in an existing movement, I’m not going to criticize the steps needed to make it grow.

      And from a different perspective, screw anybody who wants to dictate what kind of fiction people can or can’t write for PR reasons.

      • Unsure says:

        I enjoyed it too when I was younger. My primary point about it is the irony that the so-called rational decisions of its characters tend to be irrational, and that the work generally has enough plot holes and sexism to make the rationalist movement look bad.

        I don’t dispute that any analysis which draws conclusions about the movement from HPMOR is unfair, though I do wonder whether the flaws of the work should be said to reflect badly on the author himself.

        I agree that there is absolutely nothing wrong with writing fan-fiction for PR reasons though, as long as it’s actually good fiction in the first place.

        • daystareld says:

          So, HPMOR is not a perfect story. There has been extensive criticism of HPMOR from within the rationality community, as can be seen here:

          https://www.reddit.com/r/HPMOR/comments/3096lk/spoilers_all_a_critical_review_of_hpmor/

          I include myself as someone who thoroughly enjoyed and continues to enjoy the story, and can still find flaws in it (the top comment there is mine).

          But of the criticisms of the story, “plot holes, irrational actions and sexism” don’t seem to be justified ones, to me. Irrational actions are usually called out within the text, because the characters aren’t perfect and are allowed to make mistakes (indeed, if they didn’t, that would be an even bigger issue). Accusations of plot holes tend to fall under the umbrella of “things that I didn’t quite understand” (not accusing you of that, but saying what I’ve observed). And sexism seems to just revolve around the females in the book not having a central enough role, which is not enough for me personally to call something sexist, and which the story itself lampshades.

          So if there’s anywhere that you’ve written about these flaws in the story in more detail, I’d appreciate being able to read them, if I can.

          • reasoned argumentation says:

            The actual problem with HPMOR is that, while it’s supposed to be rationalist, Harry actually totally stops doing any investigation into things about a third of the way through and just starts making guesses which, since he’s an author insert, are mostly right. He decides that it’s safest if he doesn’t share his research, then assumes that since no one around him explains how the magical world works, no one knows – rather than concluding that magical researchers came to the same conclusion that he did and are keeping their results secret. Of course, since he’s an author insert, he’s right – other wizards are just idiots (except for his other author-insert wizard, of course).

            The author mistakes memorizing social science results for being “rational” – it’s fan fiction that got bitten by the replication crisis.

            A good (although extremely long) critique and review can be found here:

            https://forums.spacebattles.com/threads/the-wizard-of-woah-and-irrational-methods-of-irrationality.337233/

          • random832 says:

            The author mistakes memorizing social science results for being “rational” – it’s fan fiction that got bitten by the replication crisis.

            I remember one bit (it was in a conversation with Draco, I think it was about heritability of magic [on an unaccountably naive Mendelian model] and supposed inferiority of muggleborn wizards) where Harry openly declares that if you get an experimental result you are not allowed to perform any other experiments to test the same hypothesis or any related hypothesis, because of the supposed natural inclination to only do this to results you don’t like until you get one you do like.

          • daystareld says:

            “The actual problem with HPMOR is that while it’s supposed to be rationalist Harry actually totally stops doing any investigation into things about a third of the way through and just starts making guesses which, since he’s an author insert, are mostly right.”

            This is a mostly fair point that a lot of people have made in the subreddit. I don’t think quite all of his logical leaps are as lucky as portrayed, and he continues to get a number of them wrong, but it’s definitely not as satisfying as the initial premise of actually investigating magic and learning things by researching what works and what doesn’t as he tries to figure out why. My guess is EY realized that the story was going to be a billion chapters long and just started skipping that stuff for the plot, which I feel somewhat sympathetic to, since I’m fighting the same urge in my pokemon rationalfic.

            “A good (although extremely) long critique and review can be found here:

            https://forums.spacebattles.com/threads/the-wizard-of-woah-and-irrational-methods-of-irrationality.337233/

            Ugh. I’m sorry, but I stopped reading at the first post… When they spend so many words on explaining why the story is bad because it doesn’t respect the source material enough to their satisfaction (really? they’re upset that Petunia left Vernon because of shallowness? Like Petunia was some amazing character that’s being terribly maligned by this representation?), I just want to shake them and say “Do you know what a FANFICTION IS?!”

            I saw their whole paragraph about being upset because EY didn’t read the whole series and how they like Luminosity more, but I find it unpersuasive in making up for the irritation. I may be oversensitive to this as a fanfic writer myself, but it’s seriously just really off-putting as a critique, and it looks like it’s going to keep popping up throughout the whole thing every single time any character or aspect of the world does anything not like their canon self. There are even a couple of points where I think it would be a justified critique, but they seem ready to jump on it at the drop of a hat, and if it irritates me this early it’s probably going to become torturous later.

            But I don’t want to throw the baby out with the bathwater, so if you have any particularly salient criticisms from there, please feel free to highlight them.

        • Procyon says:

          As someone generally unimpressed by Yudkowsky who nevertheless enjoyed HPMOR, I construed the irony you mention as the whole point of the work. Essentially, that this is what you get when a self-important child has a wildly exaggerated view of his own intelligence: a poor understanding of why certain norms exist and lots of bad decisions. To be fair, Harry doesn’t get punished that much for any of these decisions and ends up with a mostly positive outcome, but not before quite a bit of internal self-flagellation for not being smart or rational enough. I didn’t find the amount of plot armor to be too offensive or out of line with similar works.

          Now, I’ve heard claims that Yudkowsky actually did mean for HPMOR to be a kind of guide to human rationality, as opposed to something closer to the opposite. That would be pretty funny if it were true! But I think the work speaks for itself regardless, and the author’s intent doesn’t matter too much here.

          • daystareld says:

            Right, for me a lot of people saying “This kid is supposed to be the uber rationalist? He makes so many mistakes!” are kind of massively missing the point. Double extra negative points if they also say “HJPEV is too perfect!” Like, you can’t have it both ways: either you appreciate a flawed character or you don’t.

            I think it comes from the idea that a lot of people think Harry is meant to be this inspirational figure, when to me that’s very clearly not the case. HJPEV makes mistakes and is called out on them in the story, and yeah, he suffers consequences for them, fairly often. He’s an *aspiring* Rationalist, and a young one: not a perfect embodiment, and EY never meant him to be that.

          • Evan Þ says:

            I think the style of the story pushes readers toward thinking Harry’s supposed to be an inspirational figure. We see him making mistakes, but he hardly ever suffers significant consequences, so the story doesn’t seem to recognize them as mistakes. We see him being arrogant, but he’s arrogantly insisting on things the author supports, like the power of Science!. And what’s more, didn’t Eliezer say the story was meant to inspire us by that?

          • Jiro says:

            Right, for me a lot of people saying “This kid is supposed to be the uber rationalist? He makes so many mistakes!” are kind of massively missing the point. Double extra negative points if they also say “HJPEV is too perfect!” Like, you can’t have it both ways: either you appreciate a flawed character or you don’t.

            It is possible for a character to be presented as being perfect even while he makes mistakes, if the author doesn’t characterize the mistakes as mistakes and doesn’t think we should either. Also, remember that there are various categories of mistakes (and correspondingly, various categories of perfection), and it is possible for a story to show him making mistakes in one area but being too perfect in another area.

            Also, Harry tends to have plot armor. In the real world, going around with no social skills saying “I know better than you” will fail badly, regardless of whether you actually do know something they don’t.

          • daystareld says:

            “…so the story doesn’t seem to recognize them as mistakes.”

            I’ve said it before and I’ll say it again: I think a lot of people misremember just how many times Harry makes mistakes in the story and is called out on them and suffers consequences for them. I regularly get surprised when I see people say it only happened “once or twice” or “a few times.” By my last count it was well over 30.

            I’m going to have to document it on my next read through, whenever that is, and make a post about it.

            “It is possible for a character to be presented as being perfect even while he makes mistakes, if the author doesn’t characterize the mistakes as mistakes and doesn’t think we should either.”

            Of course, but I’d contend that most people are pretty bad at assuming what EY meant to be mistakes and what he didn’t, just off of anecdotal experience.

            “Also, Harry tends to have plot armor. In the real world, going around with no social skills saying “I know better than you” will fail badly, regardless of whether you actually do know something they don’t.”

            Insofar as he loses allies and fails to ingratiate himself with many students, I think he DOES fail badly. And he suffers from being alone/being lonely quite a bit in the story, even after he gets his army.

          • Evan Þ says:

            Please do make that list! I’m especially interested in the first time he gets significant consequences – IIRC, it isn’t until McGonagall restricts his Time-Turner. (I don’t count almost being sorted into Slytherin, because it’s only “almost.” And I don’t count “failing to ingratiate himself with many students,” because it isn’t called out and portrayed as a consequence of that, nor does it clearly affect him in what he’s trying to do.)

          • FeepingCreature says:

            I think Yudkowsky intended HPMOR to be a guide to rationality without being a perfect example of rationality. I suspect Harry is more intended as a reader insert than (as some shitty criticisms think) an author insert.

          • deciusbrutus says:

            From memory, the first time Harry gets called out for doing something bad is when the Sorting Hat chastises him for being a bully. It’s not made explicit that McGonagall gives him less slack because of his antics regarding his money and spending at Diagon Alley, but it’s there.

            The first flaw HJPEV faces is his inability to resist being clever. He only manages that when forced to avoid doing clever things that might destroy the world, and then only for that subset of clever things.

            Sometimes the clever things that he thinks of actually work. Sometimes he uses dark arts of persuasion, like convincing Draco that he had sacrificed his belief in blood purity to science. Sometimes his reaction to a clever idea to perform a jailbreak is “of course, let’s do it”.

          • Evan Þ says:

            He’s called out – then and other times – but he doesn’t suffer consequences. The Sorting Hat’s criticism has no consequences for him; even his momentary sorting into Slytherin is retracted a moment later and he gets into Ravenclaw after all. The only lasting effect would be what he decides to do differently because of that criticism, which doesn’t count as a narrative consequence.

      • Anon256 says:

        I enjoyed reading HPMOR but feel that it essentially amounts to false advertising for the rationalist movement. Specifically, it portrays rationality and science as a path to power (including political power), when in the real world they really aren’t. Its role in attracting people to the community has increased the fraction of people who are interested in acquiring power, but the tools studied and taught by the community are as useless as ever for actually doing so.

        • deciusbrutus says:

          Rationality is about doing what works. Using rationality for power looks exactly like using the methods that work best for power.

          Gaining power has been refined a lot over the millennia, so there’s little that rationality-specific focus can do over power-specific focus.

      • bizdakka says:

        I would never have read this blog, or known anything about this community, had I not encountered HPMOR.

      • MostlyCredibleHulk says:

        I agree with your last paragraph; nobody should *dictate* anything. One is free to critique, though.

        And I think the critique of the HPMOR/rationalism relationship is not along the lines of “*all* rationalists care about is HP fanfics” – that would be an idiotic one indeed, and anyone who seriously advances such an argument is an idiot. There’s however a completely non-idiotic (at least IMO) critique that goes along the following lines: *when* rationalists (of course, this is generalizing, insofar as you can judge multiple people by the actions of one) write fanfics, they do it in a manner that shows lacunae in their thinking and obvious literary mistakes, which makes it look terrible as an apologia for rationalism. And that reflects on other arguments, maybe not fairly, but it does. As much as HPMOR is perceived as being an argument for rationalism (maybe not a logical argument, but it’s not uncommon to argue for ideas by creating art promoting them), it is not a very good argument. It’s like a comedian combing his hair in a particular way and acting like a buffoon to criticize Trump – it may be hilariously funny (though too often it’s actually not) but it’s not a really good argument against Trump policies. And I think if one wants to criticize Trump policies one must be particularly careful to avoid ever mentioning comedians as contributing anything to that goal.

    • fictional robotic dogs says:

      it’s the kind of book that, if you dislike it, you REALLY dislike it. but fwiw it continues to be well received on goodreads. wish goodreads made it easier to track ratings over time, but a quick glance at the last week:

      5/5: 18
      4/5: 11
      3/5: 3
      2/5: 1
      1/5: 1
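
      fwiw, those counts work out to an average of roughly 4.29/5 across 34 ratings. a minimal sketch of the arithmetic in Python, using only the counts quoted above:

      # implied average rating from the week's goodreads counts above
      counts = {5: 18, 4: 11, 3: 3, 2: 1, 1: 1}  # stars -> number of ratings

      total = sum(counts.values())                   # 34 ratings
      stars = sum(s * n for s, n in counts.items())  # 146 stars in total

      print(f"{total} ratings, average {stars / total:.2f}/5")  # 34 ratings, average 4.29/5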

    • nelshoy says:

      I had a fun time with it, but even though I’m a big fan of EY and his writings I do find it kinda embarrassing for rationalists. He’s free to write as he wishes and ideally shouldn’t be judged for it, but I think it does serve to lower the status of the community. Status plays a massively important role in recruiting people to help you accomplish goals, and EY’s goal is literally trying to Save The World.

      Yeah, HPMOR’s gotten new people interested, which is great. But I think it’s also put off many others, and is a way easier target for ridicule IMO than actual LW beliefs.

      Could you ever see a famous high status person like Elon Musk identifying as a rationalist? That would potentially make a way bigger difference for the community and AI risk than HPMOR. But think of the media scrutiny! Terminators and basilisks are bad enough, but throw Harry Potter fanfiction in there and LW is just something to point and laugh at. Rationality started out a low status group with weird beliefs, and HPMOR at best does nothing to improve that.

      Is this all way too much responsibility to hold a guy to? Probably, but it follows from his own beliefs, and I worry his personal war on status is damaging to his higher goals.

      • sketerpot says:

        I’m finally starting to see what Eric Raymond was talking about when he wrote about movements becoming independent of a charismatic founder by declaring him a nut and no longer relevant. Maybe it’s helpful in the long run? It’s kind of ugly to watch, though.

        • tk17studios says:

          There are a number of us that deliberately pump against this. In a comment below, I note that Eliezer’s no longer particularly representative of the broader rationality community, but that’s not to say that he’s outside of it, either.

          Eliezer is not a nut, and Eliezer is not irrelevant. I owe him a lot, and so do several thousand other people. There are definitely people that disagree, but making statements like this in public is part of how I push back against them.

          Edit: Also, I don’t deny that Harry Potter fanfiction is already pretty inherently low status, but the vast, vast, vast majority of people who get a status boost out of sneering at it haven’t had a fraction of its impact on the world. As a guy who was a nerdy outcast in middle school, I take that as fairly solid consolation. I’d like to live in a world where we judge actual impact above sneerability anyway.

          Edit edit: the above all sounds like I’m defending against an attack from sketerpot. I’m not—sketerpot clearly wasn’t making any sort of attack on me or people like me. I more took it as a chance to say words that were only vaguely in response. Thanks for the opening, sketerpot.

          • nelshoy says:

            Agreed. Like I said, I wish people didn’t care about what kind of art you consume and create in your spare time. My criticism can be condensed to “pick your battles better”, and I’d hate to ostracize or disavow someone who’s done so much for us over a petty complaint like that.

          • I note that Eliezer’s no longer particularly representative of the broader rationality community

            Something which insiders know, and outsiders do not. The kind of mistake outsiders are making is understandable.

        • Reasoner says:

          Agreed. It’s plausible that the world would be better off if MIRI fired Eliezer. Reasoning:

          * Prospects for AI safety generally don’t look that good. Therefore it makes sense to try risky (high-variance) strategies.

          * MIRI itself represents a tiny fraction of the world’s top math and CS talent. MIRI’s individual contributions are probably tiny compared to a small shift in the desirability of working on AI safety for the rest of the academic ecosystem.

          * Academia is a Red Queen’s race. Reputation is the currency of academia. AI safety has lower reputational stock than it otherwise would due to Eliezer’s antics. Eliezer’s reputation as a public figure cannot be repaired (and even if it could, Eliezer would resist whatever steps are necessary).

          * Eliezer did for AI safety research what Timothy Leary did for LSD: he made it popular and disreputable. The disreputable aspect is not inherent to AI safety. It’s a guilt by association thing.

          * If MIRI fired Eliezer, that would be heard around the internet. It would represent a step towards the “gentrification” of AI safety. This wouldn’t do much to reduce MIRI’s research output. It doesn’t seem like Eliezer is super involved in MIRI’s research nowadays. And even if Eliezer is doing valuable research work, I’m sure he could find people to support him outside of the structure of MIRI.

          The best counterargument I can see is that a non-amicable divorce could be harmful to the current ecosystem. Anyway, I think it would be pretty reasonable for MIRI to fire EY the next time he does something substantially crazy, and maybe even before that.

    • alexschernyshev says:

      Just a small note: in almost all cases when I actually talked to people about what they didn’t like about HPMOR, the specific criticisms were provably wrong. The “Harry is an author’s self-insert” thing has been addressed multiple times before, so I’m not hearing it as often as I used to. I’d very much like to see a more detailed critical review which doesn’t boil down to a variant of “Harry is not behaving like a normal kid would” (duh) or “Harry is often wrong even when he thinks he’s being rational” (duh), both of which are [Spoilers, rot13] gur prageny cybg cbvagf bs gur svp.

      • Deiseach says:

        I certainly hope Potter-Evans-Verres isn’t an authorial self-insert because he’s such an objectionable little toad I want to feed him toes first and inch by inch to the lake monster 🙂

        But that probably is part of the problem; I’ve certainly read the “No, he’s meant to be this annoying know-it-all in the start but once you get to a certain point all this gets turned on its head and he learns humility by finding out that he’s been so freakin’ wrong all along”. The problem is, he’s such a toad up to that point that I for one would rather spork out my eyes than keep reading to the part where he gets hit by the dropping penny.

        But eh, fanfic is down to personal taste. For someone else, Harry Potter AU may be the very thing they are longing to read, but it’s not my particular cup of tea (even the ordinary Potter fanfic never enticed me in). So me not liking it says nothing more than YKINMK and shouldn’t be taken as a critique of the writing style, subject matter, or execution of content.

        • Nornagest says:

          I kinda get the impression that Eliezer’s approach to this evolved as the story went on. In terms of the bare bones of plot, calling it a “comes of age and learns humility” story isn’t really wrong; Harry does turn out to have been wrong or naive about a lot of things, and this does turn out to be important to his eventual happy ending. But this isn’t remotely telegraphed in the early chapters; even from reader perspective, there’s no indication then that he’s supposed to be seen as anything other than awesome.

          It’s hard to call that anything but bad writing if the plot was always supposed to go in that direction — surprises are okay, but there need to be enough hints that you think “oh, I should have seen that coming” — but it fits well if Eliezer realized too late that he was writing a plot that only works if everyone but the leads is an idiot, or that Harry needed a character-development arc. Which, let’s be fair, is pretty common in episodic formats.

          And yeah, Eliezer is that kind of guy IRL.

          • hf says:

            This is an example of a ridiculous criticism, either blatantly false or defining away the entire plot. Harry’s identity is central to that plot. EY foreshadows it at least as early as the child-services freakout (though it may actually be less blatant after revision, I forget). It was in my mind as an explicit possibility at least as early as Harry’s conversation with the Sorting Hat, which talked about his flaws and told him that if his scar held anything like a ghost, “it would be part of this conversation, being under my brim.”

            I think any honest critic will grant that Harry’s identity changes the whole story – as does the fact that Voldemort’s thinking was flawed both practically and morally. I also think any honest critic will grant that all this was likely intended from the start.

          • Jiro says:

            This is an example of a ridiculous criticism, either blatantly false or defining away the entire plot. Harry’s identity is central to that plot.

            Harry’s identity is properly foreshadowed. But Harry’s identity only explains why Harry acts like an arrogant know-it-all. It doesn’t explain why Harry gets away with being an arrogant know-it-all. The fact that he can act like that and not be treated as a disciplinary problem (or even just like a person with no social skills) makes him more like a wish-fulfillment Mary Sue than someone being realistically shown to have Voldemort inside his head.

          • random832 says:

            It was in my mind as an explicit possibility at least as early as Harry’s conversation with the Sorting Hat, which talked about his flaws and told him that if his scar held anything like a ghost, “it would be part of this conversation, being under my brim.”

            Something that had zero evidentiary value to the reader for his situation (vis-à-vis souls and horcruxes etc.) not being the same as canon, because there was no third party to the conversation in canon.

          • Nornagest says:

            I think any honest critic will grant that Harry’s identity changes the whole story – as does the fact that Voldemort’s thinking was flawed both practically and morally. I also think any honest critic will grant that all this was likely intended from the start.

            I don’t disagree with any of this, but I also don’t see it as relevant to my criticism above. Maybe “anything other than awesome” was overstating it; Harry’s occasional callousness, his narrow focus, his penchant for evil-overlord theatrics were all pretty clearly meant to be reinterpreted in light of his, er, existential status. But those aren’t especially central to the early chapters, and except for some of the theatrics they’re not what we’re supposed to admire him for. I’m pretty sure readers were meant to take his early relationship with the setting (“child prodigy pulls back the curtain on a mad world”) at face value, and I’m also pretty sure Eliezer was angling for an analogy to rationalists in the real world (bits of the same worldview are scattered throughout the Sequences). It’s only later that some of that got walked back.

          • hf says:

            Jiro, since that’s a different criticism, I’ll only say you didn’t seem to respond to the fact that the author meant you to read the story more than once. You might want to read the following as well.

            Nornagest: in Chapter 2 Harry exclaims that what he’s seen would allow non-locality or “FTL signaling”. Within Rowling’s world this is correct; there are non-local FTL Time-Turners. McGonagall alludes to this in the same chapter, and I think you’ll grant this was intended from the start. Harry has a chance to make an accurate prediction. He does not.

            Later he buys a soda drink which repeatedly confuses him. He tells himself he wants to know how it works, that he has to investigate this (his emphasis) and that he should try an “experimental test” on occasion. He notices that his initial thought as to how the drink works does not make sense. Harry has a chance to make an accurate prediction. He does not.

            In Chapter 13 (this is the author giving you a rough idea of how long you should wait for shoes to drop), Harry has experiences which I immediately attributed to time travel. Eliezer added a note to that chapter assuring people that it made sense and they should try to solve the puzzle, which tells me that he expected everyone to get it right off. Harry has a chance to make an accurate prediction. He does not. (He does guess that Dumbledore controls the game, but that doesn’t seem nearly true enough.) This is also when we learn that he has a self-recognition code, which I mention because it is bloody important.

            Later in the same chapter, a painting outright tells him that “the one who awards or takes points is always you.” Harry has a chance to make an accurate prediction. He does not.

            Going over the events of that chapter, as it were, he notices something wrong with his actions related to my earlier point. As I explicitly said before, this is connected. It is not a case of “narrow focus.” Ask yourself what Voldemort believed his own goal to be. Then ask when he ‘died,’ what happened in the next nine years plus four months, and what that implies in the most natural reading of the story.

            Harry does make a clever and useful suggestion in the next chapter, and seems properly impressed with the import of time travel in general. It takes him until Thursday to try something that he seems confident will work, and produce a new discovery; only after the fact does he see that reality went easy on him.

          • Nornagest says:

            Okay.

        • Jiro says:

          I’ve certainly read the “No, he’s meant to be this annoying know-it-all in the start but once you get to a certain point all this gets turned on its head and he learns humility by finding out that he’s been so freakin’ wrong all along”. The problem is, he’s such a toad up to that point that I for one would rather spork out my eyes than keep reading to the part where he gets hit by the dropping penny.

          The big problem is that having such behavior eventually fail is not realistic. It should immediately fail.

          Having it eventually fail makes it seem more like the author changed his mind than that the character was meant to be flawed all along.

          • Evan Þ says:

            Realism aside, it means we only get to the consequences after dozens of chapters of him by all appearances succeeding. Guess which one sticks in our imaginations?

          • Deiseach says:

            The big problem is that having such behavior eventually fail is not realistic. It should immediately fail.

            Yeah, biting your teacher when you’re in third class shouldn’t be treated as an amusing quirk, it should be treated as “okay, you really need to be taught how to behave in a civilised manner and if you can’t learn then maybe you need professional intervention”. Biting is a normal developmental stage you go through (and then grow out of) when you’re aged two to three, not nine years old and in third class. I can’t help but wonder what Harry Three-Names would have done if my mother’s cure for biting had been applied 🙂

            As an aside, are American primary schools different? Over here, there wouldn’t be a separate maths teacher, there would be one teacher for the class who taught all the subjects. Or am I misunderstanding, and it only means ‘maths teacher’ in the context of why Harry Three bit her? Never mind that one explanation might be that she mightn’t have known the word logarithm but only referred to it as “this is what the log of a number is”, though she would know the mathematical concept okay.

            A lot of my resistance is because of the suspicion I have that Nornagest mentions; Harry Triple-Decker was intended to be The Only Sane Rationalist and Ultimate Bossy-Boots Know-It-All Who Would Be Proven Right from the very start, but as the story went on and reader feedback came in, the author had to swerve and adjust course so that Harry Triple-Barrelled-Surname would get an attitude adjustment.

            And this is pure nitpickery of the worst kind, but “I’m Evans-Verres and he’s Verres-Evans because we’re just that unique and special and ultra-precious about signalling how progressive and right-on about inverting and subverting the patriarchal custom of the woman taking the man’s name we are” rubbed me up the wrong way. Pick a surname and stick with it, and thank goodness they didn’t spawn: would the kid have been named Intelligencia Evans-Verres-Verres-Evans? Or possibly Brainella Verres-Evans-Evans-Verres?*

            And do we ever find out if Harry’s parents were plain Mr and Mrs Potter or were they also Potter-Evans and Evans-Potter? I have a feeling they were too sensible to mess around with “We positively need two bijou separate sets of surnames, one for each of us, personally customised”.

            *Names ripped off from Private Eye’s “Mary Ann Bighead, a parody of journalist Mary Ann Sieghart, often writes columns trumpeting her own brilliance and that of her daughters Brainella and Intelligencia.”

          • Nornagest says:

            As an aside, are American primary schools different? Over here, there wouldn’t be a separate maths teacher, there would be one teacher for the class who taught all the subjects.

            No, there’s usually one teacher for everything (except gym and music) until sixth grade or so, when it starts getting broken down into subjects. Details differ between programs but it’d be very rare to see a separate math teacher in third grade, at least outside of private schools.

            And it’s implausible that logarithms would come up that early, but that at least can be written off as HJPEV reading his dad’s math textbooks and being his charming self.

            The names thing was probably supposed to be satire roughly along the lines of Rowling’s “Privet Drive”, but Eliezer wouldn’t have had a native understanding of the British connotations.

          • Randy M says:

            The most plausible way for logarithms to come up in an elementary school discussion is talking about the Richter scale for earthquakes, I believe, to explain why the difference between 9 & 10 is more than that of 3 & 4. How likely kids are to be familiar with this scale depends on where they live, of course.
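
            A toy illustration of the arithmetic, for anyone who wants it – the numbers are invented and “amplitude units” is my own loose shorthand, so take this as a sketch rather than seismology:

            # Richter-style magnitudes are log10 of measured amplitude, so equal
            # magnitude steps hide wildly unequal absolute gaps.
            gap_9_to_10 = 10**10 - 10**9  # 9,000,000,000 amplitude units
            gap_3_to_4 = 10**4 - 10**3    # 9,000 amplitude units
            print(gap_9_to_10 / gap_3_to_4)  # 1000000.0 – a million times larger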

          • Deiseach says:

            HJPEV reading his dad’s math textbooks and being his charming self

            Yeah, I was imagining Harold Thrice-Blessed With Nomenclature doing his plum in the mouth “Pedagogue, pray instruct me – or rather, my sub-par class mates, for naturally I already know all about it! – in the logarithmic method if you would be so kind” routine and getting a You wha’? reaction, whereupon he sinks his teeth into her, under the impression that she is ignorant and not that he has been a toffee-nosed git. (Also, I rather doubt the adopted son of an Oxbridge professor is going to the local bog-standard comprehensive, unless the Verres-Evans-Evans-Verreses are radically signalling their leftist cred, which not even actual Labour Party Corbynista politicians do).

            Yudkowsky could have used Brit-picking help (some of his jokes don’t come off but induce wincing) but then again, writing it in the vein of “Hogwarts High” rather than actual British schooling etc. is very much what I’d expect from Americans doing HP fanfic, so he was being mainstream there 🙂

          • rlms says:

            To be fair to Corbyn, he has principles on this issue: he got a divorce when his wife wanted to send their son to a selective school.

        • The original Mr. X says:

          But that probably is part of the problem; I’ve certainly read the “No, he’s meant to be this annoying know-it-all in the start but once you get to a certain point all this gets turned on its head and he learns humility by finding out that he’s been so freakin’ wrong all along”. The problem is, he’s such a toad up to that point that I for one would rather spork out my eyes than keep reading to the part where he gets hit by the dropping penny.

          Yeah, that’s pretty much my reaction, too. I tried reading HPMOR once, but quit halfway through because Harry was just acting like a smug, obnoxious Mary Sue all the time and not suffering any real consequences for doing so. Sure, people say this gets better later in the story, but a good story should be enjoyable from chapter one, not from chapter sixty.

    • carvenvisage says:

      Yeah that ending was a travesty. I really thought the story was going somewhere.

  7. drossbucket says:

    I have a horrible feeling that I’m both of your improv annoying people :/

    I’m honestly pretty ignorant about economics, so I know that it’s better to keep my trap shut about that one. But with the rationalists I have at least put the time in! I’ve read a lot of the original Less Wrong content by now, read all the Slate Star Codex posts, engaged a reasonable amount with rationalist-adjacent tumblr and poked around a number of the other associated blogs.

    And I still just don’t understand, unfortunately (those are links to my version of being the annoying person). I like the community – that’s why I spent all this time on it – but I still just don’t see how it’s reconciled its interest in “domain expertise, hard-to-define-intuition, trial-and-error, and a humble openness to criticism and debate” (to quote your annoying person) with the sort of framework it started out in around 2008, which was highly focussed on very formal mathematical models of cognition. I’m not sure how far the rationalists really have left “the early days of our own movement on Overcoming Bias and Less Wrong”.

    I know that I’m missing a lot of nuance from not hanging around one of the biggest communities in person, and maybe people do have sophisticated stances on how to reconcile the two. If so I want to know about them! But I think it’s still really hard work to get this information from the internet, so sometimes I just get frustrated and post ranty comments.

    Update: I should maybe clarify that I haven’t read any of whatever this latest internet argument about rationalists is, so I’m missing context here. Maybe the arguments there really are terrible.

    • tk17studios says:

      Any rationalist detractors, skeptics, and critics can email me at my work email (duncan at rationality dot org) and I’ll happily send you a ~200pg handbook of a fairly solid representation of this community’s up-to-date take on rationality, in exchange for you filling out a survey now, and again six months later (you get the book and join the control group!).

      It’s really left “the early days of our own movement on Overcoming Bias and Less Wrong”.

      • drossbucket says:

        Interesting, and thanks! Have emailed you.

        I assume from the email address that this will be related to the CFAR side of things. Do these ideas have broad traction in the wider rationality community now, or are they localised to a small part of it?

        • tk17studios says:

          CFAR is generally pretty solid as both a magnet for trending community ideas and a shaper of community interest. I think that, due to both effects, we’re something like my-gut-tells-me 85% “up to date” on what the broader rationality community is paying attention to.

      • alwhite says:

        I’d be interested in reading the book but I don’t consider myself a skeptic or critic of the movement. Would you still accept me as part of the control group?

      • Barely matters says:

        I also emailed you with interest.
        Thanks for this offer!

    • marvy says:

      I don’t understand your question, but I’m going to try to answer it anyway. There is no way this can go wrong! (sarcasm) For bonus points, I also don’t hang around in person, so whatever nuance this makes you miss, I’m missing it too. Unlike you, I don’t know that it’s better to “shut my trap” when I don’t know what I’m talking about. Hence, the next few paragraphs.

      Suppose you were to ask, say, Eliezer Yudkowsky, whether a perfect reasoner should “theoretically” need to know anything other than Bayes’ rule? What might he say? (This keeps getting better: I’m trying to answer a question I don’t understand by putting words into the mouths of famous people who I don’t even know.) I think he’d say something like this: if you have a good way to describe your hypothesis space, and an infinite amount of computing power, then using Bayes’ rule and almost nothing else, you could rapidly increase your knowledge and understanding of the world, using something like AIXI (see Wikipedia for more about AIXI). This sounds like “a formal mathematical model of cognition”; perhaps that’s the kind of thing you had in mind. If not, you’ll have to clarify your question.
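
      To make that concrete, here’s a minimal sketch of that kind of idealized updating – a tiny hypothesis space of coin biases and nothing but Bayes’ rule. The coin example and all the numbers are mine, purely for illustration:

      # Toy Bayesian updater: a tiny hypothesis space of coin biases,
      # updated on observed flips using nothing but Bayes' rule.
      hypotheses = {0.3: 1/3, 0.5: 1/3, 0.7: 1/3}  # P(heads) -> prior probability

      def update(priors, observation):
          # P(h | data) is proportional to P(data | h) * P(h)
          def likelihood(h):
              return h if observation == "H" else 1 - h
          unnormalized = {h: likelihood(h) * p for h, p in priors.items()}
          total = sum(unnormalized.values())  # P(data), the normalizer
          return {h: p / total for h, p in unnormalized.items()}

      beliefs = hypotheses
      for flip in "HHTHHH":
          beliefs = update(beliefs, flip)
      print(beliefs)  # most of the probability mass ends up on bias 0.7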

      But what’s the actual goal here? The goal is not to figure out an algorithm which we could use to program a robot to draw correct conclusions, assuming that robot has access to infinite computing power. There are several goals actually, but one of them is that we want an algorithm that we can actually implement on hardware that we have available, and that makes as efficient use of that hardware as possible. The hardware that’s most relevant here is the modern CPU: Cerebral Processing Unit, more commonly known as the human brain 🙂

      (Hey, Scott’s not the only one who can’t resist a pun. It’s not my fault he’s better at it.)

      Which brings us to things like “domain expertise, hard-to-define-intuition, trial-and-error, and a humble openness to criticism and debate”. Can we formally prove that all those things are good ideas? Probably not, depending on how strict your standard for “formal” is. But that’s okay; most mathematical proofs are not maximally formal either. Let’s back them up informally instead. One at a time:

      1. Domain expertise. Even a perfect thinker should respect domain expertise, because domain experts have seen information you haven’t. For instance, if you haven’t ever seen bacteria under a microscope, you should listen to biologists who say they know what they look like. If you’re not an ideal thinker, perhaps because you don’t think infinitely fast, you should also respect the fact that they’ve thought about this domain far longer than you have.

      2. Hard-to-define-intuition. This goes back to what I said earlier about exploiting hardware resources as well as possible. What is a hard-to-define-intuition? It’s your brain telling you “I think this is the answer, but I can’t or won’t tell you why”. So, your brain computed the answer “for free”. Should you trust it, or discard the answer for fear it leads you astray? Depends! The ideal thing to do is collect some statistics about which sorts of situations your intuition tends to be right in, and pay proportionally more attention to your intuition in those cases (there’s a toy sketch of this after the list). There’s that Bayes’ rule again! And sometimes, the way you formed that intuition is that your brain did some calculation akin to Bayes’ rule itself, and just didn’t tell you about it. And maybe over time your intuition improves (because your brain got exposed to more data and thus performed many Bayesian updates). Yet another reason to trust domain experts: their intuitions are better, and Bayes’ rule gives us at least one reason why 🙂

      3. Trial-and-error. Again, even an ideal reasoner needs this a bit, and everyone else needs more. Trial-and-error is just a special case of getting more information about the world. For instance, Thomas Edison tried many different designs for a lightbulb before finding one that worked. You might be tempted to say that, with a perfect knowledge of chemistry and a powerful computer, he wouldn’t have needed to actually build them, just simulate them. Fine, but first you need a perfect knowledge of chemistry, which probably requires chemistry experiments, which probably involves trial and error. Furthermore, even if you’re just simulating, that’s still trial and error, just faster, since the computer is doing it so you don’t have to wait an hour to find out that your light bulb design will burn out way too fast. Even something like solving Sudoku puzzles is trial and error. The best algorithms make fewer errors and catch them sooner, but I don’t think there will ever be a Sudoku solver that just goes directly to the solution, in the sense that if you want to solve for x in “x^2 + 7x – 9 = 0”, there is a way to do it “directly” that doesn’t feel like trial and error at all.

      4. Openness to criticism and debate. This one is actually pretty useless for a perfect thinker who never makes any mistakes and thinks infinitely fast. But even if you never make mistakes, you probably only think finitely fast. So let’s say you’re a philosopher and you’re trying to solve a tricky problem, such as, say, the ultimate answer to life, the universe, and everything, to pick an example at random. One thing you can do when faced with a hard problem is parallelize. Concretely, you might be lucky enough to find other people who also want to solve this problem. Now, if this were an “easy” problem like sweeping the floor, then you just say “I do the left side of the room, you do the right side”. But in this case, the problem is so poorly understood that you don’t even know if the terms left side and right side make sense. It’s more like sweeping the floor of a space ship in zero gravity when the lights are turned off, or something. So if someone spends ten years to discover that a promising-seeming line of attack is a dead end, and someone else discovers that a line of attack that looked hopeless is actually yielding some tasty low-hanging fruit (hurray for mixed metaphors!), it would be nice if they were to tell you about it. But if someone comes up to you and says “this line of attack is hopeless; save yourself 10 years”, you may need some convincing. So you give your reasons why you think it’s not. And they tell you they thought the same thing 7 years ago, but it turns out that there’s a subtlety with XYZ and so it doesn’t work. And maybe you point out some things they haven’t thought of, and maybe they point out things you haven’t thought of, and this sounds a whole lot like a debate. So even if all people thought exactly the same way, debates seem like a quick way to get each other up to speed on each other’s progress: if before the debate you know X and I know Y, then after the debate we both know X and Y, on a much deeper level than if we just skipped straight to the conclusion. Since debates of this form will often contain statements of the form “It seems to me that you are wrong” (or else what is there to debate?), openness to criticism becomes important.
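
      And the toy sketch promised back in point 2: calibrating trust in your intuition by tracking its hit rate per domain. The domains and counts here are invented, and the +1 “Laplace” starting values are just one simple choice among many:

      # Toy calibration log: track how often intuition pans out per domain,
      # then trust it in proportion to its record (Laplace +1 smoothing).
      from collections import defaultdict

      record = defaultdict(lambda: [1, 1])  # domain -> [right + 1, wrong + 1]

      def log_outcome(domain, was_right):
          record[domain][0 if was_right else 1] += 1

      def trust(domain):
          right, wrong = record[domain]
          return right / (right + wrong)  # estimated P(intuition correct | domain)

      for _ in range(8):
          log_outcome("chess", True)      # intuition keeps panning out here
      log_outcome("stock picks", False)   # ...not so much here
      print(trust("chess"), trust("stock picks"))  # ~0.9 vs ~0.33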

      And that’s all. If you or anyone else has read this far, I apologize for wasting your time instead of hitting delete like I should have after writing all this.

      • Suppose you were to ask, say, Eliezer Yudkowsky, whether a perfect reasoner should “theoretically” need to know anything other than Bayes’ rule? What might he say? (This keeps getting better: I’m trying to answer a question I don’t understand by putting words into the mouths of famous people who I don’t even know.) I think he’d say something like this: if you have a good way to describe your hypothesis space, and an infinite amount of computing power, then using Bayes’ rule and almost nothing else, you could rapidly increase your knowledge and understanding of the world, using something like AIXI (see Wikipedia for more about AIXI).

        But EY does not talk about Bayes only in the context of ideal reasoners. For instance, he thinks Bayes should replace science. Perhaps in his mind “Bayes” is a ragbag of heuristics that non-ideal reasoners could and should use – but to everyone else, Bayes is a mathematical rule. They are naturally going to hear him as recommending an algorithmic thingy as the only epistemology anyone needs, because they are not party to his idiosyncratic definition. The misunderstanding is down to the way he expresses himself.

        • FeepingCreature says:

          Eliezer does not think that Bayes should “replace” science, Eliezer thinks Bayes could fill the holes around science; he thinks that Bayes is the computationally expensive general case of which science is the approximate but well-understood “simple” instance. (This is in the context of people seriously claiming that you shouldn’t consider any arguments that are not scientifically proven, which would rule out all speculation about qualitatively different futures.) It’s like accusing relativity of wanting to replace Newton’s laws; if it works as advertised, the simpler laws will just fall out as a special case.

          • That’s not relevant. My point was about the (mis)use of “Bayes” as a piece of terminology.

            In Science or Bayes, EY says that the use of Bayes instead of Science would have led to MWI being accepted earlier. But his justifications for Many Worlds are not based on Bayes as a method of mathematical probability, they are based on handwaving conceptual reasoning (which is, incidentally, taken wholesale from the work of David Deutsch, who is not a Bayesian!).

            So Bayes in that context does not mean maths…but to an outsider, it would mean maths.

          • hf says:

            Actually, AKA1Z, even in the early posts you’re talking about you can go read Eliezer talking about improved versions of Solomonoff Induction – specifically a version that assigned probabilities to sequences of data, which does sound like it would mathematically favor “MWI” over most if not all other interpretations.

            Now, this business of improving SI is an open problem. Slightly more recent posts make this clear, and imply an argument for MWI that is not yet mathematical because we’re in the process of formalizing it. Feel free to engage with the actual work being done.

            I predict that someone in the next twenty to forty years will come up with a definition of Bayesian “naturalized induction”, and insofar as we can apply it to quantum mechanics – perhaps in a simplified case – it will say that a person living in MWI would experience the Born rule.

          • Actually, AKA1Z, even in the early posts you’re talking about you can go read Eliezer talking about improved versions of Solomonoff Induction – specifically a version that assigned probabilities to sequences of data, which does sound like it would mathematically favor “MWI” over most if not all other interpretations.

            “sounds like it would” – which is to say, no mathematical proof has been offered, and instead what we have is conceptual handwaving that such a proof is possible.

            Now, this business of improving SI is an open problem. Slightly more recent posts make this clear, and imply an argument for MWI that is not yet mathematical because we’re in the process of formalizing it.

            Is it or is it not misleading to say you have Bayesian proof of something, when in fact you have only handwaving about the future possibility?

            Feel free to engage with the actual work being done.

            Well, I tried before and guess what happened…

        • Viliam says:

          For instance, he thinks Bayes should replace science.

          I thought his point was that science is “officially” only about testing hypotheses (I’m simplifying this a lot), but the question of “where do these hypotheses actually come from?” is kinda taboo.

          The process of generating a hypothesis does not have to be “scientific” at all — your way to scientific fame can start by having a dream about a mythological snake, as long as it allows you to make measurable predictions, and the experiments confirm them.

          So, where do the scientific hypotheses come from? Officially, anything goes… but intuitively, just generating random strings, and experimentally testing the ones that happen to make sense linguistically, would most likely not result in a great scientific career. There must be some way that is, at least, better than completely random. But we can’t call that “science”, because science is the word used for what happens after the hypothesis is proposed.

          Bayes is an explanation of how this could possibly work.

          • I thought his point was that science is “officially” only about testing hypotheses (I’m simplifying this a lot), but the question of “where do these hypotheses actually come from?” is kinda taboo.

            Well, Ok, everyone is seeing something different in that ink blot. But I am a trained scientist and I never noticed any taboo. OTOH, you don’t have courses in hypothesis formulation because no one has boiled it down to some algorithm or teachable set of techniques. Which might be a problem but isn’t the same problem.

            The process of generating a hypothesis does not have to be “scientific” at all — your way to scientific fame can start by having a dream about a mythological snake, as long as it allows you to make measurable predictions, and the experiments confirm them.

            That’s very widely understood.

            Bayes is an explanation of how this could possibly work.

            ?????

            Does “possibly” mean within realistic computation limits? It’s known that Bayes in combination with some other things does “work” at generating and testing hypotheses algorithmically, but only if you ignore computability. But that is a theoretical discovery with no obvious practical applications, and EY seems to be talking about doing science practically, as far as I can interpret his ink blot prose.

      • Douglas Knight says:

        AIXI is all about Bayes’s rule, quite explicitly. That’s probably not what you meant, but why did you say something so precise that it was simply wrong?

      • drossbucket says:

        Thanks – I’m very happy to receive detailed answers to my really vague comment. I considered being more precise than “a formal mathematical model of cognition”, but realised I’m not exactly sure what specific model MIRI are interested in these days. I haven’t read their logical induction paper, for instance. However I assume they are still interested in the general probability-and-logic cluster of ideas, and are still interested in the attempt to explain cognition as some sort of formalisable reasoning process involving explicit mental representations that are then transformed by mathematical rules.

        I don’t personally think any of this is going to fly, for the same sorts of reasons David Chapman doesn’t think it’s going to fly. How are we defining this ‘hypothesis space’? What is the process by which these abstract representations take on meanings in the world? Why do we expect explicit rules like this rather than a more opaque black box process that happens to produce reasonable heuristics? Why are we even assuming that cognition is localised as some sort of ‘representations in the head’, rather than at least partly arising out of interactions with the environment that don’t need to be explicitly ‘stored’ anywhere?

        Chapman talks about this at length in a far more coherent way than I would ever manage here, and actually has some sort of expertise in the area that may lead people to take him seriously. My interest is more from the other side: using the experience of successful reasoning in a particular domain (my personal hobby horse is mathematical intuition) as hints towards what a theory of cognition should maybe be like. Mathematical intuition tends to draw on a wide range of human abilities – Thurston’s On proof and progress in mathematics is wonderful on this, and discusses e.g. language, spatial sense, association and metaphor, and the ability to think through processes in time.

        I suppose it’s perfectly possible that all this mess is built on top of some kind of clean Bayesian formal reasoning process, but it’s not obvious to me why the idea is so compelling.

        I’ve also never really understood the relevance of the Solomonoff/AIXI stuff. We don’t have infinite computing power, so as you say we are going to need ideas that work with the hardware we have available. MIRI’s intuition seems to be that some of the ideas are still useful for thinking about agents with finite computing power, and I’ve never quite grasped why.

        • FeepingCreature says:

          A programmer analogy.

          You have to build a complex system. You can either try to think about how the complex system would work and then just code for a few months and see if you get anything useful out at the end. Or you can try to build a simpler system and hope that you can upgrade it into the complex system. Somebody watching from the outside who knows nothing of systems design might say “I don’t see why this person is so sure that the toy problem they’re solving is going to scale to the real problem.” That’s not it at all. It’s just that we know that tackling the big problem directly very probably won’t work, whereas tackling the simple problem and trying to work our way up to the complex one conceivably might. It’s not just a question of what problem to solve, it’s also one of what path can we take to solve it, and the path of “solve a simpler problem and see if it illuminates the more complex one” has a lot of evidence behind it.

          • drossbucket says:

            Yes, this is fair. I’m a big fan of toy models, extracting out individual interesting questions, finding the simplest non-trivial example of what I’m interested in, etc. It still doesn’t really help me understand why they’re building these particular sorts of toy models, or why they’re so excited about them.

          • hf says:

            “Excited about them” how? Also, see my comment right below this.

    • hf says:

      Since your comments talk about MIRI, I’ll just respond to that:

      Current “machine learning” techniques could be a flash in the pan. Now, they could perhaps be capable of creating AGI. We’ve had evidence in that direction since 2008. But in putting together a theory of AGI that doesn’t kill us, it seems like a good idea to start with the abstract laws governing all rational minds, since those would apply to the next hot paradigm if that’s the one that goes anywhere. Remember that timelines from people at MIRI range from a few decades to more than fifty years from now for the median advent of AGI.

      MIRI should nevertheless analyze “machine learning” as well. They are doing so. They started that about as soon as they had the resources to do so.

      Perhaps you’d like to clarify your objection?

      • it seems like a good idea to start with the abstract laws governing all rational minds,

        My objection is to “rational”. Humans aren’t rational, in the technical von Neumann sense, and can still be dangerous. An assumption that all AIs worth considering will operate under vNM rationality underpins these “universal laws”, yet it is not a given.

  8. Philosophisticat says:

    I think the rationalist community has its share of idiosyncratic prejudices, which are more or less predictable given its demographics and its origin – excessive concern with IQ, attraction to solutions, problems, or arguments that involve future technology, strong desire to put things in terms of numbers or equations even when it is inappropriate to do so, etc. Because of the way communities and memes work, this leads to certain ideas, such as utilitarianism, getting uptake disproportionately to their merit.

    That said, those idiosyncrasies have also, I think, led to some things being given attention that deserve it and which are overlooked elsewhere, and I don’t think other communities are better in this respect. Individual rationalists are inconsistently self-aware and intellectually humble (reading, say, Eliezer Yudkowsky talking about philosophy is, for example, typically cringeworthy), which can grate more than usual given their explicit concern about these virtues; but to say that they exhibit these virtues inconsistently is to say they do it a lot more than most people, and some members of the community, like our host, do it quite well indeed.

    • John Nerst says:

      I don’t mean to jump on you here (not saying these are your criticisms), but the problems mentioned seem pretty similar to the complaints Scott criticizes in the post, i.e. mostly question-begging “I disagree”s. Saying “they do bad things” (or care about the wrong things) without specifying why those things are bad (or why those things aren’t important) isn’t exactly substantive.

      What qualifies as cringeworthy is also quite a bit in the eye of the beholder. I cringe every day at things I read or hear (sometimes including academic texts), and EY’s writing comes pretty far down the list of the badly argued. That fits into a general pattern I might be partially imagining, but rationalists seem to be held by such critics to a much higher standard than anyone else (which you also say). A lot of it isn’t so far from “eww, nerds! Let’s find fault!”

      Think of this hypothetical exchange:

      Rationalist: “The world doesn’t work rationally at all! Let’s try to be more rational!”

      Critic: “Everyone already knows that but you, and nobody cares. You shouldn’t either.”

      Sums it up, I think. But the critic is wrong, IMO (not completely wrong, just not completely right): we’re typically not at all aware of how irrational we are, and a lot could be gained if society were run more rationally; in fact it runs more rationally today than it did historically, and is much better for it.

      But then again, “eww, nerds!”.

      • Philosophisticat says:

        I don’t think that Scott’s criticism of Cowen and the other objectors to rationalism, at least in this post, is that they aren’t willing to engage the first order questions about whether utilitarianism, concern about A.I. risk, and other rationalist shibboleths are correct.

        My comment wasn’t intended to give anyone who was sympathetic to any particular rationalist idea a reason to reject it. Obviously, if someone thinks that, say, utilitarianism is correct, they won’t find the rationalist tendency towards it any sign of failure. I would hope that most rationalists are self-aware enough to accept the general point that the pattern of concern of rationalists as a community is distorted in some way, so that an idea’s uptake among rationalists does not perfectly correspond to its justification, and that this has something to do with the cultural, historical, and demographic features of the rationalist community. The nitty gritty substantive disputes (of which I have relatively few with rationalists!) are for elsewhere.

        In any case, I meant my comment more as a defense than an attack. All communities are subject to those kinds of distortions, and rationalists are going about things in basically the right way.

        What bugs me about Yudkowsky’s writing, by the way, isn’t mostly about quality of arguments – I’m sure he hits more than he misses, and everyone pulls a stinker now and again. But in contrast to, say, Scott, he often lacks adequate intellectual humility, so when he misses it’s very embarrassing to someone who can tell.

        I think an “eww, nerds” critique of rationalism would be very strange coming from me, for more than one reason.

        • Procyon says:

          Yeah, I remember reading some posts where Yudkowsky set up a straw philosopher Bob, claimed Bob’s position was untenable, and then, in my view at least, failed to properly refute Bob. It was kind of cringeworthy!

        • John Nerst says:

          Oh, I didn’t mean that “eww, nerds” was coming from you, more like an undertone in many other criticisms.

          Overall I’m not that bothered with EY being overconfident sometimes (I guess that’s a personal thing) because to a certain extent in certain contexts I share his apparent feelings of “am I the only one that thinks this is obvious? I feel like I’m taking crazy pills here!” (justified or not) which I suppose makes it easier to forgive the occasional overstepping. YMMV.

    • Nancy Lebovitz says:

      To be more specific, there’s too much trust in psychological studies, and I believe this is not just because of respect for science, but also because of a desire for simple solutions.

      The rationalist community is probably doing better than most people on this– there are a good many rationalists who at least know about the problem and try to not be influenced, but psychological studies are sufficiently unlikely to be replicated that I think they should get almost no trust.

      Which ideas from psychological studies seem to be solid? I think loss aversion is sound, isn’t it? What others?

      • Enkidum says:

        Depending on who you’re listening to, published psychology studies have somewhere between a 30% and a 60% chance of being replicable (which is to say, true, or at least true-ish). As numerous people (including, if I remember correctly, Scott) have pointed out, this is orders of magnitude better than chance (where “chance” is something like “the probability of randomly-formulated statements about psychology being true”).

        There are tens of thousands of psychology papers published every year. Even if we assume that only 10% of them are true/replicable (which seems far too conservative), this is still thousands of truths.

        Now as for “ideas from psychological studies”, it depends on precisely what you mean by this. But there are certainly plenty of effects that have been replicated hundreds or thousands of times, because they have become standard tools in the research arsenal. To name three that I have personal experience with: contextual cueing in visual search, test-retest reliability criteria for unusual traits such as synaesthesia, and deficits associated with attentional set-shifting. These, and many thousands more, are real/true according to whatever sane standards of reality/truth you would like to apply.

        A good rule of thumb is to treat anything that has been published once without replication in a decent journal as an interesting hypothesis worthy of serious consideration, anything which has been replicated hundreds of times as true, and to use a sliding scale for anything in between.

        This isn’t to say that there is nothing wrong with the publication mill in psychology, and certainly you should basically treat virtually all statistics as ballpark estimates, rather than anything precise. There are plenty of problems with the field, but that is true of any field, and it certainly doesn’t mean the whole thing is bunk.

        • Douglas Knight says:

          Each subfield has its own replication rate. The relevant studies are those of systematic errors. I think Kahneman’s work is solid, but in his popular book he quoted a lot of social psychology that has since failed to replicate.

      • Douglas Knight says:

        What trust in psychology studies do you mean? What simple solutions?

        Eliezer told people to study the books edited by Kahneman and Tversky, and I think that those have held up. But the way he talked about them didn’t seem to me to be about addressing specific biases, but about the general need to be skeptical of your own thought processes.

        • Nornagest says:

          Kahneman is better than most but he’s far from flawless. For example, Thinking Fast and Slow leans fairly heavily on priming results, which IIRC have not consistently replicated.

          • Douglas Knight says:

            That’s what I said. The books edited by K&T, of original research by their collaborators, have held up.

            I have not read that book, so I don’t know how heavily it used priming. I suspect that it used it as a flashy and easily communicated example, not as a cornerstone for a theory. Nor, as I said, do I think Eliezer really used specific examples, either.

    • carvenvisage says:

      >reading, say, Eliezer Yudkowsky talking about philosophy is, for example, typically cringeworthy

      Any chance you could expand on/explain that?

  9. Steve Sailer says:

    “This criticism’s very clichedness should make it suspect.”

    Why?

    • tk17studios says:

      Because if you’re criticizing someone with a cliché, there’s a good chance they’re already aware of the content of your criticism (it being a cliché, and therefore the sort of thing people have heard before).

      • Salem says:

        It depends on how you think people and organisations tend to respond to criticism. If they are mostly responsive and effective, then a cliched criticism will likely be outdated. But if they can’t or won’t change, then a cliched criticism will likely be true – others having noticed the same thing is evidence that your criticism is accurate.

        It’s a cliche that Scientology is a cult, that Forever Living is a pyramid scheme, that Peter Beardsley is ugly, that Somalia is a dysfunctional mess. All arguable, but none refuted merely by noting that they are common criticisms.

        Can’t / won’t is, in my view, at least as common as fixing the problem, but I don’t know how to prove it.

        • tk17studios says:

            But it’s also useless to criticize Scientology as a cult, etc., because they already know the content of that criticism.

          • Salem says:

            It’s useless (at least on its own) if your aim is to reform Scientology. It’s useful if your aim is to warn off prospective recruits.

          • peterispaikens says:

            It *is* useful to e.g. criticize Scientology as a cult etc, because while *they* know the content of that criticism, the subset of population that are their potential recruits don’t know that.

            The audience of criticism is often not the person/group/thing that you’re criticizing but other people.

            The actionable purpose of criticism is often not to change the criticised person/group/thing but to inform others that they should avoid it and choose something else instead.

            “Foobar sucks because of cliched reasons X,Y and Z; avoid it at all costs” is constructive, actionable advice even if X,Y or Z won’t ever change – because people can rightfully choose to avoid it.

      • Because if you’re criticizing someone with a cliché, there’s a good chance they’re already aware of the content of your criticism (it being a cliché, and therefore the sort of thing people have heard before).

        Good point. So is it rational of the rationalist community to criticise things they don’t like, such as religion and “postmodernism”, with cliched arguments?

    • alchemy29 says:

      I also think that is odd. Common criticisms can be correct – criticisms against anti-vaxxers, flat earthers, creationists for example. Why should we give specific groups the benefit of the doubt in their ability to grapple with criticism? I would argue, only if we already believe that they have some mechanism for self correction.

      Economists are in a somewhat better position than rationalists in that regard – many economists work for private companies that would only pay them if their models were worth something. But many economists are not under as great a pressure to be correct, particularly ones in academia and I think the cliche criticisms are at least partly correct (looking straight at Bryan Caplan).

  10. Joe says:

    I suspect much of the reason people today continue to associate rationalists with the views espoused on LessWrong circa 2008 is because rationalist folk today do in fact still frequently include links to LessWrong articles from 2008 in their current writings.

    If these old views have fallen out of favour, why keep bringing them back up?

    • tk17studios says:

      The fact that people still regularly link to individual examples of philosophy or argument from ~2008 does not contradict the statement “the community doesn’t endorse the general scope and content of the ~2008 zeitgeist.”

      People don’t keep bringing up the ideas that have fallen out of favor, which is a set that includes the majority of the ~2008 era stuff.

      • Joe says:

        In which case, why not do a rewrite? The problem with linking to a small number of old posts you do like, embedded in a much larger collection of posts you mostly don’t like, is that people will, quite naturally, see all of the links to the surrounding content and assume that’s part of what you’re pointing to.

        • tk17studios says:

          That’s fair. I think the “why not” is “it takes a ton of time and effort to recollect, rewrite, and rehost a newly curated selection.” I guess people jumping to conclusions is the opportunity cost of that action.

        • Richard Kennaway says:

          Eliezer has done this already.

          • Evan Þ says:

            Has he edited that? I thought that was just a compilation of everything from c. 2008.

          • Richard Kennaway says:

            I have not actually read it, but I have read the whole of the Sequences, and I saw the discussion of AItoZ at the time. My understanding is that it is an edited selection. Whether it is edited to the extent of calling it a rewrite is less interesting than a list of that majority that tk17studios thinks is wrong would be.

      • teageegeepea says:

        I guess I’m rather out of it, because I don’t actually know which parts have fallen out of favor since 2008.

    • If you want people to think that you have drawn a line under certain outdated views, why not make a big public announcement?

  11. reasoned argumentation says:

    You want a criticism? Here it goes:

    The pictures of Spock don’t exactly hit the mark but they don’t totally miss either. It’s just that “rationalism” as practiced is a giant “act like Spock to come up with complicated rationalizations for your emotional urges”. Example:

    https://www.facebook.com/yudkowsky/posts/10151804857224228

    Are there any completely reliable methods of weight loss besides mega-liposuction and adipotide?
    By “completely reliable” I mean that their theoretical and pragmatic efficacy is not subject to revocation by quirks of metabolic disprivilege. So “starve yourself” doesn’t work because its pragmatic efficacy relies on your fat cells being willing to relinquish lipids before your body cannibalizes muscle tissue and otherwise starts doing serious damage to itself, which your fat cells can just refuse to do if you’re metabolically disprivileged.
    Mega-liposuction and adipotide don’t care if your fat cells are malfunctioning and refusing to release lipids. They just physically kill or remove fat cells. Anything else like that, or which operates at a similar level of disregard for metabolic disprivilege?

    Interventions that operate orthogonally to malfunctioning fat cells or other metabolic disprivilege only, please. I will delete comments suggesting diet or exercise.

    Sure, he’s acting sort of Spock-like “its pragmatic efficacy relies on your fat cells being willing to relinquish lipids before your body cannibalizes muscle tissue and otherwise starts doing serious damage to itself, [Captain]” but at the same time it’s just an argument for him not wanting to lift or eat less.

    • tk17studios says:

      It’s reasonable for you to look at Eliezer’s profile as representative, since he’s one of the founding pillars of the community under discussion. But I note that a) that’s not particularly representative of the tone and content of his FB feed generally (it contains lots of silliness and jokes and a mix of the extremely serious and the strikingly odd that vastly outweigh this more Spock-esque example), and b) he’s not particularly representative of the rationality community as a whole anymore.

      So it feels like a stretch to me? As someone rather deeply embedded? It’s like you’re saying “examples of the bad thing still exist!” while Scott’s trying to say “the bad thing is nowhere near as prevalent as you’d think if you assumed all the criticism was representative and proportional!”

      There’s good and useful critique out there. But it’s being lost in the sound of all this straw shuffling around.

    • goddamnjohnjay says:

      Eliezer has the same problem as Malcolm Gladwell, bright guys whose writing quirks are so distinctive that they wind up pissing people off.

      • Svejk says:

        Malcolm Gladwell’s problem is that a large number of longform journalists are eating their livers out in envy of his success. If he wore a Cosby sweater with “igonvalues” as a tessellated text pattern through the lobby of the New York Times, the Grey Lady would exhaust the resources of its dental plan paying out on bruxism complaints. Eliezer may have the problem that too few well-placed people envy him.

      • lol, uttering Gladwell and EY in the same sentence is an act of cosmic injustice. EY has the actual knowledge to back what he is saying (most of the time); Gladwell is just grasping in the dark.

      • Eponymous says:

        Eliezer has the same problem as Malcolm Gladwell, bright guys whose writing quirks are so distinctive that they wind up pissing people off.

        Fortunately, he has about 50 IQ points on Gladwell, so his quirks are more forgiveable.

      • hlynkacg says:

        …and here I was going to say that comparing Yudkowsky to Gladwell was selling Gladwell short.

    • Freddie deBoer says:

      “Tell me how to solve this problem. Comments that mention the only ways to solve this problem will get deleted.”

      • Deiseach says:

        It’s easy to say “diet and exercise”. What’s the best diet? Well, every publisher who has had a windfall bringing out the latest “this will make you lose lbs and keep them off” diet book for the past thirty years is thanking Mammon that there isn’t one particular diet that works for everyone and can’t be improved upon. As for exercise, the thinking now is that it makes you fitter but it won’t shift weight of itself, not unless you’re doing the equivalent of training for triathlons. (Something I personally have noticed, as I’ve had to walk everywhere since I can’t drive, and even though I get the miles up I don’t get the inches down.)

        I’ve been fat all my life. I’ve heard “diet and exercise” all my life. I’ve had a doctor recommend me the Rosemary Conley Hip And Thigh Diet back when that was The Smash Hit Diet of the Moment, I’ve recently had a consultant nephrologist recommend (sight unseen, not willing to see me for an appointment unless my kidney function degrades to a certain point, going only by information in my GP’s letter) that I go for bariatric surgery (which for various reasons I’m not thrilled about), and I’ve heard about the high-fat low-carb diet, the low-fat high-protein diet, the Atkins diet, every diet that’s come down the pike.

        Getting weight off is half of the struggle. Keeping it off is the other half and the harder one. Yo-yo dieting is definitely a thing. To repurpose the joke about stopping smoking “Losing weight is easy, I’ve done it hundreds of times!”

        • The Nybbler says:

          It’s easy to say “diet and exercise”. What’s the best diet?

          Less than the fat person is eating at the moment. Typically a lot less.

          As for exercise, now the thinking is that it makes you fitter but it won’t shift weight of itself, not unless you’re doing the equivalent of training for triathlons.

          Yes. As with the previous one, the public health establishment has done people an enormous disservice with their messaging. Just a small amount of exercise makes you much healthier, they say. Well, yes; a little bit of exercise, a brisk walk, is probably far better than lying around in bed, but most people (even most fat people) already do that, doing a little bit more won’t help that much.

          Same goes for food. Public health officials talk about “healthy eating”, which seems to translate in people’s minds to eating some leafy greens as well as all the saturated fat, junk food, etc. Doesn’t work, especially when you dump fat (“but it’s olive oil, it’s healthy!”) on top.

          Getting weight off is half of the struggle. Keeping it off is the other half and the harder one.

          Yeah, because once the weight is off, you’re still hungry all the time.

          • bbeck310 says:

            Interventions requiring the equivalent of taking on a part time job and having the willpower of a saint are by definition not for everyone, and are probably not an effective medical recommendation.

          • Edward Scizorhands says:

            Do you think our pre-farming ancestors had hunger pangs every single day they didn’t have three big meals? Wouldn’t that make it hard to hunt?

      • Can’t blame him; he’s tired of the same trite advice.

    • Scott Alexander says:

      See eg http://amptoons.com/blog/?p=22049 . I think Eliezer is basically right about this one. Will be reviewing Guyenet’s book on the neuroscience of body weight soon and it should hopefully convince you.

      • Evan Þ says:

        I would be very interested in that review as well, since I basically agree with reasoned argumentation (the user, as well as the concept).

      • reasoned argumentation says:

        Thank you. That link is a perfect example of the kind of terrible reasoning I’m trying to point out.

        The objections listed were:

        1) NO ANECDOTES PLEASE
        2) SIGNIFICANT AMOUNTS OF WEIGHT LOST
        3) WEIGHT LOSS WHICH LASTED AT LEAST FIVE YEARS
        4) MOST PARTICIPANTS DIDN’T DROP OUT
        5) NOT A STUDY OF ONLY SUCCESSFUL DIETERS
        6) PLEASE DON’T TELL ME ABOUT THERMODYNAMICS

        Very science-y, right? But no, not really, because you’re not looking for a diet that can take 100 fat people and make them all thin – you’re looking for a method to make one particular fat person (you) not fat. Taking those objections in order:

        1) [Anecdotes] Anecdotes are perfectly fine, because they tell you that something is possible. Sure, there are likely circumstances that caused that person to succeed exceptionally well, but the solution isn’t “dismiss the data point” – it’s “understand the circumstances and see if they apply to me”. When you go to weightlifting fora there are hundreds of “anecdotes” describing how lifting weights and consuming protein will reshape your body. As far as I know there are zero anecdotes about lifting not leading to gainz.

        2) [Major weight loss] “Everything or nothing” is almost always an excuse not to try at all. Can’t damage that self-image by trying and failing, so better not to try at all, since success isn’t guaranteed.

        3) [Sustained weight loss] This one is a trap (combined with two later steps). Fat people are people who got fat in the first place. That some of them got thin for a while is interesting. That lots of those people got fat again isn’t that interesting – they were fat to start with – maybe people’s schedules changed at work, maybe they got stressed and turned to food for comfort, maybe one of a billion things that happen in people’s lives happened. The interesting part isn’t the failure rate; the interesting part is distinguishing the long-term failures from the long-term successes. Paying attention to the rates alone is pure “scientism” / cargo cult science – observing the forms of science (check the rates) without considering why you check the rates. You check the rates so you can design a program to get as many people into the success bucket as possible given your constraints. In this case, however, you care about an n of 1, which is either going to be in the success bucket or the failure bucket (not strictly true, of course – being more fit than you would have been otherwise is still a success, so there are shades of success and failure – but to simplify). Getting to the object level: “restricting calories works for a time until hunger overwhelms willpower” is one reason for failure that might be unavoidable; “eating fewer carbs works until the person is tempted by the delicious carbs” is a totally different type of reason for failure. The solution to the latter failure mode is built right in – the former, not so much.

        4) [Drop out rates] The given reason for looking at drop out rates is the assumption:

        This is a problem because the people who drop out of a weight loss program are not a random selection – they are more likely to be the people who found the program wasn’t doing anything for them.

        Well, maybe – but if the study is well done, then someone who dropped out because they didn’t lose any weight shouldn’t be put in the “dropped out” bucket; they should be placed in the “failed to lose weight” bucket. Of course, social science is rarely done well – but then why does your objection to weight loss advice center on the lack of social-science support? That looks really suspicious: a way to avoid taking action that the arguer is likely to find unpleasant, combined with a risk of self-image-damaging failure. It’s a fallback from “show me some ‘science’” to “well, the science is bad”. Start with “the science is bad” if that’s your position – but it’s not the position – it’s a rationalization.

        For a steelmanned version of the objection – “lots of people dropped out of program X because it’s incompatible with human nature” – that’s actually a good objection. On the other hand, if 15% of people didn’t drop out, then maybe it’s possible for people to follow it (or maybe it’s not, and only people who are one standard deviation from the norm on some measure can follow that program). Keep the context in mind, though – maybe the person knows they’re not one standard deviation from the norm in the trait that allows success on that program – if that’s the case, then find another program where you are. The context is individual weight loss – not “design a solution for everyone for all time”.

        5) [Don’t study only successes] Sure, good advice as far as it goes, but look at this in context with the other objections – don’t take anecdotes, don’t look at why some people drop out of studies, don’t look at programs that succeed for lots of people to a limited degree. “Don’t only study successes” is good advice – “don’t study how successes differ from failures” is terrible advice.

        6) [Don’t talk about thermodynamics] That people start to talk to you about thermodynamics or POWs isn’t because they’re trying to convince you of the merits of the starvation diet – it’s because they’re reacting to you presenting arguments that imply that weight loss is impossible. It’s a reductio ad absurdum of an argument you’ve made – not a point they’re trying to make.

        [Back to the object level] The solution is simple but hard – lift, eat protein, cut carbs. You can go a touch easy on the third part if you don’t mind being a bit doughy. Sure, most people who were the type to get fat in the first place are going to find this hard to do, but most people in that category find everything in life hard, because they’re below average in intelligence, motivation and willpower – which is exactly why* people want to lose weight – they’re sending signals they’d rather not be sending. What does “rationalist” Eliezer do about it? Does he do the gwern thing of trying every diet and exercise program and checking the results? No. He turns to social justice language to reclaim status using his high verbal IQ – rationalizing with a veneer of cargo cult science plus SJ.

        * Part of the reason anyway – I’m sure they don’t find it pleasant to be winded after climbing a flight of stairs.

        • Ilya Shpitser says:

          “[Drop out rates]”

          This is a standard issue, with standard solutions in statistical analysis.

        • Evan Þ says:

          I agree with your larger conclusion, but I think your criticism of point (3) misses why Ampersand et al are looking for studies specifically showing sustained weight loss. One common criticism of diets is “yo-yo dieting”: someone follows the Special K Diet of eating just a bowl of cold cereal for lunch and dinner, loses thirty pounds after however many months of this, declares success and goes back to eating normally, and then promptly regains the thirty pounds plus maybe a little more. A well-designed study should continue tracking participants after the conclusion of the diet to catch undesirable outcomes like this.

          The simple explanation for this, IMO, is that the person’s “normal eating habits” mean she eats too many calories to maintain her post-diet weight, so of course she gains some weight. A more complicated explanation, however, might be that the Special K Diet is unhealthy and the weight loss is inherently unsustainable. I think that’s not the case with a literal “cold cereal and skim milk 2x/day” diet, but it probably is with some; I don’t know enough biochemistry to be sure.

          • reasoned argumentation says:

            Point 3 has the surface appearance of a reasonable critique, but without investigation as to why the weight loss wasn’t sustained, it’s meaningless.

            Special K + skim milk 2x per day diet? Unsustainable because you’ll literally die if that’s all you eat. Eating only steamed fish and buttered vegetables? Unsustainable because the people who tried that diet couldn’t resist the tasty doughnuts. Different category.

            The “fails to show sustained weight loss” result is inevitable (for some people) for every diet. The why is the interesting part.

          • Evan Þ says:

            Exactly: you need to investigate why it wasn’t sustained, and preferably demonstrate by example some way it could be sustained.

            (On the tangent of Special K + skim milk, I was referencing the old weight loss campaign they ran: eat their cereal with skim milk for breakfast and lunch, and eat a medium-sized healthy dinner. Never had to try it myself, but I liked their cereal back then, so I saw it on the side of the boxes pretty often. And hey, if you get the “medium-sized healthy dinner” part right, it sounds like it’d work… but that’s a big “if.”)

        • Deiseach says:

          Sweet God Almighty, what is this obsession with weight lifting? I suppose if you’re a guy who wants to look like someone stuck a bicycle pump up your backside and inflated you like a frog, it has some appeal, but this mantra of “lifting…lifting…lifting” annoys the ever-loving heck out of me.

          And I suppose there are some women who like men with the bicycle pump look. I’ve never found weight lifters attractive, whether it’s the Olympic competition guys, the “crêpe skin competition” guys or the ordinary guys who spend time in the gyms with the machines and the seasonally adjusted routines and the whey protein powders and creatine and who, to be frank, look to me like beef cattle reared and conditioned for slaughter – the same beefy, soft musculature that doesn’t say “strength” to me but does say “inflated frog”.

          Apologies to those who love their weights. I just don’t like the look or the cultus around it. Plainly, as someone who is “below average in intelligence, motivation and willpower”, I haven’t got the mental power to understand the virtues of the rule.

          • Nornagest says:

            You need to train like a bodybuilder to look like a bodybuilder — which basically means treating the gym like a part-time job and controlling your diet to a degree that would make even the weirdest and most restrictive fad diets look half-assed and uncommitted. And even then, a lot of people need chemical help. It’s really not a great idea.

            On the other hand, weight training is the best way to make yourself stronger, which has lots of benefits. Especially if you’re into athletic hobbies, but even if you’re not: in the context of losing weight, it’s important because it’s probably the best option for increasing your lean body mass, which translates directly into the calories you’re burning at rest.
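
            A rough sense of scale for that resting-burn effect, as a minimal sketch: the ~13 kcal/kg/day figure for resting skeletal-muscle metabolism below is a commonly cited approximation, not a number from this thread, so treat it as an assumption.

            ```python
            # Back-of-envelope: extra resting burn from added muscle.
            # The ~13 kcal/kg/day figure is a commonly cited rough estimate,
            # assumed here for illustration, not a measurement.
            KCAL_PER_KG_MUSCLE_PER_DAY = 13.0

            muscle_gained_kg = 3.0  # hypothetical: a solid year of novice lifting
            extra_burn = muscle_gained_kg * KCAL_PER_KG_MUSCLE_PER_DAY
            print(f"~{extra_burn:.0f} kcal/day extra at rest")  # ~39 kcal/day
            ```

            Real, but modest, which squares with lvlln’s caveat just below.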

          • lvlln says:

            It’s extremely difficult to look like someone stuck a bicycle pump up your backside and inflated you like a frog even if you lift. I lift 2-4x a week most of the year, mainly for strength and injury prevention, and I am not buff at all.

            I think some people have an obsession with lifting when it comes to weight loss, because extra muscle mass tends to increase basal metabolic rate. From what I’ve read, though, the extra calorie consumption per unit muscle gained is small enough that other ways to lose weight are easier. Gaining muscle mass is hard.

            My intuition is that lifting is good in general for overall fitness, injury prevention, and looking better, but when it comes to weight loss, caloric intake is by far the most important thing, and the effect of any exercise you do, whether it be lifting or cardio or other, is dominated by the calories. That’s mainly from my personal experience: I struggled with being overweight/obese for many years until I decided to just severely limit calories, and I managed to lose 60lbs in 9 months. I did run during that time, but I found that my rate of weight loss didn’t seem to be much affected by how much or how often I ran; as long as I kept the calorie restrictions in place, the weight loss continued at about the same rate.

            But obviously thermodynamics isn’t helpful for everyone, even if it was for me. Resisting the hunger can be very difficult, and I honestly have little idea why I was able to do it, because I never considered myself to have particularly strong will in that regard.
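
            lvlln’s numbers imply a steep daily deficit. A minimal back-of-envelope sketch, assuming the common (and admittedly crude) 3,500-kcal-per-pound rule of thumb, which ignores metabolic adaptation:

            ```python
            # Daily deficit implied by losing 60 lbs in 9 months, under the
            # rough 3500 kcal-per-pound-of-fat approximation (an assumption,
            # not a law of nature).
            KCAL_PER_LB = 3500
            pounds_lost = 60
            days = 9 * 30  # ~9 months

            daily_deficit = pounds_lost * KCAL_PER_LB / days
            print(f"~{daily_deficit:.0f} kcal/day")  # ~778 kcal/day, every day
            ```

            Nearly 800 kcal a day for the better part of a year, which squares with the “severely limit calories” description.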

          • Edward Scizorhands says:

            I lift because I want to add muscle mass so I will still be healthy in old age.

            It’s really, really hard to get that “bicycle-pump” look, and I probably started too late in life to achieve it anyway.

          • Aapje says:

            @lvlln

            Muscle is denser than fat, so the people who do gain muscle may actually get heavier. AFAIK there is a genetic component to how easily you gain muscle, and you can change that by using steroids (don’t, btw).

            IMO, weight is a bad goal anyway, fat percentage is much more important.

          • valiance says:

            Nornagest and lvlln hit the nail on the head: it is very difficult to look like the guys you linked. Very few people want to, or can, achieve that. Dollars to donuts most, if not all, of the men you would find attractive lift weights on a regular basis. Perhaps not. But “lifts weights regularly” covers a wide swathe of body types, from hardgaining ectomorphic skinny nerd all the way to the aforelinked Lee Haney.

            I’m sure these two things have nothing to do with one another, but I was just reminded that (13 years ago anyway) about 20% of the male US population lifted weights regularly, and something like 20% of men are rated by women as above average in looks (if okcupid data is anything to go by).

            http://www.cdc.gov/nchs/fastats/exercise.htm

            https://theblog.okcupid.com/your-looks-and-your-inbox-8715c0f1561e

        • carvenvisage says:

          >When you go to weightlifting fora there are hundreds of “anecdotes” describing how lifting weights and consuming protein will reshape your body. As far as I know there are zero anecdotes about lifting not leading to gainz.

          Do you think people are gonna jump on bodybuilding.com to write posts celebrating that the plan didn’t work and they’re still fat and disappointed with themselves and the world?

          But anyway, you’re missing the whole point here. The guy isn’t asking ‘how to lose weight’ in general, or for you to swoop in and save him from his own ways; he’s asking for specific information that might be helpful, so your thesis on how to lose weight in general is off topic.

          You can give the same advice for how to be good at anything: how do I get good at maths, business, tiddlywinks, being a stunt man, etc. It’s all the same basic process, but to what extent you’re willing to dedicate yourself to it and throw yourself into it is determined by your current abilities and priorities. You’re basically taking it as a given that EY should be way more desperate to lose weight than he is, but that’s none of your business. The guy isn’t begging for help, he’s asking for specific information.

          Also, anecdotally, I can eat as much as I want and exercise as little as I want and not get fat, and maintain some decent strength, as well as put it on pretty fast. Sure, maybe ‘how much I want’ is less and more respectively, but if I recall correctly that’s the kind of thing EY was interested in: looking for a way to short-circuit the process and make it easy(er). So where’s the contradiction? EY wants to lose weight but hasn’t? You realise lots of people vaguely want things they haven’t yet made happen?

      • Viliam says:

        Anecdotes are Bayesian evidence, and Bayes trumps Science, right? So here is my anecdote:

        I was fat most of my life. So is my nearest family. Diets didn’t work; except for one that made me lose a little weight temporarily, but I spent most of the day thinking obsessively about food, which was not sustainable. I hate all sports, and what’s the point anyway; if you keep exercising for an hour and then eat an apple, I heard you get all those calories back. I spent decades like this.

        Then I had two options: either accept Eliezer’s reasoning, also supported by my own experience, or… try harder and smarter (ironically, inspired mostly by texts Eliezer wrote on other topics). I am lucky I tried the harder and smarter way, and had a supportive environment. These days, I still have some fat to lose — and I am planning to — but people who haven’t seen me for a while keep spontaneously complimenting me on a visible change.

        I did three things, not sure about the exact degree each of them contributed to the outcome:

        First, I had my blood checked. I had an iron deficiency, so I started taking pills. It made a lot of difference at the beginning; later it became less of a difference, and now I only take a pill once a month; maybe the problem is already mostly fixed. — To explain what iron deficiency can feel like from inside: you feel tired, despite not really doing anything hard. If this was your normal, and you take your first iron pill, you feel as if you are a superman, or as if gravity was lowered; suddenly it starts making sense why other people are full of energy.

        Second, I started to eat a lot of unprocessed vegetables. Like, some days maybe 50% of what I eat is unprocessed vegetables, without exaggeration. The main challenge was to find a solution where I don’t have to keep buying and preparing those vegetables every day, but someone does it for me.

        Third, I started doing strength training, really seriously. Aiming for every day; in practice, more like every other day on average. The first important step was buying my own weights, so that I don’t have to go to a gym, because that would be a waste of time I couldn’t afford daily. The second step was a switch to exercising using my own body weight (link). That means I can exercise different parts of my body intensely without having to go anywhere or having an exercise machine at home. And it takes me only one hour a day, any hour during day or night, and I can e.g. browse the web between the sets.

        Other things I tried to do but failed: fixing my sleep cycle, which would probably give me even more energy; not eating tons of chocolate. In both cases, my willpower was insufficient in the long term, and I didn’t find a smart way to do it sustainably. I mention this just to say that I achieved success even without doing everything correctly.

        Probably an important factor was that I precommitted to “do the right thing” even if there would be no result. Like some kind of exercise in virtue ethics. And it made sense, because for the first month there was probably no visible outcome. And one month is a lot of time to wait for feedback on something you are doing daily.

        In hindsight, I see many things that I was doing wrong in the past. Probably the worst thing was that as a solution for losing fat, almost everyone recommended some variant of eating less, so I kept thinking about this class of solutions. Wrong! As Eliezer correctly says, eating less mostly makes you feel weak, and in extreme cases unable to think about things other than food. Such suffering may help you signal great virtue, which is probably why everyone keeps recommending this, but signals of virtue are not what you should be optimizing for.

        Instead, strength exercise makes you feel strong, so if you are already inspired to become stronger, this is how you do it completely non-metaphorically. But you should optimize to make the exercise simple and safe, because we are trying to win, not to signal virtue. Exercising using your own body weight is in general safer; and cheaper; and you don’t have to go anywhere. And the key to eating more unprocessed vegetables is to add something tasty to them (try many things and find what works for you), and eat as much as you want. Again, not trying to signal virtue by starving yourself or eating something you dislike.

        Also, psychologically… focusing on “becoming stronger” is positive, focusing on “losing fat” is negative; focusing on “trying a tasty veggie recipe” is positive, focusing on “eating less” is negative. It’s not enough to do the technically right thing; you also have to make your own mind support the process.

        Anyone, feel free to do a peer-reviewed study on this. I told you all my secrets. (Well, except how to find a group of friends who will support you in the process. But if you make a study, the participants can support each other.)

        As a sidenote, I may be imagining things, but it seems to me that people perceive strength exercise as something… politically incorrect. It’s like “right-wing people lift, left-wing people do cardio”, but of course it sounds stupid when you say it like this. I suspect it could be about signalling class: right-wing people don’t shy away from lower-class behavior, and lifting heavy things is what many poor people do for living.

        • reasoned argumentation says:

          Exercising using your own body weight is in general safer; and cheaper; and you don’t have to go anywhere.

          I dunno man:

          https://www.youtube.com/watch?v=-c8ZWA2sFm4

          That body weight stuff looks pretty dangerous to me.

          As a sidenote, I may be imagining things, but it seems to me that people perceive strength exercise as something… politically incorrect. It’s like “right-wing people lift, left-wing people do cardio”, but of course it sounds stupid when you say it like this. I suspect it could be about signalling class: right-wing people don’t shy away from lower-class behavior, and lifting heavy things is what many poor people do for living.

          Lifting weights isn’t low class, it’s sexist.

        • TheEternallyPerplexed says:

          Scott Adams of Dilbert fame has a bunch of tricks in his book on how to eat healthily without having to do a lot of meal preparation, how to trick the mind and its desires, etc. Worth a glance, imo.

        • Zodiac says:

          I am fascinated by this and will save this post somewhere in case I ever get to the point where I want to try it myself.
          I do however have one potentially too-personal question: how much did you weigh, at what height? I am mostly asking because I have noticed that most people have very different understandings of when somebody is chubby vs. fat.
          I apologize if that is too personal.

          • Viliam says:

            (It’s perfectly okay; it was my decision to share the story here, and I feel pretty proud about my achievements. It would be hard not to, with everyone in real life giving me positive feedback.)

            I am 180 cm tall; my weight was around 93 kg previously, now it’s 87 kg. But — and I believe this is a very important point — mere weight does not tell the full story, because one kilogram of fat weighs exactly as much as one kilogram of muscle. I did more than merely “lose 6 kg”.

            Optimizing for lower weight could even be harmful, because you might lose muscles by starving yourself, or temporarily lose a kilogram or two by becoming dehydrated, and the metric would declare that a success, while your health was actually damaged. (I suspect many diets do exactly this.) Losing weight is a bad goal. A better mindset is that you try to become more healthy (and increase your expected lifespan), and also stronger and attractive (maybe less important, but hey, these things correlate); and losing some weight comes merely as a side effect.

            Before writing this comment I actually had to measure my weight, because I stopped watching it on purpose. As long as I gain muscles and lose fat, I don’t really care about the total weight. (It’s like adding two numbers that correctly should be subtracted.)
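
            (A minimal sketch of the “two numbers” point. Only the 93 kg and 87 kg totals come from this comment; the split between fat lost and muscle gained is invented purely for illustration.)

            ```python
            # Scale weight conflates fat lost with muscle gained. Only the
            # 93 -> 87 kg totals are from the comment; the composition split
            # below is hypothetical.
            start_weight = 93.0                 # kg
            fat_lost, muscle_gained = 9.0, 3.0  # kg, assumed for illustration

            end_weight = start_weight - fat_lost + muscle_gained  # 87 kg
            print(f"scale shows {start_weight - end_weight:.0f} kg lost")  # 6 kg
            print(f"recomposition: -{fat_lost:.0f} kg fat, +{muscle_gained:.0f} kg muscle")
            ```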

            I don’t care about the “chubby” vs “fat” distinction. I am not saying I was the fattest person ever, just that my body was sometimes a source of inconvenience to me, and it was gradually getting worse: I got easily tired, had more difficulty manipulating things, was perceived as less attractive than now. And also, I don’t have a proof for this, but I probably had a greater risk of some health problem happening (although, luckily, nothing happened). There is still a lot of space to improve, but that’s what I’m planning to do, and based on recent developments, I feel quite optimistic about it.

            (And when, maybe two years later, I become a walking mountain of muscles, I expect many people to say: “Yeah, that was pretty easy for him; some people are just lucky to be born with a perfect metabolism.” — By which I am not suggesting that genes play no role; just that their role is probably exaggerated in most cases. Maybe some people achieve the same outcome with half or a tenth of the effort, but I still regret not having the knowledge I have now ten or twenty years ago.)

          • Zodiac says:

            Thanks a lot for sharing.

          • Barely matters says:

            Yeah, that was pretty easy for him; some people are just lucky to be born with a perfect metabolism.

            Can confirm this.

            When I was fat people told me I was a lazy cunt.
            When I got into seriously good shape people said I was lucky to be so naturally fit.

            It’s all sour grapes.

        • Deiseach says:

          As a sidenote, I may be imagining things, but it seems to me that people perceive strength exercise as something… politically incorrect. It’s like “right-wing people lift, left-wing people do cardio”, but of course it sounds stupid when you say it like this.

          Oh, I’m the common clay, so I have no bias about right-wing or low-class (indeed, I think my bias is the other way: people who go to gyms/have equipment to exercise are more liberal or slightly higher in class). I think my kneejerk grumpiness at being told “lift! lift! lift!” is mainly that (a) I have treetrunk calf muscles from walking and cycling everywhere all my life. This means, for example, that when I was in jobs that involved wearing wellingtons, I had to wear the men’s boots because my feet are too big. Muscle mass there didn’t and doesn’t mean fat came off the hips and stomach and bosom. Same with all the lifting and hefting I did; I was strong enough in the arms when younger, but it wasn’t getting me svelter by any means. And (b) the people I grew up amongst who did hard physical labour were blocky and stocky and strong, so I don’t have the association “muscles mean strength, fat means weak and lazy”; I have the association “muscles mean copious spare time to work on getting muscle, and not real working-strength muscle”.

        • Aapje says:

          @Viliam

          As a sidenote, I may be imagining things, but it seems to me that people perceive strength exercise as something… politically incorrect. It’s like “right-wing people lift, left-wing people do cardio”, but of course it sounds stupid when you say it like this.

          Pick up artists tend to really like lifting.

          • Viliam says:

            There must be some kabbalistic connection between “picking up” and “lifting”.

      • Barely matters says:

        Scott, I agree with you on a million things, but this isn’t one of them.

        So, what does Eliezer’s personal trainer recommend for his weight loss? Because if that guy is throwing up his hands and saying “Well shit! We’ve tried everything and nothing works!” then I’m much more inclined to believe this is anything but straight-up Ignatius J. Reilly levels of rationalization. If there’s no personal trainer, this feels like a central example of “did not do the due diligence before complaining”.

        I’m trying to make this next bit as snark-free as possible, and to phrase it delicately. But has he considered approaching HungerHacking the same way that he does polyhacking or orientation hacking? The latter two seem like much, much more difficult tasks to accomplish than maintaining dietary discipline in the face of low-level hunger from operating at a small caloric deficit.

        Failing at something so straightforward and commonplace (though admittedly not easy; which shouldn’t be a problem for someone in the practice of “systematized winning”) really injures my faith in those of his abilities that I’m less able to measure. And that definitely generalizes onto the movement that still seems to believe in him.

      • valiance says:

        JayMan has some convincing (and startling) stuff which supports Eliezer’s use of the term “metabolic disprivilege”:
        https://jaymans.wordpress.com/obesity-facts/

        Obesity is very difficult to impossible to treat. The most common prescription, and indeed the prevailing conventional wisdom, is that “lifestyle” changes are the best solution. This typically means diet and exercise. However, this has been extensively studied. Across the population, diet and exercise, each individually and in tandem, are completely useless to treat obesity, in the long term.

        In the case of exercise, randomized controlled trials (RCTs) don’t even show a short-term benefit. One 2007 meta analysis by Franz et al looked at the results of all sorts of different interventions. For exercise-alone prescriptions, it found that the treatment groups lost no weight at 6 months (well, less than 2 kgs, but even this number comes only when you look at those who remained in the study). Indeed, after a year, the control groups actually lost more weight than the treatment groups. The total weight change was small and close to zero throughout.

        In the case of diets, particularly the most common low-fat and low-calorie diets, a very large meta-analysis of RCTs with a combined N > 60,000 (of which ~48,000 came from a single mammoth trial) and a study duration of 2.5 – 10 years, found that diet was completely ineffective for weight loss. The subjects showed no aggregate permanent weight loss at the end of the study period. The largest of these studies, the one by Howard et al (2006) found little change, a total loss (over 3 years) of less than 1 kg (and a difference between control and treatment groups of 1.29 kg, favoring treatment).

        As for diet and exercise combined, several studies in both previous meta-analyses look at trials which tested both together. The result was the same: little to no significant aggregate weight loss, especially after longer periods of time.

        https://jaymans.wordpress.com/2013/08/18/even-george-w-bush-has-heart-disease/

        That original research, published in a landmark 2010 study, looked into the genetics of why some people respond to endurance exercise so robustly, while others do not. Some lucky men and women take up jogging, for example, and quickly become much more aerobically fit. Others complete the same program and develop little if any additional endurance, as measured by increases in their [VO2] max, or their body’s ability to consume and distribute oxygen to laboring muscles.

        For the 2010 study, Dr. Timmons and his colleagues genotyped muscle tissue from several groups of volunteers who had completed 6 to 20 weeks of endurance training. They found that about 30 variations in how genes were expressed had a significant effect on how fit people became. The new test looks for those genetic markers in people’s DNA.

        As Timmons’s data show, a significant fraction of the people show little to no improvement. A sizable minority (~10%) are negatively impacted by exercise!

        More at the links, of course, with references to further relevant material.

        I’m glad Scott brought up amptoons because this issue of “metabolic disprivilege” is something that fat-acceptance activists have been talking about for years. It’s easy to make fun of them–I myself have done so in the past–but maybe they have a point?

        • The Nybbler says:

          The diet studies counted were only those with a 2-year follow-up period, with the diets themselves lasting “a few months to 1 year”. Of course they gained the weight back. There can be no end date to a diet if you want to keep the weight off. This isn’t “metabolic disprivilege”.

  12. blacktrance says:

    While this kind of response is correct in the details, it concedes too much – the critics say “Rationalists say they’re so good, but they aren’t!” and some of this is along the lines of “We don’t think we’re that good”, which is weak. For example, while it’s true that rationalists aren’t perfectly rational and generally don’t claim otherwise, let’s not fall into the vice of humility – they’re significantly more rational than is typical, even among similar demographics. If anything, there’s an excess of self-doubt and self-criticism, and the founding willingness to be contrarian has sadly faded.

  13. ryukendokendow says:

    I find Will Wilkinson’s critique quite irritating…

    “Bayes Law is much less important than understanding what would ever get somebody to ever care about applying Bayes Law”

    “I see no interest among rationality folks in cultivating and shaping the arational motives behind rational cognition”

    “Good things, like reason, come from co-opting and re-shaping base motives”

    I see no interest among non-rationality folks in cultivating and shaping the arational motives behind irrational cognition, and Bayes Law is much less important than the base motives which lie behind Will Wilkinson’s whole complicated screed about rationalists. The problem is quite basic – the behaviours of the rationalists scream out ‘low status’ and ‘nonexistent levels of social panache’ to everyone who is watching. But if that is what you really feel, then be straightforward and just taunt people already, instead of couching it in all this moralistic rhetoric.

    On the other hand, I really don’t think most people here behave the same online and offline–there is a special persona associated with communication here, that may not match up with real social life. Maybe not for the most committed, or public-facing, members, but certainly for the huge halo of observers and incidental participants. I especially appreciate the role rationalists have played in getting us closer to the truth and disseminating information on various topics pseudonymously, ‘behind closed doors’, so to speak, and I suspect many other people–some quite famous–also appreciate this service that the community renders. But, because of the whole ‘low status’ thing no one will be caught defending the community publicly–all the incentives point in the other direction.

    • cactus head says:

      I don’t dispute that there are people who like to bully the low-status, and that a large part of the mocking of rationalists stems from that desire, but things can’t all be that bad for us. Between all the polyamory, and the high IQ silicon valley types endorsing rationalist stuff, and highly visible people like sinesalvatorem and theunitofcaring on tumblr, I think the rationalist community is doing pretty well in terms of status.

      No matter how high status a subculture is, there’ll always be naysayers who care nothing about that culture’s standards of status. The example I have in mind is left-wing or liberal university students who are fairly well-off, on top of the latest social justice happenings, and have some kind of journalism job lined up–perfectly respectable within the blue culture they’re embedded in, but a lot of people on the right will call them nu-males and SJWs no matter if the students are undergraduates or postdocs.

    • liquidpotato says:

      “Bayes Law is much less important than understanding what would ever get somebody to ever care about applying Bayes Law”

      I thought this criticism was spot on. I certainly don’t see it as a taunt about low-status behaviour. At work, I sometimes write tools to automate tasks; they are extremely useful to me, and since they were so useful, I thought I’d also spread them around to my co-workers in the hope that they would help them as well.

      It turns out that there is a huge gap between making and refining the tool to be useful, and getting people to care enough to incorporate it into their workflow. Any disruption to an existing workflow necessarily means an initial slowdown in efficiency as the new tool/method installs itself. If there is no sufficiently compelling case made, and if the tool is not packaged in a way that’s easy for people to pick up, the tool may as well not exist.

      In that sense, Wilkinson’s argument is spot on. Presumably a lot of effort is spent on making, refining and perfecting these mental tools. But tools are meant to be used by people, and if people are not interested in using the tools, then their existence is much diminished. Therefore, absolutely, understanding that there is a need to get someone interested in using Bayes Law is more important than Bayes Law itself.

  14. Besserwisser says:

    Economists having very simplified views of the world is basically a meme amongst academics at this point. My professor in economic geography, after bringing up a quote from an economist defining economic geography, responded by saying “well, he is an economist…”. There are also stereotypes about a divide between economic and social geographers, but this hasn’t really been my experience with the few I’ve seen so far. Not that I necessarily agree with the direction geography is moving in, which seems to be about the same direction economics is taking.

    • I keep seeing examples of homo economicus; there’s some truth to it.

      • Besserwisser says:

        The standard response to examples where people supposedly aren’t acting as a homo economicus is “well, they were obviously optimizing for something else”. See that study about poor people making actually good choices by going to check cashing stores.

  15. Salem says:

    Caplan’s criticisms seem hardest to deny.

    Peak LW was annoying and wrong about a lot of things, but it was also phenomenally productive, focused and challenging. The community coalesced in that era because it was genuinely changing minds and being provocative in the best way. And then it ran out of steam. Because sure, people are aware of the criticisms, but they haven’t really answered them so much as accepted them and retreated from their most advanced positions. The rationality wave broke, and on a clear day, with the right sort of eyes, you can still see the high water mark.

    So now it’s a fractured diaspora, linked and governed mostly by aesthetics.

    The level of thinking here, the genuine attempts at truth-seeking, is extremely high. But it would be much higher if we could get past the aesthetics.

    • Ilya Shpitser says:

      Wanted to register my appreciation of that Hunter S. Thompson paraphrase.

    • Mediocrates says:

      It was a little weird to see Caplan basically dismiss utilitarianism by way of a link promising “many well-known, devastating counter-examples”, which led to a… study guide? homework page? where those examples are immediately followed by some reasonably compelling utilitarian rebuttals.

      Like, maybe you’re not ultimately moved by those counter-counter-arguments, but are they so obviously, laughably weak that this link serves as the knockout punch Caplan clearly intended? Does anyone think that Mill’s Utilitarianism is accurately described as a “hasty, dogmatic reject[ion]”?

      • Was he dismissing utilitarianism as false, or pointing out that it isn’t a done deal?

        • Mediocrates says:

          I guess that particular post isn’t really an outright dismissal, but Caplan’s written before that he’s a sort of deontologist.

      • Philosophisticat says:

        He’s right that utilitarianism is subject to many well-known, devastating counter-examples, but it was a weird link for him to choose. My guess is he googled “utilitarianism criticisms” and linked to the first thing he found without reading it carefully.

        • Mediocrates says:

          I thought so, too, but it looks like he posted that same link 8 years back, and with near-identical wording. I guess he has it bookmarked under “utilitarianism, devastating counter-examples of”.

          (The link in the older post is broken, but clearly points to a previous version of the same page.)

        • wintermute92 says:

          He’s right that utilitarianism is subject to many well-known, devastating counter-examples

          Can you source this?

          I’m not being snarky, I would honestly love to see it. I understand utilitarianism as generating a lot of horrifying conclusions, but a lot of utilitarians meet those with “yeah, and are we wrong?”

          Saying that utilitarianism allows utility monsters is interesting, but not really a rebuttal. (“Yeah, and? Most people live like this in practice, people who are less stoic get more resources.”) The repugnant conclusion has been both accepted and denied by various people, I find it challenging but far from devastating. (It’s better with the companion problem attacking average utilitarianism, but still not ironclad.) Failing Pascal’s Wager is substantially harder, but shared by a lot of other decent-looking ways of making decisions.

          …what’s the well-known, devastating stuff? I’m honestly not thrilled to be a utilitarian, but I am one and I’ve never seen something knock it down all that well.

  16. tmk says:

    If the defense of current rationalism is to distance itself from circa 2008 LW High Yudkowskianism, then it’s very unclear what rationalism is now. Since the fall of LW, rationalism is so fragmented that the Yudkowskian roots are all that holds it together.

    I have been meaning for years to write down my beef with rationalism. There is so much I am drawn to in rationalism, but there are fundamental flaws. The big one is the far right politics. Much of it is so obviously wrong and horrible, that a philosophical system that fails to filter that out cannot claim to be the path to enlightenment.

    • Ilya Shpitser says:

      Let me try to be more positive. Let me say what I _like_ about rationalism:

      (a) Certain customs, specifically the steelman, and the taboo. These are excellent argumentation tools. More generally, trying to argue in good faith is a great part of the culture. Raising the sanity waterline is a great project for any community.

      (b) I think a big chunk of the community is well-meaning and “good people.” This is important — regardless of ideas floating around, they require human heads, and the type of human you attract matters a great deal.

      There are lots of bad things, but in my view they can all be traced to the fact that rationalism is _also_ a global social club for folks who might otherwise have difficulty having such a club. Having a social club is very valuable for humans because we are social, and we need that sort of thing in our lives! But having a social club also means you are in thrall to social club dynamics, like the founder effect, like peer pressure, tribalism, etc. My “sympathetic outsider” advice for rationalists has always been to treat it more like a job and less like a social club (sort of like what academics do).

      I don’t think rationalists have far-right politics. I think you might be thinking of our edgy friends (very, very few of whom split off from LW in the early days; they are not formally “in full communion” with rationalists, per EY’s ruling a while back).

      • Nancy Lebovitz says:

        One of the things I like about the rationalist community is the respect given for admitting mistakes and changing one’s mind in response to new information.

        • Ilya Shpitser says:

          Absolutely — but some practice this more than others (because it’s so hard…)

      • AnonYEmous says:

        Certain customs, specifically the steelman,

        100%. I’d rather argue with an idealized opponent, just like I’d rather fight an opponent who’s up to the standards expected by the community (not elaborating on that; I have shameful gaming hobbies that can be used to track me). Because I know I’ll have to one day, and it teaches me about that argument or playstyle.

        Also, as to far-right politics: if a group of people calling themselves rational and seeking rationality end up in a certain political sphere… well, that’s not in and of itself vindication of any type of politics, because those people can still be wrong, whether because they’re irrational or because “rational” isn’t the correct measuring tool. But you could at least consider that you might be wrong – either about the sphere or its justification.

        (This also leads me to another thing I like about the rationalist movement: the distinction between what is likely and what is actually true is well understood.)

      • a) Certain customs, specifically the steelman, and the taboo.

        Not unique – these are known as charitable interpretation and unpacking in mainstream philosophy.

        • Ilya Shpitser says:

          Yes, there is not a lot that’s novel in rationalist circles. But that’s OK! Good old ideas are also good to use.

      • tmk says:

        I agree. There are some really good tools and practices in rationalism. That’s why I keep reading all this stuff. The faults are mostly overconfidence and regular human flaws, that rationalism fails to counteract.

        You are right that most rationalists are not far right. Every survey shows very few are. I am just disappointed that mainstream rationalism fails to counter those elements. The little I see is just standard lefty arguments, not making much use of rationalist tools. If you want to raise the sanity waterline and end up with a bunch of insanity, something is wrong.

        • Ilya Shpitser says:

          Well, if the project didn’t work on young adults, perhaps go full Jesuit, and get ’em while they’re young?

  17. Richard Kennaway says:

    Various people, including Scott, have said here that LessWrong got a lot of things wrong in its early days. None of them have said what. A concrete set of examples would help us actually be talking about something.

  18. Freddie deBoer says:

    Internet rationalists share this trait with a lot of previous intellectual movements: they are vastly more effective at criticizing others than at understanding themselves. If you asked me to rank my perception of the level of self-knowledge of the median members of various internet constituencies, I would feel compelled to place rationalists near the very bottom. I find the project quite useful for thinking through certain kinds of bad reasoning; I find the converts almost impossible to talk to.

    Also, you guys, for fuck’s sake: the Singularity is not science, and that “Demon” whatever-the-fuck Yudkowsky is always talking about is like something a schizophrenic would come up with. It is so hard to take other parts of your project seriously when you develop these utterly fanciful imaginary constructs and then talk about them as though you have actual tangible proof of their reality.

    • Deiseach says:

      Speaking as a non-rationalist, I never knew anything about Less Wrong, and had my exposure to rationalism (or Rationalism) been via Eliezer Yudkowsky, I would probably be very much of your opinion.

      Whatever about the Singularity, I am not going to knock people’s personal religious/spiritual beliefs (and anyway, we all have mildly embarrassing enthusiasms we went overboard about in our younger days, which we may have more mature and considered opinions about a few years down the line). But what I find here is a group of people who are interested in all kinds of things; there is no One True Path To Rationalism, and if somebody wants to have a discussion about the Singularity or AI risk, it can happen with both the pro and the con side getting to put their points, and generally nobody going off in a huff.

      I don’t know if most of the people on here are capital R Rationalists but many do try to be rational in their approach to understanding “why do I think this? why do I believe this? am I being honest about the reasons or am I just rationalising a bias or preference? what is the best way to make decisions?”

      And the big one, whether you approach it as an ethical or philosophical or religious question: what is the way to live a meaningful life?

      Besides, where else are you going to get godawful pun streaks, discussions about battleships, pop and high culture references, and the chance to get exposed to a lot of different viewpoints outside one’s own bubble in a handy one-stop site? 🙂

    • AnonYEmous says:

      Since socialists advocate for what is more or less a false god, and can’t admit it to themselves because that would mean that there is no god, are you even further down the list?

      Bonus question: didn’t your blog say you wouldn’t be talking about politics, and then start ranting about Trump and charter schools and the Mercers?

      • Enkidum says:

        Isn’t there some rule on this blog about not being an asshole?

      • Freddie deBoer says:

        By “ranting” you mean making data-driven policy arguments about education, a topic on which I am very well qualified, which I specifically said was going to be part of a blog on education.

        • Zodiac says:

          Do you already have the URL for the blog? I’m very interested in reading it once it starts and would hate to miss it.

        • AnonYEmous says:

          Overall, the Mercer Family Foundation’s donations are a veritable Who’s Who of reactionary conservatism, with large donations going to the Heritage Foundation, the Cato Institute, the George W. Bush Foundation, the Barry Goldwater Institute, the Manhattan Institute…. Breitbart.com

          How does Success Academy justify taking money from people who fund such hateful rhetoric? I don’t know.

          Might be time for an enterprising reporter to pick up the phone.

          Guess I shouldn’t be surprised that you broke a promise to stop talking about politics, to be honest.

          To the thrust of the article itself, and this is what bothered me: progressive ends are rarely achieved by progressive means, which is the problem you run into. An easy example (to harp on a topic you already expressed no interest in discussing) is capitalism vs. socialism: capitalism has lifted huge portions of people out of poverty, and socialism has returned some of them to it.

          Now I’m not so sure that charter schools work, as such. But if they do, then maybe you should just let conservatives win and let everyone benefit, instead of complaining that conservatives like it.

    • Internet rationalists share this trait with a lot of previous intellectual movements: they are vastly more effective at criticizing others than at understanding themselves.

      Rationalists tend to be the opposite: they tend to reject absolutist thinking and ‘weak man’ arguments, and are quite hard on each other (there is perhaps a tendency for Rationalists to be too charitable to opposing views, although the EY ‘worship’ may be an exception to this). You see this nitpicking on the intellectual far-right too.

    • alwhite says:

      @Freddie deBoer,

      they are vastly more effective at criticizing others than at understanding themselves

      This kind of statement becomes self-defeating really fast. How is your comment not falling into this very trap as you say it?

    • Spookykou says:

      If you asked me to rank my perception of the level of self-knowledge of the median members of various internet constituencies, I would feel compelled to place rationalists near the very bottom.

      As someone who is only toe-deep in the internet, would you be willing to elaborate on this point? I am unaware of any other internet constituencies, but I find the level of self-knowledge and personal understanding here on SSC a welcome reprieve from my day-to-day interactions; if there are better places I could go, I would be interested.

    • Deiseach says:

      that “Demon” whatever-the-fuck Yudkowsky is always talking about is like something a schizophrenic would come up with

      Maxwell’s Demon? A 19th century thought-experiment by a Scottish physicist, not generally regarded as being a nutter (I learned about it in secondary school science classes). If there is some other demon Yudkowsky talks about, I don’t know enough about LessWrong to recognise it (granted, he does take concepts and run with them or create his own riffs on them, so one of those may be what you mean).

  19. andekn says:

    Man, now you’re just handing ammo to your enemies. Just imagine the 10000+ times reblogged article:
    “Scott Alexander says, and I quote, ‘Economists think that they can figure out everything by sitting in their armchairs and coming up with ‘models’ based on ideas like ‘the only motivation is greed’ […]All they ever do is talk about how capitalism is perfect and government regulation never works, then act shocked when the real world doesn’t conform to their theories.'”

  20. Ilya Shpitser says:

    ” They don’t pooh-pooh academia and domain expertise”

    Really, Scott? I can find three examples of prominent folks in the community doing just that, starting with “diseased discipline” and on down. By your lights, are those just youthful indiscretions?

    The rationalist main man Eliezer is _explicitly allergic_ to reading and writing academic papers.

    I think the difference here is, you don’t do this. And your idealized headcanon of the community is the same — but I don’t think the community really lives up to this standard. In fact, while I find quite a few things to like about rationalists, this specific issue is one I always thought the community had and was annoyed about.

  21. joyousandswift says:

    I’m not sure if I’m a member of the rationalist community. I only learned about this site about a year ago, and before that, I had never even heard of the rationalist community or Less Wrong.

    For what it’s worth, I think this site is spectacular. It’s as smart as it gets. Far more intelligent and consistently insightful than MargRev, and certainly better than Noah Smith or Will Wilkinson. I’ve never read a blog that made me think so many times, “Damn, I wish I had written that.”

    No community is a monolith. To the extent that I have a criticism of the Less Wrong community, it’s that it doesn’t always live up to the values espoused in wonderful pieces such as this one. Of course, about what community could you not make the same criticism?

    Reading Tyler Cowen’s post about this community makes me think that he should be lowered in status, particularly in comparison to this site. It was just such a ham-handed, uncharitable, blunderbuss criticism. Part of me thinks it was just an attempt to poke a bear at the circus and have the spectacle centered on him.

    Either way, when all this nonsense has passed, I think that you will end up looking better at the end of it.

    • Ilya Shpitser says:

      He’s attacking my tribe, lower his status!

      • joyousandswift says:

        Thank you for reminding me why I don’t usually comment.

        • bellinghamster says:

          After lurking for 2 years, I created an account just to thank you for your comment. You said exactly what I’ve been thinking: “Far more intelligent and consistently insightful than MargRev, and certainly better than Noah Smith or Will Wilkinson”.

          I don’t know about the “rationalist” community (have never been grabbed by many older linkbacks to LW and Overcoming Bias), but SSC and its commenting community has virtually no peer on the internet, imho.

          It’s not a “tribal” thing, it’s an *actual quality* thing.

      • I loled hard at this

  22. Look. I’m the last person who’s going to deny that the road we’re on is littered with the skulls of the people who tried to do this before us. But we’ve noticed the skulls. We’ve looked at the creepy skull pyramids and thought “huh, better try to do the opposite of what those guys did”. Just as the best doctors are humbled by the history of murderous blood-letting,

    hmm… unfortunately, you inadvertently made the mistake of associating Rationality with murderous dictators (unless someone else specifically made this critique). Bad choice of title and example imho, unless this is an example of Poe’s law?

    The example of doctors and medicine is a good one. In the past, medicine was iatrogenic, but great advances have been made in treating disease and prolonging life, which is an example of science succeeding.

    • Enkidum says:

      It’s a pretty standard critique, and Scott specifically mentions Marx and the Soviets (you could also add the Chinese communists, the French Revolution, etc). If you want to rule such nasty examples out of the rationalist tribe, you’re probably guilty of the no true Scotsman fallacy.

  23. J Mann says:

    Here’s my complaint about rationalism, which, I admit, is somewhat specific.

    In HPMOR, Harry lectures everyone about rationality a lot, but ultimately, he solves his problems by being smarter, more creative and better educated than his opponents. If he uses the rational principles that he introduces, I don’t see it. Maybe rationality helped him to get so smart, but maybe he’s just really smart.

    One of Harry’s students in rationality does in fact adopt EY’s philosophy whole hog in a way that changes this character’s life dramatically. Rather than show us how rationality leads to positive results, this character then mostly disappears from the story.

    ——-

    My overall opinion is that rationality attracts unusually interesting and smart people, which is its primary virtue. Its secondary virtue is that the community has some values and tools that tend to lead towards effectively discussing and hopefully solving problems, although at the cost of hundreds of thousands of words.

    • AnonYEmous says:

      wasn’t one of his most-used abilities to transfigure down to the atomic level?

      Sure, partly that’s just being better educated – he knows about atoms! – but it takes him a while to actually do it, because he tries to rationally analyse magic and so forth.

      • J Mann says:

        I don’t know that Harry’s analysis of magic is particularly Yudkowskian – it’s pretty much stuff that Sir Francis Bacon would recognize, if Bacon were well read on atomic structure and had very little regard for his own life.

    • Evan Þ says:

      “One of Harry’s students in rationality does in fact adopt EY’s philosophy whole hog…”

      Are you talking about Draco? It doesn’t seem to me that he adopts Yudkowskian Rationality any more than Harry himself, and I don’t see much evidence that he goes much farther than even Hermione.

      • J Mann says:

        My recollection is that the last time we see Draco before the climax, he has become some kind of rationalist answer to Sherlock Holmes, who applies Rationality to cut through mysteries like, well, Sherlock Holmes – but he drops out of the plot immediately after that until the denouement, so we never get to see how that works.

    • carvenvisage says:

      Yeah HPMOR is geniusfic for sure.

      Like the Ender’s Game series, except Harry’s domain is broader than ‘violence and tactics’, or ‘escalation and manipulation’, etc., like the various characters have there.

      Which btw makes sense in-story because – nvm, spoilers

  24. needtobecomestronger says:

    Economists think that they can figure out everything by sitting in their armchairs and coming up with ‘models’ based on ideas like ‘the only motivation is greed’ or ‘everyone behaves perfectly rationally’.

    Huh, did we go to the same university? That sounds exactly like my old monetary economics teacher, who among other things claimed that buying lottery tickets is a perfectly rational act if you just draw participants’ utility curves in such a way that they are effectively risk-loving.
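
    To make that curve-drawing concrete (a standard textbook construction, not necessarily his exact numbers): take a convex, risk-loving utility function and compare a $1 ticket paying $1,000 with probability 1/10,000 against keeping the dollar:

    $$u(x) = x^2:\qquad \mathbb{E}[u(\text{ticket})] = 10^{-4}\cdot u(1000) = 100 \;>\; 1 = u(1),$$

    even though the ticket’s expected monetary value is only $0.10. Draw u convex enough and any gamble whatsoever comes out “rational” – which is exactly why the move proves too much.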

    In fact I think that there are many fields where leading academics are spouting things which are *blatantly crazy* in a way that’s obvious to anyone with a smidgen of common sense, but which goes totally unacknowledged by those within that field. The last time I went to a meetup of philosophers I jokingly asked how many pages of their PhD they spent on defining the concept of ‘truth’ – the answer, without a trace of irony, was “three”. The latest I heard from social studies was that ‘race does not exist’ and there’s ‘no correlation between IQ and crime’, and there was a whole room full of IQ 140+ people all nodding along like this was a totally reasonable thing to say.

    I fully approve of the rationalist project, but a bias that you all have is that you tend to make things too complicated, too meta, too shy of obvious solutions. I agree that you’ve gotten much better at this, but I still remember how everyone insisted on using the “principle of charity” to reconstruct totally indefensible views as completely different arguments. I remember when Yudkowski proudly declared that he voted libertarian during the W. Bush election instead of Democrat because he “didn’t want blood on his hands” (because democrats are against free markets, I guess?) I remember his ‘politics is the mind-killer’ post being used to argue that only ‘rational’ arguments like free-market economics should be discussed, and not anything as ‘political’ as global warming – which culminated in Robbin Hanson claiming that noise externalities are not a problem because people can just individually work out contracts with the noise-makers and pay them money to stop making noise.

    I remember that even after describing how well-kept gardens die by pacifism, no mods were appointed (Because an upvoting system is like free markets!) and the community was allowed to be overrun by schoolyard bullies who openly advocated for block-downvoting those with “undesirable political views” (i.e. anyone to the left of Hitler). And then, when everyone with a grain of common sense noted that “golly gee gosh, there seem to be some strange people on that there forum”, the community replied without a trace of irony that everyone only thinks that Less Wrongers are asocial weirdos because clearly the critics must have all watched Spock on Star Trek and that’s the real problem.

    I can easily imagine a feminist Scott Alexander having written the following post instead:

    Look, I’m the last person who’s going to deny that the road we’re on is littered with the failings of those who tried this before us. But we’ve noticed that. We noticed when the first feminists expressed hatred and disdain for black people, and we learned from that. We noticed when they accepted black people but not gay people, and then when they accepted gay people but not transgender people, and we’ve learned from that. The best feminists are humbled by this and so we know that we still might have missed some small minority that’s currently being oppressed by the hateful force of patriarchy, and we are always on the lookout to ban more offensive speech just in case anyone out there might still have hurt feelings. I hope that maybe having a community dedicated to carefully checking its own privilege and trying to minimize offensive speech in every way possible will make us have slightly fewer horrendous mistakes than people who don’t do that.

    Listen. I don’t think that the people criticizing the Rationalist community are criticizing us for not being rational enough, any more than that we criticize feminists for not being feminist enough. I think these people took one look at us, saw all the junk I described above and immediately lumped us in the same category as Ayn Rand followers and Silicon Valley in general – i.e. those strange nerdy people who are constantly inventing weird reasons for why it’s definitely okay to torture people in some cases.

    But that’s a PR problem second, and a genuine problem with the movement first.

    • but the Rationality community isn’t really a ‘movement’… it’s not trying to win a popularity contest, where things like PR matter. Rationalism should be kinda esoteric and ‘nerdy’; otherwise, it risks becoming like any other ‘boring’ political forum where the same predictable stuff is repeated over and over.

      • needtobecomestronger says:

        Less Wrong was pretty explicitly founded on the idea that rationalists should “win”, and the original motivation for creating it was not only effective altruism in general but specifically the idea that creating a base of rationalists would create a greater recruiting pool for the Singularity Institute. So I would say that it was certainly intended as a movement, even if some members prefer not to take part in that aspect of it.

        Edit: Wow, how did I manage to mispel the names of both EY and Robin Hanson in a single post? I am impressed with myself.

      • Deiseach says:

        My impression as an outsider who came late to the party and never hung around LessWrong is that a particular group coalesced around a particular person who was aiming for a particular purpose, and once he got that or near enough to it, he peeled off and followed what was his primary interest and goal.

        And that’s fine, because everyone is perfectly entitled to say “Okay, I’ve had enough of this game, I’m leaving, have fun guys!”.

        But Rationalism/rationalism having produced all the other blogs and groups and people going forth and spreading the message and no longer being tightly tied to “this one site and this one group” is a good thing, because it means the idea/movement/cult/philosophy (take it as you will) is alive and healthy and thriving. It’s spreading, even if that means changing in ways that were not considered or if considered were not thought optimum, because growth is change. The very fact that you’re getting outsider criticism is because outsiders are becoming aware of your existence. This is a hopeful sign!

        It’s exactly what is not happening with Effective Altruism (again, a view as an outsider who came late to the party). Looking at the last couple of conferences organised, to me there seems to be an unhealthy emphasis on networking, on “if you’re interested in getting into the field, come along and meet possible employers”, and a turning-inwards in speaking to the self-selected little group(s) who are becoming incestuously clannish. I know that sounds very harsh, but I don’t see EA as growing, changing, getting into the mainstream, becoming noticed, and spreading in the same way. (And Peter Singer as a guru never made me warm to the movement anyway.)

        • rlms says:

          Hmm, I have the exact opposite impression. In terms of books published, physical groups of affiliated people, media exposure, and endorsement by famous people I think Effective Altruism is much more successful. I could easily see it going mainstream in the next few decades, but I don’t think internet rationalism has much more room to grow.

    • quanta413 says:

      Honestly, I didn’t think the critics were particularly inaccurate or uncharitable. Noah Smith was mostly just talking about how the people attracted by the rationality movement are sometimes… odd or rude (and hardly in a way out of the ordinary for the internet). And Tyler Cowen… well Tyler Cowen’s writing is kind of silly. You can’t take him too literally so to speak.

      But even without knowing about the stuff you wrote, my impression of Less Wrong and Eliezer was definitely not a great vibe. A little bit kooky and off for sure, and mostly just much more arrogant than Scott or the people here are (and it’s hardly like ego is lacking here). On the other hand, I feel the community here is pretty great! Even the people who irritate me sometimes actually seem pretty genuine and relatively less interested in just verbally stomping opponents than most places on the internet. I also get the impression the groupthink level is relatively weak here. I would ballpark that conventional left-winger posts plus far left-winger posts are outnumbered only roughly 2 to 1, which is a small miracle (even considering Scott’s moderation policy) when most places that ever discuss politics rapidly self-segregate to ratios more like 10 or 100 to 1. And the right and libertarian wings here cover a really weird and broad portion of the spectrum.

    • Listen. I don’t think that the people criticizing the Rationalist community are criticizing us for not being rational enough,

      I can vouch that some are, because I’m one of them. There is no reason for critics to all be on the same page … political movements may have critics towards both the right and the left — and there are people who think that Dennett isn’t reductionist enough….

      • ChetC3 says:

        I’ll second this. The Rationalist community certainly talks a lot about how rational it is, but when it comes to demonstrating that in their writing and behavior…

        • carvenvisage says:

          …they suck lol!

          Thanks for your sophisticated and refined contribution.

          And for signposting it to us by wearily trailing off while perhaps fanning yourself and discussing the finer points of Sartre or Foucault. We get it, you took English lit, you’re wise.

    • carvenvisage says:

      >The last time I went to a meetup of philosophers I jokingly asked how many pages of their PhD they spent on defining the concept of ‘truth’ – the answer, without a trace of irony, was “three”

      What’s the problem with that?

      Actually, hearing that increases my confidence in the field. Isn’t the whole point of the ‘field’ to question intuitions and try to ground things in the most fundamental way? I thought they’d moved away from that.

  25. Pete says:

    Fixed this for you, Scott: “If any moron on a street corner could correctly point out the errors being made by bigshot PhDs, why would the PhDs never consider changing?”

  26. vV_Vv says:

    Nobody is perfectly rational, and so-called rationalists obviously don’t realize this. They think they can get the right answer to everything just by thinking about it, but in reality intelligent thought requires not just brute-force application of IQ but also domain expertise, hard-to-define-intuition, trial-and-error, and a humble openness to criticism and debate. That’s why you can’t just completely reject the existing academic system and become a self-taught autodidact like rationalists want to do.

    So you claim that this criticism is unfair.

    But what about when a certain rationalist guru with no academic credentials or demonstrated domain expertise (but allegedly with a very high SAT score) claims to have found the solution to the problem of the interpretation of quantum mechanics, a problem that eluded physicists such as Einstein, Bohr, etc. for nearly a century, and further claims that the solution was obvious and anybody who does not agree with it “does not have enough g-factor”? What about when he claimed, again with no demonstrated expertise, to be better than professional VCs at predicting which startups would succeed? And don’t get me started on his claims about cryonics…

    Do you think that the quoted criticism is unfair in this case? Do you think it was fair in the past but does not apply anymore because mistakes were made but now the Rationalist Movement™ has recognized them and moved on?

    • The criticisms are not fair of most rationalists, but they are fair of one very prominent one. Controlling who your leaders and spokespeople are is part of controlling the message.

    • drachefly says:

      > claims to have found the solution to the problem of the interpretation of quantum mechanics

      No, he didn’t. He claims, very plausibly, to have read the solution, it having already been worked out by actual physicists over a period of decades. Einstein, Bohr, etc. were dead well before this work was done.

      He also said that the solution was obvious in retrospect only, not prospectively.

      And the g-factor post you linked… note that the community downvoted that REALLY HARD. He overstepped, and was called on it. And that happened right away, so it wasn’t a mistake we only learned to recognize in hindsight. So as TheAncientGreek said, this evidence doesn’t generalize.

      • coreyyanofsky says:

        18 upvotes and 23 downvotes counts as REALLY HARD downvoting relative to EY’s usual, but it’s not REALLY HARD in absolute terms — not like some of the dogpiles I’ve seen, anyway. (Point is, 18 upvotes isn’t exactly universal opprobrium.)

      • No, he didn’t. He claims, very plausibly, to have read the solution, it having already been worked out by actual physicists over a period of decades. Einstein, Bohr, etc. were dead well before this work was done.

        But MWI has not been worked out in the maths sense – the derivation of the Born rule is still an unsolved problem. I think you are missing that the I in MWI stands for interpretation, and interpretation means a conceptual understanding of existing maths. Also, neither Einstein nor Bohr had anything in particular to do with MWI.

      • vV_Vv says:

        No, he didn’t. He claims, very plausibly, to have read the solution, it having already been worked out by actual physicists over a period of decades. Einstein, Bohr, etc. were dead well before this work was done.

        Technically, Bohr was still alive and active when Everett published the first version of the MWI, though quantum decoherence was introduced after his death. Anyway…

        He also said that the solution was obvious in retrospect only, not prospectively.

        And many physicists disagree.

        And the g-factor post you linked… note that the community downvoted that REALLY HARD.

        -5 doesn’t look that “REALLY HARD”, and that comment was probably the lowest point of the debacle.

        Anyway, I’m not claiming that everybody in the “rationalist” community mindlessly follows EY as a cult leader. I just wanted to point out that the kind of criticism of the “rationalist community” that Scott considers unfair does actually apply to one of the most prominent and founding figures of the community.

  27. HaakonBirkeland says:

    The fundamental issue is with the very nature of writing. People who read and publish things on the internet take a lot of things for granted. This applies equally to Scott Alexander and Tyler Cowen. Both authors assume that:

    1.) What they write can be meaningfully interpreted by someone else.
    2.) There is purpose to their writing.

    How do these authors, let alone anyone else on the internet, receive knowledge from what they read and see on their computer devices? What is lost in the process of translating lived experience to language?

    What is the difference between writing about World of Warcraft and writing about European presidential elections?

    Is it rational to interpret the symbols on a computer screen as reality? What is different about experiencing World of Warcraft compared to the European presidential elections through a computer screen? What are the limits of what can be interpreted and experienced through written language? What are the limits of what can be communicated through written language?

    For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.

    Phaedrus 274e–275b

    Would Socrates have a blog if he were alive today?

    Does it make any more sense to argue on the internet about the intricacies of European presidential elections than about World of Warcraft?

    • HaakonBirkeland says:

      As a follow-up question, does it make more sense to declare oneself a Rationalist or a Neoliberal and defend your position on Twitter than it does to declare oneself a Paladin or Death Knight and defend your position on World of Warcraft? How is the former character class more real than the latter?

    • rlms says:

      What makes you think arguing about World of Warcraft is pointless?

      • It isn’t pointless, but the interesting argument is between the warmongers who want the Alliance to fight the Horde and vice versa, and the reasonable people who realize that conflict between the factions only helps the Lich King, or the Legion, or whoever the current big bad is.

  28. Urstoff says:

    Under what conditions is it possible for an outsider to ever level a legitimate criticism at a movement?

    • The Nybbler says:

      When the outsider understands the movement well enough to see problems with it? I mean, that sounds kind of tautological, but I’m not sure what you’re getting at otherwise. One doesn’t have to be a member of a woo-woo cult to see that the woo-woo cult has gone pretty wrong.

      • Urstoff says:

        Seems like Scott was promoting the principle that if you, an outsider, think of a criticism, it is overwhelmingly likely that insiders have thought of that same criticism and hashed it out and either incorporated the legitimate parts of the criticism or found the criticism to be wanting. In that case, as an outsider, unless you know as much as a well-versed insider, then you should have a low confidence that your criticism is a good one. Given that it is very rare for an outsider to be motivated enough to research a movement as much as a well-versed insider, then it seems exceedingly rare that an outsider will ever have a legitimate criticism (that is, it can happen, but it almost never does).

        Compare, for example, a field that everyone likes to make fun of: X studies, with their various methods of autoethnography and whatnot that seem absurd to outsiders. Instead of poking fun, RealPeerReview-style, at all the seeming nonsense that gets published in that field, should we assume that insiders know those criticisms well and have dealt with them, and thus their dismissal of outsider criticism is not sticking their heads in the sand but just the same type of frustrated response a rationalist would have when someone says that Spock is not a good model of human behavior?

        • Seems like Scott was promoting the principle that if you, an outsider, think of a criticism, it is overwhelmingly likely that insiders have thought of that same criticism and hashed it out and either incorporated the legitimate parts of the criticism or found the criticism to be wanting. In that case, as an outsider, unless you know as much as a well-versed insider, then you should have a low confidence that your criticism is a good one.

          But if rationalists consistently believed that, they would have to withdraw their criticisms of philosophy, etc.

          But the actual dynamic is: insiders know the standard objections, have answers to the standard objections, and think the answers are good. Outsiders think the answers are bad, and therefore the objections stand.

          • Urstoff says:

            If that’s the actual dynamic, then Scott’s post basically boils down to “smart people make dumb criticisms of things they haven’t bothered to look into”, which is true as far as it goes (and rationalists can be as guilty of it as anyone), but I was trying to draw a broader epistemological point out of it.

          • Kaj Sotala says:

            But the actual dynamic is: insiders know the standard objections, have answers to the standard objections, and think the answers are good. Outsiders think the answers are bad, and therefore the objections stand.

            In my experience, “insiders know standard objections, have answers to standard objections, outsiders are uninterested in learning the answers to standard objections because it feels satisfying to think of yourself as superior to a big group of people, and actually having to engage with those people’s arguments would get in the way of that” is way more common.

        • Ozy Frantz says:

          Compare, for example, a field that everyone likes to make fun of: X studies, with their various methods of autoethnography and whatnot that seem absurd to outsiders. Instead of poking fun, RealPeerReview-style, at all the seeming nonsense that gets published in that field, should we assume that insiders know those criticisms well and have dealt with them, and thus their dismissal of outsider criticism is not sticking their heads in the sand but just the same type of frustrated response a rationalist would have when someone says that Spock is not a good model of human behavior?

          As someone who got my degree in X Studies, YES PLEASE THAT WOULD BE A VERY GOOD IDEA.

        • dndnrsn says:

          I did religious studies, and modern theology (not, strictly, religious studies, which is largely secular, but you get a good dose of theology doing religious studies) tends to fall into two categories in how it deals with outside criticism: some heads get stuck in the sand, others don’t. (As something many people think is nonsense – or, of course, think is partly nonsense, since different religions have their own theologies – I think theology can be considered “x studies”.)

          It’s dangerous to assume “ha! Those dummies don’t know anything; if they did, they wouldn’t be studying it!” but it is also dangerous to assume “they must know what they are talking about, so any criticisms must already have been dealt with.”

          To give an example, consider the different Christian and Jewish responses to 19th century onwards scholarship on the authorship of scripture and other issues coming from textual criticism, etc. Some denominations (mostly liberal, but including some conservative denominations) seriously grapple with the issue that a lot of what was traditionally thought about who wrote what was wrong. Others just dig in their heels and deny the scholarship’s validity (this is almost always conservative denominations). Others still just sort of ignore the whole issue (this is mostly liberal denominations) – they aren’t literalists, but they aren’t especially curious either.

          In any academic pursuit, you’re going to have some people who seriously consider criticisms, and others who get them out of the way by hook or by crook because their personal beliefs/congregation/faculty position depends on it.

        • Kaj Sotala says:

          Seems like Scott was promoting the principle that if you, an outsider, think of a criticism, it is overwhelmingly likely that insiders have thought of that same criticism and hashed it out and either incorporated the legitimate parts of the criticism or found the criticism to be wanting. In that case, as an outsider, unless you know as much as a well-versed insider, then you should have a low confidence that your criticism is a good one. Given that it is very rare for an outsider to be motivated enough to research a movement as much as a well-versed insider, then it seems exceedingly rare that an outsider will ever have a legitimate criticism (that is, it can happen, but it almost never does).

          This principle is admittedly inconvenient, but in my experience it is entirely correct, and the faster everyone internalizes it, the better.

          My experience has been that *each time* there has been a sizable community that lots of smart-seeming people support, but which has well-known objections that have led me to dismiss it… then the very moment I started actually looking for the community’s strongest responses to those standard objections, it became obvious that there existed strong responses which the outsiders were totally ignorant of.

          And I have *also* been in several communities that made some counterintuitive claims, had lots of people dismiss those claims based on what seemed to them like obvious objections… and been immensely frustrated by the fact that we’d spent enormous amounts of time analyzing those objections and making what I felt to be very strong counter-arguments, but basically none of the critics had even bothered looking up what our answers might be. (If they had even read the answers and then disagreed with them, then they would at least have been *trying*. But they were literally just going with the obvious objection and then making *absolutely no effort* to find out whether we might even have tried to answer those objections.)

          • Urstoff says:

            I think the principle would be fine if adopted as a norm, but given the intense shift towards global skepticism and agnosticism on most subject areas it would necessitate, I don’t think it’s a norm that could ever be adopted among the population at large, much less among smaller, more epistemically fastidious communities.

  29. Cecil Harvey says:

    I am genuinely curious. I’m a traditionalist Roman Catholic (very strongly formed by Chesterton), and I don’t entirely understand why a rationalist would care about other people. I’ve only started reading your blog fairly recently, so please forgive me if you’ve written extensively about this.

    To me, when I was at a crossroads decades ago, exploring multiple different religions and philosophies, there were only two paths that made sense to me: fully embrace Catholicism and all of the consequences of its philosophy and tradition, or conclude that there is no Creator, thus no real teleology, therefore no meaning to my actions, and I should become a nihilist and work towards maximizing hedonic pursuits.

    I’m not a sociopath — I do feel empathy — but I was looking at it from a perspective that I felt was rational. If there is no teleology of man, why ought I view man, either myself or others, as worthy of time and effort?

    I believe you are genuine in your altruism. I don’t think you’d write what you write otherwise. But I must ask, why? I’d be grateful for a real explanation.

    By the way, you are on the way to convincing me on a basic income guarantee and related topics. I am a reactionary monarchist, but not a modern conservative. And though I am reactionary, I don’t think we can (or should) put technology back in the box, and even a good and righteous Catholic king would have to deal with robots displacing workers and the level of specialization and globalization that modern communication allows. I also strongly believe in a sense of noblesse oblige, and the wealthy and privileged paying for the basic needs and health of those who cannot provide for themselves makes sense.

    • Enkidum says:

      As I’m sure you are well aware, there are literally thousands of books dedicated to precisely the questions you’re asking. One of the common threads of these, as I’m sure you’re also aware, is that maximizing short-term hedonism tends to have a strong detrimental effect on long-term hedonism, so if we’re interested in maximizing overall pleasure, we need to think long-term, and moderation and cooperation become the order of things. (This is the basic insight of Epicureanism.)

      Another common thread would be that “fake” teleology that is just as motivating as “real” teleology is good enough for most people. This has been one of the central messages of, e.g., Daniel Dennett (see Darwin’s Dangerous Idea for probably his clearest statement of this).

      Yet another one is simply that we care about others because we feel like it, due to a combination of upbringing and genetics. I don’t want to beat other people up, because I like other people. What more do I need, on a personal level?

      It’s worth noting that societies that are not largely based on cooperation will fall to pieces, and so most of us tend to end up being socialized to care (at least somewhat) about others. Which is a good thing (at least if you’ve been brought up in one of these societies).

      There are plenty of attempts to find a rational grounding for being nice to each other, Kant and Mill being two of the most famous examples, I suppose. Surely none of this is news to you. So what’s the real question?

      • The original Mr. X says:

        As I’m sure you are well aware, there are literally thousands of books dedicated to precisely the questions you’re asking. One of the common threads of these, as I’m sure you’re also aware, is that maximizing short-term hedonism tends to have a strong detrimental effect on long-term hedonism, so if we’re interested in maximizing overall pleasure, we need to think long-term, and moderation and cooperation become the order of things. (This is the basic insight of Epicureanism.)

        First of all, whilst that may be true on a societal level, on an individual level you’ve given me no reason not to shaft my moderate, co-operative neighbours if I can do so without anyone discovering.

        Secondly, whilst you may disagree, most people’s moral intuitions seem to include some degree of categorical force — X is just wrong, period, not “X is unlikely to advance some goal you happen to have”. Even if I accept that moderate and co-operative behaviour is likely to increase my hedonism, that still doesn’t get me to a moral system as it’s usually intuited.

        Another common thread would be that “fake” teleology that is just as motivating as “real” teleology is good enough for most people. This has been one of the central messages of, e.g., Daniel Dennett (see Darwin’s Dangerous Idea for probably his clearest statement of this).

        Missing the point. The question is “Why should we be good?”, not “Why do we think we should be good?”

        Yet another one is simply that we care about others because we feel like it, due to a combination of upbringing and genetics. I don’t want to beat other people up, because I like other people. What more do I need, on a personal level?

        Lots of people obviously don’t “feel like it”, though. If my personal “combination of upbringing and genetics” leads me to want to commit genocide, what are you going to say to me? “It looks like your preferences are different to mine”?

        It’s worth noting that societies that are not largely based on cooperation will fall to pieces, and so most of us tend to end up being socialized to care (at least somewhat) about others.

        Again, that does nothing about the free rider objection. Society isn’t going to stand or fall based on whether I manage to scam some little old lady out of her widow’s pension, so if I can get away with it, why not?

        Which is a good thing (at least if you’ve been brought up in one of these societies).

        That statement only makes sense if you have some sort of criterion for judging what is and isn’t good, which you don’t, as far as I can see.

        • Again, that does nothing about the free rider objection. Society isn’t going to stand or fall based on whether I manage to scam some little old lady out of her widow’s pension, so if I can get away with it, why not?

          Society doesn’t want lots of people getting away with it, so it sets up rules where no one does. That’s where your obligation comes from.

          • The original Mr. X says:

            Back in Nazi Germany, society set up rules that everybody had to hand over any Jews they knew to the authorities. Back in Soviet Russia, society set up rules that anybody who heard a family member saying something counter-revolutionary had to report them. Had I lived in these countries, would I have been obliged to hand over Jews to the Gestapo or shop my parents to the KGB?

          • In the sense that you might have been punished for not doing so. But if you understand societal rules as intended to fulfil a purpose, you don’t have to accept them as absolutes, even in the absence of some fundamental moral law that is part of the universe.

          • carvenvisage says:

            >In the sense that you might have been punished for not doing so

            and what sense is that exactly?

        • Enkidum says:

          First of all, whilst that may be true on a societal level, on an individual level you’ve given me no reason not to shaft my moderate, co-operative neighbours if I can do so without anyone discovering.

          Morality is not a personal matter. It only makes sense in the context of a society. And I think many people agree that it can only be fully justified when taking the society into account.

          Even if I accept that moderate and co-operative behaviour is likely to increase my hedonism, that still doesn’t get me to a moral system as it’s usually intuited.

          I think this is just simply wrong? Surely most people would agree that Epicureanism, Utilitarianism, Rawls’ theory of justice, etc, contain some elements of a moral theory? Clearly a lot of people disagree with your intuitions on this one.

          Missing the point. The question is “Why should we be good?”, not “Why do we think we should be good?”

          Nope. Dennett’s point is that you get good-enough-for-any-real-purposes teleology from the real world. Those “purposes” include “justifying morality”.

          If my personal “combination of upbringing and genetics” leads me to want to commit genocide, what are you going to say to me? “It looks like your preferences are different to mine”?

          I would say that I will do anything, up to and including murdering you, to stop you.

          Again, that does nothing about the free rider objection. Society isn’t going to stand or fall based on whether I manage to scam some little old lady out of her widow’s pension, so if I can get away with it, why not?

          Society is going to stand or fall based on whether it actively penalizes defectors. If you need more of a grounding than that, Kant seems a good point to start?

          • The original Mr. X says:

            Morality is not a personal matter. It only makes sense in the context of a society. And I think many people agree that it can only be fully justified when taking the society into account.

            What’s your point here?

            I think this is just simply wrong? Surely most people would agree that Epicureanism, Utilitarianism, Rawls’ theory of justice, etc, contain some elements of a moral theory? Clearly a lot of people disagree with your intuitions on this one.

            As I said, a moral theory needs to have some sort of normative force — “Maximising happiness [or whatever] is the right thing to do”, not “If you, personally, happen to want to maximise happiness, this might help you do it”.

            Nope. Dennett’s point is that you get good-enough-for-any-real-purposes teleology from the real world. Those “purposes” include “justifying morality”.

            If Dennett thinks teleology doesn’t actually exist, then it can’t actually justify anything, morality included. If he does think teleology exists, I’m not sure why you’re using him in an argument that we can have morals without teleology.

            I would say that I will do anything, up to and including murdering you, to stop you.

            Note that you don’t say that I’m actually doing something wrong, because, under your view, right and wrong don’t really exist. All you have left is an appeal to brute force, which isn’t the same as morality.

            Society is going to stand or fall based on whether it actively penalizes defectors.

            For the purposes of this thought experiment I’m able to scam the widow without anybody finding out, so the issue of societal punishment never arises for me.

            If you need more of a grounding than that, Kant seems a good point to start?

            Kant’s categorical imperative doesn’t really give me much reason to behave morally, either. Sure, if everybody went around scamming people that would be bad, but so what? I’m not talking about everybody scamming everybody else, I’m talking about me scamming one person.

          • Enkidum says:

            I don’t think we’re going to do much more than argue in circles here, but I will add that Dennett’s point is that what philosophers have typically thought of as teleology doesn’t exist, but there is a perfectly good form of teleology in the natural world.

            So the claim isn’t that you can have morals without teleology. It’s that you can have morals without Teleology(TM). Specifically, you do not need a supernatural (or otherwise magical/transcendental) source of teleology, you get enough from the real world.

            I am fully aware based on what you’ve said that this will not strike you as enough, precisely because it is contingent and limited. And here I think we have to accept that we have remarkably different intuitions – I’m ok with a contingent and limited source of morality (and would argue that most people are as well).

          • The original Mr. X says:

            I don’t think we’re going to do much more than argue in circles here, but I will add that Dennett’s point is that what philosophers have typically thought of as teleology doesn’t exist, but there is a perfectly good form of teleology in the natural world.

            As I recall, his point was actually that teleology doesn’t exist, but that evolution means that something a bit like teleology does. It’s all pretty incoherent, of course, because evolution is itself teleological, as is genetics and most biology in general.

            So the claim isn’t that you can have morals without teleology. It’s that you can have morals without Teleology(TM). Specifically, you do not need a supernatural (or otherwise magical/transcendental) source of teleology, you get enough from the real world.

            As far as I can see, the distinction between “teleology” and “Teleology(TM)” is an artificial one, made up by philosophers and scientists who can’t deny that teleology exists but don’t like the implications. Basically it’s the naturalist equivalent of the micro-/macro-evolution distinction.

          • Enkidum says:

            As I recall, his point was actually that teleology doesn’t exist, but that evolution means that something a bit like teleology does. It’s all pretty incoherent, of course, because evolution is itself teleological, as is genetics and most biology in general.

            Hard to respond to this unless you’re very clear about what you mean by teleology. He thinks teleology exists, and for that matter free will. He doesn’t think that either of them have all the qualities that have been traditionally insisted upon by philosophers.

            As far as I can see, the distinction between “teleology” and “Teleology(TM)” is an artificial one, made up by philosophers and scientists who can’t deny that teleology exists but don’t like the implications. Basically it’s the naturalist equivalent of the micro-/macro-evolution distinction.

            Not really, no. The specific argument Dennett was involved in in this case was against philosophers (like John Searle, Jerry Fodor, Colin McGinn, David Chalmers, and many others) who have insisted at one time or another that (a) there is a definite teleology in the world, (b) which bears a great deal of resemblance to the traditional Christian conception, and (c) cannot be explained by natural processes.

            Dennett agrees with (a) only, and seeks to explain it through our evolutionary history. There are some philosophers who deny (a), but certainly not the ones involved in this debate.

          • The original Mr. X says:

            Hard to respond to this unless you’re very clear about what you mean by teleology. He thinks teleology exists, and for that matter free will. He doesn’t think that either of them have all the qualities that have been traditionally insisted upon by philosophers.

            Dennett might think that something he calls teleology exists, but if he denies that it has the qualities traditionally ascribed to it, it’s misleading to call it teleology in the first place.

            Dennett agrees with (a) only, and seeks to explain it through our evolutionary history.

            Teleology applies to more things than biological life-forms, so even if Dennett were successful, it still wouldn’t explain teleology per se, merely one form of teleology. Plus, evolution itself is a teleological concept, and hence cannot be used to explain teleology.

          • Enkidum says:

            And at this point it’s clear that you have a very particular vision of what “teleology” means that many of us do not share. What, specifically, is missing from an evolutionarily-grounded form of teleology that you think is critical? You say that non-biological entities have teleology – can you provide an example? Is there anything in the entire universe that does not have teleology? If not, is it even a meaningful term? And why is evolution teleological?

          • The original Mr. X says:

            My “vision” of teleology is the standard one in philosophy, namely, a thing’s goal-directedness, purposiveness, or pointing to an end beyond itself, as (to use traditional illustrations) the moon is directed towards movement around the earth, fire is directed towards the production of heat, and so on. In classical philosophy teleology was taken to be a fundamental aspect of the physical world which explained the existence of causal regularity in the universe. Early modern philosophers like Bacon and Descartes thought that teleology didn’t really exist, and that causal regularity was imposed on matter by God (hence the term “laws of nature”, which was understood rather more literally than most people understand it today). Later philosophers kept the abandonment of teleology, but also abandoned the idea of divine laws which had explained how, in a world without teleology, causal regularity could still exist. Hume correctly saw that this rendered the entire notion of causality suspect; most other philosophers haven’t been willing to accept such a radical conclusion, but also haven’t successfully found an alternative to teleology/divine commands, which is one of the main reasons for the incoherence of modern naturalism.

          • Protagoras says:

            I am of course familiar with this vision of teleology. But I’m mostly struck by the fact that modern naturalism seems to work pretty well. Scientists have long leaned toward making naturalist assumptions. I am aware that Feser and his ilk think this is only possible because scientists are really being covertly teleological, but for a few reasons I find that highly implausible. Modern science is much more successful than earlier science, not merely comparable; if teleology is essential, why would making it covert produce better science than we had when it was overt? Plus, though this would go far beyond what would fit in a comment, as something of an expert on philosophy of science I think the amount of covert teleology in modern science is greatly exaggerated.

            Hume noticed that the lack of teleology had some impact on causation, but he did not do away with causes. I would be inclined to say that subsequent philosophy and science have pretty successfully indicated that a stripped down notion of causation, however suspect you may find it, seems more than adequate for all of the purposes for which we need causation. Indeed it seems to be much more useful than old-fashioned teleology-encrusted causation.

          • HeelBearCub says:

            @Protagoras:
            I’m fairly certain I’m not very good at philosophy as practiced by modern philosophers.

            But, in support of what you are saying, it strikes me that modern scientists are more “turtles all the way down” than teleological. Once they understand atoms, they look to understand electrons, protons and neutrons. If they come to a “complete” understanding of quarks et al., they will look for something deeper/smaller. If they reach a limit beyond which they determine it is impossible to explore, they will fall back on something like Gödel incompleteness, not teleology.

          • The original Mr. X says:

            @ Protagoras:

            You can follow the scientific method without understanding the philosophical justifications behind it. That doesn’t mean that the scientific method makes sense absent these justifications.

          • Enkidum says:

            Well at least there’s something solid to grapple with now.

            Are you saying that the ancient teleological vision is correct? That the moon has a goal of orbiting the earth?

            I suppose you’re right that Descartes marks an explicit break with the idea of teleology being a fundamental part of the natural world. He, of course, keeps teleology around as a fundamental feature of minds, which are inherently disconnected from the rest of nature (precisely because they are teleological and it is not). And his thoughts about this became dominant within philosophy.

            Dennett is explicitly opposed to this split (and I follow him here). Minds are part of nature, minds have teleology, therefore teleology is part of nature. The difference from the old view, however, is that this is simultaneously opposed to pan-teleologism. The universe writ large has no goals, but evolutionary processes ended up creating beings with goals.

            I think you will still argue that this is not sufficient, that we need a universal teleology. But at least we know what we’re arguing about now.

          • The original Mr. X says:

            Are you saying that the ancient teleological vision is correct?

            Yes.

            That the moon has a goal of orbiting the earth?

            That depends on what you mean. If you mean “Is the moon’s nature such that it reliably orbits the earth instead of, e.g., bouncing up and down like a yo-yo, or flying off into space, or turning orange?” then yes. If you mean “Does the moon consciously want to orbit the earth?” then no, and nobody’s ever thought that.

          • anonymousskimmer says:

            “Morality is not a personal matter. It only makes sense in the context of a society.”

            I’d like to address this point, as I’ve had interesting conversations about it with my SO.

            By morality I’m assuming an absolute or relative code of conduct that one would feel bad about violating, and good about acting in accordance with.

            1) No other people, no society:
            One can conceive of, and follow, a morality toward one’s environment. A very basic one would be a morality of sustainability and possibly of utility. Anyone who has farmed, herded, hunted, seen a desert form where there used to be fertile land, or been castaway on a deserted island immediately realizes the applicability of such a morality.

            2) Another person incapable of forming a society:
            Most parents also reflexively understand a morality of some sort with respect to their children, who are as yet too young to form a society with the parent.

    • Yudkowskian rationalism is about fulfilling your values efficiently, and places almost no constraints on what your values are. So if you care, care, and if you don’t care, don’t care.

      • The original Mr. X says:

        That makes Yudkowskian rationalists sound almost exactly like the sophists one finds in Plato’s dialogues.

        • Protagoras says:

          Is that intended to be a criticism?

        • Cecil Harvey says:

          Exactly. The sophists argue that it’s best to be a vicious man where everyone else is virtuous. Socrates ultimately believed in a higher good, and that virtue was more rewarding.

          • Protagoras says:

            Thrasymachus argued that. Just Thrasymachus. He’s the only one. Really. Even Gorgias wasn’t on his side on this issue, never mind Cratylus or Hippias or Prodicus or (cough) any others we might name. Please do not attribute this to “the sophists.”

          • Whatever Happened To Anonymous says:

            Sophists get too bad a rap, man.

          • The original Mr. X says:

            Exactly. The sophists argue that it’s best to be a vicious man where everyone else is virtuous. Socrates ultimately believed in a higher good, and that virtue was more rewarding.

            True, although I was actually thinking more of the “Give corrupt politicians good rhetorical training so they can be corrupt more effectively” angle.

          • Rationalists do not argue that vice is better than virtue.

      • drachefly says:

        Quite. Orthogonality thesis, anyone?

      • James Miller says:

        This is what rational agents do in economic theory. Rationality in economic theory usually takes preferences as given (or exogenous to the model) and then assumes that the agent will maximize his welfare given his resources and preferences.
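
        In textbook form (the standard consumer problem – my notation, not Miller’s): the agent solves

        $$\max_{x}\; u(x)\quad\text{subject to}\quad p\cdot x \le m,$$

        where the preferences u are taken as exogenous and only the choice of bundle x, given prices p and income m, is evaluated. “Rationality” here grades the maximizing, never the u itself.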

    • Doug S. says:

      The short answer to your question is “because it’s what we want to do”.

      The longer answer is that philosophers, psychologists, biologists, and lots of people with no special training have all tried to answer that question and come up with a zillion different answers. Purpose bottoms out somewhere; humans evolved with lots of different instinctive drives and the capacity to acquire more from the society around them. Altruism is one of them. We learn about people suffering and dying, decide “Fuck that shit – the world should not be this way!” and then do what we can to make the world closer to the one we wish we had. There’s nothing logically impossible about a creature that only cares about its own pleasure or about the number of paperclips in the universe, but we happen to be humans who care about other humans and don’t want them to suffer and die. That’s what it all boils down to.

      • The original Mr. X says:

        Judging by their behaviour, lots of people actually want to murder, steal, rape, and do assorted nasty things. On what grounds do you judge that they should ignore these desires whereas you should follow your own desires to be nice to people?

        • a lot of people do want to do those things…and they are in jail

          • The original Mr. X says:

            So? You haven’t actually given any reason to think that those things are wrong. And no, “There are more of us than there are of you, and if you do this we’ll lock you up” isn’t actually a reason.

          • they impose a negative externality on society, meaning that the actions of some hurt those who don’t consent to it

          • The original Mr. X says:

            they impose a negative externality on society, meaning that the actions of some hurt those who don’t consent to it

            Again, so what? If, to quote Doug S., hurting others is “what we want to do”, I guess we just have to accept that “we happen to be humans who don’t care about other humans and want them to suffer and die.”

        • Cecil Harvey says:

          I’m more concerned with a more lawful-evil, live-and-let-die attitude. Society has laws against doing overt harm to other people. But what about those who just want to use 100% of their resources to maximize their own pleasure, and are unconcerned about the suffering of those around them, but do nothing to directly cause harm to anyone?

        • Spookykou says:

          @The original Mr. X

          On what grounds do you judge that people should ignore those desires?

          Edit: I think I misunderstood this conversation, are you asking ‘what justification’ does anyone have to push their desires on anyone else if there is nothing beyond just human desires?

        • 27chaos says:

          On what grounds do you expect moral motivational systems to be independent of people’s identity and universal? I don’t think it’s reasonable to say that moral arguments must be able to argue people into not murdering others for morality to exist. Morality is necessarily grounded in individual people’s motivations, because otherwise everyone would quite sensibly ignore moral obligations entirely.

          Christian Teleology is just a way of pretending that everyone’s preferences are secretly identical, when a straightforward analysis would lead us to conclude that they’re obviously not. You don’t get to have it both ways and claim that people have a desire to murder but also claim that they have an intrinsic sense of right and wrong that leads them to do good.

          • Protagoras says:

            Yeah, Socrates himself couldn’t argue Thrasymachus into being virtuous. Hence the way that Plato’s portrayal sometimes represented Thrasymachus as a wild beast; if a lion is killing and eating people, you don’t try to persuade it to stop, you shoot it (or at least tranquilize and relocate it). Treating a human like a lion should be a very distant last resort, but sometimes it really is the only option left.

          • The original Mr. X says:

            Christian Teleology is just a way of pretending that everyone’s preferences are secretly identical, when a straightforward analysis would lead us to conclude that they’re obviously not. You don’t get to have it both ways and claim that people have a desire to murder but also claim that they have an intrinsic sense of right and wrong that leads them to do good.

            First of all, you might want to check your history of philosophy. Teleology and its application in ethics dates back to the ancient Greeks, centuries before the birth of Christ.

            Secondly, no, teleology isn’t about people’s “preferences”, it’s about what, given the nature we have, best fulfils that nature. It doesn’t claim that all our preferences are in accordance with our nature, or that everybody’s preferences are the same.

            ETA:

            On what grounds do you expect moral motivational systems to be independent of people’s identity and universal? I don’t think it’s reasonable to say that moral arguments must be able to argue people into not murdering others for morality to exist. Morality is necessarily grounded in individual people’s motivations, because otherwise everyone would quite sensibly ignore moral obligations entirely.

            I expect moral arguments not to lead people to absurd conclusions like “Hitler was justified in setting up the Holocaust”. You may consider that an unreasonable burden, but I think most people would disagree.

          • Jliw says:

            Yes, but he said Christian teleology.

          • The original Mr. X says:

            Yes, but he said Christian teleology.

            “Christian teleology” is a made-up concept. The role of teleology in Christian philosophy is exactly the same as the role of teleology in pre-Christian Greek philosophy.

        • Ozy Frantz says:

          Some people murder and steal because they have similar values to me but incorrect ideas about how to reach their values, and they can be reasoned with. Other people just have different values than I do, and I can change their values (if possible) or attempt to punish them to deter them from acting on their values. But it’s true that if I’m talking to Murderbot I will probably not be able to convince Murderbot not to murder people. (This is the insight that leads people to be worried about the AI control problem.)

    • The original Mr. X says:

      If there is no teleology of man, why ought I view man, either myself or others, as worthy of time and effort?

      That was basically Alasdair MacIntyre’s point in After Virtue, as I recall.

    • blacktrance says:

      As someone closer to the “hedonic pursuits” end of the spectrum than most, I can say that it doesn’t at all exclude caring about other people – if anything, it’s the opposite. A virtuous person with honest and otherwise positive mutually beneficial interpersonal relationships is happier than the stereotypical sociopath or hedonist.

      • Cecil Harvey says:

        “Positive mutually beneficial relationships” can happen without caring about suffering of people who aren’t your friends.

        I’m not stating that maximizing pleasure involves spending all your resources on hookers and blow. It just means prioritizing your wants (long or short term) over anything else.

        • blacktrance says:

          Still, that involves caring about people, so that desideratum is satisfied. Regarding strangers, a lot of people get something out of making them better off, so they have a reason to do it to some degree. But that depends on your psychological constitution, and if yours is different, you may have no reason to do it. If you genuinely get nothing out of it, neither instrumentally nor as a source of pleasure by itself, then you shouldn’t do it.

          Elsewhere, you ask why you should care about anything if there’s no higher purpose. But one might well ask the opposite question: if there’s no higher purpose, must you not care about anything? Obviously not. And since it’s highly likely that you already care about something, there’s no need of convincing.

    • Deiseach says:

      I don’t entirely understand why a rationalist would care about other people

      The four cardinal virtues. Even pagans can be virtuous according to their lights. A rationalist may care about other people operating under the virtue of justice:

      Hence the act of justice in relation to its proper matter and object is indicated in the words, “Rendering to each one his right,” since, as Isidore says (Etym. x), “a man is said to be just because he respects the rights [jus] of others.”

      • Cecil Harvey says:

        I, of course, subscribe to those virtues. Pagans also don’t reject teleology. A total atheist must reject teleology, no?

        And I’m talking about a rationalist qua rational thought. Why would a rationalist, whilst trying to be rational, care about his fellow man? If the answer is “because I want to”, that makes sense. Were I a nihilist, I wouldn’t go around punching people, because I don’t want to.

        If the answer is “because that would cause the fall of civilization”, one person failing to participate productively in civilization would not cause it to collapse.

        • Zodiac says:

          I’d guess the answer to that would be that society has created systems to prevent that from happening (police and law).
          A complete rationalist would probably really go around punching people if he felt like it. Fortunately, humans usually have built-in empathy; lacking it is one of the criteria for being a psychopath.

          Disclaimer: I don’t truly consider myself to be a part of the rational community.

        • Deiseach says:

          If the answer is “because that would cause the fall of civilization”, one person failing to participate productively in civilization would not cause it to collapse.

          That’s like the argument that “one person’s vote means nothing in an election”. Yes, one person out of thirty million means little to nothing. But if each of those thirty million, or a majority of them, thinks “My vote means nothing, so I won’t bother voting”, then it means a very great deal. I think we see that already, where elections are being won on a portion of the electorate turning out to vote; in the presidential election 60% turned out to vote while 40% didn’t. Sixty percent is still a majority of the electorate, but less important elections often have drastically lower turnouts, to the point where I do think one of these days an election may be won on “only 40% of eligible voters bothered to cast a vote”.

          A rationalist can care about their fellows because they wish to live well, and the best way to do that is in a secure, free society, and the way to get that is to treat your fellow citizens well and encourage the kind of behaviours and laws that induce a free, secure society where everyone’s rights are respected and there are ‘safety nets’. A rationalist could reason that in their own self-interest, persuading others to uphold rights is the right thing to do, and that if they consider themselves to be a conscious entity with the capacity for happiness and suffering, a society of mutual co-operation where all work to ensure happiness over suffering is both just and in their own benefit.

          Conversely, a society where people go around punching other people because they feel like it means that our rationalist is at risk of getting punched a lot which is both unpleasant and may eventually lead to injury and incapacity. One punching person getting away with it encourages others to try it, and the more who get away with it the more the consensus about not punching people is weakened.

    • Izaak says:

      there were only two paths that made sense to me

      What about Judaism? Buddhism? Confucianism? Is there a reason why those philosophies don’t appeal to you? I’m honestly curious.

      • Cecil Harvey says:

        I explored those. Not Judaism so much, largely because there is a strong racial component, and if I’ve got any Jewish blood in me, it’s pretty dilute. But I looked into Islam, Buddhism, Hinduism, LDS, various protestant denominations, eastern/oriental Orthodoxy, a few pagan variants, Stoicism, Nihilism, and probably a few I’m not thinking of.

        But I found the systematic intellectual rigor of Catholicism appealing. The theology and philosophy made sense to me in ways that others didn’t.

        I don’t remember all of the details of my findings, but here’s a quick list of my primary objections:

        – Islam: a lot of the “Allah is perfect, so anything Allah does is good by definition, even if it seems evil and horrible to us dumb mortals” stuff. Catholicism avoids that by having a rational God who can’t violate the rule of non-contradiction.
        – LDS: too many things to go into.
        – Buddhism, Hinduism: not very precise. Smells too gnostic for my taste.
        – Stoicism: great stuff. Very masculine. But it doesn’t justify why one should live honorably. It appeals to the natural order, but whose natural order, and for what purpose?
        – Paganism: similar to Stoicism but less so.

        As far as Nihilism goes, I was more drawn to lowercase-n nihilism. Basically, “if there is no higher purpose, no higher good, then fuck it, why should I care about anything?”

        To me, that and Catholicism were the logical extremes. And I’m a person who takes things to their logical extremes.

        I thought I owed it to myself to take Pascal’s wager and at least try to be Catholic. I then had some religious experiences that convinced me at least some of what Catholicism claimed was true. And I went to that logical extreme.

        And I’m not a regular American “conservative” Catholic that’s little more than a shill for the GOP. I’m a reactionary Latin mass devotee who wants a Hapsburg-style altar-and-throne order, distributist and very local economic system, and I believe that there is no salvation outside the Church, and everything else the Church has at all times and always taught.

        • Evan Þ says:

          Yes! Someone else who not only takes Pascal’s Wager seriously but actually accepts it at face value!

          If you don’t mind explaining, why do you think a “Hapsburg-style altar-and-throne order” would lead to the sort of government and society you want? The actual Hapsburgs were often quite happy to use the Church as a mere tool for political ends, and I don’t see why they’d be any different after being restored to political power. Sure, the current Karl von Habsburg seems pretty pious, but what about another generation or two down the tree?

    • FeepingCreature says:

      There’s a maxim invented by Greg Egan, that goes: “It all adds up to normality.” What he’s saying is, no matter how crazy your physics of the universe gets, no matter how much it makes you want to despair, it doesn’t become any more or less true by you believing in it; it either was already false or is already true. In fact, it is the environment in which you evolved, so it’s the thing that all your existing intuitions are about. If your intuitions no longer make sense in the new model but they did in the old one, that just means your old explanation was wrong.

      People don’t feel empathy because they find religion. People feel empathy first, and then religion adds support within their framework. When you discard religion, that doesn’t mean empathy and charity become wrong; it just means your support for it falls away. But the sentiments themselves aren’t caused by religion and so are not dependent on it. Religion supports charity because charity is good; it doesn’t become good by religion supporting it.

      Rationality has a different story behind empathy and charity; it says that human beings are legible to other humans and empathic behavior is better for the group and thus more likely to be accepted by others, leading to a positive selection effect. Through legibility and iterated games, the collective benefit becomes an individual one and empathy is selected for. But this still isn’t a question of “it’s good because rationality says it’s good;” rather, first it’s good and then we try to explain why you believe that. The good predates rationality, just like it predates religion.

      As our sermon says, “it doesn’t explain it away, it just explains it.”
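
      To make the iterated-games point concrete, here’s a toy Axelrod-style round robin in Python. It’s only a sketch under textbook assumptions – the payoff numbers and the two stock strategies are the standard prisoner’s-dilemma defaults, not anything from the sermon – but it shows how a conditionally cooperative strategy outscores a pure defector once cooperators are common:

      ```python
      # Toy Axelrod-style tournament: "C" = cooperate, "D" = defect.
      PAYOFF = {  # (my move, their move) -> my points
          ("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1,
      }

      def tit_for_tat(opponent_moves):
          # Cooperate first, then mirror the opponent's last move.
          return opponent_moves[-1] if opponent_moves else "C"

      def always_defect(opponent_moves):
          return "D"

      def play(strat_a, strat_b, rounds=200):
          # Total scores for the two strategies over repeated play.
          seen_by_a, seen_by_b = [], []  # record of the *other* side's moves
          score_a = score_b = 0
          for _ in range(rounds):
              move_a, move_b = strat_a(seen_by_a), strat_b(seen_by_b)
              score_a += PAYOFF[(move_a, move_b)]
              score_b += PAYOFF[(move_b, move_a)]
              seen_by_a.append(move_b)
              seen_by_b.append(move_a)
          return score_a, score_b

      # Population of nine conditional cooperators and one pure defector.
      population = [tit_for_tat] * 9 + [always_defect]
      totals = [0] * len(population)
      for i in range(len(population)):
          for j in range(i + 1, len(population)):
              si, sj = play(population[i], population[j])
              totals[i] += si
              totals[j] += sj

      print("tit-for-tat average:", sum(totals[:9]) / 9)  # ~5000
      print("lone defector total:", totals[9])            # ~1800
      ```

      Each tit-for-tat player ends up with roughly 5,000 points to the defector’s roughly 1,800: the collective benefit of empathic behavior has become an individual one, which is the selection story in miniature.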

    • yossarian says:

      >>I don’t entirely understand why a rationalist would care about other people

      I feel that the answer to this question and the further ones you’ve posted down the line should really be broken into two parts:
      1) Why do people behave nicely (like they care about other people) (the complicated part)?
      – There are plenty of non-religious ways of defining one’s morality and behavior out there. The Golden Rule, for example, does not require that one subscribe to some absolute morality or believe in any particular deity – the “do unto others as you would like to be done unto you” thing is some pretty simple reciprocal altruism that doesn’t need any higher foundation.
      – Religious / absolute-values-based morality, on the other hand, does not really stop people from being nasty. Firstly, a person with bad impulse control will be a person with bad impulse control, whether he believes in God-given morality or in some Utilitarianism-based morality. If a person hits me with an axe in a fit of anger, it doesn’t really matter to me whether he was an atheist or a faithful Christian – I’d be dead either way. Secondly, people are good at rationalizing their actions – if someone really wants you dead, they’ll think up a good excuse for why it is ok to kill you (and, IMHO, utilitarian/hedonistic morality wins here – at least, I don’t recall it explicitly saying anywhere that it might be good to kill people. Take the Bible though, find the verse that says “Thou shalt not kill”, flip some pages randomly, and it doesn’t matter which way you go – you will find an explicit excuse to kill someone with some juicy examples of how to apply it). Thirdly, some people are just dicks. Sucks to admit it, but yeah, there are people who would do bad things no matter what the primary moral system in their society is.
      – Finally, some people are just (surprise surprise!) actually nice and they care and actually like to do nice things. (Hells, you can even see some nice behavior in animals, who definitely don’t give a crap about either the moral systems or God.) Moreover, the majority of people are actually pretty neutral and feel that the whole rape-murder-steal thing is not actually that hot, once they think about it. Actually, as I’ve heard on good authority from a dude with a, let’s say, broad range of life experience, the whole rape-murder-steal thing is actually a lot overrated and it “sucks ass” – if you do it in a nice, stable society, then people are easy victims, but the police will get you sooner or later (there is no perfect crime) and then you are in deep shit. If you do it in a society where no one really cares anymore – there are no police and no jails, but the living conditions tend to deteriorate quickly and the place is generally full of dicks like you, so you have to constantly watch your ass so that you don’t get killed, raped and robbed (in that exact sequence) yourself, and that quickly gets tiring too. So from a purely hedonistic standpoint, the best thing is to be a reasonably nice dude in a reasonably nice society.

      And, finally, question part 2: Why do people care? (not just behave nicely, but actually care). The answer here is – because. You either care or you don’t, it is not really something that comes from believing in a certain set of rules. People can be made to behave a certain way by a system of reward and punishment or by explaining to them why they should or shouldn’t do something, but you can’t really make someone care (well, you actually can, but it involves some creepy brainwashy rapey shit that we probably shouldn’t practice, no matter what our particular religious affiliation is). People just don’t work that way.

    • carvenvisage says:

      >I am genuinely curious. I’m a traditionalist Roman Catholic (very strongly formed by Chesterton), and I don’t entirely understand why a rationalist would care about other people.

      Because favouring yourself compared to other entities, just because you happen to be yourself, is biased. QED.


      I think one should still focus on oneself and the area around them, because that is the centre of their influence, and thus responsibility, but that’s a later extrapolation of the above position.

      Fundamentally being rational means avoiding the natural delusion (or convenient fiction, or model) that other people are any less real than you are.


      (Also because short-sighted hedonism is really far from the best way to be happy, but you probably didn’t mean that.)

  30. Peffern says:

    A couple years ago, when I was a more raging anti-SJ type, I made some comments along the lines of “The XYZ community is bad because some people in the community did some bad things in the name of said community and the rest of the community was not sufficiently self-aware to condemn it, therefore they must all approve of this bad thing and are bad people.” In retrospect, I realize I was being uncharitable, and most people just want to identify with a community and not worry about shaping their whole lives around who said what in what context. With that in mind, I think we should keep in mind “In Favor of Niceness, Community, and Civilization” and realize that a few people making dumb criticisms is probably not a real attack on rationalist values, and probably part of some complicated status game that they’re playing in their own social circle, and it would be better for everybody’s blood pressure if we worked on something else.

    • vV_Vv says:

      “The XYZ community is bad because some people in the community did some bad things in the name of said community and the rest of the community was not sufficiently self-aware to condemn it, therefore they must all approve of this bad thing and are bad people.”

      I think this position becomes more and more charitable as the number or prominence of the bad actors in the community increases.

  31. Doug S. says:

    Why do I have this strange urge to shout

    BLOOD FOR THE BLOOD GOD!!! SKULLS FOR THE SKULL THRONE!!!

    after reading this post?

  32. Anon256 says:

    My biggest complaint with the rationalist (and EA) community is the tendency to be vastly overconfident in their ability to meaningfully impact the world. They share this mistake with many, many previous movements and communities, and use motivated special pleading to ignore the fact that nearly everyone who thinks they can meaningfully impact the world is wrong. This speech is the one instance I’ve seen of engaging honestly with the issue, but its proposed solution of essentially becoming impact groupies seems unsatisfactory (the social climbing/inner ring dynamics in the Bay Area rationalist community especially are bad enough as it is).

    • Reasoner says:

      nearly everyone who thinks they can meaningfully impact the world is wrong

      I’d like to see more citations for this. Like, what exactly qualifies as “meaningfully impacting” the world, how many people are actually serious about doing this, and what fraction succeed. (Or perhaps more interestingly, what level of qualifications do you need to have in order to have a decent shot at success… for example, if I tell you I graduated from Harvard, what’s your new probability estimate that I will make a meaningful impact?) I believe this might be true for entrepreneurs, but entrepreneurs are playing in a competitive marketplace that’s probably at least somewhat efficient. “40% of millennials think they’ll have a global impact” doesn’t seem like an interesting reference class. Those are people who chose “I believe I can make a global difference” over “I don’t believe I can make a global difference” in some survey they were administered, not people who are planning their entire career around making a dent.

      For what it’s worth, my perception is that most people fail at making a big impact through one or more of these common failure modes:

      * Not Giving A Shit

      * Getting Distracted By Facebook

      * Not Being Very Smart

      * Not Having Original Ideas

      * Lack Of Grit

      Etc. I suspect that the odds that individuals will succeed vary a fair amount, and also that individuals can increase their odds by e.g. installing FB Purity so Facebook becomes less distracting.

      • The Nybbler says:

        Most people can’t make a meaningful impact in the world because the world is really, really big and individuals are small in comparison. There are very, very few people in all of history for whom it can be said that the history of the world, or even their country or their town, would be different had they never been born.

        It’s basically a light version of the Total Perspective Vortex

        • Most people can’t make a meaningful impact in the world because the world is really, really, big

          Why does an impact have to be substantial relative to the size of the world in order to be meaningful? Is the impact of saving ten lives less meaningful in a world with a population of eight billion than it would be in a world of eight million or eight thousand?

          Isn’t it more reasonable to define “meaningful” relative to the size of the actor rather than the population he is a part of?

          • The Nybbler says:

            Is the impact of saving ten lives less meaningful in a world with a population of eight billion than it would be in a world of eight million or eight thousand?

            Yes. Saving ten lives in a band of 80 people may mean the difference between the survival of that band or its near-term extinction. Saving ten lives in a world of 8 billion? Nobody’s going to notice, unless they’re all concentrated in a smaller subgroup.

  33. seebs says:

    This is also true of nearly everything else. I used to hang out on religious discussion forums. This criticism was true of basically all non-theist criticisms of religion. It was also true of basically all religious criticisms of non-theism. It was true of religious criticism of other religions.

    And a thing I’ve noticed: it’s worth recognizing that this isn’t because people are idiots. It’s because people are making the reasonable assumption that the world is similar to their experiences.

    The sorts of people who aggressively proselytize for their religion and are jerks to people about it are *more visible* than people who have nuanced and thoughtful positions.

    So, a lot of my friends are “rationalists”, and I don’t imagine they’re completely unlike the rest of the community. But then, a lot of my friends are “feminists”. But my impressions of rationalism are definitely affected by the “rationalists” who insist that only the most extreme caricatures of feminism are “feminists” and thus that feminism is stupid and sexist. And yet, I know that they’re obviously not really representative; they’re just going to be a lot more obvious.

    But if you want to know where these ideas come from, consider the “rationalist” who explained to me that, because he’d seen some people on rationalist forums state that they had “hacked their brains” to become polyamorous, obviously it should be trivial for people to change their sexual orientation or gender identity. When I pointed out that this theory had been tried extensively and had a hell of a body count, he accused me of trying to emotionally manipulate him and guilt-trip him rather than addressing his arguments.

    And after many hours of discussion on religious discussion boards, I finally realized the thing: Those jerks are not necessarily *representative* Christians. But it’s important to admit that they *are* Christians. Same for the atheist jerks.

    If you say “no, those aren’t real rationalists you’re attacking”, you will instantly lose people who have been harassed by jerks who are also (possibly not very good) rationalists. If you say “yeah, those guys, we think they’re jerks too”, it’s a lot easier to migrate this into the schema everyone has for “people who share my opinions at some level but are total jerks”. Every group has those people, everyone knows about them.

    • There was a post a while back (not sure if SSC or elsewhere) that went hugely viral, titled The Other Side is Not Dumb. We (the community, Rationalists, etc.) need to resist the temptation to create and fall for seductive, intellectually lazy, reductionist narratives that pigeonhole the ‘other side’. Arguments that seem obvious to ‘us’ have almost certainly been addressed by the ‘other side’, and reusing these well-worn narratives as if they are somehow revelatory and profound, when they aren’t, is an insult to the intelligence of both sides. A greater understanding of one’s ‘own’ side is attained by ‘steelmanning’ the opposing side, by ascribing the most charitable view to your opponent.

      • Peffern says:

        I would amend that to ‘the other side is not dumber than our side’ since while ‘our opponents are stupid and intellectually lazy’ is not an argument we should be making, ‘everyone is stupid and intellectually lazy’ might be.

  34. jonmarcus says:

    …about 20% of people above age 30 had PhDs

    That’s…at best infelicitously worded. I assume that “people” = “modern rationalists”? Even “…20% of them above age 30…” would be clearer. Or restate “modern rationalists”, as it’s been a few sentences since you stated your referent.

    • Eponymous says:

      In context, I assumed it meant that in the SSC survey, P(PhD | age >= 30) = 0.2.

      Now, we’ll know we’re *really* winning when we get a lot of 25-year-old PhDs and people recording incomes in the millions 😉
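
      If it helps, that conditional is just a filtered proportion over the survey rows. A minimal sketch (the file name and column names here are hypothetical, since I don’t have the survey’s actual schema):

      ```python
      import csv

      # Hypothetical survey export with "age" and "degree" columns;
      # both names are made up for illustration.
      with open("ssc_survey.csv") as f:
          rows = list(csv.DictReader(f))

      over_30 = [r for r in rows if r["age"] and int(r["age"]) >= 30]
      phds = [r for r in over_30 if r["degree"] == "PhD"]

      # P(PhD | age >= 30) = (# PhDs aged 30+) / (# respondents aged 30+)
      print(len(phds) / len(over_30))  # ~0.2 on the reading above
      ```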

  35. Douglas Knight says:

    The criticism of economics is maybe 50% correct, more correct than it would have been a century ago. There have been no paradigm shifts in economics. The Nobel committee has good taste, from Coase to Kahneman. All economists praise them, but virtually none of them follow them. Coase has been in the curriculum for a lifetime and has had no effect.

  36. If a movie or TV show depicts a mental hospital between 1870 and 1970, it’s likely at some point they’ll have a patient strapped to a gurney with huge electrodes, and screaming in agony as massive shocks are delivered. Getting into such a situation is a standard trope for time travelers. It may be difficult to get that notion out of the popular culture.

  37. This post seems to summarise to “people are people, and that is a problem”. The problem of outsiders making criticisms of something based on inaccurate or outdated ideas is widespread — so widespread that it also occurs within the rationalist community.

    It takes two to communicate, and it takes two to miscommunicate. I think Scott underestimates how hard it is for outsiders to understand rationalism. But there is a technology for getting your point across, and it is called PR.
    Using standard terminology in standard ways is pretty helpful, too.

    They don’t pooh-pooh academia and domain expertise

    There’s extant material where they do exactly that.

  38. MartMart says:

    Might I offer a different criticism of rationality?

    In order to engage in rational argument (I hate the term “argument”, btw; it doesn’t mean the same thing to me as it does to most people here), one must have the humility to lose, and so to accept that they might be wrong. Without that, rational debate is not possible (I hope this is not a controversial point).
    On matters that are sufficiently personal, many (most?) people simply do not have that option. If the consequences of “losing” or being wrong are sufficiently dire, it becomes impossible to admit defeat, and people will fight for their side as hard as possible, even if the evidence starts mounting against them.

    This means that rationality can become a status symbol, a way to signal to others that the speaker is sufficiently secure that most of the issues that society worries about are not going to adversely affect them no matter which side is correct, similar to how in some societies the wealthy wore clothes or groomed themselves in ways that would make manual labor impossible in order to signal that they don’t need to do manual labor.

    I’m not trying to make the case that all rationalists are privileged people with no real worries in the world, just that it’s a temptation to watch out for.

    • This means that rationality can become a status symbol,

      What’s wrong with that?

    • carvenvisage says:

      >On matters that are sufficiently personal, many (most?) people simply do not have that option.

      With the exception of one person on this planet, there’s always someone in a worse situation than you are, so no, you don’t have the right to ‘not have that option’ just because some people have it easier than you. Because then you’re gonna run roughshod straight over the people who have it worse.

      Sure some people do have that right. Some people really do have ‘no option’ but to go out there winging it trying desperately to survive with no consideration for anyone but themselves. (-or collapse/die).

      -It’s not just the one person who’s worst off, but neither is it anywhere remotely near ‘most’ people. That’s fucking crazy. It’s borderline evil.

      Most people are not ‘OK’, not stable or grounded or who they want to be, perhaps not acutely suffering, etc., but that’s exactly why your not being that way, either, doesn’t give you the right to never compromise. Your life being fucked up and not-even-tragic does not mean you are automatically the centre of the planet.

      Beyond a certain point, sure, it kind of does, but we’re not living in a generally purposeful and ordered and well-designed world here. Your tragedies are unique, and probably no one will ever understand them, but they are not unique in scale or level of awfulness.

      Also, of course cutting that option out of yourself is very tempting, because not being able to compromise or listen makes it easier to escalate and get your way, and to sideline and trample over people in greater need of compromise than yourself. And obviously those people generally have much less of a voice, so it’s easy not to notice, because after all you’re so hurt, and ‘aren’t we all?’ is just a platitude, because sometimes liars use those words.


      Anyway, TL;DR: basically what you’ve presented is an ideology of complete, irresponsible solipsism.

  39. They’re not a “religion” any more than everything else is.

    One reason standard criticisms keep getting repeated is that the standard responses are just not that good. Rationalism should be a lot less religious than everything else … it is not good enough to be average.

  40. srconstantin says:

    People fundamentally have a problem with the cultural trappings of the rationalist movement. The science fiction. The fanfiction. The libertarianism. The polyamory. The group houses. The transgenderism. The fondness for coining words and inventing rituals. The fact that we are *countercultural.*

    Which is, in fact, my favorite thing about this community, despite the fact that I think our *intellectual* contributions are fairly minor and should be treated with the same skepticism an honest intellectual takes to everyone’s theories.

    But: many people feel we should be mainstream academics, or mainstream progressives, or something. And my reply is: well, they don’t get to decide that. My commitment to truth means I have to listen to Wilkinson’s, Cowen’s, and Caplan’s arguments about what reality is like. I have no obligation to change my *aesthetic* to fit theirs.

    (I actually think Caplan is exactly wrong. The *best* thing about rationalists is that we’re inspired by science fiction, which traditionally has really good humanist and pro-science values. The strongest criticism of rationalists is that we haven’t come up with very much intellectual content, and get things wrong about as much as anybody else.)

    • srconstantin says:

      The correct comparison point for rationalists is not Will Wilkinson. (In the world to come, you will not be asked,”Why were you not Will Wilkinson?”)
      The correct comparison point is previous generations of countercultures of geeks, gays, and hippies. And I think we have some strengths and weaknesses compared to those cultures. 90’s Extropians and old-school SF fans were less sophisticated in their arguments but often more grounded in the facts of science and technology. Previous generations of gay culture had way more courage and aesthetic discernment than we do, but afaik none of the scientific stuff. Hippies get some things right psychologically and philosophically that we’re failing terribly at (in particular the importance of *chilling out* and *unplugging from the need for validation from the Establishment*) but they have the woo problem.

    • Said Achmiz says:

      A data point:

      I started reading LessWrong before it was LessWrong (~2007) (and have commented some). I think the Sequences are an amazing body of writing and thinking. I think Eliezer, personally, is a pretty great guy (though his views aren’t gospel and some of his tastes in fiction are questionable). I’ve been to LW meetups and megameetups. I hang out in a LessWrong-related chat room on a daily basis. (I admit to having told someone to go read the Sequences on several occasions.)

      I like science fiction. I liked HPMOR. Libertarianism is ok. I am not into polyamory or the sexual/romantic mores common among rationalists. I dislike group houses. I dislike rituals.

      You will not find many stronger supporters than me, of the core ideas of LessWrong and “Yudkowskian” rationality. But I am not really on board with having a “rationalist counterculture”, and I am specifically not too fond of the one we have now.

      Ideas and aesthetics are separable.

    • Said Achmiz says:

      Side note about science fiction:

      There’s a lot of it. Different kinds. Which are we inspired by?

      You could say, we’re inspired by the sorts of sentiments and ideas and general outlook that’s pervasive in the genre as a whole. True, to some degree. But not the whole picture.

      How much of rationalist thought and culture is inspired by…

      … Robert Heinlein?
      … Ken MacLeod?
      … Iain M. Banks?
      … Stanislaw Lem?
      … the Strugatsky brothers?
      … Philip K. Dick?

      This short list of authors represents a very large range of political and philosophical orientations. And there’s a lot more out there.

    • tmk says:

      I don’t have an issue with any of those things. But if all there is to rationalism is a social club with a certain aesthetic, then it is mostly meaningless to anyone who is not part of the club or into the aesthetics.

      • srconstantin says:

        Agreed!
        I think people should think of it that way!

        And if people in that social club make intellectual contributions (and some of them do), then evaluate those people *as individuals* and their intellectual contributions *on the merits.* If you do that, you see that it’s a mixed bag, some real stuff and a lot of fluff.

    • foo says:

      “The science fiction. The fanfiction. The libertarianism. The polyamory. The group houses. The transgenderism. The fondness for coining words and inventing rituals.”

      These all share a zeal for the clear-cut and explicit, and for papering over the fact that life is complicated and nuanced and messy and compromised all over the place. It seems fitting, then, to insist that criticizing the intellectual content is fair game but not the aesthetic content, as if the two weren’t deeply intertwined. (Which is not to say that explicitness isn’t often good and useful. It’s just that it’s one particular paradigm, and not the best tool for handling every problem and topic.)

      But I do like that you’re honest about a lot of the appeal being about the sense of community. What can get a bit insufferable is just the self-righteousness about it.

      • Kaj Sotala says:

        These all share a zeal for the clear-cut and explicit, and for papering over the fact that life is complicated and nuanced and messy and compromised all over the place.

        Polyamory feels like the very opposite of “the clear-cut and explicit” to me, and much more in the direction of complicated and nuanced and messy and compromised all over the place than what monogamy tries to be.

      • srconstantin says:

        I mean, I wouldn’t disagree but I’d put a positive spin on it. The “literary fiction” view of life is deeply pessimistic about human nature. Everybody goes through the same problems, again and again and again. Adultery, grief, betrayal, despair. The “science fiction” view of life (at least, the hard-SF/space-opera track) is fundamentally about alternatives to the Augustinian view. I’d rather spend my time with the people who believe in improvement, even if they make mistakes along the way.

  41. Azure says:

    Mr. Wilkinson’s critique was strange. Much of it seemed to have an odd enough perspective that I suspect there’s some very strong cultural divide between him and me. I’m sympathetic to the idea that rational-aspirants should cultivate personal virtues as well as epistemic skills (which is what I think he was trying to get at), but most rational-aspirants I know do try to cultivate other virtues.

    The part that rubbed me incredibly wrong was the idea that we must be afraid of being wrong or have something else wrong with us to want to know the truth. This, the idea that people need an excuse or some sort of personal damage to be virtuous or to seek truth, is something I’ve always thought of as part of the fallout of post-modernism (yes, you, classical-liberal Cato Institute worker: I’m calling you post-modern!), and one of the worst aspects of post-modernism to percolate into popular culture.

    Search for truth comes from Eros? I can believe it. Thinking of how an MRI works, all the hydrogen nuclei in bright array, responding to each disrupting pulse with a signal as they regain their position, it’s impossible not to think of “And all the sons of God shouted for joy.” It’s difficult not to think of the universe as some endless, unconscious, brilliant song and dance. I’m not afraid to be wrong; I’m in love with reality.

    • The part that rubbed me incredibly wrong was the idea that we must be afraid of being wrong or have something else wrong with us to want to know the truth.

      I didn’t see the must. It looked more like “if you haven’t examined your motivations, how do you know what they are?”.

  42. Brad says:

    My problem with rationalism is that there’s a motte and bailey that’s involved.

    On the one hand you have the highly defensible version of the movement that encourages, well, rationalism. Trying to understand and avoid cognitive biases and generally encouraging people to use effective tools to understand the world. On the other, you have the version that says we are a bunch of really smart people that have done a lot of work to avoid cognitive biases and we’ve come to these conclusions about AI (and other very specific predictions about the future), the proper ethical framework, the best way to organize one’s romantic life, dieting, the overwhelming importance of status games in human affairs, and so on. If you aren’t convinced, did we tell you about the need to avoid cognitive biases? Maybe keep working on that. Or maybe you just don’t have a high enough IQ to get it. Did we mention that most of us have really high IQs?

    To take a concrete example, look at EA. The messaging is that the non-profit sector just doesn’t do a very good job of making sure it is accomplishing goals effectively. Let’s see what tools we can create to do better. That’s a really fantastic critique and plan! But when you get ready to write a check and start digging into who you’ll be giving it to, it becomes very difficult to determine if some of your money is going to end up in the hands of someone working on a lemma to Löb’s theorem.

    Scott is an extraordinary writer and thinker. He writes about many things, sometimes including advocating for the bailey part of rationalism, but almost always in a humble, tentative, and thoughtful way. He never resorts to “go read the sequences” or “maybe you just aren’t smart enough to get it”. But that existence proof isn’t enough to save the entire movement (even God wanted at least ten righteous men).

    I understand that an amorphous community that anyone can declare himself a part of can’t reasonably be held responsible for every last thing anyone says or does in its name (see e.g. BLM). However, at some point it is fair for outsiders to form a general impression based on many interactions over a period of months or years. Particularly when we are talking about a relatively small group of people (in the single-digit thousands?) rather than huge groups like feminists or conservatives. If Scotts predominate in the movement, then I’d love to be pointed to some of them, so I can read their writings too.

    • Urstoff says:

      This resonates with my view. Ultimately, everything is informal reasoning. Some is just better than others. Saying you’re a rationalist makes it seem like (to an outsider) that you are claiming a difference in kind rather than degree.

      • Matt M says:

        I think this is well put. There’s also the risk that by identifying as “rationalist” you are implying that anyone who does not also identify as such is irrational, which is often taken as an insult.

        It’s almost similar to those who favor increased restriction of abortion referring to themselves as “pro-life” as if to imply their opponents are somehow anti-life. It’s a fine rhetorical tactic if your goal is to antagonize your opponents, but not so helpful if your goal is to convert them…

        • Deiseach says:

          Same rhetorical reasoning as pro-abortion activists preferentially calling themselves “pro-choice” and the opposition “anti-choice”.

          What kind of totalitarian bigots are against choice? Americans have the right to choose life, liberty and the pursuit of happiness!

          Though at least the media seem to have switched to the “anti-abortion rights” label now, which I have no problem with: I am anti-abortion and don’t consider it a right, so it doesn’t work to make me all flustered and blustering “Well…well… well, of course I’m not against rights, but but but…”

          But let’s not pretend that everyone isn’t trying to score points and make themselves look of superior virtue, all right?

          • Matt M says:

            I did not intend to imply one side was doing it over the other. Was just an example. I lean anti-abortion myself, as a matter of fact.

    • Freddie deBoer says:

      well put

    • dndnrsn says:

      Yeah. Sometimes I wonder if claiming to be a contrast to general human tendencies can undermine attempts to go against those tendencies. In this case, the example would be thinking “we are so rational, it’s in the name!” and then overlooking ways in which irrationality might be happening. There’s a human tendency to be irrational, and it’s questionable whether trying to be rational works consistently.

      • carvenvisage says:

        >it’s questionable whether trying to be rational works consistently

        This is too vague to mean anything. Trying for who in what circumstances to what extent with what support to what end?

    • Said Achmiz says:

      For what it’s worth, some rationalists (like me) are opposed to the EA movement (for something like the reasons you’re pointing at).

      I entirely sympathize with your criticism of EA, and it has indeed captured much of what may be called the “rationalist community”, but I think it would be very useful (epistemically and pragmatically) to delineate criticism of EA and criticism of rationality-in-general-not-counting-that-EA-thing. By no means should we ignore that EA is big in rationalist spaces, but I think if we separate EA out, and then look at what else there is, we’ll get a much clearer picture.

      • Brad says:

        What about the AI stuff, would you say we should separate out that too? Or is that too intrinsic to be handled the same way?

    • Cerebral Paul Z. says:

      My own problem with rationality hinges on this phrase: “Trying to understand and avoid cognitive biases…” Self-described rationalists tend to give the impression that they regard “understand and avoid” as two ways of saying the same thing, or at least that they see the latter as following almost automatically from the former. My own experience here suggests that while understanding about (for example) tribalism is probably still better than not understanding about it, it’s of remarkably little help when it comes to actually steering clear of tribalism. (I say probably because rationalism can also be used simply as a source of dandy new insults to hurl at the Other Tribe.)

      I finally got around to reading Chronicles of Wasted Time on Scott’s recommendation (incidentally, I found it less maggot-intensive than he did). For the epigraph to one chapter, Muggeridge chose this, from Samuel Johnson’s Life of Savage: “The reigning Error of his Life was, that he mistook the Love for the Practice of Virtue, and was indeed not so much a Good Man as the Friend of Goodness.” LW-style rationalism seems to produce a lot of Friends of Rationality.

      • Cerebral Paul Z. says:

        Speaking of the difference between seeing and avoiding: the owners of the most recent skulls presumably noticed all the older skulls when they tried to come this way. It doesn’t seem to have done them much good.

    • motte: it’s good to avoid logical/reasoning fallacies

      bailey: AI will always be friendly

      But almost everyone does this. The motte is the ‘end goal’ and the bailey is the ‘means’. For the ‘left’:

      motte: poverty is bad

      bailey: we need higher taxes

      It’s possibly fallacious if one actually switches back and forth (“do you want to see people starve?”). But many times, using a motte to justify a bailey is unintentional and not necessarily in bad faith. The question is trying to determine when one logically follows from the other.

      • TheZvi says:

        That first bailey is reversed, and this confusion is worth clearing up. (I agree that neither part trivially follows from the motte, and also am at least a little worried that I’m falling for Something Is Wrong On The Internet, but hey)

        motte: it’s good to avoid logical/reasoning fallacies
        bailey: AI that is not provably friendly will almost certainly destroy all value in the universe
        (or, alternatively: We need to devote lots of resources to ensure that when we build AI it is friendly)
        (or, simply: AI will almost always be unfriendly)

        By contrast, the people who say “AI will always be friendly” tend to be people who think the rationality community is a bunch of crazy people.

        • johnvertblog says:

          In my experience, people who think rationality is crazy tend to think the idea of an intelligent program is absurd, presumably because they haven’t even broken through the body/soul duality and figured out that biological minds are intelligent programs.

          • ayegill says:

            I completely agree with you in this case, but saying “if someone disagrees with [a belief which is prevalent in the rationalist community], it’s probably because they haven’t figured out we’re right yet” in a thread about problems with the rationalist community must be some kind of record in irony.

      • Brad says:

        Gray Enlightenment, I don’t follow what you are trying to say vis-a-vis my post. Are you saying that the position “we are a bunch of really smart people that have done a lot of work to avoid cognitive biases and we’ve come to these conclusions … if you aren’t convinced, did we tell you about the need to avoid cognitive biases … or maybe you just don’t have a high enough IQ” is a perfectly fine and dandy one to take?

    • Eponymous says:

      I think your Motte is just “Stuff Eliezer believes”, while the Bailey is “People who’ve read Eliezer’s writings on rationality and are trying to extend/apply them”, who may or may not accept any of Eliezer’s particular positions (though they are more likely to).

      And then there’s a wider bailey which is something like “People interested in rationality” (in something roughly equivalent to the Less Wrong sense of the word). This includes the Eliezer cluster, and those directly influenced by him, but also many others.

      I’m not sure what to make of this besides noting that, as a matter of historical fact, Eliezer did write a huge quantity of amazing material that got a lot of people interested in rationality, and that this is how a great many people connected with this blog learned about these topics.

    • Peffern says:

      The part that made me agree with you was the part about the ethical framework. The cultural parts, status games, whatever, I was able to write off as subculture (sub-subculture?) but I just can’t get past the ethics. I’m not a utilitarian, although I have sort of been coming around to general consequentialism lately, and it is incredibly infuriating to read a lot of rationalist-associated writing that just assumes all smart people are strict statistical utilitarians. It just comes off as smug and obnoxious.

  43. Eponymous says:

    Tyler Cowen making lazy overly broad generalizations based on pattern matching to cliched “deep wisdom” and cached thoughts? I’m shocked!

    My own view is that TC runs a decent news aggregator, but that his opinions rarely contain anything original or profound. He mostly plays the deeply wise neutral referee.

    Also: Gell-Mann amnesia applies to megabloggers too.

    • Eponymous says:

      p.s. I only commented on Tyler because Noah’s not worth my time and I don’t know who Will Wilkinson is.

    • He is pretty prolific outside of his blog, but I find the blog kinda boring: too much minutiae.

      • Eponymous says:

        Is he?

        I just looked at his CV. He’s written a few books, but I think they’re mostly pop econ (and an undergrad textbook). His last journal article in an economics journal I recognize is his 2007 JEBO. I would guess the U Chicago Law Review is good too, and he seems to be publishing in areas outside of typical economics journals that might be decent, but his last journal article is from 2011. Maybe he has others that aren’t on his CV.

        So as academic economists rate things, I wouldn’t call him very productive, let alone prolific. I’ll admit he’s pretty successful as a megablogger / public intellectual / pop economics writer. And plenty of academics read his blog, so he’s not without influence there.

        Don’t get me wrong, he’s smart and produces a lot of valuable stuff. He just doesn’t strike me as an incredibly perceptive deep thinker, such that I would take his critiques seriously. I think he could be if he really devoted himself to it, but it seems to me that he prefers to take a broad approach, which inevitably results in shallow surface-level analysis of most areas, particularly when he ventures outside of economics.

  44. Null Hypothesis says:

    But they’re new mistakes. They’re original and exciting mistakes

    Considering that the whole essay uses the word “skulls” as a metaphor for “mistakes”, this probably wasn’t the best word choice. I couldn’t help reading it as:

    But they’re new skulls. They’re original and exciting skulls

    Which I just find hilarious. But my sense of humor is more morbid than most.

    • Aaron Brown says:

      they’re good skulls Brent

      • Urstoff says:

        +1

      • sketerpot says:

        Gracile? No sagittal crest? Boring middle-of-the-road omnivorous dentition? Fie upon your skull-judgement, Brent; these skulls are 7/10 at most, and they’d score a lot worse if the location of the foramen magnum weren’t so far forward — I have a soft spot for bipedalism. (Take heed, haters: bipeds have a wide field of view and incredibly energy-efficient locomotion. It doesn’t entirely make up for the unaesthetic skulls, I admit, but those are probably a contingent feature of their evolutionary history rather than an inherent disadvantage of standing erect. I will die on this hill.)

    • dndnrsn says:

      “Yes, We Have Noticed The Skulls” seems like it should be a Mountain Goats track.

  45. Joe English says:

    And if you’re ever in an improv show and your prompt is “annoying person who knows nothing about feminism criticizing feminists,” you can find a wealth of inspiration from the commentariat at this very blog!

  46. Squirrel of Doom says:

    I wish there were a “refutopedia” on the web that had short and simple refutations of the top 1000 fallacies in economics and other widely misunderstood fields.

    Should we build one?

    • Ilya Shpitser says:

      It’s hard to do this (and I think that’s one reason Arbital didn’t go anywhere). I think what’s going on here is that the critical shortage isn’t the lack of short and simple refutations but having the cognitive infrastructure to understand the refutation.

      A very simple solution for Simpson’s paradox has existed for 20 years now, in a very clear paper form, yet most people are still unaware of it, or of why it’s a refutation.

      • Squirrel of Doom says:

        Yeah, let’s not start with Simpson’s paradox. There are many far easier to understand fallacies that tons of people believe.

        Like the “lump of labor” fallacy. If you google that, you’ll find a lot of texts attempting to explain it, but they’re way too long-winded and hedged for what I’m thinking of.

        I’ll probably have to write this myself to get what I want…

        • Ilya Shpitser says:

          My prediction is, you are not going to get any traction. A good explanation is a binary relation between two people. You have to tailor your explanation to a person, explanations are not for broadcast media.

          • sketerpot says:

            An approach which has had some success is to write a lengthy sequence of entertaining articles (blog posts, chapters, …) explaining the background knowledge, thus bridging the inferential gap for a fairly broad range of people. Obviously most people who disagree with you won’t read it, but some might!

          • Richard Kennaway says:

            And yet, I think Wikipedia, despite its obvious impossibility, does pretty well on content (judging it on subjects I know something about), while being spectacularly successful on traction. Scholarpedia has gone nowhere. Wikipedia itself was the offspring of Nupedia, which lasted only a few years.

            How much traction does the Encyclopedia of Mathematics have? In the distant past, before the web, before public internet, I saw a review of it (I think by Paul Halmos), describing the content as magnificent, but the enterprise useless, a monument to sit on library shelves unopened. The EoM still exists and has moved online and transformed into a wiki, but I don’t think I’ve ever seen it come up in a Google search from that day to this.

            And then there is TVTropes, which in its chosen field has both high-quality content and traction.

            While the difficulty of talking to a large audience is a factor, there is a lot more to it. Textbooks are broadcast media. So are lectures, on a smaller scale, and academic papers. In my experience, one-to-one tutoring is a small part of how most learning on technical subjects happens.

          • Ilya Shpitser says:

            Richard, I think Wikipedia does a pretty good job on subjects where you don’t have to be technically correct. On subjects where you do, it’s basically the luck of the draw — for pure math it seems great, for stats/ML it’s decidedly NOT great.

            Like other such projects, Wikipedia is a creature shaped by incentives. And experts are not incentivized to spend time on Wikipedia, or to battle the revert-happy bridge trolls there.

            For expert-crafted content like encyclopedias or courses, the limit isn’t the content but the student’s mind. It’s true that Universities do lectures, but it’s basically because they have to in order to parallelize. I don’t think it’s controversial that lectures (especially big lectures, which less resemble one-on-one interactions) are not great for learning anything.

          • Richard Kennaway says:

            The problem with lectures is that they’re both non-interactive and real time. Fine for TED fluff, not good for anything that has to be studied rather than just listened to. Written media, as per the original suggestion, sits there for as long as a reader cares to spend with it.

      • but the cognitive infrastructure to understand them.

        You mean the reader not being smart enough? The impression I got from Arbital and its five or six predecessors was a shortage of people who could write content.

        • Ilya Shpitser says:

          I do mean the reader, but I don’t necessarily mean “not smart enough”; I mean lacking the background to understand the explanations. For example, most commenters here would understand the Simpson’s paradox explanation with sufficient background (in my class I teach it after a few weeks to undergraduates who have had a single machine learning class by that point).

          • Jiro says:

            So what is the refutation of Simpson’s Paradox?

            (I suspect that you and I have different ideas of what a refutation is. I don’t consider “it is possible to figure out which one of the apparently contradictory results is meaningful, given a particular model” to be a refutation.)

          • Ilya Shpitser says:

          Read Pearl’s paper on this, and see if it makes sense to you. This is not a comment-sized explanation, sadly.

          • Jiro says:

            I have no idea which paper you mean. The wikipedia article links a paper by someone named Judea Pearl, but this paper is from 2013, so cannot be the 20 year old paper you are referring to. It refers to a 2009 paper by Pearl which isn’t 20 years old either. At any rate, the paper explains, under certain circumstances, which of the two seemingly contradictory results you should accept, and does not use the word “refutation”.

            I would use the word “refutation” to mean “the paradox says that X happens. X cannot actually happen.” Simpson’s Paradox has not been refuted by this definition; I can still write up a situation where Simpson’s Paradox happens.
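
            For instance, a minimal sketch in Python (the numbers are the classic kidney-stone figures, used here purely as an illustration):

            # Simpson's paradox: treatment A wins inside every subgroup,
            # while treatment B wins once the subgroups are pooled.
            groups = {
                "small stones": {"A": (81, 87), "B": (234, 270)},   # (successes, trials)
                "large stones": {"A": (192, 263), "B": (55, 80)},
            }

            for name, g in groups.items():
                for t in ("A", "B"):
                    s, n = g[t]
                    print(f"{name}, treatment {t}: {s}/{n} = {s / n:.0%}")

            # Pooling the very same numbers reverses the per-group ordering.
            for t in ("A", "B"):
                s = sum(g[t][0] for g in groups.values())
                n = sum(g[t][1] for g in groups.values())
                print(f"pooled, treatment {t}: {s}/{n} = {s / n:.0%}")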

          • Ilya Shpitser says:

            Sorry, Simpson’s paradox is a veridical paradox (i.e. the reversal really happens, as it is a property of tables of numbers). The explanation is of why we think it’s surprising.

            The paper is recent, but the explanation goes back to his book in 2000 (and in fact even before then).

          • Jiro says:

            That’s an explanation, not a refutation.

      • Reasoner says:

        A very simple solution for Simpson’s paradox has existed for 20 years now, in very clear paper form, yet most people are still unaware of it, or of why it’s a refutation.

        Details? (I think I understand Simpson’s paradox, but I’m unclear on what a “solution” to the paradox would represent.)

    • Eponymous says:

      I think a good “refutopedia” would be a listing of academic review articles describing the current state of play on major debates in various academic fields. Or high quality lecture notes from graduate (and good undergrad) classes, for a longer treatment.

      Several years ago I went through a very nice set of philosophy lecture notes that described the major arguments on many philosophical questions. Off the top of my head, I think it was “Problems in Philosophy” on MIT OCW. HTMLing that (and equivalent sources) would be a good start.

  47. MostlyCredibleHulk says:

    You may very well be right about rationalists, but I am not seeing that leftists have learned much from the Soviets and the socialist failures since then. Many of them continue to advocate the same solutions that brought disaster to the Soviet Union (or Venezuela). Of course, you said “the best leftists”, so there’s always a no-true-Scotsman claim available against any leftist who does not fit the picture.

    • P. George Stewart says:

      Yeah, I was about to say. I’ve actually never met or read a Leftist who was “humbled by …etc.” those examples.

      I’ve met/read Leftists who try to explain them away; they’re not “humbled.”

      I’ve met/read a few Leftists who are from traditions that were against those “experiments” before they were even begun; but they’re not “humbled” either, they’re proud, and they profess to offer an alternative to the mainstream of the Left.

      IOW, I’d agree that the “best Leftists” would be ones who were “humbled by …” etc. But where are they?

      And it’s actually a bit similar for rationalists. There’s something in the religious criticism that rationalists are overly proud, that “humility” is the last thing they have any idea about, precisely because they don’t have any sense of something beyond them and greater than them (imaginary though it may be).

      In fact, I’d go further and say that the connection between rationalism and the Left is quite deep: when you’re a clever kid, you kind of sort things out in your mind quite early; possibly you never revise those foundational opinions. This leads to the childishness some have noted about the Left, particularly the modern Left (most notably Evan Sayet, whose acute and vicious analysis of the Left can be encapsulated in the claim that the Left is “regurgitating the apple”).

      • Richard Kennaway says:

        There’s something in the religious criticism that rationalists are overly proud, that “humility” is the last thing they have any idea about, precisely because they don’t have any sense of something beyond them and greater than them (imaginary though it may be).

        A sense that more is possible? The level above one’s own? Superintelligent AI? Rationalists absolutely do have such ideas.

        A difference between the religious idea of something greater and the rationalist one (and, for that matter, the EA one) is that in the latter, you are supposed to actually move towards it. In the former, you are expected to piously beat your breast and proclaim your unworthiness, but you aren’t allowed to get even a little bit less unworthy from one year to the next.

        I suppose it can be a sort of comfort to believe that you are, that everyone is, utterly evil, vicious, and damned without hope save for the infinite mercy of a loving God whose grace you are absolutely unable to attain by any act of your own. It would also be a comfort to believe that you are already above other people (but if you aren’t one of [insert your own list of remarkable people], more liable to refutation by observation). Neither of those stories requires anything of you. On the path of reality, it is possible to do better by your own efforts, but it requires real effort, rightly directed, and you may fail.

        And if, as a religious person, you insist on something provably unattainably greater than ourselves, we have something for that as well.

        • carvenvisage says:

          A sense that more is possible? The level above one’s own? Superintelligent AI? Rationalists absolutely do have such ideas.

          The difference is that there seems to be a certain gung ho optimism about the whole affair. ‘We’re not there yet, but we will be!’.

          As you yourself said, the religious view is if anything too humble and self-flagellating. It’s not so much a reminder that greater things, greater ways of being, exist even though we might never get there, as a claim that they’re implausibly difficult for a lowly hominid to attain, and that we should not aspire to be more than ‘only human’. (Or that’s how I see it as well.)

          I think that is a much worse view, but it’s obviously one far more conducive to humility than ‘we had better get it right when we inevitably build a god‘.

          So I think it’s a poor defence to say ‘sure we have something to be humble before just like religious people’.

          • Richard Kennaway says:

            The difference is that there seems to be a certain gung ho optimism about the whole affair. ‘We’re not there yet, but we will be!’.

            In contrast, the religious view of god seems to be something like: so much more is possible, but we’re not worthy or capable of understanding it.

            You see that as a point in favour of religion. I see it as a point against. As I wrote, the religious view amounts to saying that you must be better, but you cannot be. You must try your utmost, but it’s against the rules to succeed. Everything is possible, but everything is impossible.

            The rationalist view is that so much more is actually possible. As possible as steam engines.

            The rationalist writes “Plus Ultra” on a signpost pointing into an unknown landscape, inviting explorers to enter. The religionist writes the same on a signpost pointing into an impassable abyss, forbidding anyone to go beyond it.

          • carvenvisage says:

            @richard I wrote that before even reading down, because the comparison struck me as not relevant to humility. I’ve edited my post since to clarify that I don’t think this humility is worth it, given what it’s tied up with. Separately, I’m also not so sure that ‘humility’ is a straightforwardly good thing. My apologies for being unclear.

          • Richard Kennaway says:

            Ok, sorry for misunderstanding.

          • Chrysophylax says:

            Therefore it is written: “To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty.”

            If you’re abasing yourself before unattainable greatness, you aren’t being humble at all. Abasing yourself is never about humility. It’s a social action. You don’t bow to your calculator, though you’ll never be half as good at arithmetic.

            The proper response to unattainable greatness is to figure out what makes it great and copy those bits as best you can. Striving for perfection is like following a compass: no matter how good you get, you’re always prompted to get one step closer to perfection. Trying to be a virtuous member of the tribe – to be better than the people around you – doesn’t work nearly so well, because you’re no longer looking for ways to improve.

            If it’s inevitable that someone will eventually “build a god” (which I think it is, given that my brain runs on physics), humility is figuring out what the true goal is and taking extreme care to get as close as possible. Saying, “Oh, we aren’t superintelligences, how can we possibly decide what one should do, we are not worthy to contradict something greater than ourselves” is not humility, it is social modesty. Likewise, saying that we cannot possibly build something cleverer than a human is a social behaviour: it’s claiming that we aren’t special enough to succeed, or else that humans are too special to be exceeded by a mere machine. The humble choice is to take precautions anyway, because the stakes are high and this kind of argument has been wrong before.

  48. SomethingElse says:

    It must have been eaten as spam.

  49. SomethingElse says:

    This might have been too long before; breaking out the example text to shorten it up.

    First off, I want to say that as part of the immediate present conversation, Scott’s argument is reasonable and reflects an honest view of the people he is defending.

    But I don’t think that Scott’s “Improv Sketch” characterizations of anti-economics, anti-rationalist, and anti-psychiatric arguments are sufficient steelmen of these popular sentiments for the defense he mounts of these fields in their current forms. There are several possible levels of magnification/generalization at which you can analyse such vast subjects as “economics”, “rationalism”, or “psychiatry”, and Scott seems to be choosing the most convenient possible levels both for his foils and for his defenses.

    For example, you could look at psychiatry at the following levels of magnification/generalization where each level in some way summarizes observations of the immediate lower level:

    1) At the immediate personal level – Has a specific individual been helped or harmed by their interactions with psychiatrists and/or by policies and institutions over which the psychiatric profession has influence?

    2) At the general personal level – Are individuals in aggregate helped or harmed by existing real-world implementations of psychiatric treatment combined with the psychiatric profession’s total influence upon policies and institutions?

    3) At a paradigm/framework level – Do the dominant paradigms and theoretical frameworks within which psychiatric research is occurring at a given time map well to reality, and/or do they tend to be generative of beneficial therapies and/or policies?

    4) At the current program level – Is it reasonable to conclude, at the present time, that the development away from past paradigms/frameworks is in a direction of improvement and self-correction?

    5) At the overall program level – Is it reasonable to conclude, upon examining the entire history of the program we call psychiatry, that it reliably produces good, truth, or some other reasonable measure of value in excess of the harm, falsehood, etc.?

    In Scott’s defenses of various fields in the linked essay, he consistently describes his foils’ positions at level 3 and then defends at level 3 and 4. In his discussion of psychiatry, the foil is criticizing the harm which was done and the healing which failed to occur in the past by electroconvulsive therapy, which belongs to a specific (past) level 3 paradigm. Scott’s defense is (level 3) we don’t do that anymore because we have better treatment regimes, in fact, we have better treatments now largely because (level 4) we are motivated and competent in our efforts to identify sources of error and correct them.

    But there is a perfectly good level 5 argument which I believe would be the proper steelman for why laypeople are justified in being skeptical or even hostile towards psychiatry:

    (That the track record of flawed and destructive psychiatric paradigms over time justifies the conclusion that the overall project of psychiatry tends towards error.)

    • SomethingElse says:

      “You psychiatrists didn’t have just the one bad turn with electric shocks. If you did, that would be forgivable. There was also that time you decided all of a sudden that you should shove knitting needles into the frontal lobes of every intractable case with gay abandon.
      “Oh, and don’t forget that around the same time you were into shocking and lobotomizing people, you thought it was perfectly reasonable to institutionalize people indefinitely and forcibly treat them for all kinds of seemingly harmless deviancy, including things like “being a communist” or “being a woman who didn’t obey her parents” or “being wealthy and embarrassing your relatives” or “being pregnant and unmarried” or “being a homosexual”.
      “Except in the Soviet Union, where you imprisoned, shocked, and lobotomized people for “not being a communist” and “complaining too loudly about all the starvation and oppression”.
      “Likewise, psychiatrists’ judicious reign over decisions to sterilize or even euthanize people to improve the human “stock” in service of regimes we don’t need to enumerate here. Of course, psychiatry did yeoman’s work not just in the execution of eugenic schemes, but in laying their underlying theory and popularizing it over the decades previous to its various practical implementations.
      “Oh, yes, and there was that coke-addled crazy-talk from the 1890s about mothers and repression, which had the downside that it only worked if you met with a very expensive doctor every week for years on end, except when it didn’t work, in which case it was your own fault (and your mother’s).
      “But I don’t want to sound too harsh about psychoanalysis, because it was like the first time ever that psychiatry had even an inkling of an actually effective way to treat people’s mental problems, and it came about only about a hundred years after proto-psychiatrists first arrogated to themselves the right to lock people away indefinitely for deviant behavior. And psychoanalysis (when it finally came) was much less brutal than the beatings, starvation, freezing, water torture, being abandoned alone chained to a bed in your own filth for days or weeks, and so on, which constituted earlier approaches to “therapy”.
      “Not to dwell on ancient history; we can also talk about how psychiatrists totally did go giddily overboard with the “everything is a chemical imbalance” paradigm for about ten years not all that long ago. We can talk about how this psychiatric paradigm, like so many we discussed before, was convenient to institutions and authorities at the expense of the individuals whose reason, autonomy, and self-ownership it tended to undermine. We can discuss how, only hundreds of millions of prescriptions later, psychiatry started dialing back its enthusiasm for antidepressants and recognising interesting side effects like pushing teenagers over the edge of suicide by playing guess-and-test with their neurotransmitters.
      “Given all of this, my question to psychiatrists is not “why was paradigm X bad and what have you done to correct it”. I don’t care about that answer. After every previous harmful adventure of psychiatry that I have mentioned there was soul-searching, there were lessons learned, there were sincere, articulate mea culpas from educated high-status psychiatrists committed to rooting out the error in their field.
      “From where I stand today, I see ample evidence that the paradigms and treatment regimens which this thing we call “psychiatry” produces have a high probability of being either harmful or worthless. As a result of this, not only am I highly skeptical that the most recent paradigm of psychiatry will at last be beneficial in implementation, I don’t see any reason why any person should entrust the psychiatric profession with the care of their minds and bodies or those of their loved ones ever again.”

      D.C. al Coda for Rationalism and Economics

    • Sonata Green says:

      Based on this analysis, it seems to me that a successful defense of a field should take the form of answering the question: What positive contributions have been made by theories that have since been superseded?

      Sample answers: in physics, pre-quantum understanding of atoms enabled chemical engineering; in astronomy, star charts that did not account for the precession of the equinoxes were useful in navigation; in philosophy of science, discoveries like Ptolemy’s were made by crude application of the principle of empiricism without an understanding of probability and statistics.

      That is, the field should show a history of producing positive results even when wrong.

      • SomethingElse says:

        What positive contributions have been made by theories that have since been superseded?

        Yes, I think it is quite useful to ask in many cases. A researcher usually has to work within some framework of concepts, definitions, and assumptions in order to do productive work. But in any field which is under active, productive investigation, these frameworks seem to get overthrown and replaced quite regularly. Many contemporary experts are in the habit of speaking to laypeople about their fields’ current working models as if they are, in all of their particulars, just as reliable as their fields’ best-supported results.

        This style of presentation implies that the expert is entitled to exert a sort of authority over the permissible conclusions of the non-expert with respect to their field. But it is this same style of presentation (conferring upon the whole of the working model the truth status of the best evidenced particulars) that empowers the very sort of armchair dabbler that serious researchers most often complain about: the smart-but-lazy layperson who wants to master the field through shallow study of the model, who proceeds by manipulating the terms of the framework rather than using it to ask testable questions.

        So, the meta-model: instead of factually accurate maps of reality, scientific paradigms are just useful sets of assumptions for asking questions and communicating between peers. (In many fields this is probably not a heretical idea in discussions within the circle of experts; it is only in dealing with the public that skepticism towards the paradigm is unseemly.) I think it is perfectly fair for the public to ignore the current paradigm of a field as well as claims made based on it and to accept only those compact, testable claims which have survived through many theoretical upheavals. And if no such nuggets of fact can be identified, perhaps to grant the field no deference at all.

        • Sonata Green says:

          I endorse this meta-model.

        • Chrysophylax says:

          I think it is perfectly fair for the public to ignore the current paradigm of a field as well as claims made based on it and to accept only those compact, testable claims which have survived through many theoretical upheavals.

          I strongly disagree. The current paradigm may not be right, but it’s likely to be much better than any alternative the public can offer. The best general algorithm is “believe the expert consensus unless you understand it and have specific reason to doubt a particular part”. The hard part isn’t saying that the experts are wrong. The hard part is knowing exactly which thing they’re wrong about, and coming up with a better answer, without throwing out all the things they’re right about. “Sell more Braeburns and fewer Golden Delicious” is a business plan. “Sell non-apples” is not.

          Skepticism towards the paradigm is accepted between experts but disliked in non-experts because non-experts are almost always clueless. Being wrong is a high bar to pass. Most lay criticism isn’t even wrong: it’s so confused and ignorant that it isn’t even attacking the paradigm, let alone landing a hit. See, for example, most lay discussions of quantum mechanics. And even when the criticism is valid, it’s usually useless. Consider that the last time I went to a lecture demanding a revolution in economics, most of the “new paradigms” were older than me – because making productive use of the ideas is a lot harder than saying that we ought to use them.

          Or as Scott put it in the OP:

          If any moron on a street corner could correctly point out the errors being made by bigshot PhDs, why would the PhDs never consider changing? A few of these [criticisms of economics] are completely made up and based on radical misunderstandings of what economists are even trying to do. As for the rest, my impression is that economists not only know about these criticisms, but invented them. … The new paradigm probably has a lot of problems too, but it’s a pretty good bet that random people you stop on the street aren’t going to know about them.

  50. teageegeepea says:

    “They don’t pooh-pooh academia and domain expertise – in the last survey, about 20% of people above age 30 had PhDs”
    A lot of the criticism of academia I’ve read has come from people with PhDs. Andrew Gelman isn’t some anti-academic wrecking ball*, but he is extremely critical of a lot of the standards surrounding academic papers (specifically in regard to statistics and “statistical significance”). And your “Yeah, yeah, we learned” take actually reminds me of this post, in which he notes that much of academia still hasn’t learned from problems pointed out many decades ago.
    *Bryan Caplan is actually writing a book titled “The Case Against Education”, but a more typical example might be Greg Cochran saying entire disciplines are worthless and should be dissolved.

    • Eva Candle says:

      teageegeepea says: “A lot of the criticism of academia I’ve read has come from people with PhDs.”

      The most pernicious criticism of academia (that I’ve read) tars the strongest research with quibbles regarding the weakest research … commonly this pernicious practice is largely or entirely subsidized by corporate interests.

      As a typical example, here is this week’s “steelman” climate-science:

      Future climate forcing
      potentially without precedent
      in the last 420 million years

      by Gavin L. Foster, Dana L. Royer, and Daniel J. Lunt

      The evolution of Earth’s climate on geological timescales is largely driven by variations in the magnitude of total solar irradiance (TSI) and changes in the greenhouse gas content of the atmosphere.

      Here we show that the slow ~50 W/m^2 increase in TSI over the last ~420 million years (an increase of ~9 W/m^2 of radiative forcing) was almost completely negated by a long-term decline in atmospheric CO2.

      This was likely due to the silicate weathering-negative feedback and the expansion of land plants that together ensured Earth’s long-term habitability.

      Humanity’s fossil-fuel use, if unabated, risks taking us, by the middle of the twenty-first century, to values of CO2 not seen since the early Eocene (50 million years ago).

      If CO2 continues to rise further into the twenty-third century, then the associated large increase in radiative forcing, and how the Earth system would respond, would likely be without geological precedent in the last half a billion years.

      Summary figure here. Quibbling strawman-astroturfing corporate-subsidized denialism here.
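
      (As a sanity check on the abstract’s numbers, here is the quoted ~50 W/m^2 TSI increase converted into global-mean radiative forcing via the standard energy-balance relation; the sketch assumes a round present-day albedo of 0.3, which is not taken from the paper:)

      # A TSI change is spread over the sphere (factor 1/4: disc area vs.
      # sphere area) and partly reflected (factor 1 - albedo) before it
      # counts as radiative forcing.
      delta_tsi = 50.0   # W/m^2, from the quoted abstract
      albedo = 0.3       # assumed round value for present-day Earth
      delta_forcing = delta_tsi * (1 - albedo) / 4
      print(f"{delta_forcing:.1f} W/m^2")   # 8.8, consistent with the quoted ~9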

      • teageegeepea says:

        I wasn’t thinking of criticisms of a specific topic, but a more thorough-going criticism. So Gelman’s point about publishability being determined by statistical significance, then treating publication as prima facie indication of accuracy (and even treating being published first as more indicative than a replication with a larger sample size and an analysis specified ahead of time precisely in order to replicate). Or Robin Hanson’s criticism of academia being focused on affiliation with “impressive” (rather than insightful, as he professes to prefer) people. Paul Romer’s criticism of parts of economics as having abandoned the norms of science would be in that vein, although narrower. And I expect people in the “rationality community” tend to read people like Gelman, Hanson & Romer rather than whoever is at the Heartland Institute.

        • Ilya Shpitser says:

          These guys are in their 40s-50s, and have tenure — and thus sufficient overview for a proper critique of a system they have lived in for their entire productive lives.

    • Eva Candle says:

      There’s no shortage of older and/or retired and/or non-academic and/or post-academic researchers, who are still vibrantly active in the STEAM-game, long after tenure (or the absence thereof) has ceased to exert any controlling influence upon their work.

      • Marsha Linehan (age 73)
          Cognitive-Behavioral Treatment
          for Borderline Personality Disorder
      • James Hansen (age 75)
          Storms of My Grandchildren
      • Jorge Mario Bergoglio (aka Pope Francis, age 80)
          Laudato Si
      • Jonathan Shay (age 76)
          “Casualties” (PMID: 21898967)
      • Annie Proulx (age 81)
          That Old Ace in the Hole (also Barkskins)
      • Wendell Berry (age 82)
          It All Turns On Affection (NEH Jefferson Lecture)
      • Jane Goodall (age 82)
          Reason for Hope
      • Amartya Sen (age 83)
          The Idea of Justice
      • Ed Wilson (age 87)
          Half-Earth (also Anthill)
      • Eric Kandel (age 87)
          Reductionism in Art and Brain Science
      • Walter Munk (age 99)
          The Sound of Climate Change

      These senior-works are notably consonant, aren’t they? What is the rationalist account of this mutual consonance? Varieties of rationalism that do not naturally explain this persistent, vigorous, unified (and post-economic!) creative unity, are notably lacking in explanatory power, aren’t they?

      After all, these folks are plenty old enough to retire. So why won’t they just quit? Quit annoying rationalists, at least! The world wonders.

    • They do pooh-pooh academia and domain expertise. Yudkowsky told people not to get PhDs. Muelhauser called philosophy diseased.

      • Eponymous says:

        I thought Eliezer called philosophy diseased and Luke defended it.

        It’s true that there’s some skepticism of academia in LW circles (I prefer that term to “rationalist”, because we’re talking about a particular set of people and ideas associated with Eliezer/OB/LW and offshoots here).

        But from where I sit in academia, that skepticism seems well-justified, and I know many academics who express similar sentiments.

        But skepticism of traditional academic institutions is very different from lack of respect for academics and their work, which Eliezer and company decidedly do not manifest. Well, with some exceptions: the disciples are ever less than the master.

        And a lot of the internal criticism of Eliezer has been either that (1) his work is mostly rehashing the best of academic philosophy and behavioral psych/economics, or that (2) his views on zombies/physics/whatever are against standard science, or at least minority views within science.

        It’s true that LW folks have low opinions of certain parts of academia that shall not be named. But that opinion is widely shared by the better parts of academia. And of course, Sturgeon’s Law applies to academics as well, which is recognized by Eliezer. Of course, Sturgeon’s Law applies to rationalists too (hence the law of diminishing disciples, and all that).

        • The Diseased Discipline post was written by lukeprog (Muelhauser). It’s partly a criticism of bad philosophy, partly a defence of “good” philosophy and partly a criticism of EY’s writing style.

          But from where I sit in academia, that skepticism seems well-justified, and I know many academics who express similar sentiments

          I think they have a slew of misunderstandings of philosophy specifically. They grumble about philosophers’ use of “intuition” without having shown you can get by without any[*] use of it. They say that it would help philosophers to know how brains work … how? They say it would help philosophers to know about cognitive biases … how? They state most philosophical problems can be dissolved … how do they know?

          There is this repeated pattern where philosophy is lambasted with ingroup beliefs about how to do things better that have not been proven in practice, or shown to have high plausibility.

          [*]The Use of Intuition in Philosophy

          It’s not that philosophers weirdly and unreasonably prefer intuition to empirical facts and mathematical/logical reasoning; it is that they have reasoned that they can’t do without them: that (the whole history of) empiricism and maths as foundations themselves rest on no further foundation except their intuitive appeal. That is the essence of the Inconvenient Ineradicability of Intuition. An unfounded foundation is what philosophers *mean* by “intuition”, that is to say, meaning 3 above. Philosophers talk about intuition a lot because that is where arguments and trains of thought ground out … it is a way of cutting to the chase. Most arguers and arguments are able to work out the consequences of basic intuitions correctly, so disagreements are likely to arise from differences in the basic intuitions themselves.

          Philosophers therefore appeal to intuitions because they can’t see how to avoid them … whatever a line of thought grounds out in is, definitionally, an intuition. It is not a case of using intuitions when there are better alternatives, epistemologically speaking. And the critics of their use of intuitions tend to be people who haven’t seen the problem of unfounded foundations because they have never thought deeply enough, not people who have solved the problem of finding sub-foundations for one’s foundational assumptions.

          Scientists are typically taught that the basic principles of maths, logic and empiricism *are* their foundations, and take that uncritically, without digging deeper. Empiricism is presented as a black box that produces the goods … somehow. Their subculture encourages use of basic principles to move forward, not a turn backwards to critically reflect on the validity of those basic principles. That does not mean the foundational principles are not “there”. Considering the foundational principles of science is a major part of philosophy of science, and philosophy of science is a philosophy-like enterprise, not a science-like enterprise, in the sense that it consists of problems that have been open for a long time and which do not have straightforward empirical solutions.

        • Ilya Shpitser says:

          “But skepticism of traditional academic institutions is very different from lack of respect for academics and their work, which Eliezer and company decidedly do not manifest.”

          I don’t think either EY or Luke have enough overview of academic institutions to properly criticize them. Here’s my favorite example of the kinds of problems the MIRI model suffers from that academia solves — who does oversight for how money is spent? In academia that is the funding agency. So, for example, every year we write a report explaining our expenses, but also talks we gave, how many papers we wrote and on what topic, etc.

          The best show of respect is reading and using the things people have done.

          Calling an entire discipline “diseased” is basically an epitome of a status diss. I think we are just disagreeing on basic social conventions here.

          The academic retort to that kind of language is: “MIRI is defrauding impressionable youngsters, and using their money to subsidize a comfortable Bay Area lifestyle, traveling to conferences, and working on the kind of theory safely removed from any kind of evaluation, by peer or practice.”

          That’s a pretty serious status diss, right? But then I might say something like “it’s true, mainstream academics have some reservations about the alternative model MIRI presents, but we do like some of their stuff!” For example, I honestly think their functional DT paper is super neat.

          This second thing is what it feels to me you are doing here, while the first thing is what it feels to me they are doing.

          Phrasing matters — our good friends EY et al apparently are super bad at phrasing. I don’t think it’s any kind of disability, I think it’s simple ego.

          “Well, with some exceptions: the disciples are ever less than the master.”

          Hard eyeroll here. That this sort of language rolls off your tongue so easily is a cultural problem in the community I am talking about.

    • James Miller says:

      Yes, many of us academics notice the skulls in our industry.

      • Eva Candle says:

        More charitably, but to much the same effect, in the introduction to his Classical Algebraic Geometry: a Modern View, Igor Dolgachev reminds his readers:

        “How sad it is when one considers the impossibility of saving from oblivion so many names of researchers of the past who have contributed so much to our subject.”

        Without skulls there can be no evolution, can there? And the vast majority of skulls are not noticed, but rather are buried and lost, without ever being appreciated at all, aren’t they?

        This sobering academic reality is no secret (obviously); still, graduate students (and their professors too) commonly underappreciate it. “Every acolyte imagines themselves a prophet; every prophet imagines themselves a messiah.”

  51. Eric Zhang says:

    If I were an actor in an improv show, and the prompt was “person criticizing philosophers who’s never read any philosophy”

    Philosophers think they have some kind of deep wisdom inaccessible to the vast majority of people, and indeed, some of their ideas have changed the world, but it seems that the vast majority are obscure for the sake of being obscure, congratulating themselves on how very deep they are. They are, by and large, a field of intellectual tail-chasers, talking about things such as “the ontological phenomenology of being as a function of the unitary conception of the perception of industrial society from a social standpoint”, or any other incomprehensible laundry list of buzzwords. Even the more readable texts seem to be nothing but a kind of mental masturbation on topics such as “the true nature of justice” and “the transcendence of the substance of the soul” and whatnot.

    ——after reading some philosophy——

    Yep, that seems about right.

    • Philosophisticat says:

      It’s great that you made this mistake, because I think it’s instructive. What you describe is a stereotype of philosophers that looks accurate when you read the philosophy that you’re most likely to read as a nonphilosopher. The people who talk in the ways you describe are those in the postmodern and continental tradition, which is what people who are not philosophers are likely to read, precisely because the philosophy they get access to is likely to be selected for being exciting and profound-sounding. In real philosophy (at least in the Anglophone world), these traditions are a frequently disdained minority fighting for recognition. The proper stereotype is closer to “overly precise and pedantic nitpickers working on impractical problems like the correct semantics of epistemic modals.” (maybe something about mental masturbation still applies)

      The more general point here is that misleading stereotypes can get confirmed when the most salient examples of a tradition to outsiders are nonrepresentative. And I’m sure this effect happens with rationalism as well. “I hear that rationalists/feminists/etc. are X. Let me read some rationalists/feminists/etc. to see. *read the most salient nearby examples of rationalism/feminism/etc., which are selected for something other than representativeness of the group*. By golly, rationalists/feminists/etc. ARE X!” And of course, the fact that they actually look for examples gives people false confidence that their stereotype is grounded.

      • Eponymous says:

        I would just like to say that I love your name and avatar.

        • Philosophisticat says:

          It’s a portmanthree! I was quite pleased with it. And I used all my art skills to photoshop a Nietzstache on the cat.

    • Would you expect to be able to understand a random maths paper? Have you considered the possibility that you can’t understand random philosophical works because you need to go through a systematic process of building up background understanding and familiarity with terminology? The kind of systematic process that education is?

      • vV_Vv says:

        The more appropriate test is whether somebody with minimal familiarity with the discipline could write a paper good enough to fool the experts. Essentially it is a Turing test: if a random person with minimal domain expertise can fool the experts, then the experts are as expert as a random person, that is, their expertise is fake.

        In post-modern philosophy/social sciences/X studies it is definitely possible to write such a paper, as Alan Sokal demonstrated. In maths I don’t think it has ever been done, and I don’t expect it to be possible.

  52. Scott says:

    I thought this post was going to include a reference to the Mitchell and Webb skit, “Are we the baddies?“, but that’s kind of the opposite of the point being made.

  53. greghb says:

    I think there are some weak criticisms of rationalism because there are some mediocre ambassadors for rationalism. Even if the field has developed as a field, many individuals are still developing as individuals and haven’t noticed the skulls yet. Add to that some personality quirks and I think you basically have an explanation for the phenomenon.

    As for evidence I bring you… a synthesis of anecdotes!

    I’m not a member of any explicitly rationalist community, but I’ve read most of the canon, agree with and try to practice a lot of it, and (I’m pretty sure) can “pass” as rationalist in a social setting. Furthermore, from reading your blog and a few others, I know there are at least some rationalists who are super smart, and whom I’d love to be friends with and talk with at length. So, occasionally I go to rationalist meet-ups and see if I can meet interesting people. Sometimes I do, and it’s great.

    More often, though, I end up in conversations with people who I would describe as: high-ish IQ, very loud, often smelly*, fairly arrogant, socially graceless, not very worldly, and very into RATIONALITY. If nothing else, these people usually know a bunch of interesting facts, so I don’t really mind talking with them. But there’s no way that such people make good impressions on your average Joe, or even your average well-educated, high-ish IQ Joe. More likely, such people would recite loudly about RATIONALITY, possibly not really taking the care to check their own thinking or to give their interlocutor the respect of a sincere back-and-forth. Maybe they speak reverentially of Bayes’ Rule without explaining why it’s special. Maybe they’re in Chapman’s “rationalist ideologies as eternalism” phase. One way or another, they make a weak impression on behalf of rationality.
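
    (Why is Bayes’ Rule special? The standard base-rate illustration takes a few lines to run; a minimal sketch, with every number invented for the example:

    # Bayes' rule: even an accurate test yields mostly false positives
    # when the condition is rare. All figures below are made up.
    p_disease = 0.01            # prior: 1% of the population affected
    p_pos_given_disease = 0.95  # test sensitivity
    p_pos_given_healthy = 0.05  # false-positive rate

    p_pos = (p_pos_given_disease * p_disease
             + p_pos_given_healthy * (1 - p_disease))
    p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
    print(f"P(disease | positive test) = {p_disease_given_pos:.0%}")  # ~16%

    Most people guess something near 95%; the gap between that guess and 16% is the whole point.)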

    I bet for every 1 top-tier rationalist there are 10 people essentially as described above. Add to that the selection bias wherein the loud arrogant ones are the ones who have the most conversations with people outside the community, and there you go. If Cowen, Smith, et al. meet a handful of people like this, and don’t take the time to figure out which people are the more fully-developed thinkers, is it any wonder they form the opinions they have?

    *I don’t mean this with any malice and I hope it’s not a distracting observation. It’s an honest, common experience I’ve had. I’m not especially sensitive to bad smells.

  54. Zodiac says:

    After reading various comments here I have become more and more confused about what in the name of everything LessWrong is. Considering that the site itself tells me that it has changed significantly over the years can some of the Ealdormen of the rational community explain to an outsider what it started out as and what it became?
    I’m sorry if this seems incredibly lazy on my part but I can’t trust that reading through the site as it is right now is a good representation of how it was (actually, I’m pretty sure it wouldn’t be).

    • Nornagest says:

      Sometime around 3500 BC in Internet years, economist Robin Hanson founded a blog called Overcoming Bias, where he posted about heuristics and biases from an economics perspective. Most of this blog was his own work at the time, but he did and still sometimes does accept guest writers on it.

      Later on — let’s say around 29 AD in Internet years — Eliezer Yudkowsky began posting a series of articles on that blog on the theory and practice of human rationality. This series started attracting an audience of its own, different from the traditional cranky economist crowd, and was spun off into its own blog, Less Wrong, where the series continued; they were organized into lines of related posts, and therefore came to be known as “the Sequences”. The blog housing them was intended from the beginning to be a group project, but in practice it was Eliezer’s baby — partly because he held administrative rights and partly because in those days he was just ridiculously productive (the Sequences total about half a million words). It did attract more writers, though, among them our host, who’re collectively responsible for a number of secondary sequences and a much larger variety of standalone posts.

      This proved stable for a while, but eventually Eliezer moved his attention away from LW and toward other projects, most notably Harry Potter and the Methods of Rationality. While HPMoR was running, this formed something of an Eternal September period in LW’s history; the fanfic was driving more people than ever before to the blog, but there wasn’t much in the way of new content being created. Many of the old guard split off their own blogs at this time, including this one.

      Once HPMoR ended, there wasn’t much holding LW together but its history, and it went into a long decline. Other hubs sprang up to serve the community, of which this is probably the largest in terms of readership if not in terms of content. Much more recently, there’s been something of an attempt to revive LW, but I haven’t been following it all that closely.

      • Zodiac says:

        Was there a specific reason why the other writers couldn’t provide new content during the Eternal September? It seems to me that this was kind of the deciding moment between having one supercommunity vs. a network of different blogs and various personalities with similar values.

        • Viliam says:

          I believe that abusing the karma system by mass downvoting and creating sockpuppets played a role, although I am not sure how big that role was.

          Speaking for myself, it’s quite demotivating to know that there is an obsessed person on the website who can downvote any article or comment to oblivion, for reasons that have nothing to do with the content of given article or comment, and that there is nothing anyone can do about it. A few potential contributors became targets of the mass downvoting, so it didn’t make sense for them to try posting anything.

          It was a real-life scenario, where the Bayesians completely failed to coordinate against a single barbarian. 🙁

      • simon says:

        My memory is somewhat different. As I recall, when Overcoming Bias was created it was originally intended to be a group blog, with multiple contributors, Robin and Eliezer being only two of them. Eliezer became a major contributor, and continued posting on Overcoming Bias for a long time. Eventually Less Wrong was created as another group forum and Overcoming Bias became Robin’s personal blog. For the most part the sequences were written and posted on Overcoming Bias before Less Wrong was created. Many (all?) of the older Overcoming Bias posts were eventually ported over to Less Wrong, so it appears if you look at Less Wrong’s archives that it existed much further back than it actually did. (Check the older posts and you’ll see the older comments were not threaded – because they were ported from non-threaded OB posts – but newer comments made after the porting are threaded.)

    • Richard Kennaway says:

      I’m no Ealdorman of the community, but I’ve been reading LessWrong since its inception, and Overcoming Bias before that (but not since). So I find myself in a position to give an answer. I have no personal acquaintance with Eliezer and do not know how much of this he would assent to himself. But here is my personal interpretation of the history, as I have seen it. Corrections welcomed from those closer to the history than myself.

      Back in the day, Eliezer Yudkowsky and Robin Hanson started a joint blog, Overcoming Bias. It was about rationality, but their respective approaches diverged so much that it was not long before they split, Hanson retaining the OB name and Eliezer founding LessWrong to continue with his work. His ultimate motivation was to find people capable of working on what he saw as the vitally important subject of how to design really powerful artificial intelligences that will not promptly destroy us all. It is vitally important, because he foresaw that such machines will inevitably be built in the coming decades, and because designing them to not destroy us all as soon as they are turned on is a terrifyingly small target that cannot possibly be hit unless the task is set about with real understanding and meticulous accuracy, using mathematics that does not yet exist.

      The reasons that this would be the inevitable outcome go to the foundations of rationality itself. And so, finding that even people reckoned as really smart were unable to pursue reason in the way that it must go, he undertook what he saw as the necessary groundwork of presenting those foundations as best he could. This work is what is now referred to as “the Sequences”. It should be noted that with few exceptions, Eliezer does not claim originality for anything in the Sequences, but I think the synthesis itself is a great accomplishment.

      Many came to drink from this well, but few deeply; but this he expected. He spoke of “raising the sanity waterline”, a valuable thing in itself, but his greater motivation was to attract by this activity people who might be capable of contributing to the greater work of the Friendly AI Problem.

      And so it turned out, and he founded MIRI, and left LessWrong, whose continuing denizens say, “why does he no longer speak here?” And the reason is that he is pursuing his original work in another form, in another place. LessWrong has also spun off CFAR, which is likewise pursuing its own work of “raising the sanity waterline” elsewhere and by other means. Others with much to say on rationality-related subjects began their own blogs, of which Scott’s is one. Little remains of LessWrong, although since the beginning of this year there have been efforts to revive it. Time will tell.

      • Eva Candle says:

        Richard Kennaway summarizes the LW worldview: “Really powerful artificial intelligences will inevitably be built in the coming decades, and … designing them to not destroy us all as soon as they are turned on is a terrifyingly small target.”

        Well, there’s your problem right there.

        This concern would be weighty, in a LessWrong-world in which AIs operated by deductive reasoning from facts and sensor-data. But that is not how we biological minds work; neither is it how our most advanced and fast-evolving artificial minds work.

        Instead it turns out that general intelligences — both biological and computational — operate most effectively by non-deductive pattern-recognition, with said patterns encoded (abstractly) as varietal geometries that are realized (concretely) as neural nets.

        Ratiocination is one of several methods that are proving to be effective in sculpting cognitive varietal geometries, but the microscopic processes of cognition (both biological and artificial) are not themselves ratiocinative.

        Right or wrong, this post-rational view of cognitive processes — in which intelligence is associated both abstractly and concretely to varietal geometries rather than ratiocination — has become the transformatively dominant AI research and engineering paradigm, hasn’t it?

        Ditto for psychiatric medicine, needless to say! 🙂

        • coreyyanofsky says:

          Machine learning/AI is what I do to put food in my face, and I gotta tell you, I’ve never heard of “varietal geometries” in this context. The flavor of the month is “generative adversarial network” methods (which achieve some spectacular results).

          One of the main claims of AI safety researchers/advocates is that how domain-general artificial intelligence is achieved is besides the point. The point is that there’s every reason to expect that intelligence and goals are orthogonal and getting the goals right is hard. The paradigmatic fictional example here is the genie in the bottle that does what you said instead of what you actually wished.
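
          (For the record, the “adversarial” in the name refers to the training objective, not to the system’s disposition: Goodfellow et al.’s original formulation pits a generator G against a discriminator D in the minimax game

          \min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

          where D learns to tell real data from generated samples and G learns to fool it.)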

          • If you’re intent on having the things not try to kill you, then having “adversarial” in their name isn’t the most comforting thing.

          • Elmore Kindle says:

            New this week on the arXiv server is “Deep learning and quantum entanglement: fundamental connections with implications to network design”, by Yoav Levine, David Yakira, Nadav Cohen, and Amnon Shashua (out of Hebrew University of Jerusalem, arXiv:1704.01552).

            Not every article that intimately mixes the literature of “deep learning” with the literature of “quantum entanglement” is a good learning reference …but this one is (as it seems to me).

            In reading this literature, it is helpful to keep in mind that (to mathematicians) tensor networks are just classical varietal geometries from which lower-dimension subvarieties (namely, the tensor networks themselves) are “sculpted” by imposing further algebraic constraints.

            The algebraic sculpting serves to improve computational efficiency — commonly by factors of “big” 🙂 — and the details of the algebraic sculpting are where the physical intuition and/or AI heuristics enter.
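
            A minimal numpy sketch of the efficiency point (the shapes and bond dimension below are illustrative assumptions, not taken from the paper): a three-index tensor stored as a chain of small factors (a matrix product state) and contracted on demand.

            import numpy as np

            # Three-site matrix product state: the full 2x2x2 tensor is stored
            # as three small factors. For n sites the full tensor needs 2**n
            # numbers, while the factors grow only linearly in n.
            bond = 2
            a = np.random.rand(2, bond)          # site 1: (physical, bond)
            b = np.random.rand(bond, 2, bond)    # site 2: (bond, physical, bond)
            c = np.random.rand(bond, 2)          # site 3: (bond, physical)

            # Contract the shared bond indices to recover the full joint tensor.
            joint = np.einsum('ia,ajb,bk->ijk', a, b, c)
            print(joint.shape)   # (2, 2, 2)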

        • Richard Kennaway says:

          Instead it turns out that general intelligences — both biological and computational — operate most effectively by non-deductive pattern-recognition, with said patterns encoded (abstractly) as varietal geometries that are realized (concretely) as neural nets.

          None of that implies that the target of a machine that will not destroy us all is any less terrifyingly small a speck in the vast space of ways it can go wrong. Catastrophic, existential failure is the expected result if the basic problem is ignored. “But we’re not building it out of logic” makes the problem worse, not better. If you don’t understand how a machine vastly more intelligent than you works, you have no chance of making it do what you want. You will merely be matter that it has its own uses for.

          For a machine that can just sit there playing chess or turning Monet paintings into photographs, magical explanations do no harm except for muddying the thinking of the people they circulate among. For a machine that can outthink us by orders of magnitude, such thinking will not do. It is the sort of thinking that the Sequences are intended to address, by going right back to the foundations underlying all effective ways of dealing with the world.

          BTW, “varietal geometries”? My Google-fu only finds mentions of varietal geometry, dynamics, manifolds, and so on in advanced and somewhat speculative quantum mechanics.

        • Eva Candle says:

          Condensed from a Mathematics StackExchange question:

          Q  What is the difference between a variety and a manifold?
          A  Varieties are cut out of an ambient space as the zero loci of functions.

          In the mathematical literature, by far the most common varieties are algebraic varieties, because there exist deep theorems — a paradigmatic example is Chow’s theorem — to the effect that various broad classes of functions having certain desirable properties, necessarily are algebraic functions (possibly disguised by reparameterization, e.g. Boltzmann machines).

          Hence in respect to the mathematical literature, speaking of varietal geometry versus algebraic geometry is largely a matter of taste, with the second usage being (at present) more common.

          Andreas Gathmann’s free-as-in-freedom book-length on-line class-notes Algebraic Geometry, in the introductory chapter, surveys the implications of this mathematical worldview:

          In this introductory chapter we will explain in a very rough sketch what algebraic geometry is about and what it can be used for. We will stress the many correlations with other fields of research, such as complex analysis, topology, differential geometry, singularity theory, computer algebra, commutative algebra, number theory, enumerative geometry, and even theoretical physics.

          The goal of this chapter is just motivational; you will not find definitions or proofs here (and maybe not even a mathematically precise statement). In the same way, the exercises in this chapter are not designed to be solved in a mathematically precise way. Rather, they are just given as some “food for thought” if you want to think a little further about the examples presented here. …

          The geometric objects considered in algebraic geometry need not be “smooth” (i. e. they need not be manifolds). Even if our primary interest is in smooth objects, degenerations to singular objects can greatly simplify a problem (as in Example 0.3). This is a main point that distinguishes algebraic geometry from other geometric theories (e. g. differential or symplectic geometry).

          Of course, this comes at a price: our theory must be strong enough to include such singular objects and make statements how things vary when we pass from smooth to singular objects. In this regard, algebraic geometry is related to singularity theory which studies precisely these questions.

One way to structure further study (that works for me anyway) is to regard varietal geometries (in practice, algebraic varietal geometries) statically as powerful tools for representing objects, and dynamically as powerful tools for simulating dynamical systems. Here ‘representing objects’ includes ‘representing minds’, and ‘simulating dynamics’ includes ‘artificial cognition’.

          There is no shortage of literature associated to this worldview; for concrete calculations the textbook by David Cox, John Little, and Donal O’Shea, titled Ideals, Varieties, and Algorithms (2007), is one good start (among many).
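
To give a feel for what “concrete calculations” means here, a minimal sketch (mine, not from the book, and using Python’s SymPy, though the book itself is software-agnostic) of the book’s central tool, the Gröbner basis, which rewrites a polynomial system into a triangular form solvable one variable at a time:

from sympy import symbols, groebner

x, y = symbols('x y')

# The variety cut out by a circle and a parabola:
#   x^2 + y^2 - 1 = 0   and   x^2 - y = 0
polys = [x**2 + y**2 - 1, x**2 - y]

# A lexicographic Groebner basis eliminates x where possible,
# leaving a univariate polynomial in y.
G = groebner(polys, x, y, order='lex')
print(G)  # GroebnerBasis([x**2 - y, y**2 + y - 1], x, y, domain='ZZ', order='lex')

Solving y^2 + y - 1 = 0 and back-substituting into x^2 = y recovers the four intersection points (two real, two complex); this is the algebraic analogue of Gaussian elimination.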

          Needless to say, it’s perfectly feasible to “get started” in AI work — either as an applications programmer or (less ambitiously?) as a philosopher — without ever acquiring a cognitive capacity to appreciate this varietal/algebraic/geometric worldview … a worldview that (in practice) is much more than a set of facts from which researchers reason. The literature of varietal/algebraic geometry, though immensely rich, definitely isn’t easy, for beginning students especially. The introduction to Igor Shafarevich’s Basic Algebraic Geometry (2007) soberly advises

          The student who wants to get through the technical material of algebraic geometry quickly and at full strength should perhaps turn to Hartshorne’s book; however, my experience is that some graduate students (by no means all) can work hard for a year or two on Chapters 2-3 of Hartshorne, and still know more-or-less nothing at the end of it.

          On the other hand, for some AI development work (notably including proof assistants) a personal in-depth cognitive assimilation of this worldview is practically essential (as it seems to me anyway). As one example (among hundreds of diverse examples) see this month’s arXiv preprint “Algebraic Foundations of Proof Refinement”, by Jonathan Sterling and Robert Harper (arXiv:1703.05215).

          As a viable path forward, it is no bad strategy to pick problems that you care about — ranging from dystopian AI scenarios to psychiatric connectome problems — and rethink those problems in varietal geometric terms. There is no shortage of articles, books, software packages, and (most important) colleagues to help.

        • Machina ex Deus says:

          Right or wrong, this post-rational view of cognitive processes — in which intelligence is associated both abstractly and concretely to varietal geometries rather than ratiocination — has become the transformatively dominant AI research and engineering paradigm, hasn’t it?

          I am now gripped by a terrible fear that J. Sidles is the first super-human-level AI.

          • HeelBearCub says:

            Well, so much for worrying that AIs will be so convincing they can get us to do anything.

          • Elmore Kindle says:

Thinking machines require heteronyms too! 🙂

Autopsicografia (Autopsychography)

The poet is a feigner.
He feigns so completely
That he even feigns as pain
The pain he really feels.

And those who read what he writes
Feel, in the pain they read,
Not the two pains he has had,
But only the one they do not have.

And so, around its circular track,
To keep our reason entertained,
Runs that little wind-up train
That goes by the name of heart.

   — Fernando Pessoa

(various translations exist)

  55. Mark says:

    The problem is the name.

    What’s the name of truth?

    If you want a truthful movement you have to change its name every 3 months, reject any social cachet you’ve accrued, and force people to engage solely with the content.

  56. Tyrrell McAllister says:

    I like Spock.

  57. foo says:

    Can you point to examples of critiques of the rationalist community which have come from outside the community and which you think are worthy?

  58. darthennui says:

    I believe I was this annoying person until I stumbled upon this blog a few months ago. It made me see what rationalism is really about.
    But most people I’ve met calling themselves rational aren’t like that. Some of them just like to mock religion (and not even in a smart way). Some just use data taken out of context to justify their racist or sexist opinions.
    Now I know they’re wannabe rationalists at best, or maybe just using the word “rational” to sound more credible. But this might be where you get these bad opinions from.

  59. BBA says:

    Nobody’s given the Orton quote, so I will:

    You can’t be a rationalist in an irrational world. It isn’t rational!

  60. afirebearer says:

Talking about being self-taught autodidacts, does anyone know if Scott (or anyone else) has ever published a list of textbooks to read in order to become a good rationalist? I have seen lists on rationalism before, but they only focus on what it means to be rational. I am more interested in what kind of knowledge (or background) you need. For example, textbooks on economics, psychology, evolution, ethics, and so on.

    • Matt M says:

Aren’t the Sequences the sort of designated “thing you have to read before you can call yourself a rationalist”?

      • HeelBearCub says:

        To some people.

But actually, the fact that “Rationality from AI to Zombies” is the only thing people would generally point to as an answer to afirebearer’s question is probably a failure.

It’s frequently been asserted that much in the Sequences is repackaged from other sources. Having a list of those sources would have been prudent.

    • This has been done on Less Wrong, ask on an open thread.

  61. jhertzlinger says:

    One important potential failure mode to be braced for: What happens when rationality memes escape into the wild? In other words, what preposterous ideas will result from people who get their opinions from a third-hand rumor of rationality? Right now, this is too obscure for that to be a major danger but that might change.

  62. Barely matters says:

    *Not sure if this is showing up, did I trip a word filter?*

    I had the opportunity to read CFAR’s latest handbook thanks to Duncan’s offer in the comments above. I read it with a fairly critical eye, with a focus on figuring out exactly what is going on in the dynamic this post illustrates. My default frame for the whole read was “Rationality is pretty excellent, so why do we have so much trouble convincing people to adopt it?”

    If I could boil it down to the single most serious issue present throughout the whole body of work, it’s that CFAR seems to have a split focus between remedial work to get neuroatypicals up to speed with the baseline population in terms of life skills they may have missed, and optimization tactics to think better, faster, stronger, more accurately than the average bear. Both of these are excellent and useful goals. However, there is near zero overlap between the material that is relevant to each group.

    The examples used to illustrate the various techniques include trying to remember to take the stairs as a bit of exercise, learning to walk through crowds at the mall, avoiding eating a piece of cake, trying to climb trees more (while tolerating having sap on one’s hands), getting off the couch to go for a jog, and learning to “Feel like a badass” until they realized they were a “hardcore spirit warrior”.

    To the reader who can already interact with society, feed and dress themselves, pay their bills on time, motivate themselves to do things, and whose problems represent a higher bar than “Just get off the couch and show up”, this syllabus has vanishingly little of value. I’d be much more willing to buy in if more of the examples started with “When Anna was competing against 2000 people for a research position at NASA she…” than “When Joe was struggling to motivate himself to shave, put on pants, and leave the house he…”

    I don’t mean to be judgemental towards the people for whom this is useful advice, though if this is the demographic we’re teaching, we need to drop every hint of arrogance right this second. None of this ‘Can’t the muggles see that this is the better way?’, ‘Don’t they know we have an average IQ of 135?’, ‘Hardcore spirit warrior’, and ‘Systematized winning’ stuff.

    Essentially we need some honest signals of the accomplishments that the material has produced that are relevant to the audience we want to attract. The examples about Parkour were a great bucking of the trend, (which then got bogged down in some seriously questionable training advice. ‘If you want to get really good at climbing, just climb! Don’t lift weights or do squats, just climb!’… Ignoring that every serious athlete in the world squats and weight trains in addition to their sport specific training… ok, not 100%, competitive jockeys are a notable exception.) and we need more examples like that. Positions at NASA, Nobel Prizes, prestigious awards, Guinness records, championships… anything that demonstrates that these skills help one actually win in *competitive* systems. As much as he exaggerates absolutely everything, Tim Ferris is probably our best model here. He’s realized the key fact that if you’re going to propose something strange and you want people to listen, you’d better have some championship kickboxing trophies or world records to back up your claims. Otherwise people will (with good Bayesian evidence) write you off as a crank.

All in all I like the new handbook as a resource. It seems to have excised enough of the old strangeness that it’s only really noticeable if you’re already expecting it (references to “Summon Sapience” spells, recommendations to write “Grimoires” for yourself, unfortunate terms like “Murphyjitsu”). Its most glaring weakness right now is that it’s not quite honest about its focus, and outsiders can smell that from miles away. To fix this, either it needs to update its material and examples towards a higher-performing slant with demonstrable excellence, or tone down its rhetoric to match its remedial function.

    • Zodiac says:

      I can see your comment just fine.

    • Kaj Sotala says:

      While this is a fair characterization of the CFAR handbook, I think you might also be drastically underestimating the fraction of people who have surprising amounts of trouble with getting simple things done.

Also, I seem to be reading into your comment an assumption that “getting basic things done” and “getting advanced things done” are fundamentally different things and require fundamentally different tactics. I’m not sure I agree. I think that a lot of what’s involved in getting advanced things done is just systematically and consistently doing a lot of basic things until you get to an advanced level of capability, and that if you master the kinds of basic things the CFAR manual talks about, that will also allow you to use them to master the more advanced things.

      Analogy: if you don’t even know elementary school math yet, that means you’re a long way from contributing to cutting-edge math research. But much of what you need to do to get there is just basic stuff like study skills, motivation to study, etc., applied consistently and over an extended period of time.

      • Barely matters says:

        Oh, I don’t mean to overlook the population that can benefit from teachings on basic life skills. Like I say, this is a great and worthy goal. Those people are definitely under-served in terms of practical life advice that neurotypicals take totally for granted.

        I disagree that simply extending those life skills is going to help the people who are already proficient and looking to optimize.

I’m looking at this from the perspective of, say, a college-level athlete looking to make the professional league. He already has a coach and trainer, is already training and practicing consistently, and has his diet dialed in. He knows what his goals are, and the path it takes to get to them. By this point, study skills, time management, and self-motivation are givens. The problem is that there are 10,000 other people who also do these things that he needs to be better than in order to get to the next level. What do CFAR and Rationalism have to offer this person?

I felt like much of the concept of ‘optimizing’ ended up in practice being like responding to someone asking how to be a champion Formula 1 driver by telling them that it is important to get their license and learn to adjust their seat. The problem being that a) if they’re at the point of asking for high-level advice, they already know that, and b) the high-level steps of finding a sponsorship, joining a crew, and, y’know, the actual nuts and bolts of F1 racing *are* fundamentally different from remembering to send in your license application.

I wouldn’t recommend giving up either of these two functionalities in CFAR, but would definitely recommend more emphasis on the Optimizing part (which seems to me to be lacking in the ways that would lead to the quantifiable and relevant success demonstrating its efficacy), and putting a hard partition between it and the ‘up to baseline’ parts.

        • Kaj Sotala says:

          I should probably rephrase my original comment. It’s not that I think CFAR would necessarily have very much to offer to someone who is already “top tier”, particularly not if that person’s field is already a well-developed one with clear answers of how top performers do things. If somebody is such a person, then there’s no reason to assume that a set of domain-general thinking skills (which CFAR teaches) would be any more useful than the existing stocks of specialized knowledge that have been custom-tailored to be useful for mastering that domain.

          So when you ask:

I’m looking at this from the perspective of, say, a college-level athlete looking to make the professional league. He already has a coach and trainer, is already training and practicing consistently, and has his diet dialed in. He knows what his goals are, and the path it takes to get to them. By this point, study skills, time management, and self-motivation are givens. The problem is that there are 10,000 other people who also do these things that he needs to be better than in order to get to the next level. What do CFAR and Rationalism have to offer this person?

my answer would be “probably nothing”. This is a person who already has a set goal, a high level of motivation, and a focus in a well-developed field which has accumulated plenty of information about how best to master it and transfer that knowledge further.

Rather, what I meant by the basic skills being the same is that they are what you use to get from “person who has problems getting daily stuff done” to the kind of high-level performer you describe. Like you say, “study skills, time management, and self-motivation are givens”: that person wouldn’t have gotten to that point without that solid foundation to build on. It’s not that the person would stop using those skills, it’s that they need more stuff – specialized for their specific field – on top of them.

And if you’re trying to give a general education that’s useful for people no matter their field – like CFAR does, though they have run more specialized events before – then this is pretty much the best you can do, I think. If your workshop was attended by a cook, a literary criticism professor, an athlete, and a politician, what *could* you even offer that was useful for everyone, and not just about getting general competencies in shape?

Not much. So the best you can offer is skills that *allow* people to develop e.g. the kind of strong motivation and life habits that let them put their strength into acquiring stronger skills. CFAR’s 2016 case studies page has a bunch of anecdotes of CFAR having had such an effect; to pick one:

          Before coming to a CFAR workshop in July 2013, Benya had collaborated with MIRI on research while attending the University of Vienna. MIRI had discussed hiring her full-time, but she was very hesitant to do so because (for various hard-to-articulate reasons) the idea felt psychologically untenable to her. In a dialog with a CFAR staff member shortly after the July 2013 workshop, Benya was able to figure out why leaving her PhD program felt so bad (making use of CFAR techniques such as goal factoring). She realized that these downsides were fixable and then made plans to come work for MIRI which met her needs and felt tenable.

          Also CFAR’s stuff feels more valuable for dealing with domains which *aren’t* particularly well-developed and which are generally “messy”: likely because real life is inherently messy as well, so “general real life skills” necessarily have some transfer. That case study page has some anecdotes of CFAR-taught techniques having been useful in research, and their workshop testimonials page also has a CEO who mentions that the techniques helped him run his business better. (Though of course anecdotes are just anecdotes; they did run a more comprehensive study on the ways their workshops might have made an impact, but they recognize its limitations.)

          • Barely matters says:

            It seems I may have misjudged the demographic that CFAR is looking for! What you’re saying definitely makes sense though. Beyond the very basics, I’m not sure what you could offer.

            I think we have a somewhat different definition of “top tier”, if an athlete who plays for their school team nonprofessionally fits the bill. And if we have trouble finding material to offer even the aspirationally high tier, then I can see that it would be difficult to accumulate those prestigious testimonials to prove the teaching’s efficacy.

Does CFAR do community outreach? I’ve worked at several shelters that would hugely benefit from this kind of program, and wish beyond words that I had something like this to which I could refer many of the repeat patients I meet working on the ambulance, so that at least some of them could avoid being institutionalized (or worse) down the road. I recognize this role has a lot of potential for genuine, society-bettering good. I had been under the impression that this wasn’t the typical demographic the Rationalist community was seeking, though, even though these at-risk populations are definitely those most in need of this kind of training.

            I’ve certainly ended this exercise much more confused about the Rationalist community than before.

          • Kaj Sotala says:

            This would probably be a good time for me to disclaim that I don’t represent or speak for CFAR. I’ve offered my interpretation of what their workshops are good for, based on my experience of attending one of them back in 2014 and hanging around on their workshop alumni mailing list, but I don’t know if CFAR staff would agree with my interpretation! I’ll ping someone who’s actually officially affiliated and ask them to comment on whether my interpretation is totally different from theirs.

            I think we have a somewhat different definition of “top tier”, if an athlete who plays for their school team nonprofessionally fits the bill.

            Yeah, I should probably have clarified that. I meant “top tier” in terms of those basic competencies, since your example seemed to assume that there was nothing to improve on in the basic skills. I think that people can, and often do, get surprisingly far with major gaps in these kinds of basic competencies: some of the people who’ve reported getting a benefit out of CFAR workshops have been e.g. PhD students, which I’d count as the rough academic equivalent of playing in your school team nonprofessionally.

            Does CFAR do community outreach? […] I had been under the impression that this wasn’t the typical demographic the Rationalist community was seeking though, even though these at risk populations are definitely those most in need of this kind of training.

CFAR doesn’t, as far as I know. I’ve been building my own, CFAR-inspired volunteer network that runs workshops that are a little bit like theirs. Our explicit target demographic is closer to what you describe, but we’re currently only active in Finland. (Links for any Finns who might happen to read this: blog with articles, Facebook discussion group, Facebook group for our events in the Helsinki region.)

          • Barely matters says:

            Just a quick comment before running off to work for the day:

            I agree with and support everything in this post.

Bringing this kind of knowledge to vulnerable populations is something totally different than I had considered to be the goal here, and is absolutely fantastic. It seems like it would be a great jumping-off point for aspiring EAs who have time, but not a whole lot of money, to make the world better by helping locally. (And it would help displace the stranglehold that religious charities seem to have over the local social services and rehabilitation scene.)

            I appreciate your efforts!

          • Kaj Sotala says:

            Thank you. 🙂

        • tk17studios says:

          Note that the whole point of sharing out the handbook is to get some data on “people who see only the handbook” vs “people who see the handbook AND experience the workshop.”

          I’ll admit I had never noticed Barely Matters’s fairly accurate point regarding the level/difficulty/sophistication of handbook examples. The reason I had never noticed it is because in actual context, in the workshop, those examples are all highlighted as being intro, beginner-level examples to help participants build form with the technique (the idea being that you learn how to squat with no weight on the bar first, and you learn factoring and TAPs on small, trivial stuff first).

          The workshop content, and a significant fraction of our body of participants/alumni, and our staff of researchers, all do indeed explore exactly the “high-level” territory that Barely Matters was hoping to see explored.

In particular, the places where CFAR-style rationalism tends to provide the BIGGEST boost are those where questions are wide-open and adaptive (as opposed to specific and technical). I agree with Kaj that an elite athlete who already has access to resources and top-notch coaching shouldn’t expect to gain much from CFAR. But a researcher in a brand-new field filled with uncertainty, or a person trying to start a company for a service that doesn’t even exist yet, let alone have a market, or someone with equal capacity to become a high-level athlete or a world-class academic who is unable to choose and stay chosen … these are the sorts of places where clear thinking, deep seeing, and steady discernment make a real difference.

          • Zodiac says:

            Huh. I don’t know a single person in my direct or extended circle of acquaintances for which CFAR would provide big boosts.

    • Desertopa says:

      Speaking as someone who’s been around the rationalist community for a fairly long time and thinks that a lot of the most common criticisms are missing some fairly major points, I just want to register that I think this is a legitimate and fairly serious point of criticism.

    • Unnamed says:

      (This is Dan from CFAR)

      I’m curious – does it seem like this remedialness issue is mostly an issue with the techniques, or with the way that they are presented in the handbook?

      In other words, as you were reading the handbook was your experience more like:

      This technique seems pretty basic; it seems useful for remedial problems like the one in this example but not for people who are striving to accomplish big impressive things

      or

      This example seems pretty basic; this technique seems like it could be used to help accomplish big impressive things but that isn’t salient as I’m reading about this remedial application

      • Barely matters says:

        From my reading, I felt like it was much closer to the first of your examples.

The techniques seem fantastic at helping a student in the situation where it’s a given that they would be great at something if they would just show up and do it, but I didn’t find much that I could apply to people already making attempts.

Where I was expecting rationality to be useful was in finding new paths in competitive situations where there is an accepted orthodoxy. Think Moneyball or Tim Ferris’ experience with kickboxing. My experience with the rationalist community until now pointed to “we have found a better way of doing things” being the core of what the practice offers.

The trouble with focusing on the low-level bits is that other people have done each one better. Tony Robbins already has a much more detailed goal-setting system, time management has been covered in thousands and thousands of business management books in astonishing detail, and self-motivation techniques are already the core of the entire self-help industry. The third example here is the one with the most overlap: to be painfully honest, the manual reads like a basic self-help book with a minor SFBay nerd flavour.

        Duncan has told us above that this is mostly a quirk of the manual, and that it isn’t the case with the actual workshops themselves. I also understand that the manual isn’t meant to be advertising material for the company or the community. So if you have those techniques I’m talking about here deeper in the syllabus, perhaps leading with them and making them more visible to outsiders would do some good towards making the community more appealing to people who don’t have trouble with the tasks the handbook is addressing.

        • Unnamed says:

          Thanks for clarifying your take.

          My view is that the second perspective is much closer to correct – the techniques in the handbook are useful for doing big impressive things, even though they tend to be illustrated with mundane examples. But it seems hard for a person to distinguish between the two possibilities just by reading the handbook.

        • Barely matters says:

          Could you point me to some examples of CFAR instructors or alumni doing great things with their knowledge?

Anna Salamon doing machine learning research at NASA (taken from her staff bio) is a great example. I think seeing other examples in that vein would do a lot to open my eyes to the scaling power of these techniques.

          • rlms says:

            For Anna Salamon to be evidence, it has to be the case that she wouldn’t have done machine learning at NASA without CFAR knowledge. Plenty of people do really impressive things despite being deeply irrational.

          • Barely matters says:

            Fair point, I suppose.

At this point I’d accept circumstantial evidence of rationalists who just happen to also do great things, and would extend the benefit of the doubt to assume that CFAR knowledge played a role.

One of the cynical heuristics I often use for things unrelated to rationality is “Least Required Ability given the achievements we’ve seen”. This is usually what pings in situations where people make big claims, but get sheepish when asked to provide hard-to-fake evidence. In this case, I don’t see much interest from the CFAR crew who have posted here in providing examples at all. (Which isn’t necessarily evidence of a lack of examples, but certainly isn’t evidence for.)

  63. onyomi says:

    Seems to me that most criticism of most intellectual movements is shallow, stereotyped, strawmannish and doesn’t effectively grapple with the latest developments in the field, most of which are made in awareness of/in an attempt to address the common, shallower criticisms.

    That said, allow me to steelman the “shallow criticism” position…

    Sometimes, the assurances of the people in charge that they have noticed the skulls and adjusted their approach accordingly are not very reassuring. What if, for example, their fundamental premises are flawed: “yes, all our previous attempts to fundamentally reshape human nature have resulted in mass starvation, but we’ve got to keep trying, and we think we know where we went wrong the last three times!” The two answers no politician/philosopher/scientist ever wants to hear are: “no, you don’t have to keep trying to do that” and “no, we don’t want you (or people like you) to keep trying, given your past record.”

    Sometimes, the experts in a field are too invested in its fundamental premises to really question them, with the result that they come up with more complex or convoluted justifications for what may still be wrong (when the more “sophisticated” arguments become more convoluted, in fact, I think it is a good sign you are dealing with some fundamentally flawed premises which require ever more tortured logic not to jettison). Just because man-on-the-street only knows the vanilla criticism and not how to rebut the thousand fancy rebuttals experts have for the vanilla criticism doesn’t mean man-on-the-street is wrong.

I think this relates to the “bingo card” analogy: people heavily invested in some idea may say “oh brother, not the old x objection again…” and may, in some cases, have a point (after all, most people don’t research a thing before feeling qualified to criticize it). Yet it may also be a way to avoid dealing with the fact that you need to do more convincing/popularizing: I forget which, but there’s an LW post about how many academics, upon writing something they think is for, say, an undergrad audience, are later surprised to find that it is the most popular thing they ever wrote, even among their colleagues in the field. That is, the work of explaining and refuting what seem like weak objections is a lot more work than it seems, due especially to the typical mind fallacy.

    • cassander says:

I had this same issue with Scott’s post, actually. Just see how Marxists still put out the “that wasn’t real communism” line. Hell, if you want to see the transition happen in real time, just go to r/socialism and search for “Hugo Chavez”. Scott’s response, I would think, is the focus on epistemological humility as a core value.

    • Seems to me that most criticism of most intellectual movements is shallow, stereotyped, strawmannish and doesn’t effectively grapple with the latest developments in the field, most of which are made in awareness of/in an attempt to address the common, shallower criticisms.

How evitable is that? Does one stop making criticisms? Does one research everything deeply?

    • Brad says:

There’s generally ample incentive for young people in a field to question the fundamental premises or re-raise old discarded critiques in newly convincing ways. After all, the superstars of the next generation have to make their bones somehow, and detail work on someone else’s generation-old revolution isn’t going to get you there.

I think there’s a romantic desire for an outside autodidact to be able to come in and overturn a field gone astray based on his superior insight, but I’m skeptical that this is likely in the contemporary world.

      • onyomi says:

This is probably a big part of science proceeding one funeral at a time. True, cognitive decline probably starts to set in earlier than we think, and maybe, since most people make 0 big breakthroughs, >1 big breakthroughs is a lot to expect from any one brain; however, I think there’s also the problem of established researchers being highly motivated not to undermine their fundamental premises.

        Young up-and-comers do have an incentive to try to overturn paradigms, etc., which is probably a big part of why they are the ones who do so, though I think part of the reason this is less likely to happen in the contemporary world is not just greater complexity/sophistication/specialization within fields, but also certain aspects of contemporary academia (peer review, need for grants, etc.) tending to shut out anything too “out there” (which has both good and bad effects, of course).

    • abc says:

      I think this relates to the “bingo card” analogy: people heavily invested in some idea may say “oh brother, not the old x objection again…”

      The other issue is that people have a tendency to put objections on bingo cards as a substitute to actually addressing them.

Frequently the old objection was right when it was first made, and is still right.

    • Jiro says:

      Sometimes, the experts in a field are too invested in its fundamental premises to really question them, with the result that they come up with more complex or convoluted justifications for what may still be wrong (when the more “sophisticated” arguments become more convoluted, in fact, I think it is a good sign you are dealing with some fundamentally flawed premises which require ever more tortured logic not to jettison). Just because man-on-the-street only knows the vanilla criticism and not how to rebut the thousand fancy rebuttals experts have for the vanilla criticism doesn’t mean man-on-the-street is wrong.

      This is worth repeating.

      Pretty much any belief system that has been around for a while is going to accrete “explanations” of common criticisms. If you try telling a homeopath that his “drug” contains no molecules of the active ingredient, he’s going to mutter something about patterns of water molecules. Giving a creationist examples where evolution is observed will result in him telling you that you’ve only demonstrated microevolution, not macroevolution. Holocaust deniers can tell you why the estimates of deaths in concentration camps are wrong.

      If you require that outsiders understand the rebuttals to common criticisms, you allow insiders to filibuster outsiders forever.

  64. abc says:

    There have been past paradigms for which some of these criticisms are pretty fair. I think especially of the late-19th/early-20th century Progressive movement. Sidney and Beatrice Webb, Le Corbusier, George Bernard Shaw, Marx and the Soviets, the Behaviorists, and all the rest. Even the early days of our own movement on Overcoming Bias and Less Wrong had a lot of this.

    But notice how many of those names are blue. Each of those links goes to book reviews, by me, of books studying those people and how they went wrong. So consider the possibility that the rationalist community has a plan somewhat more interesting than just “remain blissfully unaware of past failures and continue to repeat them again and again”.

    So when can we expect your repudiation of Peter Singer? Before or after his followers kill a large number of people in the name of maximizing utility?

  65. abc says:

    But they didn’t predict the housing bubble, they didn’t predict the subprime mortgage crisis, and they didn’t predict Lehman Brothers.

    (..)

    During the last few paradigm shifts in economics, the new guard levied these complaints against the old guard, mostly won, and their arguments percolated down into the culture as The Correct Arguments To Use Against Economics. Now the new guard is doing their own thing – behavioral economics, experimental economics, economics of effective government intervention. The new paradigm probably has a lot of problems too, but it’s a pretty good bet that random people you stop on the street aren’t going to know about them.

When did this revolution happen? These failed predictions are pretty recent. Also, the economic predictions since then aren’t exactly inspiring.

    • Elmore Kindle says:

      Excerpt from economist (and farmer) Wendell Berry’s 41st Jefferson Lecture in the Humanities, “It All Turns on Affection” (2012, text here, video here; the video requires a flash player).

      The term “imagination” in what I take to be its truest sense refers to a mental faculty that some people have used and thought about with the utmost seriousness.

      The sense of the verb “to imagine” contains the full richness of the verb “to see.” To imagine is to see most clearly, familiarly, and understandingly with the eyes, but also to see inwardly, with “the mind’s eye.” It is to see, not passively, but with a force of vision and even with visionary force.

      To take it seriously we must give up at once any notion that imagination is disconnected from reality or truth or knowledge. It has nothing to do either with clever imitation of appearances or with “dreaming up.”

      Imagination does not depend upon one’s attitude or point of view, but grasps securely the qualities of things seen or envisioned.

      Modern neuroscience affirms that economic theories grounded exclusively upon ratiocination sensu stricto, and consequently devoid of imagination in Berry’s sense, can never “grasp securely the qualities” of human economic activity, isn’t that right?

      Strictly on the evidence — the neuroscientific evidence — isn’t Berry entirely correct in affirming, that human economic activity fundamentally “turns on affection”?

The OP affirms: “We [rationalists] are almost certainly still making horrendous mistakes that people thirty years from now will rightly criticize us for.”

      In summary, one crucial and ongoing “horrendous mistake” (in the OP’s phrase) of the rationalist community is its far-too-slow and far-too-reluctant extension of rationalist economic doctrines to encompass the (very real and crucially important) affective foundations of human economic practices.

      At least, isn’t this a logical corollary of Wendell Berry’s (above-linked) 2012 Jefferson lecture “It All Turns on Affection”?

      To ask this question another way, how might rationalist economics evolve so as to address Wendell Berry’s affective concerns and values, while still retaining ratiocination as a core component of economic analytic and diagnostic practice?

      Ditto for psychiatric analytic and diagnostic practice?

  66. Jaskologist says:

    I hope everybody has been reading this in the same way as Yes, we have no bananas.

    If you weren’t, I hope you are now.

    • Elmore Kindle says:

I hope some folks are reading this in the same way as a poem on hope:

      It is hard to have hope.
      It is harder as you grow old …
      When the people make dark the light within them,
the world darkens.

      If you aren’t, I hope you conceive no reason to belittle those who are.

  67. p duggie says:

    “Broad is the path that leads to destruction, and littered with skulls it is but we can pick our way forward nonetheless, for many find it.”

  68. Ransom says:

My main criticism of the “rationalist community”, at least as I’ve seen it while lurking here on SSC, is that it’s not really different from any other essentially academic community. The only distinctives I’ve seen are the strong commitments to utilitarianism and Bayesianism, and the strength of the commitment to these seems a bit anti-rational (a bit tribal, maybe?) to me. I think people here know the standard problems with utilitarianism. The Bayesianism is fun, but in the examples I’ve seen it’s really no help in “real world” settings. Bayes’ theorem is, of course, genuinely useful in highly restricted, highly technical (hard science, Markov-ish) settings. But perhaps people can give me some counter-examples?

    I want to say, along with many others above, that this is the best forum I’ve encountered. I frequently lurk just to read the intelligent comments.

    • Chrysophylax says:

      The biggest instrumental benefit I’ve gained from rationalism is that I think more often and more skillfully about how I’m thinking. You would be amazed by how much benefit you can get just from paying attention to whether your trains of thought are productive.

      I recommend reading the Sequences. It’s hard to list all of the ways that reading them has made me smarter and happier, but the effects are real.

      As an example, I just had a difficult conversation with my mother. I noticed that I was complaining about something that wasn’t worth the emotional cost (1), but failed to stop making the error (2), causing her to become upset about her problems (which are worse than mine). But I then managed to help with her problems using three different concepts I learned from Eliezer (5), and avoided another mistake the Sequences warn against (6). That’s five useful insights used, plus another one I noticed but didn’t use properly, in one telephone call. And those are just the ones I noticed myself thinking about!

      A standard criticism is that we don’t meet the “shockingly high standard of being so incredibly, unbelievably rational that you actually start to get things right, as opposed to having a handy language in which to describe afterwards everything you’ve just done wrong”. But we do at least have the handy language! We make progress! The difference from other academic communities is mostly in the rate of improvement, not the raw quality; and having tried both, I find it makes a huge difference. There’s nothing quite like talking to someone who reliably, unhesitatingly admits that they’re wrong. It’s so much more fun!

    • Vidur Kapur says:

People know about the standard objections to utilitarianism, but many also believe that those objections have been adequately addressed or refuted. So, I wouldn’t say that a strong commitment to utilitarianism is necessarily anti-rational.