My Id On Defensiveness

I.

I’ll admit it – I’ve been unusually defensive lately. Defensive about Hallquist’s critique of rationalism, defensive about Matthews’ critique of effective altruism, and if you think that’s bad you should see my Tumblr.

Brienne noticed this and asked me why I was so defensive all the time, and I thought about it, and I realized that my id had a pretty good answer. I’m not sure I can fully endorse my id on this one, but it was a sufficiently complete and consistent picture that I thought it was worth laying out.

I like discussion, debate, and reasoned criticism. But a lot of arguments aren’t any of those things. They’re the style I describe as ethnic tension, where you try to associate something you don’t like with negative affect so that other people have an instinctive disgust reaction to it.

There are endless sources of negative affect you can use. You can accuse them of being “arrogant”, “fanatical”, “hateful”, “cultish” or “refusing to tolerate alternative opinions”. You can accuse them of condoning terrorism, or bullying, or violence, or rape. You can call them racist or sexist, you can call them neckbeards or fanboys. You can accuse them of being pseudoscientific denialist crackpots.

If you do this enough, the group gradually becomes disreputable. If you really do it enough, the group becomes so toxic that it becomes somewhere between a joke and a bogeyman. Their supporters will be banned on sight from all decent online venues. News media will write hit pieces on them and refuse to ask for their side of the story because ‘we don’t want to give people like that a platform’. Their concerns will be turned into bingo cards for easy dismissal. People will make Facebook memes strawmanning them, and everyone will laugh in unison and say that yep, they’re totally like that. Anyone trying to correct the record will be met with an “Ew, gross, this place has gone so downhill that the [GROUP] is coming out of the woodwork!” and totally ignored.

(an easy way to get a gut feeling for this – go check how they talk about liberals in very conservative communities, then go check how they talk about conservatives in very liberal communities. I’m talking about groups that somehow manage to gain this status everywhere simultaneously)

People like to talk a lot about “dehumanizing” other people, and there’s some debate over exactly what that entails. Me, I’ve always thought of it the same way as Aristotle: man is the rational animal. To dehumanize them is to say their ideas don’t count, they can’t be reasoned with, they no longer have a place at the table of rational discussion. And in a whole lot of Internet arguments, doing that to a whole group of people seems to be the explicit goal.

II.

There’s a term in psychoanalysis, “projective identification”. It means accusing someone of being something, in a way that actually turns them into that thing. For example, if you keep accusing your (perfectly innocent) partner of always being angry and suspicious of you, eventually your partner’s going to get tired of this and become angry, and maybe suspicious that something is up.

Declaring a group toxic has much the same effect. The average group has everyone from well-connected reasonable establishment members to average Joes to horrifying loonies. Once the group starts losing prestige, it’s the establishment members who are the first to bail; they need to protect their establishment credentials, and being part of a toxic group no longer fits that bill. The average Joes are now isolated, holding an opinion with no support among experts and trend-setters, so they slowly become uncomfortable and flake away as well. Now there are just the horrifying loonies, who, freed from the stabilizing influence of the upper orders, are able to up their game and be even loonier and more horrifying. Whatever accusation was leveled against the group to begin with is now almost certainly true.

I have about a dozen real-world examples of this, but all of them would be so mind-killing as to dominate the comments to the exclusion of my actual point, so generate them on your own and then shut up about them – in the meantime, I will use a total hypothetical. So consider Christianity.

Christianity has people like Alvin Plantinga and Ross Douthat who are clearly very respectable and key it into the great status-conferring institutions like academia and journalism. It has a bunch of middle-class teachers and plumbers and office workers who go to church and raise money to send Bibles to Africa and try not to sin too much. And it has horrifying loons who stand on street corners waving signs saying “GOD HATES FAGS” and screaming about fornicators.

Imagine that Christianity suffers a sudden, total, dramatic collapse in prestige, to the point where wearing a cross becomes about as socially acceptable as waving a Confederate flag. The New York Times fires Ross Douthat, because they can’t tolerate people like that on their editorial staff. The next Alvin Plantinga chooses a field other than philosophy of religion, because no college would consider granting him tenure for that.

With no Christians in public life or academia, Christianity starts to seem like a weird belief that intelligent people never support, much like homeopathy or creationism. The Christians have lost their air support, so to speak. The average college-educated individual starts to feel really awkward about this, and they don’t necessarily have to formally change their mind and grovel for forgiveness, they can just – go to church a little less, start saying they admire Jesus but they’re not Christian Christian, and so on.

Gradually the field is ceded more and more to the people waving signs and screaming about fornicators. The opponents of Christianity ramp up their attacks that all Christians are ignorant and hateful, and this is now a pretty hard charge to defend against, given the demographic. The few remaining moderates, being viewed suspiciously in churches that are now primarily sign-waver dominated and being genuinely embarrassed to be associated with them, bail at an increased rate, leading their comrades to bail at an even faster rate, until eventually it is entirely the sign wavers.

Then everybody agrees that their campaign against Christians was justified all along, because look how horrible Christians are, they’re all just a bunch of sign-wavers who have literally no redeeming features. Now even if the original pressure that started the attack on Christianity goes away, it’s inconceivable that it will ever come back – who would join a group that is universally and correctly associated with horrible ignorant people?

(I think this is sort of related to what Eliezer calls evaporative cooling of group beliefs, but not quite the same.)

In quite a number of the most toxic and hated groups around, I feel like I can trace a history where the group once had some pretty good points and pretty good people, until they were destroyed from the outside by precisely this process.

In Part I, I say that sometimes groups can get so swamped by other people’s insults that they turn toxic. There’s nothing in Part I to suggest that this would be any more than a temporary setback. But because of this projective identification issue, I think it’s way more than that. It’s more like there’s an event horizon, a certain amount of insulting and defamation you can take after which you will just get more and more hated and your reputation will never recover.

III.

There is some good criticism, where people discuss the ways that groups are factually wrong or not very helpful, and then those groups debate that, and then maybe everyone is better off.

But the criticism that makes me defensive is the type of criticism that seems to be trying to load groups with negative affect in the hopes of pushing them into that event horizon so that they’ll be hated forever.

I support some groups that are a little weird, and therefore especially vulnerable to having people try to push them into the event horizon.

And as far as I can tell, the best way to let that happen is to let other people load those groups with negative affect and do nothing about it. The average person doesn’t care whether the negative affect is right or wrong. They just care how many times they see the group’s name in close proximity to words like “crackpot” or “cult”.

I judge people based on how likely they are to do this to me. One reason I’m so reluctant to engage with feminists is that I feel like they constantly have a superweapon pointed at my head. Yes, many of them are very nice people who will never use the superweapon, but many others look like very nice people right up to the point where I disagree with them in earnest, at which point they vaporize me and my entire social group.

On the other hand, you can push people into the event horizon, but you can’t pull them in after you. That means that the safest debate partners, the ones you can most productively engage, will be the people who have already been dismissed by everyone else. This is why I find talking to people like ClarkHat and JayMan so rewarding. They are already closer to the black hole than I am, and so they have no power to load me with negative affect or destroy my reputation. This reduces them to the extraordinary last resort of debating with actual facts and evidence. Even better, it gives me a credible reason to believe that they will. Schelling talks about “the right to be sued” as an important right that businesses need to protect for themselves, not because anyone likes being sued, but because only businesses that can be sued if they slip up have enough credibility to attract customers. In the same way, there’s a “right to be vulnerable to attack” which is almost a necessary precondition of interesting discussion these days, because only when we’re confronted with similarly vulnerable people can we feel comfortable opening up.

IV.

But with everybody else? I don’t know.

I remember seeing a blog post by a moderately well-known scholar – I can’t remember who he was or find the link, so you’ll just have to take my word for it – complaining that some other scholar in the field who disagreed with him was trying to ruin his reputation. Scholar B was publishing all this stuff falsely accusing Scholar A of misconduct, calling him a liar and a fraud, personally harassing him, and falsely accusing Scholar A of personally harassing him (Scholar B). This kinda went back and forth between both scholars’ blogs, and Scholar A wrote this heart-breaking post I still (sort of) remember, where he notes that he now has a reputation in his field for “being into drama” and “obsessed with defending himself” just because half of his blog posts are arguments presenting evidence that Scholar B’s fraudulent accusations are, indeed, fraudulent.

It is really easy for me to see the path where rationalists and effective altruists become a punch line and a punching bag. It starts with having a whole bunch of well-publicized, widely shared posts calling them “crackpots” and “abusive” and “autistic white men” without anybody countering them, until finally we end up in about the same position as, say, Objectivism. Having all of those be wrong is no defense, unless somebody turns it into such. If no one makes it reputationally costly to lie, people will keep lying. The negative affect builds up more and more, and the people who always wanted to hate us anyway because we’re a little bit weird say “Oh, phew, we can hate them now”, and then I and all my friends get hated and dehumanized, the prestigious establishment people jump ship, and there’s no way to ever climb out of the pit. All you need for this to happen is one or two devoted detractors, and boy do we have them.

That seems to leave only two choices.

First, give up on ever having the support of important institutions like journalism and academia and business, slide into the black hole, and accept decent and interesting conversations with other black hole denizens as a consolation prize, while also losing the chance at real influence or at attracting people not already part of the movement.

Or, second, call out every single bad argument, make the insults and mistruths reputationally costly enough that people think at least a little before doing them – and end up with a reputation for being nitpicky, confrontational and fanatical all the time.

(or, as the old Tumblr saying goes, “STOP GETTING SO DEFENSIVE EVERY TIME I ATTACK YOU!”)

I don’t know any third solution. If somebody does, I would really like to hear it.


623 Responses to My Id On Defensiveness

  1. discursive2 says:

    Here’s a 3rd alternative: surrender your identity labels. Why identify as a [rationalist / …]? Why organize a tribe?

    There are social advantages to having a tribe, as opposed to having a loose, illegible, fluid network of friends + fellow ideators. But do they outweigh the disadvantages?

    Pros:
    -You can give the secret [rationalist] handshake and instantly have a connection w/ someone anywhere in the world (but as rationalism succeeds, that secret handshake’s value gets increasingly debased)
    -Easier to organize collective action if people think they are part of a collective, via “hey we’re just some people who read SSC + think the Methods of Rationality is a great fanfic” (but easier for others to organize collective opposition as well)

    Cons:
    -Must defend identity against external attack
    -Can alienate generally-compatible people who don’t fit exactly into the tribe
    -Identity gets associated with certain beliefs ([utilitarianism / ai risk / cuddling / …]) preventing completely rational evaluation and discourse on those beliefs. This is a big one; what if you start having doubts about one of the core tribe tenets? Do you have to lose all your friends?

    From a moral standpoint, tribe formation seems dubious as well. It requires the creation of an outgroup (otherwise, what’s the point?) and derives much of its value-creation via building an exclusionary trust framework. 10,000,000 years in the future, assuming humanity survives and evolves, are people going to look back at things like saying “I’m a rationalist”, “I’m a Christian”, “I’m a Mason”, etc. as ethically enlightened or ethically degenerate?

  2. agof says:

    but what about limited rationality?

  3. Vealie says:

    This post has made me realise that I’ve been emitting Hawking radiation for most of my life.

    Most of my defenses of contrarian views I hold have been a mixture of debating the facts & issues at hand as best I can, but the more important approach has been to signal something to the effect of ‘Yeah I live in a black hole, what of it?’

    Agree and Amplify. Ask them to define the slur they’re throwing at you and you’ll tell them if you are or aren’t that thing.

    Unfortunately this strategy is downright awful when miscreants are talking about you rather than to you so probably not applicable to your blog or someone with an audience.

  4. Jules.LT says:

    How about “make an easy to use central repository of answers, and point people to it”?
    I’m afraid LessWrong doesn’t count, but it’s a start 😛

    • Nornagest says:

      There’s a wiki attached to LW that was supposed to be this. It was poorly supported even in the site’s heyday, though; I suspect because a lot of LW‘s self-image was invested in being the intrepid mavericks pushing out the borders of philosophy, and curating a wiki page is about the least intrepid thing imaginable.

      I’m not even sure it would have been a good idea if successful. I don’t like Bingo cards, and what you’re describing is essentially a Bingo card scaled up to a medium-sized database.

  5. Shenpen says:

    Scott,

    The weird part is that you are almost saying Moldbug is right, at least in the basics – there is a “Cathedral” (“prestigious *establishment* people”), they wield a lot of informal power, they can make or destroy any group’s or belief’s reputation, and therefore, of course, they can control what beliefs academia will teach and journos will publicize (they all care about their reputation a lot, and will not teach or publish ideas that would kill it), and therefore what beliefs the average voter will believe, and therefore what policies the politicians will implement. Hence, there is a clear, albeit long, loop of power going from “prestigious establishment people” through voters to actual politics. In practice, you have an aristocracy / oligarchy of “prestigious establishment people” and no true democracy.

    (I say “you”, because I believe it is less so in Europe – our “Cathedral” was not able to destroy the reputation even of its outspoken enemies like Nigel Farage. Farage is generally seen as a cool funny contrarian guy, “LOL that damp rag joke was superb”, not the horrible monster our “Cathedral” would like him to be seen as.)

    • Simon says:

      Well, both Antonio Gramsci and Theodor W. Adorno observed the same phenomenon when they railed against “cultural hegemony”, but from the radical left, back in the 1930s. I’m also pretty sure they were not the first people to notice how the “official press” sets the agenda for public thought, so much as the first people to give it a catchy name.

      • houseboatonstyx says:

        A catchy name is very powerful, especially when the language needed a neat way to refer to something. Soon after Occupy Wall Street began, even their opponents were calling themselves “the 1%”.

  6. mobile says:

    I started reading Scott’s blog after it was cool, but I’ll keep reading after it’s not.

  7. James says:

    I believe you’ve left out one viable option: Fission. If done loudly enough, and if the right people are selected, you can break off into a new group that retains most of the core beliefs of the original group (assuming you can identify that core), while building a situation where you can tell your opponents “I understand your points, and agree with many of them–but those people over THERE are the ones that believe the crazy stuff.” The high-profile, well-connected folks are there for a reason, after all, and if enough of them, plus regular Joes, get together they can establish their own group, with the added benefit that most of the crazy folks have already been identified and removed from the available folks that can constitute the new group.

    Think the Stonecutters in The Simpsons: once you’ve identified toxic folks within the group, and it’s clear that the group is approaching that event horizon, you can break off into a new group with a big sign saying “No Homers”.

    To grossly over-simplify, a historical example would be Protestantism (I’m hoping this is far enough back in time to avoid mind-killing). People believed the Church was doing deplorable things, broke ties with the Church, and formed their own churches. They were different enough in substantive enough ways to allow them to dissociate themselves from any criticism of the procedures of the Roman Catholic Church, while retaining what they considered the core doctrines.

    The key, of course, is to do this before hitting the event horizon. Once you’ve gotten to a certain point (the KKK and Nazis spring to mind) association is too damaging to recover from. The other key is to make absolutely clear that you are not associating with that old group anymore; SOME of your beliefs are the same, but you agree that the old group has become very toxic, often in precisely the ways that the critics said.

    • Anthony says:

      This seems to be practical for “Effective Altruism”, but much less so for “rationalism”.

  8. Isaac says:

    Maybe this has been addressed, but I’m not going to read through 513 comments to find out.

    The fundamental question of the post is whether or not to engage with people willing to sling negative affect at you.

    A lot of the responses are about letting the content stand for itself – e.g., primarily don’t engage. Only talk to, in this case, the feminists, if necessary to rebut criticism, and even then only lightly.

    This is the wrong approach, because it assumes that engaging in discussions with a GUNA (group that uses negative affect) causes or helps cause an increased probability of their ire being turned your way. This isn’t necessarily true. In fact, the tendency might run the other way. If all they see of you is the horrible sign-wavers, they’ll start pointing the warhead your way. As long as you’re more likely to inspire a positive response than the average member of your group, you should be engaging.

    The actual issue going on here is jumping from “This group might utterly destroy me” to “I shouldn’t engage with this group”, without looking at what will actually help.

    The government might utterly destroy you, e.g., lock you in jail, but that’s no reason to stop talking to anyone in the government. Not talking to anyone in the government would be quite counterproductive, and the situation is the same here.

    I rather doubt anyone will ever read this, but that’s my view.

    • Matthew O says:

      Hey, cheer up! Some people do read to the end of the comments thread several days after the initial posting, ya know!

      (Now, will anyone ever read *this* comment?….)

    • Aegeus says:

      You need to talk to the government, but at the same time, every lawyer will tell you not to talk to the police, because anything you say will simply help them build a case.

      Yes, if you think you’ll get positive responses, you should engage. But if you are in the middle of a heated argument where terms like “crackpot” are getting flung at you, odds are that you aren’t inspiring a positive response in anyone. Back off, and re-engage at a time and place where people are open to thinking you’re not a crackpot.

  9. John says:

    The Harper’s writer who wrote that article about Rationalism approached Steve Sailer and asked if he was part of the community. Sailer confessed that he had never heard of it.

    This was an attempt to demonize Rationalism, since everyone knows Sailer is a terrible racist.

  10. Christopher says:

    It’s all about the Fairsly Difference.

    The thing is, going back to that story about knowing when the Soviets really are setting up gulags and covering up famines, some groups really ARE monstrous and/or irrational and don’t deserve a seat at the grown-up table.

    Nazis, for an uncontroversial example.

    To me, the problem is when people conclude “Treating this person as a rational being open to argument is not just a waste of time, it’s actively detrimental.”

    Maybe I personally don’t want to sit and argue with every single Nazi I ever come across about what was wrong with rounding Jews up into concentration camps. I still believe there is a moral and rational case against Nazism that I could defend, and that other people can make better. And I don’t believe that making that case is bad.

    I see a lot of argument now that goes “Our opponents are so nasty, so intransigent, and so irrational that trying to argue with them only lends them a legitimacy that they don’t deserve. Our strategy must be to shout them down, because rational debate serves no purpose here.”

    I don’t agree with that stance, no matter who it’s applied to.

  11. I’d like to propose a third way to deal with this:

    Join the critics in standing against the thing they are against. In other words, be willing to adapt and change, and be willing to call out some members of the GROUP as not fully identifying with the GROUP – as being part of the OLD GROUP or the FUNDAMENTALIST GROUP, as distinct from the NEW GROUP or the BUZZ-WORD-ADJECTIVE GROUP.

    For example, Christians have done this forever. As a liberal Christian living in a liberal area, I am constantly engaging with people who have very negative ideas about Christianity and associate it exclusively with conservatives, and often with hate groups. I deal with this by distinguishing myself from conservative Christians, and by creating a new category that doesn’t support the disapproved-of ideas: liberal Christianity.

    This way has the side benefit of allowing the GROUP to be flexible enough to stay engaging with culture as it changes, even for very long time spans.

  12. Azure says:

    The third solution is something you’ve been doing an admirable job of all the time. Nobody gets to pile negative affect on anyone. I don’t agree with Fundamentalist Christians but I certainly don’t put up with people treating them as an undifferentiated wall of hatefulness and caricaturing them. In the same way, the Howling Mob trying to get people fired shouldn’t be acceptable to anyone, however you feel about their target.

    You can get a good bit of credibility by being as willing to stand up for the Social Justice movements when people just layer nastiness on them instead of addressing them as you are to call members of the social justice movement out for bad behaviors. (And social justice and feminism should, at least, be particularly amenable to this kind of action, since they’re marginalized groups themselves and work to undo systematic marginalization and de-humanizing of folks.)

  13. imuli says:

    Or, third, prepare for the event and build a movement with the same essence but with a different presentation, perhaps just a different name (and take the chance to fix things that are hard to fix in a living movement). So when it’s time to jump ship, we all have somewhere to go?

  14. mtraven says:

    Imagine that Christianity suffers a sudden, total, dramatic collapse in prestige, to the point where wearing a cross becomes about as socially acceptable as waving a Confederate flag.

    This is a weird example (possibly I’m missing some dry irony). Waving a Confederate flag was very acceptable until very recently (among a subset of the population — maybe you were only talking about what educated non-southern elites think?) That it is suddenly less acceptable is the result of exactly the kind of political outgrouping that you are talking about. And for the most part we* think that’s a good thing, that that group and that flag stand for things that ought to be toxic. It is an ongoing war to make them be toxic. It’s a triumph of our political culture that they are now toxic.

    Political warfare can be ugly. But given that it is a stand-in for real conflict, what do you expect? When there is a conflict, the choices are to pick a side, or to sit on the sidelines wringing your hands about how awful it is and why can’t those people just settle their differences rationally. Or (but this is very hard) to try to broker peace and reconciliation between conflicting factions in an inclusive manner rather than by outgrouping. This requires a mix of rationality and sainthood. I admire people who can do that, but I don’t personally have that level of saintliness.

    So, is your problem with the whole process of ruling some groups and ideas out of bounds, or with the fact that it is being overapplied to groups and ideas that you think deserve more tolerance?

    *the use of “we” is reflexively problematic. The whole point of toxicity is to declare certain ideas so bad that to hold them places you outside the boundaries of “we”. I’m not entirely sure who reads this, but I assume most of you are people who would never think of waving a confederate flag. Or maybe I am just talking to that subset.

    • Jiro says:

      The Confederate flag is another example of SJWs taking over.

      In the news recently was the Dukes of Hazzard being pulled from TV Land. That’s because it has a prominent Confederate flag. Yet at the same time, the flag is clearly being used in a non-racist way. In other words, it’s being pulled because it contradicts the narrative.

      The only people who can’t be hurt by accusations of racism are people who are already obviously racist enough that being called racist has no further effect. This makes calling it racist largely self-fulfilling, regardless of whether it was before or not.

      (And yes, I know about its use in the 50s as a symbol for racism. I highly doubt most of the people who would otherwise use it now were alive and old enough to be politically savvy in the 1950s.)

      • mtraven says:

        You are out of your mind if you think that the connotations of the confederate flag (for both sides) are conveniently limited to the 1950s and are not salient today.

        But yes, this is a very good example of a perfectly justified instance of social justice warfare. I know “SJW” is a term of abuse around here, but so what? I thought we were trying not to resort to outgrouping.

        That is, my point was that yes, there is social warfare going on, that is what politics is, that is how people work. You can argue about the “justice” part I suppose, but that was not the main point.

        • John Schilling says:

          He’s absolutely right that in the late 1970s and early 1980s a major television network believed they could prominently display the “Confederate Flag[*]” for the better part of an hour every week, as a symbol of generic rebellion against corrupt and unjust authority, and that this would be accepted without controversy. And, as it turned out, they were right.

          [*] Actually, the Army of Northern Virginia Battle Flag; the actual Confederate Flag is relatively uncontroversial because almost nobody uses it any more. General Lee had the better vexillologists.

          • HeelBearCub says:

            @John Schilling:
            TV/media narratives get symbolism, history and culturally relevant touchstones wrong all the time. They got that one wrong as well.

            Just because Hollywood tells a feel-good, sanitized version of something doesn’t make it true. Frankly, most of the stuff on Dukes of Hazzard should make Southerners ticked off if one believes it will be taken as an accurate portrayal of the South. It basically forwards an “all southerners are yokels” version of the South (and pretends that the flag in question is a symbol of resistance against local corruption.)

          • John Schilling says:

            They got that one wrong as well.

            What does “wrong” even mean in this context? If the dictionary defines ‘decimate’ as “destroy 10% of”, and careful historical scholarship confirms that the ancient Romans used exactly that meaning, but approximately everyone in the modern English-speaking world uses and is understood as using ‘decimate’ to mean “destroy 90% of”, does that make almost every modern English-speaker “wrong”?

            Symbolic prescriptivism is as bogus as linguistic prescriptivism.

            In 1979, CBS Inc wagered many millions of dollars that the Confederate Battle Ensign was in 1979 Americana understood to mean, “Rebellion against unjust authority, Southern style”. Or at least that popular usage encompassed that meaning so unambiguously that if contextual cues pointed in that direction there would be no misunderstanding or controversy. Their profits over the next six years, and the lack of controversy at that time, strongly suggests that they were correct in this assessment. The Southerners (black or white) who “should” have been ticked off, weren’t.

            If in other times or places the same symbol had a different consensus meaning, then there is an interesting question of how and why it changed. That doesn’t make CBS, or the Americans of 1979-1985, wrong. If the Confederate Battle Ensign meant “Evil Racist Scum” in the 1950s and the 2010s but not in the 1970s or 1980s, that makes the question of the change and reversion doubly interesting, but it still doesn’t make CBS or the 1979-1985 television audience wrong.

            They managed to communicate with each other successfully using a symbol, without confusing or offending anyone else. That’s doing symbolism right.

            Also, the core message of The Dukes of Hazzard was not, “All southerners are yokels”. It was, “Every positive stereotype of rednecks / Good Old Boys is correct, and none of the negative ones. Every ambiguous stereotype is either neutral or positive when you look closely”.

    • suntzuanime says:

      I don’t generally feel the need to wave a confederate flag but comments like this make me want to.

      • mtraven says:

        Why? Do you feel an urge to wave a Nazi flag? Those too have been suppressed as socially, morally, and politically unacceptable and their devotees have been outgrouped. Both symbols have large followings despite connotations of things that most people consider radically evil. The difference in the case of the confederate flag is that the connections have only recently been forced into the open.

        Note, I am not interested in calling anybody a Nazi or a racist, I am interested in the meta-level question of how political affiliation and symbolism is treated, which is in line with the OP.

        • nydwracu says:

          How did this thread get derailed into listing examples of the process Scott was talking about in the OP?

          • mtraven says:

            Well, I explicitly said I was trying to not derail the thread. I am disagreeing with one of the original points of the post, that outgrouping is automatically a bad thing, and employing some examples to explore the question.

            Outgrouping seems to be a basic part of how society and morality works. The process itself is neutral and its value depends on how it is employed and against whom. By its nature, there will be disagreement about how it is used. It’s a weapon.

            Oddly this position is sort of reactionary (or at least realist) at some abstract level, as opposed to the idealism of

            To dehumanize them is to say their ideas don’t count, they can’t be reasoned with, they no longer have a place at the table of rational discussion.

            There are some people who can’t be reasoned with and don’t have a place at the table. Or to cut it another way, political arguments may employ reason but are ultimately settled by force.

  15. 27chaos says:

    The third alternative is to kill institutional credentialism. Let’s get public trust in media that isn’t personally fact-checked down to zero, as it should be.

  16. AngryDrake says:

    Don’t feed the trolls.

  17. Anonymoose says:

    Have you considered that your “defensiveness” might be manifesting itself precisely in seeing most occasions of legitimate criticism as “loading with negative affect”?

  18. Dave says:

    A potential third approach is for the reasonable Christians (in your hypothetical example) who gave up their association with the group labelled Christianity to go on pursuing the things they considered valuable in the first place, possibly even creating a new group that is superficially different enough from their earlier brand identity to avoid being pattern-matched to it, while the enemies of Christianity devote their energies to being more and more abusive to the Christian-identified crackpots.

    Of course, this means they discard the advantages and resources which have accumulated for the brand, so it’s not a strategy to adopt lightly. The question of what to do before that point is reached is a good one. (And one that other commenters have made good suggestions about which I don’t think I can improve on.)

    But when the total weight of organized opposition is so great that it seriously threatens to throw my brand into that metaphorical black hole, it seems clear that the brand is a net negative. Continuing to identify with the brand at that point, rather than with the values that brand was initially intended to optimize for, seems like pure sunk-cost fallacy.

  19. Bill Murdock says:

    Scott, am I missing something in this post? Very recently you were calling people crackpots.

    I feel I have to point this out because I’m an “anti-vaxxer.” But I haven’t read Wakefield, and my opinion is completely independent of whether or not “vaxxes” have been linked to autism. Weird, right? No. But that’s what the average Joe would expect from what he’s heard.

    I’m not an “anti-vaxxer” because I don’t get to watch the processes (the research process and the manufacture process) and I have to take someone’s word for what they are injecting into my body. Heck, I put my life in the hands of faceless manufacturing concerns every day. And I count on faceless competitors to keep their prices low (at a given standard) and their products safe (given their “right to be sued”).

    When an industry is State sponsored such that competition is non-existent or tightly managed, and when the industry successfully lobbies for immunity from suit, these incentives and constraints are removed.

    So you might be excused if you politely declined to inject a product such as this into your body. But that’s not enough! You have to fight advocates of laws mandating that you inject. That involves opening your mouth. “Anti-vaxxer” is so juvenile it’s brilliant for shutting people up (and brilliant for lumping the thoughtful voices in with the loony fringe). What adult would join a public argument to defend against the charge of being a “poopy head”? Alas, here I am.

    Of course I believe that safe vaccines of various types, costs, and efficacy can and should be developed, manufactured, stored, and administered. But if the process that produces such vaccines is being followed, why the monopoly and immunity, and per the rhetoric, the mandate? Quite simply, it is because the process is in fact flawed (if your aim is sound science and medicine). Stated differently, the monopoly and immunity are *so that they can* cut corners on safety and keep prices high.

    As I read the other post, I remarked to my wife that I was very disappointed to be considered a crackpot. This is just basic economic intuition, not even analysis.

    Yes, I sound a little defensive.

    • 27chaos says:

      Judging by the output of most vaccine companies, it seems to me that the State is doing a good enough job. Why do you want an abstract guarantee of success more than an empirical demonstration? Are you worried that the company will decide to start creating fake vaccines one day, just to trip everybody up? Because I feel like most businesses are not that stupidly pointlessly evil. They would obviously lose their contract as a consequence of something like that. They wouldn’t want to take the giant risk of ruining their business just to save a few dollars. If they were going to switch evilly, it would probably have happened by now.

      (I agree vaccination shouldn’t be mandated. But I do support ad campaigns and such for it. Maybe an opt-out system too, actually.)

      • Bill Murdock says:

        “Judging by the output of most vaccine companies, it seems to me that the State is doing a good enough job.”

        That is a very low standard for what you will be injecting into your body, but fine. Regardless, my point stands that — whether or not @27chaos is satisfied with the quality and price of the monopoly provider — I would prefer a selection of competing choices. Besides, in a competitive market you can have your bureaucratically apportioned vaccine as well (though you would not get legal immunity), so this is not an argument for a government granted monopoly.

        Usually people argue that they need government to break up monopolies, paradoxically, since force (and government is force) is necessary to maintain a monopoly or cartel; it is impossible in a free market by definition; entry is always permitted.

        “Why do you want an abstract guarantee of success more than an empirical demonstration?”

        I want both. But the current system provides neither, whereas the one I propose yields both. In a free market you get the assurance of the manufacturer’s liability guaranteeing he will do his best to be “successful” (where success is defined as the best quality at the lowest resource cost), and his results will be available (if not, you can choose a producer who is forthcoming with his results).

        Currently, getting reliable and objective data is difficult due to various legal and procedural roadblocks. So let’s remove them. Also, without competition it is impossible to compare results from different research and production methods; in other words, I see your empirical demonstration of quality — but compared to what?

        All that aside, YES, I want an institutional structure (not an “abstract guarantee”) that rewards success and punishes failure, and I want to see empirical demonstrations. I demand that with my t.v., so why not with my injections?

        “Are you worried that the company will decide to start creating fake vaccines one day, just to trip everybody up?”

        No, and you’re bordering on childish here.

        “Because I feel like most businesses are not that stupidly pointlessly evil.”

        Okay, you are being childish now. You just couldn’t help yourself.

        “They would obviously lose their contract as a consequence of something like that.” and “They wouldn’t want to take the giant risk of ruining their business just to save a few dollars.”

        Again, this is just a market mechanism that already exists that you just ascribe to a bureaucrat to fit your preferred outcome. Why should it be a person deciding to revoke a license? I will admit it’s easier mentally to go from A to B when you only think it through to this level. But why not customers deciding “Hey, you know that company that kills its customers? I was going to say let’s not shop with them, but they don’t exist anymore anyway because they were sued by victims (families), who now own and are liquidating the company’s assets.”

        The entire point of the market is that companies will be driven both by their desire for profits and by their desire not to be sued. The entire point of a government granted monopoly is to circumvent the market. You sound like the socialists who, after conceding the necessity of an accurate price system even to plan, just said “We’ll have the planner assign the prices that the market would have!”

        “If they were going to switch evilly, it would probably have happened by now.”

        I agree! Anything that would have happened would have happened by now…

        “(I agree vaccination shouldn’t be mandated. But I do support ad campaigns and such for it. Maybe an opt-out system too, actually.)”

        Okay, great! Make some rules for all the territory you own. As for ad campaigns, I have seen some research that indicates that the market is capable of providing ad campaigns (for products that are potentially life saving).

        Anyhoo, Brandolini’s Law in action here. Yikes.

        • vV_Vv says:

          Given the enormous information asymmetry, how could you possibly make a meaningful choice if you were presented a selection of competing vaccination products in a free market system?

          Let’s say you are at the grocery store, looking at the vaccine vials sitting right next to the canned tomatoes. Which vial do you pick, if any? How do you choose?

          There are also further issues besides information asymmetry:
          Externalities: if you are infected by a disease because you aren’t properly vaccinated, you can spread it to other people. How does the market deal with that?
          Rights of the children (and other non-competent people): how does the market protect children from possibly stupid choices made by their parents?

          The free market is not a catch-all solution that works well in all possible scenarios. It works well in some cases and fails badly in others.
          The economy is a complex system with complex problems that require complex solutions.
          To think that there is a simple silver bullet that would solve all problems, and that it is unfairly suppressed by “Them”, is the hallmark of a fundamentalist. Or a crackpot, if you prefer.

          • Bill Murdock says:

            Is this a joke? Do you have no information when you buy things at the grocery store? When you go to a doctor? When you buy a car?

            Currently there is a whooping cough outbreak being led by the vaccinated (no, I’m not going to google it for you, sealion). So, they should be sued for having infected themselves enough to spread it?

            Then your last paragraph is such nonsense there is no reply that can be made. Ugh.

            I refuse to yield the field to Brandolini this time.

  20. Vaniver says:

    The issue here, I think, is not enough air support.

    People talk about groups ‘closing ranks’ when a member is attacked. What do I mean? When person A has a positive reputation, person B says something mean about person A, and person C defends person A and publicly criticizes person B’s meanness.

    This is several times more effective than defense by person A, because of the obvious coalition politics issues. If you’re seeing people on your Facebook feed sharing criticism of you and not voicing their disagreement, well, any mind sensitive to coalition politics is going to get worried. Unfortunately, the whole ‘protect constructive criticism’ part of ‘epistemic honesty’ makes rationalists especially bad at this sort of thing, on top of their generally lower social skills to begin with.

    (I run in somewhat different online circles than you do, and so rarely come across criticism of you until you defend yourself against it here. But perhaps I need to start using Facebook more actively, so I can provide some of this support.)

  21. Dain says:

    It’s odd to think that being on the autism spectrum actually gets you minus points, somehow. At least that’s how it appears, when a movement is being criticized for being mostly “white dudes WITH autism.” The only way to explain it is to think that autism is code for “nerd,” which has gradually become somewhat of a pejorative as Silicon Valley-style riches – not the WASPy richness of old – and the subsequent kvetching over inequality dominate the headlines.

    • TheNybbler says:

      One of their “tenets” is that technical and “hard science” skills are overrated and softer, social and emotional skills like empathy are more important (this is extremely ironic because they consistently fail to show any empathy at all, e.g. see “Untitled”). Since autism is characterized by a lack of such social skills, denigrating anyone with it falls right in.

  22. Urstoff says:

    If the Black Hole is where all the reasonable, polite people are, then that seems like the best place to be. Why would you want to engage people that are participating in intellectually dishonest tribal signalling games?

    Also, stay out of culture wars. That way lies madness.

  23. lilred says:

    My intuition is that the best course of action is to “bite back”. Regardless of how controversial it is, feminism manages to stay relevant by having teeth.

    When a movement A tries to damage and discredit a movement B, the best course of action for B is to try to damage and discredit A. This is assuming two movements of roughly similar size and credibility.

    I think the LGBT activists did this to anti-LGBT conservative activists, depicting them as moronic and bigoted. It worked out.

    IMHO – people and movements have to be careful about advertising pacifism and non-confrontational ideals, because those mark you as an easy target for more radical/sociopathic groups, what I would call “social predators”.

  24. Viliam says:

    I believe the third solution, actually used by socially savvy groups, is to have a shield that you can use to protect yourself against the attacks, but that you can later throw away when it becomes too dirty.

    Here is an example: You are a Scholar A, and there is a Scholar B attacking you. Yes, there needs to be a defense published. But it should not be published under Scholar A’s name. It should be written by… someone else, let’s call him Scholar C.

    The result is that A’s reputation is defended against B’s attacks; and C receives the reputation of fighting too much, although there is this mitigating circumstance that they are not fighting for themselves.

    This is a good solution for A, but what about C? Does C gain or lose from this interaction? I am only guessing here, but I think it depends on C’s position, compared with A and B. If C started in a worse position, let’s say it was actually a Student C, they can get bonus points for successfully fighting a stronger opponent. On the other hand, Scholar B will lose more points for being defeated by a student. (Although there will still be other factors and luck involved, so we can’t count on this. Maybe C will succeed in defending A, but at a complete career sacrifice for themselves.)

    When organizations do it officially, C is called “speaker”. Your speaker defends you and protects your reputation. If the speaker gains really bad reputation, you fire them (and pay them generously behind the scenes for the service) and now you are clean again.

    Or, if you are a leader of a primitive tribe or a criminal gang, you let some loyal younger guy fight in your defense, potentially sacrificing himself. He does it partially because he naively believes in your cause, and partially because he expects a reward from you in the future. If you don’t have such volunteers, you are probably not a very inspiring tribe leader.

    Note that this social process is completely amoral. It works more or less the same if A was right and B was wrong, or if A was wrong and B was right. It’s just: a group that can afford to sacrifice some of their members to create a disposable shield will win over a group that is either too small to afford it, or too socially inexperienced or uncoordinated to use this strategy. It’s about a social fight, not about being right or being the good one.

    Scott, you have more or less volunteered for the role of the speaker of the group. As long as the opponent is civilized, it should be relatively safe. But there are also enemies who fight dirty — who would react by finding out your real identity, and then calling your employer and all your relatives and neighbors to tell them that you are member of a cult or a hate group; if you stand against these people too visibly, you have sacrificed yourself.

    (Also, by the same logic, it would not be the leaders of the enemy group who would do this to you, but rather their disposable pawns. So if later some people would get sufficiently angry at what happened to you, the enemy leaders could deny any connection, denounce the attackers, and come out of the fight as both real and moral winners. Which by the way means that the Scholar B was stupid for attacking Scholar A directly; he lowered the status of them both.)

  25. Troy says:

    The next Alvin Plantinga chooses a field other than philosophy of religion, because no college would consider granting him tenure for that.

    Plantinga is an ironic example, inasmuch as this more or less was the state of philosophy of religion when he got into it. It’s become a respected subfield of philosophy again in no small part thanks to his efforts.

  26. Dormin111 says:

    As a practitioner of Objectivism who has been involved with Objectivist organizations for years, this is the single best explanation for the philosophy’s stature in academia and among the general public.

    • onyomi says:

      Though I am broadly sympathetic to many of Objectivism’s main ideas, I can think of a simpler explanation for the fact that most people don’t take them seriously today (though the mechanism Scott describes may also be a factor): since Rand’s death, and arguably before, Objectivism has served as the church of Ayn Rand. Any and all deviations from her plumb line are simply not considered. A philosophy based on rationality yet which is clearly totally resistant to change has obvious problems.

      • Dormin111 says:

        Your response is a perfect demonstration of my point.

        Rand herself had intolerant moments (though I think in general they were overstated), but more importantly, many of the Objectivist leaders today are quite intolerant of dissent, though again not to quite the severe degree you are claiming. According to Scott’s analysis, this is the current state of Objectivism because better actual and potential members have been chased out or never signed up in the first place, while mostly the worst members have been left behind.

        The philosophy laid out by Ayn Rand stands on its own. Its merits should not be dismissed because Ayn Rand had personal problems and the leadership at the Ayn Rand Institute has terrible judgement. But by this point, the vast majority of people who have heard of Objectivism dismiss it as some variation of “cultish,” “dogmatic,” “psychotic,” “poor hating,” “elitist,” etc, so most people dismiss the ideas before attaining any real comprehension of them.

        For what it’s worth, there are plenty of self-described Objectivists who think and speak freely, and believe much of the Objectivist movement up until now has been a disaster.

        • onyomi says:

          Okay, but Rand and many of her followers are at least as much to blame for this happening as any outside criticism. It seems like she and the other leaders strongly discouraged dissent from the Objectivist plumb line from the beginning. They themselves drove reasonable people like Murray Rothbard and Walter Block, who would have been on their side, out of their own movement. If this then started a vicious cycle in which Objectivists became known for being dogmatic and thus only dogmatic Objectivists remained, then I see how the cycle is vicious, but it wasn’t outside criticism which started it. If anything, from what I can tell, the very early criticism of Objectivism, such as it was, was more of the sort we expect of libertarianism more generally: it’s selfish, it’s uncaring, etc. etc. It seems like it was only a bit later that most of the negative stories about the cultishness of Rand’s inner circle became well known.

          Somewhat related: I think another reason Objectivism cannot effectively revive under that name today is that most of the people who would once have been Objectivists now just become libertarians (I understand Objectivism takes stands on epistemology, ethics, art, etc. but it’s the largely libertarianish political philosophy it’s known for).

          Once a more inclusive group with less associated negative affect emerges, it becomes hard if not impossible for the old group to re-emerge under the same name, I’d guess. This may actually be a decent strategy for escaping the black hole, albeit not a first line strategy: repackage, rename and rebrand yourself. Sure some will accuse you of being just like the old group, but some minor differences will tend to be enough to placate them reasonably well.

          • Dormin111 says:

            Yes, Rand and her followers have some portion of responsibility for scaring away potential adherents. But Darwin’s work was picked up by racists, Nietzsche’s philosophy was inspirational to the Nazis, and Marx’s followers created the most destructive political regime in the history of mankind. Yet Darwin, Nietzsche, and Marx are all still taught in universities for their historical and conceptual value, if not outright respected by many/most intellectuals.

            Whatever bad things Rand and her followers did (and again, I maintain that a lot of it has been exaggerated by detractors), they do not warrant the black hole status of the whole philosophy.

            The potential for repackaging is a good point, and has actually been floating around Objectivism for a while. Some Objectivists think we should stop using “selfishness” as a flagship ethical word since its popular connotations are so negative. On the other hand, Rand’s original core group of followers are all in their 70s and 80s now, so they won’t be around much longer anyway. If Objectivism survives their deaths, we may see a better face emerge naturally.

          • Sylocat says:

            Yes, Rand and her followers have some portion of responsibility for scaring away potential adherents. But Darwin’s work was picked up by racists, Nietzsche’s philosophy was inspirational to the Nazis, and Marx’s followers created the most destructive political regime in the history of mankind.

            Well, I think one difference is, Karl Marx and Nietzsche were not themselves advocates of genocide, and Darwin… well, by the standards of his day, he was not particularly racist. And in Darwin’s case, his ideas were actually concrete and measurable ones, so it didn’t matter quite as much as with the soft and squishy political and moral ideologizings of the other two. And even Marx and Nietzsche probably wouldn’t have complained too loudly to see their ideas transcend their persons and personalities, if they had been used for greater good than they were (which they still are today, in various ways).

            Ayn Rand, on the other hand, was a cult leader who abused a number of her followers. And her philosophy is hard to separate from her person, mainly because she wanted it to be.

          • Dormin111 says:

            Ok then, Marx was an anti-Semite and Nietzsche was literally insane. In the former case, and arguably the latter, these issues were directly infused into their actual philosophies. Those who actually read the works of Rand will find nothing which supports silencing of dissent or cultism.

            More importantly, the flippant dismissal of Rand based on vague notions of her personal defects is exactly the type of dismissal Scott is talking about. How many people really know what Rand said and did in small meetings in her apartment fifty years ago? How much of it was Rand’s fault, and how much of it was the fault of detractors who had other problems with Rand? Whatever problems Rand did have (she kicked people out of her discussion groups for bad reasons and had an extremely messy breakup), are they worse than Marx’s anti-Semitism or Heidegger being a Nazi? Hell, I’m sure most, if not all, modern philosophers had crazy eccentricities which would make them look scary to most people, but few modern philosophers were ever under as much scrutiny as Rand.

            Rand had a complicated life. She was a genius who had a bad childhood and tumultuous relationships, and was unexpectedly thrust into a position of extreme adulation. And thirty-plus years after her death, her detractors still refuse to actually comprehend her work, and instead fall back on ad hominems.

          • Jaycol says:

            @Sylocat, I think this might be the point in the comment section where we should taboo “cult.” It doesn’t seem to have been doing much good work for us, and the problems with this term in particular seem to arise repeatedly in discussions of things like LW. I think Scott has been trying to get at this a little for a while, and the comment above sort of reminded me, since it seems to be a good demonstration of how “cult” serves more as a weapon than as a fully fleshed-out category at this point. Cf. threads about “cults” above.

          • Protagoras says:

            @Dormin111, I don’t know what you mean when you say Nietzsche’s insanity “infused his writing,” but whatever it means, it looks highly questionable. Nietzsche developing serious symptoms and ceasing to write happened at roughly the same time (beginning of 1889).

          • onyomi says:

            Personally, I have a ton of respect and admiration for Rand. She was ahead of her time on a lot of philosophical issues and did more to popularize libertarianism than anyone except maybe Ron Paul. I agree it’s annoying how criticism of her is usually dumb, facile, and/or ad hominem, and usually reflects only a straw-man caricature of her actual views. It’s also lame that feminists don’t like her or Margaret Thatcher, even though both women should be everything feminists claim to want (which shows where their real priorities lie).

            That said, I do think she made a strategic error in the way she chose to create, or unintentionally ended up creating, a cult of personality around herself, with herself as the infallible genius and everyone else her acolytes. This was not a good strategy for creating a long-lasting (especially after her death), resilient movement.

            I can admire her and still think she made some strategic errors. And I am also just pointing out that to the extent objectivism underwent “evaporative cooling,” it seems to have been because Rand herself turned up the “agree with me or get out” heat.

            On the other hand, I can understand the impulse to want to make sure your philosophy doesn’t get watered down, but I think that’s the wrong way to go about it. David Friedman made an excellent point somewhere that success for the Libertarian Party lies not in getting 51% of the vote (which would likely indicate it had compromised too much), but in continually pushing the mainstream parties to adopt more and more of its ideas in order to keep it from getting more than 5-10% of the vote. Objectivism could have done this better by being more welcoming of diverse views, I think.

          • Simon says:

            “She was ahead of her time on a lot of philosophical issues and did more to popularize libertarianism than anyone except maybe Ron Paul.” <- Milton Friedman? Robert Nozick?

      • TheAncientGeek says:

        Actually, objectivism has a liberal/orthodox split (TAS/ARI).

        The problem with objectivism is that it adds an “integrated philosophy” to libertarianism, which most libertarians don’t want, and that the philosophy isn’t very good, so it falls between two stools.

    • LCL says:

      As a data point for you, here is a sort of general-academia take on Objectivism’s stature. I should say upfront that I’m not well versed in your principles, but that’s why I’m a fair data point for your outside reputation.

      It looks from here like Rand’s main purpose, in original context, was to oppose communist ideas. This isn’t subtle; Atlas Shrugged beats you over the head with “from each according to his ability, to each according to his need” repeated as a slogan of evil. And the main point, about the perverse incentives created by communist philosophy, was perceptive and turned out to be largely correct.

      Of course, like most contrarians, Rand overshoots, not content to stop at opposing communist ideas. She goes on to try to create a comprehensive ideology that, to an outsider or novice, just looks like a hodgepodge of loosely-related ideas mainly characterized by being definitely not communism. Translated, most saliently, as: communism stigmatized self-interest, so we’re going to react by glorifying self-interest.

      The problem then is that the struggle against communist ideas is, as a matter of public opinion, over. Your side won; Rand was prophetic on the main point. And it looks from outside like what’s left of Objectivism as a movement is just people trying to take very seriously some contrarian-overshoot stuff long after the main debate is settled.

      Put otherwise, what you Objectivists would need to explain in order to gain stature (or even just earn closer examination of your ideas) is what Objectivism still has to offer, given that communism has already failed.

      That’s my sort of generalized academic reasoning for being dismissive; the mass public perception is probably more along the lines of “oh that’s the book assholes like, because it tells them it’s good to be an asshole.” Not sure what you could really do about that one.

      • Dormin111 says:

        This is a more nuanced dismissal than Objectivism usually gets, but I still think it misses the mark.

        I’m wary of psychologizing about people I don’t know who died decades ago. Undoubtedly, Rand was inspired by her revulsion to communism to create her philosophy. Though according to her, the direct inspiration for her philosophy was the need to create moral paradigms for the characters in her fiction. But so what? Enlightenment values were a reaction to feudalism, serfdom, and the old political orders of Europe. A reactive start doesn’t necessitate a lack of genuine substance.

        How does Rand specifically “overshoot”?

        And as for the defeat of communism, even that isn’t really complete. Oh sure, few people still describe themselves as communists, but plenty of people, especially in Europe, are socialists, which is only a tinge better than communism. Ideally, I would like to see socialism/communism end up in the same intellectual realm as Nazism/racism.

        • LCL says:

          ETA: meant to reply to the comment below as well; accidentally posted in the middle.

          Guys, that’s not intended to be an argument-as-soldier. It’s a data point in explanation of disinterest/low reputation, with an associated serious suggestion of what your group might do in order to increase interest/reputation. Which isn’t, to be clear, “refute the explanation of disinterest.”

          Think about it like atheism and religion; that works, but the other way around, because in the modern context atheism is the reactive movement, arising in reaction to religion. If in a hundred years religion disappears, is there any remaining usefulness for atheism after that?

          In the sense of atheism as just not being religious, no. Atheism would need to develop into a more complete, consistent, and widely applicable paradigm in order to have continued usefulness past that point. Something more than just “do the opposite of religion.”

          (As an aside, might LW-style rationality be a candidate, as an attempt to develop just such an atheist paradigm? It does seem to have arisen at least in part as a reaction to religion, but with principles attempting to transcend the issue of religion.)

          So, the explanation of disinterest is that Objectivism after the demise of communism seems like atheism after the demise of religion. If you’ve built a more complete, consistent, and widely applicable paradigm than “do the opposite of communism,” we outsiders still haven’t gotten the message about what that paradigm is.

          The suggestion is to make a broader effort at conveying it and especially focus on explaining how it has transcended opposing communism (and is thus still valuable).

          • Dormin111 says:

            How much of Rand’s work have you read? I cannot imagine that from her discussions of concept formation, fundamental ethical alternatives, and the nature of creativity, you’ve gleaned that Rand’s philosophy consists of “do the opposite of communism.” If you’re saying that that is just the popular opinion of Rand, and that you don’t hold it to be true, I don’t really agree either. Rand is more often smeared as a cultist elitist than as purely anti-communist.

          • Communism may be dead, but the set of philosophical/moral/political attitudes that Rand was attacking is very much alive.

            Consider, for a current example, the question of whether a bakery is free to refuse to bake a wedding cake for a same sex couple. The view that they should not be seems, by casual observation, to be the dominant one, as well as the one supported by current law. From Rand’s standpoint (and mine) that position is a rejection of the principle of free association fundamental to a free society, the idea that such transactions occur if and only if both parties are in favor of them.

            My side hasn’t won. People discovered that certain implications of the other side led to consequences they didn’t like and so to some degree backed off from those implications. But the underlying attitudes are alive and well.

            There remains the question of whether Rand’s defense of our common view is either more correct or more convincing than other defenses, including mine. But that’s a very different issue than “the battle is over—you’ve won.”

        • TheAncientGeek says:

          Examples of overshooting include ethical egoism, and the complete rejection of the a priori.

      • blacktrance says:

        There’s more than one way to be anti-Communist – fascists, conservatives, and social democrats are anti-Communist too. Objectivism isn’t just anti-Communist, but anti-collectivist in general, apart from a few deviations by Rand and Peikoff which seem to be inconsistent with the philosophy as a whole.

        But on the whole, this is a strange criticism. What do you mean by asking what Objectivism still has to offer? What does it mean for a philosophy to “offer” something in a way that can be affected by things like the collapse of Communism? For example, suppose atheism disappears from the world in the next hundred years. Would you say to the Christians, “You got the anti-atheism part right, but what else do you have to offer?”

      • TheAncientGeek says:

        The other main strand was atheism… otherwise there would have been no need to invent Objectivism, as Communism was already opposed by mainstream US conservatism. Hence, Rand needed a naturalistic ethics, to fill the gap left by god-given ethics, and an ethics without any trace of altruism, to oppose communism.

      • brad says:

        I think, at least for philosophers as opposed to literature professors or what have you, it has more to do with Rand not fitting in with, and refusing to engage with, the vast bulk of the then-existing philosophical tradition. It’s one thing to argue against hundreds or thousands of years’ worth of philosophers and quite another thing to ignore them.

        When you make up your own entire edifice without reference to a field’s jargon, existing categories, ways of thinking, hard-learned lessons, and so on, it becomes a lot of work to unpack your arguments and see what — if anything — is there. In Rand’s case specifically, the heuristics look bad *and* it would be a lot of work, so few or no academic philosophers bother. Heck, even the word “objectivism” was already taken.

        In general, I think the argument that everyone deserves a fair hearing ignores very real opportunity costs. Opportunity costs seem like a better, more charitable even, explanation than all this business about status and signaling.

        See my comment above for a similar point:
        https://slatestarcodex.com/2015/08/15/my-id-on-defensiveness/#comment-229043

    • Vaniver says:

      I was struck by the absurdity of the following exchange (paraphrased due to fuzzy memory) at a party a while back:

      (For context, we were in ‘deep personal sharing mode.’)
      Person A: Long, emotionally engaged story about how she discovered that she could only live for herself, and how it was not healthy to put everyone else first and disregard her own needs.
      Person B: Yeah, that’s the core of what Ayn Rand wants people to realize.
      Person A: Oh, I hate her.

      I’m still not sure what interpretation I like best. I think person A’s reaction was just knee-jerk black-holeism, and what person B wanted out of it was for A to update positively in favor of Objectivism.

  27. Adam says:

    Third choice: promote ideas instead of social groups.

    You don’t want ‘rationalists’ to be tarred. Why? You didn’t invent rationality. Kahneman is a Nobel winner who is close to being the founder of behavioral finance. Judea Pearl is practically a god in the larger statistics community. Bayesian reasoning isn’t struggling in obscurity without your help. Nate Silver became the biggest celebrity of the last election cycle using it. It’s at the heart of modern tracking and localization systems used for autonomous navigation. Focusing your efforts on what other people undervalue is what made Warren Buffett the richest man in the world for a while. Michael Lewis and Billy Beane made their careers off of either doing it or popularizing it and have become cult idols. When you get away from the Pascal’s mugger crap, even the notion of not ignoring low-probability events with enormous payoffs is just a repackaging of extreme value theory, which got rebranded twenty years ago as value-at-risk and then again five years ago as Black Swan events. Extending the idea to charity is great, but it’s been a mainstream idea in finance and risk management for a while.

    Even on charity, ‘effective altruism’ is pretty damn far from being a new idea. Charities long ago adopted the methods of clinical trial experimental design and business analytics in an attempt to 1) improve the effectiveness of existing programs, and 2) prove to the donors that they’re actually getting what they’re paying for. Aggregating all of this information to provide a central repository for individual donors so they don’t need to request it piecemeal from each charity they consider is a great thing to do, as is promoting the idea that we should also consider marginal impact and expand the circle of concern outside our own country, but it’s looking from the outside like this effort has devolved into fighting between competing factions over what they should care about. You’ve hit an interesting pitfall that doesn’t happen to the traditional organizations concerned with program evaluation and performance measurement, usually governments and large public foundations, since they have charters, founders, legislatures, some form of higher authority dictating to them what they care about.

    From the outside looking in, though, as a person who came to you through a Facebook friend with no prior affiliation with Less Wrong or MIRI, who had never before heard the names Hallquist, Yudkowsky, or Moldbug, it doesn’t seem like your concern is with successfully promoting the ideas you care about. Your concern is with saving the face of named communities run by your friends.

    Or maybe rationalists are just a little drunk on grandeur. Using Bayesian reasoning to invent autopilot and missile defense radar and extreme value theory to run hedge funds is boring. You need to use Bayesian reasoning to invent better social utopia theories and extreme value theory to rescue humanity from certain apocalypse.

    But what’s more important to you? The use of sound methods of reasoning or that everyone who ever uses them comes to the same conclusions as you about what their priorities should be?

  28. Psy-Kosh says:

    I don’t have a solution for you, but I do have a question:

    How does all of this fit with cases of disreputable groups _becoming_ more reputable over time?

    Is it just that in those cases the opposition suddenly gets shoved into the black hole harder, or is something else going on in those cases?

    • TheAncientGeek says:

      Resurrecting the reputation of previously denigrated groups is a kind of low-hanging fruit for academics… it’s nicely contrarian, but you don’t have to invent any fundamentally new ideas. However, you need to wait until visceral, emotional reactions have died down. Examples include witchcraft and sophistry.

      • Psy-Kosh says:

        Fair enough. But that does imply that it’s actually possible to eventually “escape the black hole”.

  29. BBA says:

    I think a comment I made yesterday got stuck in the spam filter. Is there a way to get it out?

  30. Peter says:

    I think it’s definitely worth asking: where do the establishment people who leave increasingly crackpot-ridden fields go?

    Reminds me of the negative space in the quote ‘The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.’ (Shaw)

    The reasonable one, on the other hand, adapts to the fact that it takes a village to raise a field, and that not all change is progress but all change depends on the unreasonable people. If the village isn’t there with a full spectrum of reasonableness, the field won’t be either, so he continually prunes his publicly espoused beliefs away from fields that have gone to seed.

    I assume this is done through public repudiations of group membership and being willing to sacrifice outlier elements, or even those unjustly labeled as extremists. This seems to have already occurred with NRx in the rationality movement, and feels to me like a necessary part of continuing to be able to interact with mainstream society.

    I think you could then raise the same good points under a different group label, after forming a new group of the ‘establishment’ and average members of the old belief structure that’s been coopted.

    The parts of a belief structure that are _capable_ of being coopted by extremist elements will probably be coopted at some point; extremists happen, and reasonable people should be prepared to jump ship when some kind of moving average they feel is trustworthy claims that it’s happened to specific parts. In defending anything, you’ve already been driven into relative extremism: establishment people are only allowed to hold beliefs that don’t need to be defended, and average people don’t step up to defend things.

    • Peter says:

      OK, my plan of using my first name as a handle is now definitely doomed. I managed to persuade one person to go for “Pete” but there are just too many Peters. Such is the story of my life.

      Maybe I could come up with a distinctive avatar or something.

  31. HeelBearCub says:

    I think I identified what is really bothering me about these last few “defensive” posts.

    “I can think of criticisms of my own tribe. Important criticisms, true ones. But the thought of writing them makes my blood boil.

    But if I want Self-Criticism Virtue Points, criticizing the Grey Tribe is the only honest way to get them. And if I want Tolerance Points, my own personal cross to bear right now is tolerating the Blue Tribe. I need to remind myself that when they are bad people, they are merely Osama-level bad people instead of Thatcher-level bad people. And when they are good people, they are powerful and necessary crusaders against the evils of the world.”

    I Can Tolerate Anything Except the Outgroup

    Here you have some people who seem to be in grey-tribe, or at the very least grey-tribe adjacent, throwing some criticism at grey-tribe, and Scott, as an (at least sometimes) self-professed member of grey-tribe, is letting his blood boil about them (letting his id respond).

    Is that a fair criticism? Or is it cheap? I think it’s fair, but I’m not sure.

  32. TomA says:

    I would argue that the distance and anonymity afforded by internet-based communication is at the root of the incivility you describe. I doubt that most people would be so flagrantly rude (or destructively hostile) in person.

  33. Nestor says:

    http://plebcomics.tumblr.com/

    Plebcomics uses humor against the toxic SJWs. She’s been doxxed and persecuted over it though, so I can’t really tell how well it works. Also her life seems exhausting but she might be having fun with it.

  34. “It is really easy for me to see the path where rationalists and effective altruists become a punch line and a punching bag. … That seems to leave only two choices.”

    You could appeal to people’s reason and say ad hominem really sucks, but then that probably only works when they’re a little bit rationalist already. Catch-22 or something.

    I guess there’s always a lot of competition for public political attention, and any group is going to get a few knives flying their way anytime they step into such a competitive arena. But antagonistic bastards only have so much time and energy, and there are oh so many targets. An authentically politically neutral view like “try hard to be logical” might be a less appealing target than most others competing for public attention. Especially if you manage to keep it politically agnostic (views at the door, please). There’s also a certain advantage to a dove strategy in an arena of overzealous hawks pecking each other’s eyes out. After all, the big political groups are pretty practiced at the whole reputational retaliation business, and maybe it’s better not to play their game.

    Of course, there’s always the scorpions, so I guess then you might need to be prepared to go battle with them rather than just soak it up. There’s also the problem that in some situations the big players are ALL pretty much happy with their perpetual warfare, and don’t want rationality coming along with its pesky reasonableness and ruining the party.

    Of course, I’m assuming ‘nice’ rationalism here… I personally find that “rationality is systemised winning” is a bit borderline on that.

    • Publius Varinius says:

      > An authentically politically neutral view like “try hard to be logical”

      The ones who feel instead of think don’t believe in politically neutral views. Trying hard to be logical is strongly linked with sexism and totalitarianism.

      Aiming to be politically agnostic makes you a target for entryism, and if you try to put a stop to it, you suddenly look like you’re excluding people based on their political views.

      • Re your first point – *facepalm*

        Re your second, hopefully enforced political agnosticism would make political entryism a waste of time, but given the increasing proliferation of stuff like your first point on both the left and right, perhaps I shouldn’t be too sure.

  35. Thrasymachus says:

    Fascinating!

    This explains why Jesus said it was so bad to say to another person “Fool!” It also explains why he said not to be too concerned about defending yourself. Christianity is hard to understand, but the things I don’t understand are eventually shown to be true, sometimes in the strangest places.

    The thing is the process you describe is fundamental to progressivism. It can’t function without it, because if it tried to use facts and logic, people would die laughing.

    Have you ever looked into the events surrounding the Gage Park Decency Drive? The public image homosexuals insist we accept is perhaps not completely in line with their actual behavior. Just an idea…… facts are always interesting…… to me anyway………

    • Nita says:

      The thing is the process you describe is fundamental to progressivism. It can’t function without it, because if it tried to use facts and logic, people would die laughing.

      Ah, the irony 🙂

  36. Jeremy Jaffe says:

    Ok so my brain thinks this post is overly paranoid.
    Why does my brain think this?
    Well, what does this post say in a nutshell?
    1. Many groups have become “so toxic that it becomes somewhere between a joke and a bogeyman. Their supporters will be banned on site from all decent online venues.”
    Therefore
    2. We should be scared this will happen to SSC
    and
    3. The best way to prevent this is to respond to every criticism of SSC

    So first of all I think 1 is definitely an exaggeration.
    Take for example Men’s Rights Activists – their supporters are not popular but I don’t think they are “banned on site from all decent online venues”.

    Second of all, point 2 does not follow from point 1
    Like, even if this kind of thing does happen and has happened 10 or 15 times in the past, there are like hundreds of online communities out there, so the prior of this happening to SSC should be small. Granted, SSC does talk about more “touchy” subjects, but I still don’t think that’s enough to overcome the low prior. It’s also hard for me to imagine this happening to SSC, LessWrong, EA, etc.
    For one thing, these groups have been around for years and it has never happened. For another, the presence of actual “weak men” makes the “weak man” argument easier to make – and I think actual members of the aspiring rationalist community are normally pretty rational and smart.
    Also, people normally hate things that they perceive as political threats to them – LessWrong avoids politics, and SSC normally waits a few weeks after a particular political issue has been all the rage to comment on it, and even then only comments in some well-thought-out way with lots of data and graphs and stuff – like, SSC wouldn’t make a meme saying “the Y people are wrong about X”, so I doubt it will be a target that people try to take down

    3 – certain criticisms are worth responding to, but not all. In particular, any criticism that comes from a blog significantly less well read than SSC should, I think, be ignored, because by responding to it you’re giving the criticism more publicity.

    – Speaking of publicity, I think it would be good for the world if GiveWell and SSC had more publicity – which is another reason to maybe hope some people started hating SSC enough to give it more publicity – but again, I don’t think that’s likely.

  37. John Henry says:

    I haven’t read all the comments yet, but in the ones I’ve read, there seems to be a common assumption that there must be some simple, basic “third way” to counter social bullying. There isn’t. If there were, it would have been discovered at some point in the past few millennia, and we would all be doing it, and bullying would be a historical footnote (if that). The fact is, bullying works as a tactic for social climbing: distancing yourself from – and contrasting yourself with – low status people is an effective way to gain social capital. In most of the cases you’re complaining about, that’s all that’s happening: Amanda Marcotte writes mean things about nerds because that’s the most effective social climbing tactic available to her in her current milieu. If she were living in Puritan New England, no doubt she’d be accusing unpopular low status women of witchcraft instead.

    There are ways of combating bullying, but nerds are (by definition) bad at social combat. I think the first thing to realize is that social combat is a complex art that occurs in a complex environment. There is no perfect defensive (or offensive) social combat technique, but there are a lot of good ones that work better or worse, depending on the terrain and the opponents you’re facing. You probably don’t like social warfare (which speaks to your character) but you should learn to wage it because (as you’ve found) just because you’re not interested in war doesn’t mean war isn’t interested in you. You don’t have to become Machiavelli to bloody an opponent’s nose, even if they’re bigger than you and better at the game – you just need to get good enough at it that it becomes costly to attack you, and most of the bullies will look for easier pickings elsewhere. It doesn’t mean all your troubles will be over, but it will weed out a lot of the nonsense.

    This thread does contain quite a few good suggestions – try a few on for size and see how they work out for you. Just realize that this is an ancient and complex game, and it’s probably not hackable.

  38. “They are already closer to the black hole than I am, and so they have no power to load me with negative affect or destroy my reputation. This reduces them to the extraordinary last resort of debating with actual facts and evidence. Even better, it gives me a credible reason to believe that they will. “

    I love this flip!

    Moloch – “Hey look, here’s a rational reason to marginalise anyone you don’t agree with.” *laughs as human society fragments*
    Scott – “Whatever, here’s a much better reason to do the opposite.”

  39. goocy says:

    A possible third way is evasion by aggressive rebranding.

    “Oh, yeah, [old group] was terrible. That’s why we left them and founded [new group], with much better goals and more open-minded people.”

    The public will never know how much ideological/personal overlap exists between the old and the new group, and it can very well be 100%. What matters is just that the names are changed and the bridges are burnt.

    In the corporate environment, Blackwater/Xe/Academi/Constellis is famous for this strategy. After every PR disaster, they just rebrand themselves and restart with a clean PR slate.

  40. Lorxus says:

    Option 3 is pretty clearly to murder their reputation first by any and all means possible before they can finish gathering the torch-wielding mob. There’s no way out. Moloch wins.

    Please, someone tell me why I am wrong, I beg of you. I have been in a pretty bad and persistent mood about this sort of thing.

    • beleester says:

      I would say that attempting to murder other people’s reputations pre-emptively typically makes you seem crazy and thus makes people turn against you. And they would be right to turn against you, because you’re attacking random people, probably using dark-arts debate tactics, in some bizarre attempt to protect your reputation. In short, you make the PR black hole into a self-fulfilling prophecy.

      Plus, what are you even asking for here? There’s no “them” to target. The thing that pushes you into a black hole is public opinion, and there’s no way to put the entire public into a PR black hole. Fox News could write hit pieces for you around the clock and they wouldn’t be able to do it.

      It’s not a Moloch trap, where people do reasonable-sounding things but the aggregate result is unreasonable for everyone. This course of action sounds unreasonable on the face of it, and anyone with an ounce of PR instinct will avoid it.

      • TheAncientGeek says:

        If you don’t like fights, don’t start them.

      • Lorxus says:

        That’s not what I’m talking about, here. I’m saying that the clear (but deplorable) course of action is to spot whoever’s trying to do this to you and crush them first if at all able. Don’t even bother trying to fight fire with reason, fight fire with airstrikes.

        Another thing I was thinking about here: what if this phenomenon is itself fractal? I mean, what if this evaporative cooling effect applies to human discourse as a whole? Is there any reason to suspect that discourse as a whole is becoming more likely to be loaded with Ethnic Tension ™?

  41. LCL says:

    Looked all through the comments and was surprised not to find this (apologies if I missed it):

    [After being pushed into the event horizon] there are just the horrifying loonies, who, freed from the stabilizing influence of the upper orders, are able to up their game and be even loonier and more horrifying . . . [the group is now] entirely the sign wavers.

    then, shortly thereafter:

    the safest debate partners, the ones you can most productively engage, will be the people who have already been dismissed by everyone else . . . [being inside the black hole] reduces them to the extraordinary last resort of debating with actual facts and evidence.

    The paragraph with the above lines jarred, reading as 180 degrees from the previous section. What part of reducing a movement to GOD HATES FAGS sign-waving “loonies” supposedly turns them into productive partners for evidence-based debate? That’s a very surprising claim, and one I feel could use more explanation.

    • Sylocat says:

      That struck me as odd too.

    • Vaniver says:

      What part of reducing a movement to GOD HATES FAGS sign-waving “loonies” supposedly turns them into productive partners for evidence-based debate? That’s a very surprising claim and one I feel could use more explanation.

      If a field is heretical and true, it will appear to be nothing but loonies by a weirdness heuristic, but on close examination it will turn out to contain many people who primarily pursue truth and are willing to be loonies. If a field is heretical and false, it will appear to be nothing but loonies by a weirdness heuristic, and on close examination it will turn out to contain few people who primarily pursue truth.

      But yes, that section does seem pretty Straussian; Scott doesn’t pay any attention to God Hates Fags loonies because there’s no there there.

  42. John Henry says:

    Showing a sense of humor goes a very long way toward signaling that you’re a bad target for attacks. Make fun of your critics and make fun of yourselves in equal measure. Here’s what that looks like for a group like EA:

    On the one hand, you embrace the more ridiculous aspects of the criticisms that get leveled against you. Make some memes of nervous nerds in basements, tapping away at a computer, feeding more and more information into SkyNet, all the while sweating about AI. Turn it into a digital badge (“I attended NervousNerdCon 2016!”) and put it on your blog. Laugh at yourselves at every opportunity.

    On the other hand, laugh at your critics too. Laugh at how they have nothing better to do than to criticize nerds who decide to spend their free time figuring out the best way to help other people! For Pete’s sake, there’s a world full of people with arguably useless hobbies (sports, needlepoint, video gaming, celebrity gossip, etc. – not to mention all the arguably harmful hobbies like big game hunting or ATV riding). Are these Warriors for Perfection attacking any of those people? Nope. They’re going after nerdy hobby altruists, accusing them of not being altruistic enough. That’s their hobby. That is ridiculous and eminently mockable, and it should absolutely be laughed at at every opportunity.

    Laughing at yourself shows that you’re willing to listen to criticism, and even acknowledge that some of it has merit. It also protects you from the “angry, bitter, defensive professor” scenario. Laughing at others makes it potentially reputationally costly for others to attack you, and shows potential casual allies in the general public just how bad the arguments against you are (without the need for them to read thousand-word posts or to educate themselves on the niceties of the debates.)

    • multiheaded says:

      100% endorsed.

    • John Schilling says:

      This is sometimes useful, but it only really works when you don’t need it to work.

      Self-deprecating humor by high-status people and groups enhances their status. “Look, they are so secure in their status that they don’t need to defend it.” Self-deprecating humor by low-status people and groups reduces their status. “Look, they know how ridiculous they are, and they can’t stop doing it!”

      Same for humor directed against outsiders. Jon Stewart ridiculing politicians he doesn’t like does make it reputationally costly for those politicians to attack Jon Stewart. Bigots telling racist jokes about black people does not make it at all costly for black people to attack racist bigots.

      For members of high-status groups, this is a useful defense against having that status eroded. If you’re already looking down the maw of the black hole, it’s too late. And if, from a relatively secure position, you counsel those on the edge of oblivion to adopt this strategy because it has worked for you, that’s not helpful.

      • HeelBearCub says:

        @John Schilling:
        Yes, all of that is true. But still, there is something to the idea that unwavering confidence, done well, is a good defense.

        I think Trump is a good example of this. After the comments he made about Mexicans, someone less committed to unwavering confidence would have back-pedaled. But he plowed forward, full steam. I think a broad swath of the public saw Trump as a sort of cartoonish character before and after the statements, but if he had allowed the air to be let out of his balloon, he would have lost much of the support he does have, which is enough to put him at the top of the field currently.

        You can lose a public debate just by being unsure of yourself. So, it’s far better to move forward with a certain kind of confidence. Not confidence that everything you say or do is correct, but confidence in your competence.

        • John Schilling says:

          A certain kind of confidence, yes. If you’re a low-status individual or group, self-deprecating humor does not signal confidence. It signals that other people shouldn’t take you too seriously. It signals that you are unwilling to plainly assert your views without an “only joking” escape hatch close at hand.

          If you are coming from a position of power and status, it is helpful to signal that people shouldn’t take you too seriously, because they probably are. And the escape hatch will look like it is being provided for the benefit of people you could crush without effort. If you’re in close orbit around the black hole, you very much need to signal confidence but self-deprecating humor usually won’t do that.

    • 95% endorsed, with the 5% caveat that this seems like the kind of tactic that might easily sneak into our response to uncomfortable legit criticism too.

  43. Anonymous says:

    Your responses to the Dylan Matthews article remind me of your response to that polyamory article in that the response seems much more likely to damage the reputation of the community you’re defending than the actual article does.

    Perhaps I’m an outlier of some kind, but a community that’s open to self-criticism strikes me as far more appealing and inviting than a community where people respond to criticism harshly and defensively, and with the implication that the person doing the criticising is an outsider rather than a part of the community. I’m well disposed to both EA and polyamory but my instinctual reaction to your defences of them was along the lines of “hackles raised / back away slowly / AVOID UNSAFE AVOID”

    Could be just me. It strikes me that my feelings on the value of a community that’s self critical may be the polar opposite of yours.

  44. Oliver Cromwell says:

    There is another alternative, which Moldbug demonstrates exactly how not to pursue here:

    http://unqualified-reservations.blogspot.de/2009/08/from-cromer-to-romer-and-back-again.html
    http://unqualified-reservations.blogspot.de/2009/09/ur-banned-in-san-francisco.html

    If you can’t be bothered to read, Moldbug points out that some mainstream academic is advocating something (charter cities) that is really colonialism dressed up in a fake mustache and a fez. Moldbug is indignant and insults this academic, pointing out that there are plenty of 19th century apologists for colonialism whose books are already available in the public domain, not to mention his own blog where he will happily defend colonialism without shrouding that defence in misdirection and obscurantism. The mainstream academic then retaliates by getting Moldbug uninvited from a conference they were both scheduled to attend.

    One could look at this and weep for Moldbug, who is basically correct. Or one could learn from the mainstream academic, who openly advocated colonialism without losing any reputation, and possibly even gained some. From his point of view, Moldbug’s dazzling display of unsolicited clarity was some sort of friendly fire.

    • Saal says:

      Heh, it’s funny you brought this up. I remember reading this and going “WTF are you doing? Why would you blow his cover, he’s on YOUR SIDE!!”

      But of course, to MM, appropriating the language and symbols to get his ideas accepted isn’t permitted, because it’s letting “the Cathedral” frame the debate and makes you a conservative. This is, IMO, the biggest weakness in NRx.

      As an aside, since we’ve been comparing Chomsky to NRx lately: has anyone noticed that NRxers use the word “conservatives” in much the same way anarchists/communists use “liberals”? Although with perhaps a little less vitriol.

  45. Wait a minute says:

    The best solution is to, at least seemingly, become a branch of the social justice movement. You could still have rationality, although with certain side constraints. But you would also have a secured reputation of being a good and virtuous man with an army of internet warriors and warlocks to command. The only downside would be some loonies on the other side of the internet calling you Saruman, but who cares?

    • John Henry says:

      That’s a movement that famously eats its own. You don’t really want to take this tactic. (Perhaps the suggestion was made with tongue in cheek?)

      • 27chaos says:

        It attacks its own, yes, but it doesn’t eat them entirely. They don’t thrive, but they survive. Arthur Chu still exists despite everything, for example.

        Oh God, there’s an analogy here to how good rationalists ought to be willing to criticize community norms or their leaders but still should have community norms and leaders. Lel.

  46. Anónimo says:

    This is probably not going to be helpful in practice, but I feel like it needs pointing out anyway because it’s true: the problem is Yudkowsky. Yudkowsky is a nut. He is visibly, obviously a megalomaniac nut to pretty much anybody outside your clique, and I have to assume it’s only friendship or residual allegiance from younger days that keeps you from seeing and admitting it too.

    You personally are one of the best active essayists I know of, and I would defend the idea that you’re the best active on the internet, period. There is very, very little you’ve written that I haven’t found to be somewhere between insightful and brilliant. And your persistent, voluntary affiliation with and defense of Yudkowsky is completely incomprehensible to me, as a person on the outside looking in. To you, or to your id anyway, this probably seems like one of those tarring arguments you mention above to make people disassociate from the undesirables/enemies, but this genuinely is a good-faith attempt to explain that you’ve chained yourself to the anchor and thrown it overboard, so to speak. It doesn’t matter that you chained yourself to it because you love that anchor, you’re still going to sink.

    During that whole meltdown at Gawker, someone, I forget who, said that the editors who resigned had “picked a strange hill to die on”. That phrase strikes me as an apt description of both you in particular and the rationalist movement in general.* I know of exactly one rationalist I think of as crazy, and almost all the criticism that I see, admittedly, is about him in particular, and the rest of you all take that personally, even though the aim of most of that criticism seems to be about wanting to protect rationalism from the damage he’s doing to it.

    Reading the smug bad logic of HPMOR couldn’t be a stronger contrast to reading this blog. And yet you defend its author consistently. And I don’t know what I could possibly say to make the disconnect in that seem as gigantic and incomprehensible to you as it does to me. But you have to understand it’s mystifying to the outside observer. Baffling. I’ve fumbled around for analogies for (to? of?) just how baffling, here, but I’m painfully aware I’ve drawn a blank.

    *Those guys seemed like bastards to me, mostly because I find it hard to imagine a good person voluntarily editing Gawker for any length of time, so don’t think I’m comparing you to them. It’s just the phrase that fit, nothing else.

    • suntzuanime says:

      Yudkowsky taught me to think. He is the Wise Master. It’s hard to overstate the intellectual debt I owe him. As a matter of rationality I’ll recognize when he’s wrong, and it’s more often than he thinks, but as a matter of allegiance I won’t be disloyal to him. I hope this helps you understand why some people are willing to die on this hill.

      • MicaiahC says:

        Chipping in to say that I understand this point of view: even though I disagree with EY about a lot of things on an object level, and think that he’s ridiculous on a meta level, he crystallized a lot of thoughts I had and introduced me to Kahneman’s work.

        I will also add that a good portion of my (consciously available) camaraderie with LW comes from the fact that many of EY’s critics seem to act in bad faith and pattern-match to a lot of low-status members of a previous online community I participated in. The good object-level criticisms are so few and far between that reading through all the other stuff makes it seriously unpleasant to remain skeptical of most of his claims; but I make myself do it because it’s honest, and I feel more allegiance to being right than to liking EY.

      • HeelBearCub says:

        @suntzuanime:
        “Yudkowsky taught me to think. He is the Wise Master.”

        Yudkowsky, from the comparatively little I have read, seems to write engagingly, have lots of good ideas, and have some entertaining ways of cutting to the core of some problems.

        And he has some really un-appealing tendencies as well, ones that he seems to think are assets, or something.

        Being unwilling to acknowledge those un-appealing things because he also has a number of appealing qualities seems like an error in thinking.

      • John Schilling says:

        Isaac Newton taught us all classical physics, and Linus Pauling was almost as vital to quantum chemistry. But when the former is all about alchemy and the latter is extolling the virtues of megadosing vitamin C, any further defense has to be very carefully framed. Yes, they’ve earned the right to be taken seriously. When we take a serious look and find that, nope, alchemy and vitamin C are still bunk but the “Wise Master” is still promoting them, further praise has to be tempered and qualified, and must not look like worship.

      • Adam says:

        I’d say you owe your debt to the people who invented the ideas you were exposed to, not the grand wizard of your favorite Internet forum who turned the ideas into Harry Potter fanfic so you’d be exposed to them.

      • Deiseach says:

        Okay, I understand loyalty, I can’t fault you on that. If that’s why you stick with the guy, then I salute you.

        At least it’s not on the basis of his Harry Potter fanfic 🙂

    • John Henry says:

      You (Scott) personally are one of the best active essayists I know of, and I would defend the idea that you’re the best active on the internet, period. There is very, very little you’ve written that I haven’t found to be somewhere between insightful and brilliant.

      Seconded. I know nothing about Yudkowsky, so I can’t speak to the rest of the post.

    • TrivialGravitas says:

      Even as somebody extremely critical of Yudkowsky: a megalomaniac nut, really?

      The guy has issues with not being nearly aware enough of how hard the problems he’s tackling are, and of how easy it is to look like you have the Right Solution, but that’s a criticism I can level at basically the entire Western world on the topic of critical thinking.

      • Anónimo says:

        Yes, a megalomaniac nut. He’s written things in the vein of “I might be the one to save the universe” enough times that that moniker’s not just hot air — and he acts like he believes it. Normal healthy people don’t think things like that, and even your typical inflated-head nerd outgrows it by age 17.

        If he’s a well man, he does a great job of hiding it online.

        • Nita says:

          Hey, what do you have against mentally unusual people? Sure, he’s a bit weird, but lots of people have found his essays engaging and useful. AND, get this: his plan of luring people into rationalism by writing a wacky Harry Potter fanfic TOTALLY WORKED. Furthermore, it seems like the topic of AI risk is actually becoming popular, too! How crazy is that? Maybe Eliezer is better adapted to the real world than us boring “sane” people.

          • TheAncientGeek says:

            I’ve noticed that raving egotists are good at starting things that other people are too apathetic or underconfident to start… but they tend to be bad at the things you need to keep something going, like alliance-building and PR.

    • 27chaos says:

      I’m like 60% on board with this, yes. I think Yudkowsky is fine as someone we hide in the back of our community, like a wise yet awkward uncle. He can teach us kung-fu zen rationality and we’ll ignore him if he says something a little too zany about quantum physics. But he’s not well suited to his current position of prominence. He should focus on research, not leadership. In fairness, he seems to basically do this currently. It’s just that the rest of the community doesn’t seem to be on board with this, and continues to center around him roughly as much today as it did several years ago. Swapping equilibria is hard, I guess.

      I think Julia Galef is perhaps the best person to represent rationalism. She doesn’t seem to produce all that much content, however, whereas Yudkowsky wrote an entire freaking Bible of rationality. So perhaps not. Her title as president of CFAR doesn’t seem to carry quite the same kind of Leader Status and respect Yudkowsky has, unfortunately. If she steps up her content creation game, hopefully this will change.

      Alternatively, Scott, you should take over. You are bald and slightly less pretty, but you produce a lot of good content, and I think your current community is better than LessWrong’s, with better norms. You’re literally a psychiatrist, which sounds like an excellent occupation for credibly talking about how normal everyday people are a little bit insane.

      (I am not a shill for anyone, although I know I might sound like one. Sorry. I’ve never even met a rationalist in real life, though. This is solely about my impressions over the internet. Being blunt about these sort of things is uncommon, but if anyone can pull off a reasoned yet blunt discussion on this topic it ought to be this community.)

    • Jaheira says:

      The way I think about it is that Yudkowsky has a really high charisma score. Some of us managed to make our saving throw, but a bunch of people, despite having high INT and WIS, just happened to miss it. We look at them and wonder what the heck is going on, but there’s a good reason for charisma to be on the character sheet: it can be really strong.

      What we need is to find some sort of remove charm object …

  47. beleester says:

    It’s an interesting theory, but I think there are other alternatives to “respond to each and every hater you see, because letting anyone through runs the risk of blackholing us.”

    As the saying goes, “if you meet one asshole, you met an asshole. If everyone you meet is an asshole, you’re the asshole.” It’s common for public opinion to turn against people because they’re getting into constant fights and sounding defensive, but it’s much less common for public opinion to turn against you because one vocal individual is making baseless claims. And even if it does happen, that people are turning against you for no good reason, it’s probably better to disengage from the current argument, where opinion is already solidified against you, and try to establish your not-a-crackpot credentials on firmer ground.

    If you’re getting a bunch of well-publicized blog posts about you, and you feel there’s some sort of pattern forming around you that needs to be addressed, you should probably speak up. But if it’s just one guy, responding will just stir things up more. I hadn’t even heard of Hallquist until you brought him up.

    Maybe Hallquist et al. seem like a bigger threat when you’re viewing them from inside a small community like this one. But there’s a vast gulf between “Blogger considers us crackpots” and “The entire general public sees us as such crackpots that nobody will take us seriously for a thousand years,” and you don’t want to respond to the former like it’s the latter.

  48. Rob says:

    I don’t really know of a third solution either, but I have to imagine that one exists. The two choices you present are both strategies that a hypothetical adversary would be quite happy for you to pursue, but it would seem incredible to believe that there is no available strategy that frustrates your adversary. If you believe that your adversary really does have you beaten and there’s no way out of it, then either a) you should reconcile yourself to this or b) consider that you may be mistaken about some aspect of the situation.

    If your adversary has superior numbers, or some reputational capital that enables them to attack you, the best way to defeat them is to turn these advantages into weaknesses. Larger groups are harder to hold together, and reputational capital is fragile. Every campaign for minority rights ever has involved a larger group deciding, ultimately, that it’s better to treat the smaller group fairly than to persecute them. Every authoritarian or religious system that ceased occupation or persecution did so because it became impossible to reconcile the values they needed to uphold in order to protect their reputation with the reality of their actions. If your adversary really is a madman in the game theory sense then attempts to split their coalition or appeal to their stated values will fail, but I don’t get the impression that this is true here. (If part of the adversary coalition is composed of madmen then you merely need to make this apparent in order to split the coalition).

    I’ve spent pretty much my entire online life reading the extremities of political opinion (I already know how the mainstream thinks, so there’s not much to learn there), and I have developed something of an instinct for understanding online conflict. It seems to me that your problem is that you care quite a lot about what this specific adversary group thinks, and this makes their criticisms more salient than they appear to others. I mean, most people don’t have an especially negative view of geeky white men – in fact, right now is a golden age for geeky white men in popular culture. What you perceive as an existential crisis for your group identity might, in fact, be an existential crisis for your personal self-image instead, brought about by a challenge from an adversary who you feel really ought to be on your side, or at least neutral, but isn’t. And this is the most frustrating kind of adversary possible! When you can’t think of a good reason why this adversary might oppose you, the only conclusion must be that they’re acting out of some kind of deep malice, of a kind that is quite unfamiliar in adult life. (Our definition of “being a grown-up about things” largely precludes obsessive malice towards other people).

    I can’t really take this any further without knowing the specifics, but I would presume that the way you win here is by gaining some better understanding of what motivates your adversary, so as to be able to separate those parts of the adversary group who are acting from genuine malice from those who are not. Consider that some of their grievances may be legitimate, but also that the cost of addressing those grievances will be much lower than the cost of prolonging the conflict, and that doing so would isolate those who have far less reasonable grievances. Most of all, you probably can’t win whilst feeling so uncomfortable about the whole thing – I think you’ve grokked this, hence the self-awareness about being defensive. This is not easy, and I can’t claim to have solved it entirely myself (I have some not-dissimilar difficulties with some not-dissimilar groups, but I think I’ve found a safe place to stand where I don’t need to compromise my own views and values whilst also not needing to get into conflict with the truly malicious). The idea that this needs to be a war to the death between two implacably opposed tribes is the very definition of a political mind-kill.

  49. brungl says:

    this might not be possible without fixing the problem first, but i think it’d really help to develop stronger ties with news outlets/corporations that identify as “data-driven”.

    scott, with his interests/aptitude for statistics and writing, would be a really good fit as an occasional contributor at 538 and the upshot (at the NYT), or maybe even vox. there’s a pretty strong overlap in the worldviews of EA/rationalist members and the people who run/read those outlets. engage with the writers for those sites on twitter on a regular basis, eventually convince them to let you write the occasional article, gain status (and exposure to a large audience) for yourself and the community.

    similarly, the EA movement seems like a natural fit for partnerships with companies like SAP, IBM, tableau, etc.
    those companies’ schticks are “we use data to help people make better decisions”, which is almost exactly what the EA movement is trying to do. a partnership with one of them would instantly add credibility (and $$$) to the movement.

    those things are easier said than done (and it’s possible/probable that people in EA/rationalism are trying them already) and maybe “get powerful people on your side” isn’t the most insightful advice but there ya go.

    • David Moss says:

      The weird thing is that EAs have written for Vox a farcically high number of times already, and Dylan Matthews himself has written so many pieces for Vox where he just repeats and approves of EA ideas wholesale (pretty much copy-pasted from some EA blog) that he was hitherto generally viewed as “the EA at Vox”.

  50. I’m surprised that you find talking with people who have already fallen into the black hole more congenial, or at least less likely to provoke status anxiety. I don’t think that squares with your experience of being partially tarred by your willingness to take neoreactionary ideas seriously.

  51. CJB says:

    So I think Lambert upthread hit on the crux of the issue:

    “For a movement like rationalism, having kind words, or at least having ideas accepted, is a (the?) primary objective. Money and power are much lesser considerations.”

    Do you want to be the Ideas Folks or the Money and Power folks?

    There’s actually a very successful, very good example of this transition- Christianity. Pre-Constantine, Christianity survived a lot of purges. One of the ways it did this was that Christians were really, really nice folks who mainly just wanted to talk about Jesus and pray. So, for example, one trick was to require all Roman citizens to sacrifice a pinch of incense to the Capitoline Jupiter. Obviously a good way to root out Christians. According to my Roman history professor, many Christians were so popular that their friends would literally physically drag them to the temple and force them to put incense on the altar. This is because Christians were no real competition to anyone.

    So the question is this- does Rationality, as a movement, want to just remain a group of people spreading the idea that you can use certain logical techniques to improve the quality of your thinking?

    There’s nothing wrong at all with that. It can be very useful to a lot of people. But it also means that you need to keep away from Money and Power.

    People go after other groups because they want Money and Power. Money and Power are finite resources. ALTRUISTIC money and power are even harder to come by, and large chunks are commanded by groups already pretty much beyond moral attack- Red Cross, hospital wings, churches, homeless shelters. So by creating a new, techy-preppy-modern-rationalist charity, you’ve entered a strong new contender for a fraction of a fraction of a fraction of the available money and power.

    No shit they’re coming for you. You’re literally taking food from a starving pack of animals.

    You want to have Money and Power because you want To Change The World? So does literally everyone else, from Presidential candidates down to hobos begging on the street.

    Should Rationality exist to play The Game of Thrones, or do you want Rationality to exist to write long articles about Logical Fallacies?

    Is EA correct, moral, good, useful, better, ideal- whatever? Should it win the GoT? Obviously, the worlds where Donald Trump is president are different from the ones where Hillary Clinton is, and those differ from the Scott Walker universes and so on. Is the universe where the Rational King sits upon his throne of (using what’s inside your) Skulls better than one where Rationality is more like the Amish?

    Is a world where Rationality or EA is a significant repository of power and money BETTER than ones where it’s not?

    If so, read Machiavelli, and Cialdini, and Trump and the Catholic Church. Get to work. Kill your enemies and bed their women. But no half-measures. Half-measures are deadly.

    You want to be treated like the harmless guys just talking about some ideas over coffee, but you also want Money and Power. You’re Ned Stark-ing, and anyone who is even a little better at the Game of Thrones is going to behead you- probably just to prove a point to someone else.

    Humans do politics. Humans who explicitly agree that “politics is the mindkiller” had their first conference ever develop a serious political scandal.

    My suggestion? Go into the Game of Thrones. Frankly, you’re going to end up there pretty soon anyway- as the Christians learned… even if you aren’t interested in power, power is interested in you.

    Is what you have worth fighting for, and evangelizing for? Then you’re gonna need to fight and evangelize. Or is it just a fun local subculture with some clever ideas, like fans of Kantian Metaphysics? Then you’re gonna need to stay out of fights.

    • Deiseach says:

      The EA movement needs to figure out pretty fast what it stands for. At the moment, there are three or four or five loosely-affiliated groupings, all with their own notion of what The Most Vital Issue is, and they’re starting to claw each other’s eyes out about who gets top billing.

      Is EA about rationality? Then emphasise that.

      Is it about AI risk? Then if you’re going to appeal to the mainstream, you have to SOUND REASONABLE. And that means DUMP THE QUADRILLIONS OF FUTURE POST-HUMANS IN AN INTERGALACTIC CIVILISATION talk, because that sounds nuts and cultish and like geeky nerdy losers.

      You want to push AI risk? Use Y2K as your go-to example. Write up something like a press release for somewhere like Fox News or some other media outlet (yes, yes, let us all ritually spit and curse but do it anyway; hit your hardest rivals). Start off with something like “Remember the big collapse of society in the year 2000? No? EXACTLY. That was because people were aware of the problem and worked to prevent it”.

      Go on from there and KEEP IT SMALL SCALE AND RELATABLE. If anyone challenges you “So you’re talking Skynet?” go “No, no, no, we’re talking something like hamster-level intelligence in about 20-50 years” (rats may be smarter, but people associate “rats = vermin”. Get and keep the idea of “small, cute, friendly” in the back of their minds). “Just as the Y2K problem was baked into the coding of the time and needed a lot of work to be overcome, we simply want to make people aware that we should be working on getting it right from the start”.

      Emphasise small but not disastrous problems: your AI running the home systems being so literal-minded that when you say “Turn the air conditioning on, I’m boiling”, not only does it crank it up to freezing cold levels, it calls an ambulance because you said you were boiling.

      “Meg (or it may be Greg or Sally or Joe), all we’re saying is, this is gonna happen soon, we’re gonna have really smart thinking machines that can think for themselves. But if we get them wrong, things are gonna go astray in ways we didn’t intend. What we’re trying to do is get people to think of how things can go wrong, and how we can do it right the first time round. An ounce of prevention is worth a pound of cure!”

      FOR THE LOVE OF THE COSMIC SPIRIT OF SAGAN, DO NOT TALK ABOUT EXISTENTIAL THREAT, THE SINGULARITY, GOD-LEVEL UNFRIENDLY AI OR ANYTHING ELSE THAT MAKES YOU SOUND LIKE CRACKPOTS AND NUTTERS. At least not out in public where the mundanes can hear you, keep that for amongst yourselves at the conferences, and confine it to specific panels.

      You’re going to have to start playing politics and paying attention to the rest of the people in the world out there, the ones you’re looking down your noses at for having sub-130 point IQs and being too dumb to understand and apply probability theory, because there’s a heck of a lot more of them out there than there are of you.

      You’re going to have to decide: what is the issue we as a movement are pushing here? What is our raison d’être and are we all on the same page about it? You cannot start ripping your movement asunder over arguments about whether animal rights or AI risk or other risk or maximising efficiency in charitable donations is the Most Important Issue, because all you will do is splinter into nothingness.

      You’re going to have to pick a cause and stick with it, and if “rationality” is the cause, then AI risk or the rest of them will have to take a back seat.

  52. Here is a thought for a “third way”.

    Find another community of individuals, similarly socio-politically powerful to your own, committed to the general (“meta-”?) principles of good faith rational debate, but with clearly different “object level” (if I understand correctly the way you use the term) (political?) beliefs. And invite them to a well-moderated(!), civilised, good faith dialogue, discussion, debate, with a sincere attempt to understand each other’s world view and see what you can agree on at the object level and what you can agree to disagree about.

    There are probably people like that somewhere. I’m one of them, I think. I’m well to the left of SA when it comes to my beliefs about the disconnect between the social ideals of justice and equality and the effectiveness of their delivery by the actual social institutions that have been constructed. However, I very much appreciate his commitment to reason when examining object level questions. (On the other hand, I somewhat suspect that a more thorough inquiry may result in the determination that the social research data are often inconclusive, so that an assumption that social institutions are probably unfairly biased is generally no less defensible than an assumption that they probably aren’t.)

    But you won’t hear much from me on the internet. I’m interested neither in arguing with rabid social conservatives who think that I am personally attacking them if I express concern for people I consider to be underprivileged and who are only too willing to dismiss me as a card carrying member of the loony left, nor in getting involved with SJWs who insist that my lack of interest in their pet social cause is evidence of an ethical deficiency. And I don’t even spend any time arguing here either, because I’m too busy taking my own advice and picking my battles, ’cause I don’t have so much interest in my politics that I have nothing better to do like, say, going to the beach or doing other important things like the washing up.

    So I don’t know whether or not I would contribute to such a constructive discussion.

    On the other hand, if you can manage to pull that off, there will be two positive outcomes. One would be a very striking win for reason and rational discourse (or alternatively a recognition of its limits!), and the second positive outcome would be a potentially powerful new coalition, a larger community with more moderate, but stronger, arguments than either community on its own (perhaps).

    And then you would go on to assimilate another intellectual community in the same way, again and again, until…

    There you go. New strategy. You’re welcome.

    I’ve done all the hard thinking work. You can go off and implement now.

    I’ll watch.

  53. Rhino says:

    Your id is correct to be defensive when people are attempting zero-sum encroachment on Schelling Points that are important to you or your tribe. Small encroachments can easily spiral into larger ones.

    One approach to dealing with encroachment is to have allies. The allies can distribute the rebutting and defending so that one person doesn’t have to do it. Having a coalition is especially important when your adversaries have their own coalitions. (And yes, adversary is the correct term in zero-sum games. This doesn’t mean that someone is your mortal enemy and you are trying to go to town on them; it just means that you have conflicting goals in a certain context. If you think it’s just a misunderstanding and they might not be acting in bad faith, then take a different approach.)

    If you are just one person and your adversary is a very prolific individual or a group, then of course it’s going to be an uphill battle.

    It’s important that your coalition have its own status hierarchy. Members of the coalition can lend each other status when individual members get attacked. They also need to not join an attack on the group or one of its members (which should go without saying, but there are lots of groups where members will join in attacks or engage in self-flagellation in order to gain status).

    In short: Stop trying to take on the world. Find or form a coalition of people with similar values who will defend each other.

  54. A good utilitarian principle is not to waste effort on trying to achieve the impossible, no matter how noble. While converting the entire world to rationality is no doubt a noble cause, recent musings have started to provide insight into just how impossible it probably is.

    Keynes said that the market can remain irrationally mispriced for a lot longer than the rational investor can remain solvent. Global socio-political space will remain irrationally dominated by ideological tribalism for a lot longer than any one individual can remain emotionally solvent. It’s a practical constraint and a real-world challenge, even though, in principle, it need not be.

    You’ve identified two extremes. On the one hand, one can tell the truth to everybody, with the result that you end up spending most of your time arguing to no effect with people who think that you are batshit crazy. Or on the other hand, shut down conversation with anyone who doesn’t already agree with you, and hello solipsism. The twin goals of developing a community and having a growing political impact are ineffectively served by both extremes.

    Somewhere in between you’ll find a “sweet spot” of political impact where you spend a lot of time reaching out to people who are not already dedicated footsoldiers of your rationality “movement”, nor so prejudiced and hostile that dedicated argument might result in conversion at a rate of less than one per lifetime. I think that’s another example of Aristotelian moderation. It’s also what is referred to in the advice to “pick your battles”. The unhappy reality is that this so-called “sweet spot” is not actually all that sweet. It’s probably still a lot of hard work and effort, and will have an impact that is, in the grand scheme of things, rather small. It’s “sweet” only in the sense of being the sweetest practically possible, and a far cry from the sweetest outcome in principle.

    It can also be seen as a learning problem. A good teacher won’t teach the students material that’s way above their heads; they simply aren’t developed sufficiently to take advantage, and will likely become discouraged or hostile. It’s similarly useless to teach material at the level the students are already at; there’s no progress there either. There is a moderate sweet spot, known by some as the “zone of proximal development” (Vygotsky). Schools of thought (eg “rationalism”) and political movements (eg SJM) have a mission to teach the world. Either extreme is poor strategy, and the right strategy isn’t straightforward. But that is expected – being an educator is hard work – some people dedicate their entire lifetime to it as a career choice, right?

    And there is furthermore the trade-off of time spent proselytising versus time spent developing the ideology. In academic terminology, between teaching and research. Every hour spent being defensive is an hour not spent developing a constructive practical theory of rationality (notwithstanding the fact that the lived experience of engaging in defensive discourse provides empirical data on the practical difficulties of implementing a theory of rationality, and therefore provides some pragmatic direction to the rationality theory research programme).

    • Thomas Brinsmead says:

      I think that I can summarise by saying something like – take consequentialism seriously.

      It is with both sincerity and just a little bit of tongue in cheek that I suggest Scott A could read this blog post
      http://raikoth.net/libertarian.html, especially section 12.4 on Consequentialism, and maybe 12.1 on Moral systems, which provides a good reminder that “truth telling” is one good amongst many (including “telling truths that are particularly interesting” and “disseminating truth as widely as possible” – two epistemic goods amongst many), as is “Not being considered batshit crazy”. “Having political impact” is another good. So is “developing a coherent theory of politics”.

      We live in a finite, limited world, and we can pursue epistemic goods, but in our contingent, limited universe, we must sacrifice some epistemic goods for others.

  55. magicman says:

    Definitely agree. There was a great link at Marginal Revolution, a year or so ago, to this guy who studied what he called political calibration. I can’t remember his name but he made a very similar point about the proliferation of uninformed opinion. The slogan he came up with was something like “If you have a strong opinion about a subject and you are not an expert then you have been (in LW parlance) mindkilled”. Of course you can also go too far in the other direction and become too passive towards experts.

    • Carl Shulman says:

      @magicman You’re thinking of Philip Tetlock.

    • TrivialGravitas says:

      How is there any middle ground at all allowed there? Expertise is a really high bar; most of the general population would have to stay passive on everything, since hardly anyone is an expert in even one topic.

      • Adam says:

        Properly calibrated opinions. ‘I’m no expert, but I estimate the likelihood ratio of proposition B to proposition A at 5:4.’ Not ‘I’m completely certain that B is correct and will argue to the death even against experts in the field who believe A.’
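
        (A rough numeric sketch, purely illustrative, of what the 5:4 odds above cash out to as a probability:)

          # Convert 5:4 odds in favor of proposition B over A into a probability for B.
          odds = 5 / 4
          p_b = odds / (1 + odds)   # = 5 / (5 + 4)
          print(round(p_b, 3))      # 0.556 -- a mild lean toward B, far from certainty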

  56. Zorgon says:

    The first thing that leapt to mind reading this was: How does this interact with meta-contrarianism?

    There are certainly a number of movements which have been punted so far into the black hole that it’s fairly unlikely that they’re going to become the Next Big Thing for intellectual hipsters, barring a major generational change – neo-Nazism leaps to mind. But what about the ones that orbit in the radiation shell at the event horizon? They seem to be really solid candidates for the next meta-contrarian cluster.

    This does go some way towards explaining the bemusement I felt at the sudden resurgence of moralistic censorship. I witnessed it lose influence in the public sphere during the 80s-00s, getting punted out into the same regions where creationism et al. lived, so watching the moralistic identity left begin to literally treat free speech as a joke was both surprising and deeply unsettling. What was worse was that they defended their abuses using much the same language and the same appeals to emotion the traditionalists had used.

    Maybe, then, this is the ideal long-term plan for any ideology and/or group that wishes to survive being punted: ensure that you do not get sucked fully into the black hole, whether by default (I think it is perhaps impossible for the urge to censor to vanish entirely) or through intentional seeding of the idea in as many hard-to-pursue parts of culture as possible. Retreat into assistant professorships and wait, basically.

  57. Darcey says:

    I haven’t read the comments, so maybe someone has said this already, but… well, two things.

    (1) You describe a self-reinforcing process, where when sane people start abandoning a movement, even more sane people will follow them out the door. The argument makes sense, and I agree that this happens… but it will happen less when the movement focuses on some higher purpose beyond mere social status. A lot of Christians would probably remain Christian even in a hostile environment, because they believe that their duty to God is more important than being liked and respected in the human world. So the “higher purpose” thing creates a sort of memetic immune defense against the kind of process you’re describing. Historical examples seem to support this: Jews have remained Jewish, even when they were looked down on, or even when it was physically dangerous to be Jewish. And Christians (at least in the early days) remained Christian even when they knew they’d be martyred. (Actually, the Bible contains additional memetic immune defenses: it tells stories of the many times in the past where the whole world has turned away from the faith, and then God has punished the wicked and rewarded the righteous. E.g. Noah remained faithful to God even when all the rest of the world was corrupt, and so, in the situation you describe, modern Christians might compare themselves to Noah, and decide to keep the faith even when it’s looked down upon in the eyes of society.)

    (2) There is indeed a third option — keep your ideology, don’t get defensive about it, and people might end up respecting you and your ideology more. Maybe it’s a form of countersignaling; you’re basically saying “I’m so sure that I’m right that I don’t even need to defend myself against your accusations.” Or maybe it’s just that… we tend to respect people whose principles outweigh their need for status. If someone says “Well, maybe everyone’s going to hate me for this, but I truly believe this is right, and I’m going to stick to my convictions (unless someone convinces me otherwise with actual rational argument and not just yelling)”, then I tend to gain respect for that person. One of the reasons people criticize Eliezer is that he gets too defensive about attacks on the rationalist community. I think that, if rationalists calmly accepted criticism, while sticking to their beliefs and continuing to debate and examine them, people would respect them more.

  58. Skaevola says:

    This post helped crystallize some of my own views on the Vox critique of EA. I still can’t shake the feeling that somehow it wasn’t written in good faith. My gut feeling is that instead of being an honest criticism of the movement, it was really an attempt to demonize Effective Altruism among people on the left who are interested in charity.

    For one, it wasn’t written for people in the movement; it was written for people who had never heard of EA before. And what’s the first thing these people learn about EA? Oh, it’s in the title: they’re a bunch of “Nerds”. Then EA is linked to the criticism from the left that the tech industry constantly receives: “At the moment, EA is very white, very male, and dominated by tech industry workers”. You hear various combinations of “white”, “male”, “autistic”, and “nerd” repeated 4 more times throughout the article.

    Why weren’t the substantive criticisms of the movement sufficient? Why did the author add the low status men shaming to the article? I have two hypotheses, and I don’t know which is more cynical. 1) It’s a way to get more page views by generating controversy. 2) The piece is really meant to drive a wedge between people on the left who are interested in charity (e.g. people who work for non-profits, political activists) and EA.

    Am I reading too much into this?

    • Earthly Knight says:

      No, I agree. I’m receptive to criticism of the AI-doomsaying, and thought the article made a number of reasonable points, but was turned off by the repeated underscoring of the whiteness, maleness, and autisticity of the crowd. These struck me as not-terribly-subtle dog-whistles designed both to draw clicks to what would otherwise be a dull item about a charity conference and to warn female and social-justicey readers away from the movement, like in ye olde racist days when every news report about a crime committed by a black man took great pains to make sure the reader understood just how black and how male he was (with the usual caveat about how these two situations are in no way comparable except in the respect just specified).

      • Zorgon says:

        Never ceases to amaze me that a group as dog-whistle-politics sensitive as the identity left still inexplicably thinks it can pretend that the constant references to “straight white cis males” etc are anything else.

        • TrivialGravitas says:

          It comes down to the ‘ism is privilege+power’ thing. It’s not that those aren’t dog whistles; it’s that they simply don’t care, because the whistles target the ‘dominant’ class (never mind the damage when it’s combined with gaslighting people who aren’t straight or cis).

          • Zorgon says:

            True enough, although you’d hope that at least some of them could spot such an obvious post-hoc rationalization. Damn the monkey-brain.

            (I might actually agree with them about the “power + privilege” thing if there was any serious and nuanced attempt at examining what ‘power’ and ‘privilege’ actually mean. But I suspect I’m about as likely to find that as I would be reading Scientology materials for solid critiques of psychiatry.)

    • It still baffles me that “autistic” can be a term of abuse for faux feminists.

      Have these people never heard of intersectionality? Ableism?

      • BBA says:

        In their social circles they almost never encounter people who really are autistic. They do encounter a lot of boorish nerds who use a self-diagnosis of Asperger syndrome as an excuse for their boorish behavior. (I’m describing myself in that category. But I’m trying to overcome my character flaws, dammit.) It’s nearly guaranteed that the only people who are offended are the people they’re trying to offend.

        • Ever An Anon says:

          To be honest, if anything the boors make high functioning autism seem more sympathetic.

          This is purely anecdotal, but I’m a non-aspie with 100% of my immediate male relatives on the spectrum (with professional diagnoses and everything), and people with Aspergers can be extremely insulting, often in novel ways. I love my father and brother to death but if they weren’t blood relatives I would have cut them off years ago. Even when you know it’s unintentional it’s very difficult to be around someone like that.

          I don’t think people should use “autistic” as an insult or bully people with Aspergers, but it’s certainly easy to see why it picked up such a strong pejorative meaning.

          • BBA says:

            The implication is that the boorish nerds aren’t really on the spectrum, or are mild enough cases that they can, with effort, engage normally in society. They just don’t want to put in the effort.

            I’d say this implication is partially true and partially SJ types equating failure to totally conform to the SJ agenda with malice.

      • Robert Liguori says:

        “When it’s us doing it to the bad guys, it’s different!” said every movement ever.

      • Echo says:

        It’s ok when you’re Ableisming UP.

  59. StataTheLeft says:

    I don’t think this always works, but I think quite often another solution is to thank your critic for the feedback, steelman them, and try to dig out any useful advice from the criticism. Most of the time there is at least some grain of truth to the criticism. If you can’t find it, ask the critic questions in a way that exhibits genuine curiosity. In my experience, people don’t like to destroy communities that successfully exhibit earnest interest in engaging and learning, and it’s much harder to make such communities look like “cults” or crackpots.

    This isn’t to say that a community should never point out where their critics are flat out wrong but I think at least 80% of the time that is neither the most strategic response nor the best epistemic attitude.

    I’m worried that the EA movement has largely responded to the recent increased critical attention by closing ranks and playing defense or focusing on the critics’ worst points instead of their best points. For example, I’d prefer responses to Dylan’s piece that spent 85% of the time trying to address the fact that Dylan (himself an EA) felt out of place at the community’s annual conference and 15% of the time countering his arguments against AI risk.

    I’ve noticed a similar dynamic with respect to responses to the Boston Review’s debate on EA. Much of the response seems to either say: 1) in theory, political activism (or other attempts at leveraged change) are part of EA if they are truly effective; or 2) some EAs do work on policy issues. I’ve seen far fewer responses that sound like:

    “EA’s critics might not quite be distinguishing when they’re talking about EA as an ethical system v. the current makeup of the EA movement, and might slightly overestimate the lack of focus on policy in the EA movement as it is today. But smart critics consistently are worrying that our focus is wrong. Even if their criticism isn’t quite phrased correctly, we should take a hard look at whether, when people say ‘EAs ignore policy’, they actually mean ‘EAs put too few resources into investigating policy.’”

    • TheAncientGeek says:

      Well, there were some things that needed saying. The framing of the OP was that all criticism is intended to wound and destroy, and the only thing to do is defend yourself against it. It ain’t necessarily so on either count. Of course a rationalist should try to get useful information out of criticisms… why would that even be in doubt? And not casually offending other groups is pretty helpful, too.

  60. Nyx says:

    In response to your main question, I do think there are interesting ways to play this game that don’t require you to have higher status than your opponent, or to set up status games stacked in your favor.

    Singer’s drowning child argument is one such case. It sets up a situation where you either have to refuse to engage, or you have to be an extreme effective altruist (or look like a monster). Most people aren’t utilitarians, and believe that Morality says “don’t do bad things, and do some good”. But they don’t think it means you should give away money whenever someone would get higher marginal utility from it.

    So what they should want to say is “it’s really really good to save the child, and I would do it, but it’s not morally obligatory”. But if they do this, some *saint* who comes along and says “you just think it’s a good thing, but I realize it is morally obligatory [because I’m a much better person than you are]” can give themselves higher moral status and make the non-obligatory person look like a monster (“you wouldn’t always save the child!!??!”).

    Therefore, people who want to be viewed as having high moral status have to say saving the drowning child is morally obligatory, not just really really good (even if there were no arguments put forward in favor of it being obligatory). From there, extreme effective altruism (donating everything that doesn’t contribute to you making more money/surviving) seems to be a direct consequence, at least in current circumstances where it costs <$5000 to save a life.

  61. Nyx says:

    I don’t think you can make it reputationally costly for people who are surrounded by people who don’t care about argumentation quality, and who have more power/status than you. Not sure if I have a solution, just don’t think the statement “make it reputationally costly” is accurate/possible.

    However, it’s useful to respond so that when third parties, who aren’t already invested in either side, look up your movement, they see that there are reasonable responses to criticism. And, importantly, they don’t see a bunch of angry criticisms with no response (if that’s what I see when I look into a movement, it creates a strong urge to run in the opposite direction [unless I don’t like the people doing the criticism, e.g. Slate]).

  62. This explains much of the reaction to attempts to turn Donald Trump toxic. It’s a preemptive strike.

    It might make sense for people who disagree with Donald Trump’s current opinions to praise him on other grounds instead of calling names. I recommend appointing Trump ambassador to the UN.

    • onyomi says:

      I feel like Trump is some sort of super-mutant antibiotic-resistant strain of bacteria in the meme-pool created as a result of political correctness and SJ shaming gone too far.

      Basically, SJ and political correctness have gone so far that they’ve created a dedicated contingent of, say, 10-20% of the voting population who like you more the more you refuse to apologize for offending people. Even if the remaining 85% of the voting population thinks he’s a troglodyte, having 10-20% of the voting population on your side is enough to put you in the lead for the nomination of a major party, especially when there are 16 other candidates in the race (which is why he probably won’t really win the nomination).
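
      (Rough arithmetic, assuming the rest of the field splits evenly: if 15% back the never-apologize candidate, the remaining 85% divided among 16 rivals averages 85/16 ≈ 5.3% each, so 15% is a commanding plurality.)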

      Though I don’t like Trump himself, and I worry a bit about the conflation of non-pc with just plain crude, I do think his relative success may signal that SJ superweapons have started to lose their efficacy.

      • Saul Degraw says:

        Why did SJ and political correctness create this? Perhaps this 10-20 percent always existed and is just a strident minority that doesn’t want to be a strident minority?

        https://www.washingtonpost.com/posteverything/wp/2015/08/10/meet-the-new-political-elites-same-as-the-old-political-elites/

        Perhaps the same is true for supporters of Bernie Sanders.

        People complain about PC but all I hear is people complaining about not being able to make vulgar and cruel jokes anymore. There is some (but not complete) truth behind the “distress of the privileged” idea that Scott mentioned earlier. There is always going to be a rump reactionary part of the population that dislikes any form of progress and sees everything as zero-sum.

        • onyomi says:

          I definitely think SJ has gone too far when people are losing their jobs for comments made in private, off work. I am also so tired of the phony apologies we have to endure every time some clothing CEO accidentally makes an insensitive comment about women’s thighs.

          I think there has definitely been some progress in that some genuinely nasty jokes and comments which would have once been acceptable are now rightly seen as nasty and hurtful, but surely you have seen examples of this going too far as well? What about the special room full of puppies Brown had to provide for students to play in as an alternative to listening to a speaker express controversial, potentially “triggering” views?

          On teaching evaluations today one is more likely to see a high ranking from the student who felt “comfortable” than from the student who felt “challenged.”

          • bartlebyshop says:

            > What about the special room full of puppies Brown had to provide for students to play in as an alternative to listening to a speaker express controversial, potentially “triggering” views?

            Are you talking about this (NYT warning)?

            I think providing a separate area for people to calm down after talking about something like rape, or your children dying of incurable brain cancers, or your friend killing themselves is fine. I guess I don’t see a substantive difference between providing a puppy-room and Brown U paying for tissue boxes in the Counselling Center. It’s a nice gesture. No one was forced to attend the debate, the puppies were there in case they chose to and found it too overwhelming. This feels like more of a case for the “pillow-lining of America” than “when overzealous SJWs attack.”

            If you want to get after people for censorious conduct it seems like this quote:

            Emma Hall, a junior, rape survivor and “sexual assault peer educator” who helped set up the room and worked in it during the debate, estimates that a couple of dozen people used it. At one point she went to the lecture hall — it was packed — but after a while, she had to return to the safe space. “I was feeling bombarded by a lot of viewpoints that really go against my dearly and closely held beliefs,” Ms. Hall said.

            is a better option. Brown let the “controversial” speaker say their piece and didn’t ban the talk. It seems like the Steven Salaita case, in other topical news, might be a better choice?

          • onyomi says:

            Actually, yes, the Brown example is not the best example of what I’m talking about, since at least the speaker was not uninvited. But there are many nastier cases.

            In general, I think one of the biggest problems is the general impoverishment of debate: there are too many things now which you just can’t talk about. Ironically, race is definitely one of them, and yet also the one on which we are constantly hectored for failing to have a conversation.

          • bartlebyshop says:

            Part of the reason I chose Salaita is it’s hard to call him a victim of SJWs but he was certainly persecuted for his speech & beliefs. There has been very serious push-back against the people at UIUC who engineered his un-hiring, at least.

            Is it true that we can’t have a conversation on race? Charles Murray still gets op-eds printed in the NYT and WSJ. The NYT had a Room for Debate recently about Sanctuary Cities that had 2 out of 4 writers against allowing cities to not enforce immigration laws. I think the tide is certainly turning against Murray and cohorts.

            Maybe I have a blinkered perspective here but I am a woman in a STEM grad program and men have argued with me about whether the gender gap is due to biological differences or culture (they were on the bio side). I asked them if they were afraid of hurting their career prospects by holding those beliefs and they said no (they could have been lying to me, I guess, presuming I was playing 2nd order SJW chess). Universities are supposed to be on the front line of the censorship war over this but I know of lots of profs & grad students who believe “non-PC” things and are unafraid and doing fine.

          • onyomi says:

            STEM fields are more right wing-ish in the American sense than the liberal arts (though admittedly that’s not saying much). I would certainly be afraid of expressing that view to most of my colleagues.

          • Saul Degraw says:

            Colleges and universities have had pets around just for easing stress during finals time; this is nothing new. My alma mater did stuff to take some pressure off during finals time, and I went to college in the 1990s and early aughts. So I don’t see why it is different for other stressful situations. I am personally not a fan of the trigger warning, but I do think there are forces that think things should be hard and cruel, and that is completely unnecessary.

            I don’t quite remember all the finals stuff that my alma mater did except having hot sundae night in the dorms but there was more. Do you think it was coddling to do this for the students or was it a nice gesture?

          • Saul Degraw says:

            @bartlebyshop

            As far as I can tell, liberals and the left were generally on Salaita’s side. The right-wing was angry at Illinois for hiring him because of his tweets.

            I admit that I am kind of perplexed at Twitter and why anyone would use it, because as a medium it seems prone to bringing out people’s knee-jerk and most hyperbolic qualities when they are tweeting.

          • Kojak says:

            “I admit that I am kind of perplexed at Twitter and why anyone would use it, because as a medium it seems prone to bringing out people’s knee-jerk and most hyperbolic qualities when they are tweeting.”

            I think you just answered your own (implied) question.

        • Eggo says:

          When you’re targeting and hurting people to take what you want from them, do you get to be surprised when they consider it a zero-sum game?

        • HlynkaCG says:

          >Why did SJ and political correctness create this?

          Because the more you accuse people of racism, the less value “racist” has as a categorization tool. SJWs say that Trump is horrifically racist. *yawn* so basically he’s like every other white male ever.

        • The original Mr. X says:

        > People complain about PC but all I hear is people complaining about not being able to make vulgar and cruel jokes anymore.

          Brendan Eich, #shirtstorm, Sir Tim Hunt, Memories Pizza… It’s not too hard to think of examples of SJ going too far.

      • TrivialGravitas says:

        I don’t think SJ superweapons were ever actually effective at targeting right-wing support bases. If anything they seem super effective here: Trump actually seems to be more moderate than the other candidates once you look past him being a massive asshole, yet he’s held up as the worst-case scenario to win. But compare the “defund Planned Parenthood” approach of Bush III to Trump’s “we find a way to either stop them from performing abortions or to replace them with somebody else because their other work is important”. Compare “build a giant wall to stop illegal immigration” to the “arrest everybody brown with no ID” shit Arizona tried to pull.

        • Saul Degraw says:

          I think Trump is moderate only because his policies seem to be a mish-mash of things. He takes a bit from column A and a bit from column B. Vox (yes Vox) had a good essay about this:

          http://www.vox.com/2015/8/15/9159117/donald-trump-moderate

          But voters who aren’t as interested in politics and who don’t attach themselves to a party push the ideas they actually like, irrespective of whether they’re popular or could attract 60 votes in the Senate or would be laughed at by policy experts. Those ideas are often pretty extreme, but because they fall both on the left and the right, pollsters often mistake them for moderates.

          The way it works, explains David Broockman, a political scientist at the University of California at Berkeley, is that a pollster will ask people for their position on a wide range of issues: marijuana legalization, the war in Iraq, universal health care, gay marriage, taxes, climate change, and so on. The answers will then be coded as to whether they’re left or right. People who have a mix of answers on the left and the right average out to the middle — and so they’re labeled as moderate.

          But when you drill down into those individual answers you find a lot of opinions that are far from the political center. “A lot of people say we should have a universal health-care system run by the state like the British,” Broockman told me in July 2014. “A lot of people say we should deport all undocumented immigrants immediately with no due process.”
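
          (A toy illustration of the averaging effect Broockman describes; the scoring here is hypothetical, not his actual coding scheme:)

            # Code each answer as -1 (left) or +1 (right), then average per respondent.
            answers = [
                -1,  # wants a state-run universal health-care system (far left)
                +1,  # wants immediate deportation with no due process (far right)
            ]
            mean_score = sum(answers) / len(answers)
            print(mean_score)  # 0.0 -- two extreme views average out to "moderate"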

          • onyomi says:

            This (along with Trump’s popularity in general) just proves that the electorate (of both tribes) is stupid, and should be grateful it’s not getting the government it wants. This is why I’m super skeptical of efforts such as Lawrence Lessig’s which target money in politics as the root of all evil. Yeah, maybe money in politics is preventing “the people” from getting what they want all the time, but “the people” as a group are just a mass of stupid feel-good biases: anti-foreigner bias, mercantilism, “throw all the bums out, except not my bum,” “cut spending for everything except any specific program you care to name,” etc. etc.

            Not that I blame them: being well informed about politics is not in their interest. But policy by people who have no incentive to get it right is not a good recipe (not that I think the politicians are capable of doing it well either, due to Hayek’s knowledge problem).

      • FacelessCraven says:

        @Onyomi – “I feel like Trump is some sort of super-mutant antibiotic-resistant strain of bacteria in the meme-pool created as a result of political correctness and SJ shaming gone too far.”

        This idea may be plausible, and is certainly fun for me to contemplate. I think, however, when you are assigning blame for the red-tribe response to an egregious red-tribe leader to the actions of a section of the blue tribe…

        …It seems to me that stuff like this is probably at least a little of what people mean when they talk about how this place leans heavily to the right, or at least anti-left. Food for thought.

        I think you’re right that the Superweapons are running out of steam. I think the crazy stuff that has made Social Justice such a controversial label over the past few years is probably around its high-water mark, and it’ll be rolling back from here on out.

        • onyomi says:

          It’s interesting: I have on my desk a pen with a picture of a woman wearing a bikini, which, if you tilt it in the right way, causes the bikini to slowly disappear. When I was a kid one would have been embarrassed (adults included), perhaps to be seen with this pen because it’s racy. Now that we are sexually liberated, one can still be embarrassed by this pen, but because it’s sexist, objectifying, etc. Coincidence?

        • Saul Degraw says:

          @faceless

          I concur. I think Trump is the natural product of decades worth of red-meat throwing and dog whistles. It is sort of ironic to see people like Erik Erickson and Jonah Goldberg freak out and panic about Trump’s popularity and call him vulgar and uncivil. Erik son of Erik called Justice Souter a “goat-fucking child molester,” and now he is freaked out about the Trumpster taking all the energy from the room. This seems to be the chickens coming home to roost.

          But team red will just blame team blue because it makes them feel better.

          • onyomi says:

            Why can’t it be both?

          • HeelBearCub says:

            @onyomi:
            Why can’t it be neither?

            Put another way, populist rhetoric, from the left or the right, has always been outraged and outrageous.

          • onyomi says:

            I think there’s definitely something qualitatively different about Trump. Never before have we had a celebrity gain such traction for the presidency (perhaps Jesse Ventura and Arnold would be earlier examples, though only Ventura, like Trump, ran on basically being an anti-establishment loudmouth), or seen a major political figure so utterly resilient to scandal (though I think his unwillingness to back down or apologize is, in many ways, a good strategy which hardly anyone has dared employ to this point).

          • Saul Degraw says:

            @onyomi

            I think Trump’s unapologetic vulgarity is probably a good part of his appeal for many. What I get from Trump is a vibe of “This is how a billionaire should live! Not like some Bill Gates nerd in rumpled shirts with wonky non-profits. Let’s goldplate everything.”

            http://www.slate.com/articles/news_and_politics/politics/2015/08/donald_trump_s_alpha_male_appeal_what_explains_the_loutish_billionaire_s.html

            “The second form of resentment Trump channels concerns social class. The last effective exponent of this political undercurrent was Sarah Palin, before she quit her job as governor of Alaska to make money as a celebrity. What is curious about the Palin-Trump style of cultural populism is that it has little to do with the growing economic disparities in American life. The resented betters are social elites in the media, politics, and Hollywood, whom less educated, lower-middle-class Americans regard as behaving in a condescending or hypocritical way.

            By contrast, Trump’s relationship to his own wealth conveys an honesty that his followers say they like. Though he built his empire out of his father’s empire, he has never suffered from the sense of decorum or noblesse oblige that sometimes accompany inherited money. His style isn’t even nouveau riche so much as it is last-week-lottery-winner. To Trump, being a billionaire means plating everything in gold and slapping his name everywhere in huge block letters. It means that he gets to say whatever pops into his head and never has to say he is sorry. His celebrity “brand” is an alpha-male fantasy of wealth and power, revolving around the pleasure he takes in firing and suing people who displease him. He is the only 69-year-old white guy in America who gets to live like a rap star.”

          • HlynkaCG says:

            @ onyomi

            “…or seen a major political figure so utterly resilient to scandal”

            Have you heard of the Clintons?

    • Murphy says:

      They have to try for that? When I saw his Twitter I thought it was an obvious joke account that people were retweeting, until I found it linked from his real website.

      Seriously:

      https://twitter.com/realdonaldtrump/status/449525268529815552?lang=en

      It’s wonderful!

  63. Toggle says:

    In politics, there’s a distinction to be made between a candidate with low favorability and low name recognition, and a candidate with low favorability and high name recognition. The unknown’s numbers are often referred to as ‘soft’, because they’re subject to rapid change as more people find out about the candidate. At the moment, rationalism is basically unknown, with soft numbers. And since rationalism is evangelical, and since it has enemies- the game is afoot. The goal is to make a *very good* first impression for as many people as possible, while simultaneously limiting ‘their’ ability to introduce rationalism in a bad light.

    It seems like your id is noticing a lot of people suddenly going from ‘rationalism? Never heard of it’ to ‘aren’t those the people that want to spend all the AIDS research money on a new Terminator movie?’ But ids are dumb, and you’re naturally going to notice/fixate on the small number of people that don’t like you rather than the much, much larger number of people that don’t know about you at all. If you want to avoid contraction, you need to make sure that you’re not spending all your energy and goodwill reducing ‘their’ effectiveness to some still-not-zero amount of success.

    Now, if you’re an opponent of rationalist thought-viruses spreading around in their memely fashion, you don’t vaccinate people at random. It does no good to show up at a Methodist church in Ann Arbor waving a picket sign about how Eliezer is totally ignoring the experts’ lack of consensus on Quantum Theory interpretations. The people making fun of rationalism are going to specifically target (and must specifically target) those that are most at risk of agreeing with rationalists. This means that as a ‘new idea’, rationalists themselves have something of an advantage- people that want to make rationalists look bad can only be really effective in a social environment where their audience has a reason to click on an article dissing rationalism, but rationalists can find an airport, land in fresh territory, and start making a good impression while controlling the terms of the engagement every time.

    I’ll bet you could usefully frame both HPMoR and EA in these terms- unexpected vectors for the ideas to spread (on fanfic sites and charities, respectively). There are a great many other opportunities, because we’re still in a world where almost everybody has never heard of rationalism. And until people know enough about it to care when their Facebook friend makes fun of the XRisk people, rationalists will largely get to decide where, when, and how their core message is introduced.

    The defensive game still has a place, of course. And as a larger fraction of the universe develops an opinion about rationalism, then sustaining a positive reputation slowly becomes a zero-sum game. But I don’t think we’re actually anywhere close to that point yet- whatever the attrition rate on rationalism’s ‘air support’, it’s probably much more efficient to evangelize new members than it is to shore up the weak borders.

    All of which is a very long way of saying that I think the best work you’ve done for the rationalist community is to be a positive example and role model. I’d rather show ‘I can Tolerate Anything… Except the Outgroup’ to twenty new people, than show your defensive posts to the three people I know who have negative opinions about LW.

  64. Tracy W says:

    One thing I’m working on is dismissive responses to attacks. Apparently there’s a rule in politics: if you’re explaining, you’re losing. So, I’m trying to implement that. If someone strawmans me with some loathsome ideas, I attack the ideas as if my opponent were advocating them (although, for the sake of honesty, I use words like “you say” instead of “you believe”).
    If they accuse me of a lack of reading comprehension, I note that if I believed what they’re telling me then reading incomprehension would be the least of my problems.

    Basically put them in the position of being the person trying to explain that they’re not actually a creationist and a totalitarian cultist.

  65. Saul Degraw says:

    I am from New York (I moved to SF when I was 27) and am very used to people saying stuff about how gruff, unfriendly, and mean New Yorkers are.

    One time a friend of mine independently made an observation that it isn’t that New Yorkers are unfriendly; the issue is that you have a lot of different groups living in very close proximity to each other. All of these groups have very different worldviews and attitudes about what is the proper way to do things. The result of this is a kind of silent detente to avoid explosions from happening. My brother lives in Williamsburg, which has substantial populations of Hassidic Jews, Latino(a) immigrants (they do funeral parades down the street on Good Friday!), Eastern European Orthodox Christians, 20-something artsy hipster types, and 30-40 something upper-middle class professionals. Most of the businesses now cater to hipsters and upper-middle class professionals, but all the groups need to share space and resources and overlap. The Hassidic are largely in their own world but you see them about frequently and they do own some local businesses. There was an issue a few years ago where the Haredi were upset at the “immodest dress” of the hipster women and the hipster women rightfully got offended.

    So you have a lot of people who really really disagree with each other and I wonder what to do about that. I see what you are saying. I would like to think that there can be reasoned and respectful debate between liberals, conservatives, and libertarians but it requires admitting to faults in the system that you believe in. I am not sure why someone who is black, latino(a), Jewish, gay, etc should be required to listen to someone from the Occidental Review with respect or someone who would say that they think I am burning in hell forever because I don’t accept Jesus as my messiah. I can be respectful if someone is trying to tell me about how Christianity is good and great and have done so but I draw the line at being respectful to someone who says I am going to burn in hell for being Jewish.

    No system is perfect, and it could be that neo-reactionaries have some good critiques of democracy, but the problem is that their overall solution is worse and very unattractive for most people. Plus they don’t tend to be aware enough to realize what is wrong and off-putting about statements like “The problem with democracy is that the right people rarely get elected. The right people are people like me who openly talk about removing civil liberties for vast swaths of the population and I can prove it by quoting from some dusty and obscure books that no one but me has read for centuries!!!!!”

    • Publius Varinius says:

      “I can be respectful if someone is trying to tell me about how Christianity is good and great and have done so but I draw the line at being respectful to someone who says I am going to burn in hell for being Jewish”

      It sounds to me like you have trouble with putting yourself into someone else’s shoes, with imagining the thoughts and feelings of others.

      • Saul Degraw says:

        Not really.

      • Saul Degraw says:

        So I should be grateful for people who tell me that I am going to burn in hell because I don’t believe that Jesus is the messiah? I get that on a fundamental level they are trying to “save” everyone but it is still very offensive. If Jesus is supposed to be all loving and all compassionate, how can he be so petty as to punish people with eternal torture for not believing in him as the messiah?

        • Toggle says:

          But in the universe where Jesus is an all-loving deity who will punish you with eternal torture for not believing in him as the messiah, why on earth would you be offended that someone is trying to get you to believe in him as the messiah?

        • Publius Varinius says:

          Well, that’s why it is a difficult problem. Phrases like “admitting to faults in the system that you believe in” do suggest that you fail to consider things from the perspective of the people who offend you, though.

          > If Jesus is supposed to be all loving and all compassionate, how can he be so petty as to punish people with eternal torture for not believing in him as the messiah?

          I have no idea either. Should we ask Deiseach? 🙂

          edit: removed some superfluous parts

          • Saul Degraw says:

            I never said it was easy. It might even be impossible for the overwhelming majority.

          • Saul Degraw says:

            Another thing I would say is that most people are not ideological; generally their beliefs are a mishmash of things, and very little effort seems to be put into making them a consistent ideology.

            However, even the most ideological people seem to have issues where their ideals suffer a disconnect and they can’t square the circle. A lot of Tea Party and Blue-Collar Republicans like to complain about how labor and hard work are not respected anymore. They would probably be horrified if you told them that they were quoting Marx’s Labor theory of value.

            Erik Loomis at Lawyers, Guns, and Money is extremely pro-Union (as in trade union) and anti-police brutality. He gets very angry and strident whenever somebody suggests that the power of the police unions is part of the issue of police brutality. Even a suggestion of “What if police unions were able to collectively bargain for wages and other econ benefits but not for being fired for misconduct?” causes him to double down, because it is a limit on union power. Yet he can’t bring himself to say that he values trade union rights > the right to be free from police brutality. Nor can he admit that unions are the problem.

            I admit that even as a fellow traveler, it is easy for me to see these contradictions because I am not as strident as Erik Loomis on union power even though I am friendly and supportive of unions in general.

          • HeelBearCub says:

            I admit that even as a fellow traveler, it is easy for me to see these contradictions because I am not as strident as Erik Loomis on union power even though I am friendly and supportive of unions in general.

            Put another way, everything has failure modes.

          • CJB says:

            ” A lot of Tea Party and Blue-Collar Republicans like to complain about how labor and hard work are not respected anymore. They would probably be horrified if you told them that they were quoting Marx’s Labor theory of value.”

            Pet peeve here –

            Arguments of the form “They would be HORRIFIED!!” always irk me. Like – you think they don’t know, at least in rough outline, that communism claims to be All About The Worker?

            I think most of them, if directly confronted with the Squamous Horror of Marx, would say something like “Yeah, but I don’t mean some big gubment type tellin’ me what type of plumbing is ok!”

            They may not have super-well articulated arguments beyond that, but they would’ve hit on the fundamental split between your “gotcha” argument and their point of view.

            Seriously – do people who make these arguments that (GROUP) would be horrified by (ARGUMENT) not know that we’ve got near-universal literacy, and most people have Google in their pocket… and use it?

        • Montfort says:

          I’ve been told a few times that I would go to hell if I didn’t shape up and do whatever, but I personally never found it offensive at all (nor did I find it persuasive, but that’s another matter).

          I’m not saying my response is more correct, I’m just very curious what you find offensive about it. Is it the way they say it? The very fact of the belief? That they think it could persuade you?

          Would you also find it offensive if people told you the US government was secretly marking all those who refused sushi for death, so would you please, please eat at least one california roll a week?

        • John Henry says:

          If Jesus is supposed to be all loving and all compassionate, how can he be so petty as to punish people with eternal torture for not believing in him as the messiah?

          Catholics call that the sola fide heresy, and it’s pretty well refuted by a glance at Matthew 25.

    • nydwracu says:

      Plus they don’t tend to be aware enough to realize what is wrong and off-putting about statements like “The problem with democracy is that the right people rarely get elected. The right people are people like me who openly talk about removing civil liberties for vast swaths of the population and I can prove it by quoting from some dusty and obscure books…”

      How is this not also done by ~respectable~ progressives?

      • Saul Degraw says:

        It is done by people all over the political spectrum. Where did I claim otherwise?

        • nydwracu says:

          Unless the offputting part is that the books are old, statements of that form can’t be inherently offputting to a general audience, because progressives can get away with them with no problem. The only people who find them offputting, as far as I’ve seen, are the people who are already off the reservation, and they mostly exist on the internet.

          • HeelBearCub says:

            @nydwracu:
            There are certainly some progressives who would say in frustration that “only the right people should be able to be elected/vote/whatever.” But that is in no way a defining characteristic the way it is for NRx who want to do away with voting altogether.

      • Saul Degraw says:

        To be less glib, I think of politics more as a circle than a line. The far left and far right have more in common than they want to admit. Both are utopian, both tend to also be rural or ruralish and disliking of consumerism and materialism. The distinctions are more in the details. A left-wing utopian commune is more hippie, and the right-wing utopian small village bears a striking resemblance to Tolkien’s Shire. Both are the products of fantasy more than reality.

        I don’t consider myself on the far left, but I am a firm liberal. I believe that there is a role for government to balance against the worst savageries of capitalism, and I don’t think that capitalism cures all. On the other hand, I do believe that the profit motive is important, and I am not rural by nature but an “urbanly inclined” person, as a friend said.

        • nydwracu says:

          The far left and far right have more in common than they want to admit … both tend to also be rural or ruralish and disliking of consumerism and materialism.

          I take it that you don’t live in America.

          • Hari Seldon says:

            I believe that is as true in the United States as it is anywhere. It is just harder to see when you are especially invested in one side; it’s much easier to see from the outside looking in. From a libertarian standpoint, the far right and far left share the common goal of extending government power over the lives of citizens.

            Right and left differ somewhat on the controls they want to implement, but the specifics are bound to change every couple of years anyway.

          • Saal says:

            @Hari I think nydwracu was disagreeing specifically with your claim that the far left/far right are characterized by ruralism and a distaste for materialism/consumerism, rather than addressing your horseshoe/circle idea as a whole.

            So, eg, in America, the “Far Right” (libertarian and/or post-libertarian (NRx)) is associated with:

            -Bitcoin
            -Crypto in general
            -Seasteading
            -Hi tech city-states in general
            -3D printing (guns, other stuff banned by govt)
            -Silicon Valley (there’s your urban) and tech city strongholds in general

            And the “Far Left” is associated with:
            -Bitcoin
            -Crypto in general
            -Social-media driven urban protesting/direct action, e.g. Occupy, rather than yesteryear’s communes in the wilderness
            -3D printing (post-scarcity economics)

            I don’t know if the “Far Left” has a perceived hometown like the “Far Right” does. You still get volkish rightists and agrarian leftists, but the fringes have (wisely) mostly kept up with the times.

          • Saul Degraw says:

            No, I do. I consider myself loyal to the Democratic Party, if more on the liberal wing than the neo-liberal wing. Interestingly, this has gotten me called an extremist, a communist, and a right-wing nothing, depending on who you ask. There is a section of the left (I always liked Chait’s distinction that liberal and left-wing are not the same thing) that has never been part of the Democratic Party and never wanted to be. The leftists I know have a utopian-anarchist streak: they would abolish laws (I would not) and seemingly think we were meant to live in small agricultural communities/communes of a kind of primitive variety. I disagree with this notion, but in their own way they remind me of Dreher saying to hell with modern society and going for the Benedict Option, as he calls it.

            I am an absolute anti-utopian under the general observation that one person’s utopia is another person’s dystopia. I think utopianism fosters the idea of THE GOOD LIFE (one size fits all). I believe in the concept of a good life. So I am deeply distrustful of utopians.

            That being said, I don’t think it is possible in a modern globalized economy for most people to be self-sufficient yeomen, nor do I think it is desirable for this to be the case. I believe in the concept of a welfare state to protect people from boom-and-bust economics and other slings and arrows of outrageous fortune. I do think it is possible, and indeed common, for people to lose jobs for things beyond their control, as evidenced by the Great Depression and Recession. I don’t believe in Austerity, which seems to mean that the poor and middle class need to suffer for the excesses and mistakes of the rich.

          • Troy Rex says:

            @Saal – those are interesting left/right pairings! I agree those are areas where lefties and libertarians overlap.
            When I think of Far Left and Far Right, I tend to think of hippies or social justice warriors, and then of religious fundamentalists. Some hippies and many fundies live rural lifestyles; many SJWs and fundies disapprove strongly of pornography.

          • Saul Degraw says:

            @troy

            I think the Andrea Dworkin wing of the left is more dead than alive.

          • HeelBearCub says:

            @Saul Degraw:
            I don’t know, I see plenty of friction between “pro-sex” and “non pro-sex” feminism. Given how deeply the Puritan streak runs in America (as opposed to other Western countries), I don’t find this surprising.

            Even the “pro-sex” wing of feminism I think has trouble avoiding argument from a Puritan perspective. But all that is just my opinion, I wouldn’t call myself deeply versed.

    • brad says:

      To defend NYers, I think we do a pretty good job of getting along, and what outsiders don’t like about us is actually a successful get-along technique. No, we might not smile and start conversations with people on the street, on line at the supermarket, in coffee shops, and Zeus knows where else (went to school in the South, if you couldn’t tell) – because if we did, we might get into an argument about whether or not Jesus is the messiah. On the other hand, I see NYers helping tourists with directions when they ask all the time. I don’t see what’s wrong with the solution we have in place to the problems you document.

  66. Shmi Nux says:

    The difference between Eliezer’s and your description of evaporative cooling is that his is free exposure to the outside, while yours is squeezing the piston.

  67. TheAncientGeek says:

    I certainly know how it feels to have the urge to defend something, as I have defended mainstream philosophy on some of the occasions when LessWrong has attacked it. Stemming from that, my third way would be something like “if you don’t like fights, don’t start them”.

  69. LTP says:

    I feel that my comments on this blog, in their own tiny, microscopic, and barely significant way, might have slid into a version of the EA/rationalist bingo from time to time. So, I apologize for that if they have.

    I think this is a very interesting post. I do think that while some movements/ideas have undeservedly been pushed past the event horizon by the mechanism you describe, there are also movements/ideas that have deserved to fall past the event horizon, and some that haven’t but probably do deserve it.*

    I think there are ideas/movements where the “loonies” really are the core of the group, and represent the logical conclusions of the group’s ideas. The respectable and moderate people in these groups don’t realize that the logical conclusions of their, at first blush, more moderate beliefs and rhetoric are actually pretty much the same as the loonies’ ideas. Or, similarly, even the moderate and respectable people are advocating for ideas that are false and possibly dangerous, if less so than the loonies’ version, but the respectable moderates’ rhetoric and arguments have a veneer of reasonableness, which makes them seductive and misleading. Or, maybe the movement’s leaders were always the loonies themselves, and the moderates are on the fringes (and probably should just make a new movement for themselves). Finally, there are also straight-up dishonest motte-and-bailey tricks, where there are no real moderates, just evangelists intentionally dressing up their loonie ideas in moderate language and tone and leaving out the “weird” or “unreasonable” bits for later (see Scientology recruitment).

    I think that in those cases, this process you describe can be a beneficial way to prevent bad ideas or movements that are good at having a veneer of plausibility to many people (but are actually false and possibly dangerous) from spreading. Mocking them and playing bingo with their ideas is an effective way to contain bad ideas. In the subcultures I frequent, for example, homeopathy and astrology are past the event horizon, and I think this is a good thing. At some point, for some bad ideas and movements, continually being charitable and engaging them in reasonable debates is actually a waste of time and intellectual energy that could be better spent elsewhere, and probably makes these bad ideas stronger by legitimizing them. Nothing good could come from Scott making a long, charitable, and reasoned post refuting flat-earthers point-by-point, for instance.

    Of course, then the question is identifying which groups deserve this fate and which don’t, but at that point you just have to venture into the object-level and do a case-by-case look at them.

    *To be clear, I don’t think rationalists or EAs, specifically, deserve to have that happen to them, even though I disagree with them sometimes.

    • suntzuanime says:

      The problem is that these sorts of social dynamics interfere with case-by-case object-level looks. I think about persuasive technique in terms of “Light Side” versus “Dark Side” persuasion, where the difference is that Light Side persuasion is easier to use to persuade someone of something if it’s true, whereas Dark Side persuasion works without much distinction as to the truth of the underlying claims. The sort of snowballing toxicity cascades this post describes seem like the Dark Side to me, you can do it to anybody if you have a big enough megaphone.

      • TrivialGravitas says:

        Are there any light side techniques that actually work in public discourse?

        • suntzuanime says:

          Sure! The spectacular public demonstration is always a good one. If you want to persuade people of your military might, blowing up the moon is pretty effective.

          I’ve mostly been thinking in terms of one-on-one persuasion rather than propaganda. I think the main thing keeping propaganda honest is the fear of being discovered to be lying, which means that any fact-based propaganda probably qualifies as “Light Side”.

          • TrivialGravitas says:

            One on one works too. My experience in general is that, unless you’re arguing with rationalists or scientists on topics of relative familiarity, light side techniques don’t work.

            That is, I’m confident I could convince a doctor that medical views of weightlifting’s effects on metabolic health are wrong by bringing up research showing how previous research didn’t understand weightlifting well enough to distinguish between good and bad programs. It’s a light side technique because the evidence could go the other way (and is even blessedly free of experimenter effects, since the studies that used good lifting didn’t actually KNOW they were using good lifting, it just sorta happened). But if I try to argue science with the general public, I get a hell of a lot of ‘that sample size is too small, no I never took a class on statistics but my gut tells me that’, and people end up prone to either always trusting what Important Medical Body says or never trusting what Important Medical Body says.

          • suntzuanime says:

            “Light Side” doesn’t mean “stupid”. Yes, if you line up your arguments like soldiers and march them straight into the emplaced machine guns of cognitive dissonance you’re rarely going to get very far. But there are more sophisticated persuasive techniques that still work better for truth than for lies.

            The key to a lot of Light Side persuasion, I think, is to trick your interlocutor into taking your ideas seriously before they realize what they are, so that they give a fair airing to arguments they’d rather reject out of hand. For example, I think a lot of obscurantism works this way; if you have to struggle to figure out what I’m saying, then you’ve already spent a lot of time thinking about my argument by the time you realize that it’s something you’re supposed to knee-jerk oppose. The Last Psychiatrist blog had a really cool technique where he would present a facially reasonable argument and set his reader up to agree with it, but it would actually contain fatal flaws, which he would then proceed to explain, mocking the reader for nodding along with such a ridiculous idea. Basically, building up a proof by contradiction of some unpalatable idea but concealing the fact that it’s going to be contradicted until the very last second. These are a couple of examples, but they rely on the interlocutor being at least thoughtful and willing to consider nuance in some scenarios, so they’re not really suitable for propaganda purposes.

    • JDG1980 says:

      I think there are ideas/movements where the “loonies” really are the core of the group, and represent the logical conclusions of the group’s ideas. The respectable and moderate people in these groups don’t realize that the logical conclusions of their, at first blush, more moderate beliefs and rhetoric are actually pretty much the same as the loonies’ ideas.

      I have a real problem with condemning people (or even movements) based on “logical conclusion” arguments. The fact is that most people aren’t systematic thinkers, and have neither the ability nor the desire to take their beliefs to their “logical conclusion”. I’m not sure that a logically consistent morality is even possible; most attempts at one violate human nature by leading to conclusions that the vast majority of people find intuitively repugnant.

  70. BBA says:

    Obligatory link: I’m not being defensive! YOU’RE the one who’s being defensive!

    Freddie deBoer has a few posts on this from the opposite side of the table, begging his fellow leftists to stop piling on people for trivially minor offenses, as it just makes the movement look like a bunch of exclusionary bullies. Of course deBoer has a history of criticizing his fellow leftists which makes it easy for them to dismiss him as just another misogynist dudebro etc.

  71. Sam says:

    Take some advice from the movement that you cited, which has lasted basically intact for nearly 2000 years:

    1) Diversify. Christians may be criticized for being split into so many denominations and movements, but that’s also a healthy part of the faith’s vitality. With the collapse of the Roman Empire, monastic life kept the faith alive and was able to evangelize the tribes that had taken over Rome. More recently, with the social upheaval of the 60s and 70s, some denominations took different approaches to unfamiliar terrain. The numbers now show that the branches that resisted the changes ended up ultimately healthier in the long term than those that accommodated them, but this was far from obvious at the beginning. Letting different subsections of a movement choose their own direction helps to keep the whole alive.

    2) Include enough nuance and generality in the core teachings to fit whatever comes along. Establishing the basics in the form of the creeds was an important part of defining Christianity early on, as Christians learned how to contextualize the message in different cultures. A focus on the essentials of the gospel (e.g. C.S. Lewis’s Mere Christianity) has also served Christians well in recent years, so not everyone falls into various dead-ends like creationism and predicting end-times. For this movement, keep the focus uniting all rationalists on rationality, and let people pick and choose different particular causes (x-risk, applied rationality, vegetarianism, effective charity analysis, and so on), without the expectation that everyone will care about all of these. If criticism comes against one area, it can be addressed by the strongest proponents of that area, without you or anyone else feeling like they have to defend every cause against every attack.

    3) Keep spreading the word further. Christianity might seem dead in a former stronghold, Europe, but European missionaries historically started new communities around the world that are now re-evangelizing the West. Even China is close to being a net sender of missionaries, with more going out than coming in. Obviously, the fact that the original lands of Christians in the Middle East and Northern Africa are dominated by Islam suggests that this has happened before. For rationality, HPMOR is probably the best example of successful outreach, so try to do similar things or think up new ways to bring others into the fold.

    • Sylocat says:

      The risk of relying on 1) and 2) is that you risk the movement getting bogged down in infighting between factions/denominations. The whole Protestants-versus-Catholics thing has had a hefty body count (and I don’t know how SJWs find the time to go after Scott given the amount of time they spend at each other’s throats over trivial ideological differences).

      • The original Mr. X says:

        I think the point of (1) and (2) is that people should deliberately avoid “getting bogged down in infighting between factions/denominations”.

    • TrivialGravitas says:

      Your entire point one lacks a relationship to actual Christian history. Christianity is diverse now, but in the wake of Constantine’s legalization of just one kind of Christianity it was rather violently homogenized. All of the invading tribes were also Christian (save in England, which the Romans pulled out of much earlier), and, being the one example of still-diverse (Arian) Christianity, they converted to trinitarian Christianity not because of anything to do with the locals but because the single most powerful family of Franks was trinitarian and it was advantageous for other Germanic nobles, plus there was a tendency for them to think that God favored the group that was most successful.

      • The original Mr. X says:

        Your entire point one lacks a relationship to actual Christian history. Christianity is diverse now, but in the wake of Constantine’s legalization of just one kind of Christianity it was rather violently homogenized.

        When? Arianism didn’t exactly go away after the Council of Nicaea, and several Emperors were Arians themselves. Despite what a few of the more zealous bishops may have wished, later Roman emperors were generally pragmatic enough not to waste resources and goodwill trying to impose creedal uniformity on their Empire, and edicts against heresy and paganism were rarely given any serious enforcement. (Theodosius, for example, continued to have open pagans at his court even after pagan worship was officially banned.)

        That’s not to mention the fact that Christianity had long since spread beyond the borders of the Roman Empire, where of course the Emperor’s power to “violently homogenise” anything was rather limited.

  72. Anonymous says:

    The GamerGate approach is a hybrid of “nitpick” and “document.” The nitpicks worked so well that many of the offending outlets disabled their comments sections altogether, rather than have their reporting pulverized by a throng of gamers. Don’t believe for a second this was about “disruptive” posts or trolling; these are gamers we’re talking about, who are playing to win a very long game. They might get pissy, but by and large they know civil objection’s a fantastic tactic, given that the black hole is a narrative of incivility.

    The “document” approach is more experimental, but seems pretty effective. When a reporter fucks up the facts hard enough, members of respectable space will take note. If, at this critical moment, they’re exposed to an easily-metabolized dose of solid documentation of a previous factual faceplant (particularly one they fell for), it rattles their trained emotional responses to GG. If they are curious enough to keep digging, they’ll end up “redpilled”, and the gravitationally overburdened will have managed to slip a message into a mind outside their sandbox, kind of like that shit in John Carpenter’s Prince of Darkness. Eventually, enough respectable—whoops, I need to vacate the premises

    • TrivialGravitas says:

      While GamerGate does that, I think suntzuanime’s point about setting up your opponents as all toxic group X (in this case, all Ghazi/Gawker readers), plus framing the group as being in part about opposing the tactic Scott describes, is what has kept the moderates from evaporating.

      • Zorgon says:

        If anything, GG has been losing many of its extremists, who think the movement is too soft on SJ ideals and fixated on “muh PR”. There’s been a massive split on 8chan directly related to this.

        Interestingly, the flashpoint has been GG’s treatment of trans issues. Given most of the movement is left-libertarian or thereabouts, the majority of GGers are pro-trans and comfortable using trans-friendly language, but the strong /pol/ element is overwhelmingly anti-trans and intentionally misgenders and deadnames people with a degree of impunity. This impunity comes from two sources – a “team unity” drive, and the fact that the /pol/ members tend to keep their anti-trans language to a specific set of very rabidly anti-GG transwomen. Defending them thus becomes defending The Enemy and difficult to square.

        Meanwhile, of course, Ghazi remains ever-watchful for anything they can find and clip free of whatever dangerous levels of context it contains and spread it across the SJ-sphere as “proof” of how the GGers are really all just transmisogynistic racist terror-creeps with neckbeards (but really it’s about effics in game journalism, LOL!). Places like KiA have reacted to this by cracking down on pretty much anything Ghazi could manipulate to look bad, and instituted quite stringently applied rules, not to mention undergone changes of management in a continually more moderate direction.

        The end result of all of this has been that the /pol/ contingent have grown increasingly disillusioned with the movement and retreated back into a pure anti-SJ stance, while GG focuses increasingly on attempting to foster new media sources and create links with journalistic bodies. GG is managing to undergo the exact opposite process to the one Scott describes.

        Well, when it’s not having bomb threats called on it, of course.

        • TrivialGravitas says:

          Depends on how you define extremists. The actually antifeminist group seems to be sticking around. These are more ‘stick your head in the sand and insist there’s no problems at all’ types than the stuff GG is actually accused of, but it was still toxic enough that a lot of people boiled off for exactly that reason in the first few months.

          • Cauê says:

            “Antifeminist” is one of those words. I don’t know if I agree with you because I don’t know what you mean by that.

      • Anonymous says:

        setting up your opponents as all toxic group X (in this case, all Ghazi/Gawker readers)

        That hasn’t been the case at all from what I’ve seen. A lot of the opponents GamerGate discusses are like the CBC ombudsman who intentionally neglected to tell GG’s side of the story. Ghazi is just an extension of this corrupt devotion to a factually challenged narrative; “shills,” as they say. Virtually all the highly active aGG keyboard warriors, though, do more than enough on their own to appear toxic without GG setting them up. Most of the inactive people who’d consider themselves anti-GG are too apathetic to dig past the conspicuously thick smokescreen laid down by dozens of media outlets. And most people in general can’t be bothered to care.

        From everything I’ve read, I’ve never really seen any anger at Gawker “readers” – mostly at Gawker itself. Gawker discredits itself, and no one poked at GG half as hard as they did in the beginning. The epic hubris of Sam “Bring Back Bullying” Biddle provided a convenient and powerful P.R. cudgel, and like most bullies he ended up fucking with a deceptively powerful enemy. Gawker closed ranks and reaped the whirlwind they’ve sown with reckless abandon since incorporation.

        GamerGate took square aim at the biggest, baddest inmate on the cell block and continues to pummel it like that part with the fire extinguisher in Noé’s Irreversible. Prison rules P.R., baby.

        framing the group as being in part about opposing the tactic Scott describes is what has kept the moderates from evaporating.

        Again, you imply some sort of premeditation on behalf of a largely amorphous, leaderless community driven off any single social platform into the æther. I posit you’ve completely misunderstood the dynamic: GamerGate is magnetic to moderates because moderates are the targets most vulnerable to purity attacks. Not only are moderates not evaporating, the continued popularity of these tactics is only driving more moderates into GG. They don’t even get an “I told you so,” just a warm welcome to digital gulags for the purged.

        • Nita says:

          digital gulags for the purged

          Should I bring this up next time someone claims that the term “rape culture” trivializes rape?

          • FacelessCraven says:

            @Nita – I’d be pretty ok with us taking the term “rape culture” as seriously as he’s taking “digital gulags”.

            More directly, your comparison seems ridiculous. “Rape Culture” is an ostensibly serious term that people have used farcically, whereas “digital gulags for the purged” is black humor.

          • Nita says:

            I dunno, when people around here complain about the left / SJ movement / the Cathedral being both willing and able to “purge” either their insufficiently zealous comrades or inconvenient critics, they don’t seem to be joking at all.

            It honestly sounds like they don’t see much difference between being disinvited to a conference / yelled at on Twitter and being kidnapped, tortured, show-trialled, enslaved and killed.

          • Cauê says:

            It honestly sounds like they don’t see much difference between being disinvited to a conference / yelled at on Twitter and being kidnapped, tortured, show-trialled, enslaved and killed.

            Come on, it “honestly” sounds like that? If the other team is sounding as absurd as that to you, it may be time to review your model of them.

          • Nita says:

            @ Cauê

            It’s true that I don’t have a good model of the people who say things like that. So, what’s going on? Do they really believe “the revolution is coming”, as notes suggests below?

          • notes says:

            If I’d been more accurate in the threading, it wouldn’t have been below you – apologies.

            As for whether the revolution is coming… I don’t think many think that it is, on any side. Nor do they expect kidnapping, torture, trials before a court, slavery, and death.

            There are, however, people who expect – with zeal or fear – that someone might be aggressively ‘no-platformed’ – given a show trial of sorts, in public, and then lose their job, or business, or… well, much of what they think of as their life.

            This doesn’t – I think! – happen all that often. But it happens often enough, and publicly enough, that people do wonder whether it might happen to them.

            To change behavior reliably: punishments or rewards that are swift and certain, even if small. To induce terror: uncertainty and large stakes. (See the whole gambling industry for a different way of using that technique).

            Part of the reason that this stays salient is that there isn’t a natural constituency for the position that these things don’t happen all that often. It suits both sides to exaggerate the purges — one side, to give a sense that their opponents are isolated; the other, to give a sense of their own martyrdom.

            (‘Not all that often’ is an entirely different measure than ‘with an acceptable frequency’)

          • Whatever Happened to Anonymous says:

            @notes:

            While your point might be valid, I can’t help but notice that your randomly generated avatar looks kind of like a swastika.

          • Anonymous says:

            @Nita

            Should I bring this up next time someone claims that the term “rape culture” trivializes rape?

            Should I bring your post up next time someone brings up “derailing tactics”?

            I dunno, when people around here complain about the left / SJ movement / the Cathedral being both willing and able to “purge” either their insufficiently zealous comrades or inconvenient critics, they don’t seem to be joking at all.

            I don’t know, when people anywhere jump on the last line of a post, and start to go off on a tangent about how it should be taken very seriously given poor impressions of hypothetical others, it makes it seem like they’re trying to discredit.

            It honestly sounds like they don’t see much difference between being disinvited to a conference / yelled at on Twitter and being kidnapped, tortured, show-trialled, enslaved and killed.

            It honestly sounds like you’re being willfully obtuse in order to discredit the post without actually addressing its content.

            It’s true that I don’t have a good model of the people who say things like that. So, what’s going on? Do they really believe “the revolution is coming”, as notes suggests below?

            It’s also true that the question you’re asking is unanswerable in any objective fashion, so it appears as if you’re doing double duty of derailing and discrediting with this post.

            I encourage everyone to review Nita’s posts in this sub-thread, and consider how well they match the pattern of “ethnic tensions” referenced in Scott’s original blogpost.

            Oh, and keep in mind that, in the quotation, Nita omitted the previous three words which made the joke really pop:

            warm welcome to digital gulags for the purged.

          • notes says:

            @Whatever Happened to Anonymous

            The real trick was finding a swastika that looked like it was randomly generated!

            Now, if only I’d been alert to the differences between the right-facing and left-facing versions…

            @Anonymous

            Maybe. Maybe not – charity until shown? Feigning incredulity is a tactic, but it’s also natural for someone to be incredulous when encountering something sharply dissonant to their own lived experience.

          • Anonymous says:

            @notes

            Maybe. Maybe not – charity until shown?

            And how, pray tell, can one show the intentions of a forum poster? Well, heck, I’ll give it a shot.

            Feigning incredulity is a tactic, but it’s also natural for someone to be incredulous when encountering something sharply dissonant to their own lived experience.

            Well, riddle me this: if Nita is so curious about my point of view, why did they start off by shaving the context off a quote, bringing up a contentious red herring like “rape culture,” and subsequently making insinuations rather than asking questions (until they thought they had backup, in which case they deflected the query onto you)?

            What Nita is doing is trying to re-frame the discussion from “why does GG have so many moderates” to “GG is filled with paranoid loonies jumping at shadows.” Classic maintenance of narrative.

            And this might really bake your noodle: if someone adheres to postmodernist identity politics, could they even discern the difference between genuine statements and those made in bad faith?

          • notes says:

            No idea what Nita’s thinking, beyond what was written — as you imply, it’s hard to tell the intentions of a forum poster. Just slower to make a finding of disingenuity, personally — at least today.

            Do concur that who/whom logic is a different system than good faith/bad faith.

          • Whatever Happened to Anonymous says:

            @Anonymous:

            It’s the (attempted) custom here to both assume good faith and try to take a charitable interpretation of others’ posts.

            We don’t always manage, but it’d be nice to try; there is nothing to gain in calling them out like that.

          • Anonymous says:

            @notes

            No idea what Nita’s thinking, beyond what was written — as you imply, it’s hard to tell the intentions of a forum poster. Just slower to make a finding of disingenuity, personally — at least today.

            And I fully admit this could be a type I error, but the ugly truth is the general assumption of good faith inherent to healthy communities is the vector by which P.R. posters infect. They rely upon plausible deniability to push the discourse in ways beneficial to their goals. Detecting this is high art, and the chans are the ultimate training grounds; video game publicists have been pimping their wares on /v/ since forever.

            In a pseudonymous forum, the most telling sign is a poster arguing “for the crowd” rather than “with the forum members.”

          • Zorgon says:

            It honestly sounds like they don’t see much difference between being disinvited to a conference / yelled at on Twitter and being kidnapped, tortured, show-trialled, enslaved and killed.

            Only yesterday, a group of journalists got multiple bomb threats called against them just for discussing the possibility that GG might not be terrorists after all.

            When we talk about purging, it’s against the background of people being sent dirty needles in the post (A gay man, in fact! Such social justice!), people having bomb threats made against them, people being sent “mystery powder” in the post, people getting telephone calls to their homes telling them not to continue supporting anti-SJ causes, people getting fired because of email campaigns to their employers.

            18 months ago, the identity left could still just about pretend it wasn’t about actively bullying and creating a climate of fear in their opposition. That ship has now sailed.

          • Anonymous says:

            @Whatever Happened to Anonymous

            It’s the (attempted) custom here to both assume good faith and try to take a charitable interpretation of others’ posts.

            I am being extremely charitable. Let that sink in.

            We don’t always manage, but it’d be nice to try; there is nothing to gain in calling them out like that.

            You have everything to gain! The blog post is about Scott’s (almost certainly accurate) sense that the Effective Altruist community is about to be put through the wringer by identity politicians. Moloch has turned his baleful gaze upon EA, and you either put up a fight or become co-opted and politicized.

            It’s fucking awful, I fucking hate it, but that’s the way it is. Good intentions will not save your community, nor prevent the curdling of its discourse.

            Addendum: They probably already have a sarcastic catchphrase in mind for discrediting EA:

            Actually it’s about ethical altruism.

            A perfect fit if the co-option operations fail, or they successfully force a schism and end up controlling the offshoot. Look to the Atheism/Atheism+ clusterfuck for a well-documented example of this in the wild.

        • notes says:

          @Nita: Do think that the specific occurrence of ‘digital gulags for the purged’ is a joke.

          When you see semi-serious discussions of ‘purging’, I suspect that’s due to someone thinking that the only reason show-trials aren’t happening is lack of means, rather than lack of motive. Joking about who would be first against the wall, when the revolution comes, is only funny if the revolution isn’t coming.

          Similarly, someone using ‘rape culture’ seriously intends (I believe) the idea that many want to and would rape, given opportunity and impunity, and that this bent is culturally shaped.

          Which of these usages you think serious, and which frivolous, depends on whether you think the world is at real risk of show-trials and purges, an epidemic of rape, both, or neither.

          • James says:

            A nitpick, but my understanding of the feminist usage of rape culture is less “a substantial portion of people want to rape other people,” and more “there exists a memeplex that makes rooting out the rapists who do exist more difficult.”

          • HeelBearCub says:

            @notes:

            “Similarly, someone using ‘rape culture’ seriously intends (I believe) the idea that many want to and would rape, given opportunity and impunity, and that this bent is culturally shaped.”

            To echo James, I have issues with the whole “rape culture” framing, but I think only very deluded feminists in their angriest moments argue that some very large portion of people want to and would rape unless held back from doing so.

            On the other hand, rape as a weapon in war really is a thing, and is essentially condoned by those groups that use it, so saying rape isn’t culturally shaped seems to ignore some pretty big evidence.

          • notes says:

            Concur that the incidence is culturally shaped; concur that part of the usage of ‘rape culture’ is focused on the boundaries of the definition of rape. One can characterize that as ‘making rooting out the rapists who do exist more difficult’ (by defining some rape as, e.g., drunken hookups).

            Still, I do think that some uses of the phrase are focused on the idea that ‘many’ would or do rape. See the various poorly sourced statistics about lifetime incidence of rape, or the narrower and easier to evaluate statistics on college incidence of rape (e.g. the ‘1 in 5’ from so many headlines): it’s hard to understand those estimates charitably except as coming from a belief that rape is real (it is) and enormously prevalent on first world college campuses (doesn’t seem to be so).

        • Echo says:

          Get out of here with your luchador mask, Gamer Boy. Don’t make me call Dick Wolf on your ass.

        • Nita says:

          Hi, redchan Anon! I’m honored by your assessment of my humble comment. At the time of posting it, I was merely a random SSC reader slightly tired of people overusing the term “purge”. But now, I’ve turned into a mastermind of discourse control. Muahahahaha!

          Sorry, sorry — here’s a more serious response. I was going to just quote your gulag quip and say “Please don’t do this”, but flippancy got the better of me. If I knew my phrasing would set you off like that, I would have re-written my comment.

          As for derailment, I perceive Scott’s blog as having a rather free-for-all vibe. But I’ll try to tell you apart from other Anons and not derail your subthreads in the future. As a first step, I’ll refrain from bickering with Zorgon about his examples of bad behavior.

          • Anonymous says:

            But now, I’ve turned into a mastermind of discourse control. Muahahahaha!

            Don’t flatter yourself, sweetheart. I merely identified your approach, I didn’t say you were any good at it. A talented troll would have stirred exponentially more shit with a fraction of the word-count.

            If I knew my phrasing would set you off like that, I would have re-written my comment.

            I have a perverse fetish for meta-art, and you served yourself up like you were hog-tied in an S&M dungeon higher than the mesosphere on poppers.

            And your original response was shitpost to the core; re-wording the sentiments would at best scrape off the crusty outer layer exposing the moist, fragrant center.

            slightly tired of people overusing the term “purge”.

            was going to just quote your gulag quip and say “Please don’t do this”, but flippancy got the better of me.

            Where the fuck do you get off, policing what words are over or underused? Your posts are shit because they are devoid of content; rather than roll with the metaphor, you felt compelled to make a superficial critique with an utterly idiotic analogy. Are you so intellectually shallow that you confuse form with substance?

            As for derailment, I perceive Scott’s blog as having a rather free-for-all vibe. But I’ll try to tell you apart from other Anons and not derail your subthreads in the future.

            You only directed your flippant snipe at me because you perceive anonymous as lacking in social capital. That’s why you shit on my language choice: you saw an opportunity to steal on me with impunity, and with subsequent posts demonstrated beyond any reasonable doubt your consideration of me as an “Other” to be mocked. You still betray your sense of personal superiority by the condescension percolating through this sarcastic mea culpa.

            And no, none of this is in conflict with my analysis of your derailing and discrediting tactics: likely you’re so philosophically inept you’re unable to realize the hollowness of your comments.

            I don’t want you to avoid my threads, I want you to either post like there’s more than just a couple marbles rattling around in your skull, or not at all.

          • Nita says:

            1. My comment contained a clearly marked transition between sarcasm (the first paragraph) and sincerity (the rest of it).

            2. Rather than othering or mocking you, I explicitly singled you out as an individual whose wishes and feelings should be respected, despite your username making it a little less convenient.

            3. I didn’t express an intention to avoid your subthreads, but to avoid derailing them — that is, to comment only if I have something substantial to say on the intended topic. Thanks for affirming that I’m welcome, although welcome messages are traditionally phrased in a more polite fashion.

            4. That said, you seem convinced that everything I say is an elaborate insult, so I’ve discounted your tone by about 90%.

          • Scott Alexander says:

            This anon banned for this comment and a few others. Not too happy with Nita either but she will not be banned for now.

  73. AlphaCeph says:

    > “I don’t know any third solution. If somebody does, I would really like to hear it.”

    The third strategy is that “reality gets to bat last”, i.e. you produce some real-world result that thoroughly vindicates your position. For example, if effective altruists identified some amazing altruism strategy that got results that were clearly good, visible, and exceptional, then their critics’ words would sound extremely hollow. The problem is, it has to be visible and blatant.

    • Scott Alexander says:

      You mean, results like $30 million going to effective charities? Or projected GWWC pledges of $457 million? If those don’t convince people, what will?

      • suntzuanime says:

        Isn’t one of the tenets of Effective Altruism that you should measure charity not by money spent but by results?

        • Carl Shulman says:

          The GiveDirectly money has been supported by RCTs of the actual program by the actual charity, and there is other evidence from follow-ups for the other charities available here:

          http://www.givewell.org/charities/top-charities

          • Murphy says:

            I think the parent meant the claim should be of the form “x lives saved” or similar, rather than of the form “x spent on a top-ranked charity”, even if the charity is supported by RCTs.

      • Deiseach says:

        Scott, loads of people give money to charities. EAG is boasting (yes, sorry, but that’s what it comes across as) that they can do it better because they’re using the magic powers of RATIONALITY! and EVIDENCE-BASED APPRAISAL!

        They have to put up or shut up. Show, for instance, that they got more bang for their buck than Oxfam or Médecins Sans Frontières when they evaluated a charity working in the same field and encouraged donations. That their picked charity (selected by the rational evidence-based appraisal method) spent the money better, more effectively, and got to a wider range of people/solved tougher problems/ended a seemingly intractable cause of suffering.

        Say, for instance, “In the Lake Ogopogo area, rates of infection of malaria ran at 20% per annum despite the state-services and traditional charities interventions for the past ten years. By contrast, after three years’ work by our selected charity, we succeeded in reducing new cases to 12% per annum, and we are working via year-on-year reduction for a target of no new cases by 2020” and back that claim up with incontrovertible proof.

        Otherwise, why should I give my money to some group I never heard of, when I know and trust Gorta or Trocáire or World Vision or St Tiggywinkle’s Home for Poorly Hedgehogs?

        • Linch says:

          The AMF has post-distribution reports. Do you think this is reasonable proof?

          Incidentally I’ve tried to post another comment in this thread but was not able to. It was pretty long so I’m a little sad that it got lost. :/ If peeps can tell me why that’d be great.

        • AlphaCeph says:

          > “Say, for instance, “In the Lake Ogopogo area, rates of infection of malaria ran at 20% per annum despite the state-services and traditional charities interventions for the past ten years. By contrast, after three years’ work by our selected charity, we succeeded in reducing new cases to 12% per annum, and we are working via year-on-year reduction for a target of no new cases by 2020” and back that claim up with incontrovertible proof.”

          If you can’t say it in one line, then it’s too complex for the average skim-reader to really understand, and it won’t do the job of torpedoing the critics. They’ll just come up with a one-paragraph complex counterargument, people will skim both paragraphs, get confused and call it a tie, and then go back to the exciting bit where the effective altruists are evil because they’re white and male.

          • Deiseach says:

            One line summation: “We reduced new cases of malaria from 20% to 12% and can eliminate it completely by 2020”.

            Someone says “Prove it!”

            You show them hard figures backing up your claim: over the past ten years, when the government or Old Group were working there, the year-by-year new cases of malaria made up 20% of the existing population. Your new group took over in 2016. Result: only 12% of the population contracted malaria in 2017. Same result in 2018. Give us money and time and we’ll bring that down to 0% in 2020, because our method works and here’s empirical proof, baby.

          • Murphy says:

            @Deiseach

            Unless you structure every intervention as an RCT (expensive), that’s insanely hard to do.

            You can look at existing RCTs to pick the best intervention, and you can estimate the number of lives saved etc. based on the numbers from the RCTs, but it can be really hard to prove that in a *particular* case malaria cases actually went down due to your intervention rather than because, in the same 10-year period, some other program drained some nearby wetlands or something else.

      • AlphaCeph says:

        That’s not exceptional and blatant enough.

        If effective altruists could pull off something like picking one particular small third world country and completely lifting it out of poverty (to 1st world levels of GDP) in a decade, THEN the critics would look really silly.

        Or if they could pick one particular widespread disease or parasite (e.g. malaria) and totally eradicate it, THEN the critics would look really silly.

        I’m not saying that those are particularly feasible or wise things to shoot for, but they’re examples of what would be so impressive, so obvious, exceptional and blatant that they could be used as a memetic superweapon against critics.

        • Murphy says:

          Even if they did that, how would they prove it was them and not chance? Even if it was them, there would likely be a lot of leverage involved, such as inducing governments and investors to put money in – and remember that all of them will also be trying to take all the credit.

          So imagine that the EAs are effective and successful beyond their wildest dreams and the country turns out great in under a decade. Now prove it. Prove it was them rather than the actions of one of the other hundred groups or politicians who are also trying to claim all the credit. The Glorious Leader will be claiming it was his wise rule, the international corps will be saying it was the jobs they created, etc.

          Anyone who doesn’t like EAs can still point credibly to anyone else involved and claim that there’s no way some minor group could have achieved what happened.

          • Alphaceph says:

            There are a lot of suitable countries, like ~100. If you pick just one or two of them, and succeed, and none of the others do, then your critics will look like idiots. Sure, others will always try to claim credit (and they’re always right to some extent), but the way discussions seem to work, you fight the critics with blatant results, not with careful analyses of what caused what.

    • Linch says:

      YES! I’m actually more than a little surprised that I only found this near the bottom of the comments. It’s like people are so stuck on meta-issues that the most obvious response to people bitchin’ – doing work so good that others can’t ignore you – somehow fell by the wayside.

      Do real, genuine, positively life-changing work. Then the people who try to deride you will look as small-minded and petty as the people who character-assassinated Bill Gates (okay, probably a poor example considering how long that lasted, but you get the point! Gates pulled through, didn’t he?). Also, I’m not sure about rationalists, but for EA peeps, remember that the population we’re trying to serve is very explicitly not fellow members of the community. Our Mission is Nobler.

      This is starting to sound very pep-talky, so I might as well go the whole way and quote Julia Wise quoting George Fox (http://www.givinggladly.com/2013/06/cheerfully.html):

      “Be patterns, be examples in all countries, places, islands, nations, wherever you come, that your carriage and life may preach among all sorts of people, and to them; then you will come to walk cheerfully over the world, answering that of God in every one.”

    • Zorgon says:

      Deiseach above demonstrates the fundamental problem with this approach. Reality batting last is a great principle, but the problem is that any set of empirical results can be interpreted in any number of ways. You need to have the framing already in your favour before reality can back you up.

  74. Alex Mennen says:

    > I remember seeing a blog post by a moderately-well known scholar – I can’t remember who he was or find the link, so you’ll just have to take my word for it – complaining that some other scholar in the field who disagreed with him was trying to ruin his reputation. Scholar B was publishing all this stuff falsely accusing Scholar A of misconduct, calling him a liar and a fraud, personally harassing him, and falsely accusing Scholar A of personally harassing him (Scholar B). This kinda went back and forth between both scholars’ blogs, and Scholar A wrote this heart-breaking post I still (sort of) remember, where he notes that he now has a reputation in his field for “being into drama” and “obsessed with defending himself” just because half of his blog posts are arguments presenting evidence that Scholar B’s fraudulent accusations are, indeed fraudulent.

    Sam Harris and Reza Aslan?

  75. suntzuanime says:

    The third solution is to associate insulting or arguing with your group with a toxic group, so that you are automatically defended without having to step up individually and put your own reputation on the line. Make it seem more frightening to betray you than to obey you.

    • Scott Alexander says:

      Not sure I understand. Reframe?

      • suntzuanime says:

        The idea is to set up a toxic group diametrically opposed to you, so that opposing you is associated with negative affect automatically. To take your Christianity example, if everyone knows that the godless Communist infiltrators are working to undermine Christianity, then attacking Christianity makes you look like a Communist. This makes people less inclined to attack Christianity, and it also means that the NYT doesn’t want to fire Douthat lest they look like they’re bowing down to the Communists, it means that universities will give Christian philosophers tenure to demonstrate how not-infiltrated by Communists they are, and the moderates who might otherwise waffle a bit about their Christianity will stand tall rather than let the Communists win.

        • onyomi says:

          So basically you must find a way to become an oppressed minority (TM)?

          This has worked pretty well with “antisemitism” and Israel. If you say that maybe Israel should not take over this one particular bit of Gaza then you can be tarred with a label also applied to Nazis, etc.

          But is there a way to obtain such a strong victim deflector card without actually suffering centuries of enslavement, genocide, etc.?

        • Scott Alexander says:

          Oh! Like the way that whenever anyone criticizes psychiatry, we say they’re shills for Scientology? Yeah, that’s a great strategy!

          (to be fair, we are right about 50% of the time)

        • Zorgon says:

          You don’t even need to set up an opposing group. Any group with a strong identity and ideological base will inevitably develop opposition. Even if said opposition doesn’t coalesce into a definable group, all you have to do is look for any small fragmentary element with a reasonably memorable name (acronyms are nice for this) and then loudly declare that all opposition must be coming from them.

          The Internet is particularly fun for this, because it’s full of almost entirely undirected minor assaults that fly under the radar most of the time. Just fish for some, publicise the results, and declare them the work of The Enemy. Job done!

        • HeelBearCub says:

          I’m with onyomi here.

          All this talk about Machiavellian means to crush your enemies without even having to lift a finger, and to put your group above criticism, seems like exactly the kind of tactic that Scott is complaining about.

          • Cauê says:

            I’m almost sure suntzu and Zorgon weren’t suggesting this in earnest, but criticizing groups that do it.

          • suntzuanime says:

            I was just pointing out that Hufflepuff bones can be sharpened to a deadly point and used as a weapon. Why does everybody gotta get all moralistic all the time?

          • HeelBearCub says:

            @Cauê/suntzuanime:
            I missed the criticism, if it’s in there.

            IOW, I reread it and don’t see the implied criticism.

          • suntzuanime says:

            I don’t think it’s necessary to criticize every bad thing I describe. I try to keep moral valence out of my descriptions of reality, when feasible. I deleted a longer, angrier comment here, because you are not necessarily fully to blame for the creeping moralization of this comment section and I don’t want to be too unfair to you, and I feel bad about taking advantage of our host’s forbearance. But your pushing on me for failing to sufficiently normatively tag my positive claims is something I find very upsetting.

          • HeelBearCub says:

            @suntzuanime:
            If I rephrase you as saying “Here is one of many options that are available. I am just trying to enumerate, rather than advocate” is that accurate?

            If that was your point, I simply missed that it was enumeration only. I was only objecting to advocating for it.

            Sorry if I ruffled your feathers.

          • Cauê says:

            HBC, I think he described a dynamic that’s clearly negative when you see and identify it, as a way of criticizing groups that employ it. And that you took it as an endorsement of the tactic.

          • Zorgon says:

            In contrast to suntzuanime, I definitely was criticizing groups who use the technique I describe.

  76. Professor Frink says:

    I’m not sure this post is what is happening in the groups you mention. I.e., the criticism of rationalism was made on Topher’s blog, and he seems to be read primarily by rationalists and has been a somewhat active member of the community.

    Dylan Matthews identifies as an EA and was criticizing EA from within the community, with criticism I’ve seen in a lot of places (the conference really was dominated by AI-risk; if you believe AI risk is non-central to EA, then you probably would have been annoyed by how much time was spent on it had you been there).

    A friend of mine likes to say “if you have big ideas and no one is criticizing them then no one is paying attention.”

    • Scott Alexander says:

      Again, I distinguish between criticism of principles, versus describing someone as “crackpots” or “abusive” and so on.

      • Professor Frink says:

        So I can’t find a mention of “abusive” in either article you criticized. Hallquist did use the phrase “crackpot tendencies.”

        But I want to ask you, given your assertion that MIRI isn’t central to the EA movement, should they have been the only object-level charity given a main stage talk? Do you think an EA like Matthews who doesn’t support MIRI should be able to complain about MIRI’s outsize presence at the conference (again, given that you assert they aren’t a main pillar of the movement)? Are you worried that articles like Matthews’s are harming EA, or worried they are harming MIRI within EA?

        If your first response is to get defensive, are you worried that you are missing valid criticism? “Critics are trying to push my movement into fringe territory to destroy the movement” seems like an excessive judgement of motivation, particularly given that I think Hallquist is a rationalist (or at least pretty involved in the community) and Matthews is an EA!

        • Jeff Kaufman says:

          In several FB discussions I was seeing Matthews’s article turning non-EAs off EA.

          • Professor Frink says:

            I’m an EA who won’t be attending another EA conference because both of them I’ve attended felt inordinately concerned with AI-risk.

            I think Matthews sees a problem in a movement he is a member of, and wants to correct it (unlike me, who is just somewhat quietly disengaging from the movement)

        • Carl Shulman says:

          “MIRI…only object level charity given a main stage talk”

          MIRI didn’t have a main stage talk (meaning one without any simultaneous talks). Nick Bostrom of FHI did, which led into a panel with one MIRI person out of four: Stuart Russell, Elon Musk, Nick Bostrom, and Nathan Soares.

          That did highlight AI as being in a single-track slot (which I recommended against, but the organizers justified on the grounds that Musk was a huge draw and competing talks would get little attendance), but neither Bostrom’s talk nor the panel was a MIRI talk. The overwhelming majority of Elon Musk’s AI safety money (and Open Philanthropy’s $1MM cofunding) is going to non-MIRI grantees, Stuart Russell is pursuing his own research agenda, etc. AI safety is not identical with MIRI.

          The panel was named Superintelligence, as in Nick’s book, after Nick’s single-track talk. Why would you say that MIRI had a special place rather than Nick, or FHI, or Elon Musk, or AI?

          http://www.eaglobal.org/program

        • given your assertion that MIRI isn’t central to the EA movement, should they have been the only object level charity given a main stage talk?

          You repeated this a few times on the other SSC thread, and I pointed out a few times that this wasn’t true. I’m confused by why you’re bringing this claim back; do you mean something else by “main stage talk” that would rule out GiveDirectly’s Olympic talk?

    • Deiseach says:

      I think it’s growing pains. The Effective Altruism movement is finally starting to get bigger than “twenty of us here in this place and sixteen of us overseas in that place” and now people are starting to want to flex that muscle, so to speak.

      The problem is, the founding principle is very vague. Nice, upbeat, but vague. “We’re gonna make things better by being rational!” Lovely. And how it’s starting to shake out is the people who say “we’re an animal-rights charity” versus the people who are all about existential risk versus the people who are “yes, AI risk is the biggest existential risk!” versus the people who are “But what about the mosquito nets?”

      Peter Singer, for instance, will be the figurehead of the animal rights people. And since he’s a hometown hero in Australia, naturally he’s going to be the leading speaker at their conference there. Which puts a whole different slant and emphasis on things compared to how the conference in California went (the infamous “It was all about AI!”), and I have no idea how the Oxford conference is going to go; looking at the programme, there’s a speaker from the Fistula Foundation, which is one charity I would be very damn interested in supporting, but there’s animal rights and, more importantly, Nick Bostrom giving a talk, which makes me wonder if AI is going to be perceived as dominating that conference. We’ll have to wait and see.

      At the moment, you lot are herding cats. These are teething pains, but if someone doesn’t set priorities, the movement will splinter and the energy will dissipate. It may happen, of course, if (say) AI gets priority over animal rights; with the whole Thing I Should Stop Mentioning row in mind, there’s obviously a substantial minority who are willing to walk if their pet cause is not given top billing. But at least that would be clarity.

      As it is, the other main danger I see is the whole thing turning inward and falling prey to the same faults as it critiques in traditional charities: becoming a kind of ‘jobs for the boys’ more interested in perpetuating the interests and aims of its members than the work it set out to do.

      • I actually think splintering might not be a bad idea. Or at least, keep Effective Altruist as an umbrella term while hiving off the object-level activism into the three main camps. Agree to disagree, and realize that EA principles can’t really mediate between the three priorities (animal rights vs. malaria nets is a philosophical question of the moral valence of animals, and the question of AI x-risk is sufficiently unclear that there’s no straightforward way to use EA principles to evaluate its importance).

        In order for that to work, we’re probably going to need more intellectual charity in the movement than what we’ve got. I fall pretty squarely in the malaria nets camp, being unconvinced about AI risk and viewing animals as not worth moral consideration – but I can certainly understand and appreciate how the vegans and MIRI got to where they are. I can also understand why someone would call x-risk presentations at an EA event a moral outrage, or why a vegan might think serving meat at the lunches was a slap in the face. I’m hoping that we can all kick things up a meta-level and apply the principle of charity with regards to the three divergent aims.

        • Adam says:

          There is also a question of incommensurate values that rationality calculations can’t handle in the prioritization of actual lives versus potential lives. I personally feel, as a basic unjustified value, that there is no moral difference between a universe with humans in it and a universe without humans in it. There is, however, a tremendous difference between a universe with suffering humans and a universe with not-suffering humans.

          It’s quite obvious that someone arguing from the position that we should prioritize one course of action over another because it will increase the absolute number of future people who will exist is coming from a completely different value set-point. That is not something that can be resolved with cocktail napkin probability estimates.

          There are two components right now to EA. First, given that you want to achieve goal X, what is the most effective way of doing so? Answering that question is completely amenable to adopting the techniques of econometrics and statistical decision theory and bravo for that. To some extent, charities already do this, especially any that receive federal funding because they’re required to, but more broadly encouraging this and improving the best practices is a good thing to do.

          But the other component is asking what goals we should care about achieving in the first place. That is completely a matter of philosophy, and to the extent that person A tries to convince person B to shed their own values and adopt person A’s values using math, it isn’t going to work.

        • Deiseach says:

          I think you have something there, James. Keeping EA as a philosophy and setting up the various groups to work towards X, Y or Z instead of how it’s happening now.

          At the moment, it looks like everyone thinks “EA obviously means my cause because it’s the rationally proven most vital one and that’s what we should all be aiming for!”

          It’s a floor wax and a dessert topping at present, and lots of people are saying the topping is too waxy and the floor is all sugary. Treating it as a motivating philosophy and rule of behaviour, while your main aim is “animal rights” or “X-risk”, would allow friendly co-operation between various groups instead of tussling for who gets their hands on the tiller of the one global movement.

      • Linch says:

        Thank you for your comment, Deiseach. It’s really insightful and an accurate representation of the problems we EAs have ahead of us.

        I think splintering off into 3 different camps is problematic, if for no other reason than that it will underserve causes that are incredibly important and that other EAs may find value in learning about. Life extension is the one that jumps to mind first, but I’m also talking to several non-EAs who are interested in the ideas but have traditionally focused on things like health in middle-income countries and sustainable urban development.

  77. PSJ says:

    I want to be careful here in order to not add on more emotional attacks, so please take everything here with a grain of salt.

    I’ve been following your blog for a long time. A significant part of the way I think stems from your writing. It felt like there was something different about this blog – a sort of openness to new thoughts and willingness to reassess opinions. A lot of this came through in your writing, but it was supported even more by the quality of comments. There was an astonishing lack of ideology. When you read comments on your standard news article, the politics of the commenter are immediately obvious. Here, when somebody asked a question it felt less like a diatribe and more like a place where that person really wanted to be challenged or have an evolving discussion about a topic.

    People have been complaining about the increase of NRx, HBD, or extreme right-wing factions. And I had been in agreement, but now I think this isn’t right. More than an increase in any specific viewpoint, I feel like this blog has had an increase in staunch ideology. More than ever, it seems like discussions are less about discovering new truth and more about parading every fact as proof that you are right about how intellectually corrupt everyone else is after all. (this isn’t entirely fair…Foghorn Leghorn is gone)

    In the first post the blog ever had, you said:

    This blog does not have a subject, but it has an ethos. That ethos might be summed up as: charity over absurdity.

    Absurdity is the natural human tendency to dismiss anything you disagree with as so stupid it doesn’t even deserve consideration. In fact, you are virtuous for not considering it, maybe even heroic! You’re refusing to dignify the evil peddlers of bunkum by acknowledging them as legitimate debate partners.

    Charity is the ability to override that response. To assume that if you don’t understand how someone could possibly believe something as stupid as they do, that this is more likely a failure of understanding on your part than a failure of reason on theirs.

    This, to me, is the principle that made this place special. And whether it’s your discussion on feminism or NRx comments (or mine…my bad), this may have fallen by the wayside. We’ve seen a rise of the terms “rightist” and “leftist” as conversation enders. (I mean, I just used NRx as one generalization for non-charitable comments. That’s shitty and untrue.) There’s been a rise in tribalism.

    Maybe this is the inevitable result of your garden growing. Maybe I’m just delusional. But I feel like you already knew how to deal with attackers. Tell them where they are right and where they are wrong and make sure one doesn’t overflow into the other. Don’t use insults. Assume the best of their intentions. Don’t assign them to some century-long ideology and use that as a straw-man. Above anything else: Be charitable.

    • Eggo says:

      This only works until there’s something tasty growing in your garden, and there’s tribal competition for it. In EA’s case, that thing is access to charity money.
      Choosing not to fight over the garden is the same as giving your garden to the first tribe that will fight over it.

      • PSJ says:

        But Scott’s whole thing was not fighting dirty. It’s the belief that fighting cleanly and charitably is exactly how you keep winning, or at least not lose badly. (at least I think, Scott can probably clarify)

    • shrubshrubshrub says:

      In contrast to PSJ, I’ve started reading your blog relatively recently (although I’ve still read all your posts), but I think their analysis is sound.

      You seem much more afraid than you used to be and I think this has affected your judgment a little. No doubt you stand to lose more now than in the past. This saddens me; your blog has given me a lot of pleasure.

  78. Aaron says:

    If you want an example of distress of the privileged…

    • onyomi says:

      This is a very serious issue for everyone in the world if you consider the broader implications.

    • Alexander Stanislaw says:

      I can’t complete the ellipsis. Are you saying that this post is an example of such?

    • The original Mr. X says:

      Being in a position to make fun of people worrying about social ostracism sounds like a pretty clear marker of privilege to me…

  79. The version of this pattern I’ve seen most commonly of late is in climate arguments. Part of the tactic is to focus on the least defensible views on the other side, attack those, and imply that those are the views of everyone who disagrees with you on the issue. I call it a tactic, but that may imply too much deliberation. If you are convinced that your conclusion is obviously right, it’s natural to conclude that only people who disagree with it do so for bad or crazy reasons, and so to target those reasons for your attack.

    • gattsuru says:

      It’s not a tactic, I’m afraid.

      Unfortunately, the nature of current media drastically encourages folk to see only the weakest of the views that oppose theirs, and the strongest of the views that match theirs. Even if you’re actively trying to fight it, you have to go quite far to find people who disagree with you, and you may not even have the tools or background to evaluate what actually is a good argument for an opposing view.

    • Jon Gunnarsson says:

      I think this is an instance of the motte and bailey doctrine. Motte: The Earth has gotten warmer over the last century and human emission of CO2 contributes to this trend. Bailey: If we don’t take immediate and decisive action to sharply reduce CO2 emission, global warming will have catastrophic results and threaten our very existence.

      • JRM says:

        I am for significant global warming action, but I agree that the global warming argument has generally separated into:

        1. There is no anthropogenic global warming. Let’s leave our Hummers running overnight so we don’t have to start them in the morning.

        2. There is, so AIIIIEEEE!

        Go to Google and read a hundred global warming stories. (Or read 15. I tried that, trending toward known media.) You can save a lot of time, probably all of it, by just reading the ones that talk about the financial/social/animal cost of reducing carbon output versus the same for global warming, and that talk about specific amounts of global warming that would be abated by whatever proposal they want to discuss. Numbers are apparently for jerks. None for you!

        I think we too often undersell the value of competence in politics. I feel pretty strongly that climate change is an issue that needs competence rather than rallying. There are a bunch of other things like that. But I’m also unconvinced that I can call competence during any government’s activity, much less beforehand.

        Which leads us back to Scott’s point, so this totally isn’t threadjacking no matter how much it obviously is. I do think describing the full-on deniers (and there are some) as unhelpful is fair. I want to have a conversation with the person who has good estimates of the actual costs. But she’s either hiding or arguing over whether there is or is not global warming, and whether the people who think the wrong thing are terrible people who should be shunned by polite society. Which kind of sinks the 190 of us who want good estimates of the damned numbers.

        • Jiro says:

          I suspect it goes a little deeper than that. The people who won’t mention any numbers and are reacting only to the worst “global warming deniers” aren’t motivated by truth at all. Global warming is merely a politically convenient way for them to pursue their agenda, and their agenda is all they’re really interested in. The fact that global warming happens to be real is almost irrelevant to this; if it were fake, they would be doing exactly the same thing, except they wouldn’t be adding “and scientists believe in global warming”.

        • I don’t think I have a good estimate of the costs of warming. I think I can offer reasons to believe that nobody else does either. Also that there is no a priori reason to expect the net effect to be large and negative, although it could be.

          http://daviddfriedman.blogspot.com/2011/09/what-is-wrong-with-global-warming.html

          I encountered the same problem about forty years ago in the context of population growth, when I wrote a piece for the Population Council in which I tried to add up the externalities, positive and negative, from having one more child. My conclusion was that I couldn’t sign the sum.

          http://www.daviddfriedman.com/Academic/Laissez-Faire_In_Popn/L_F_in_Population.html

    • James Picone says:

      Can you list some prominent global warming skeptics that you think have been treated this way?

      Because from where I’m standing, the median and modal global warming skeptic doesn’t think that it actually has gotten warmer since 1970, and they’re leaning towards “and the people who say it has are fraudulent or utterly compromised by bias”. I can’t think of any prominent global warming skeptic that doesn’t push that position every so often.

      • TheNybbler says:

        Most of the skeptics on WattsUpWithThat say it hasn’t warmed in about 18 years, not 45.

        • James Picone says:

          Do you know if there’s a survey or the like? Because that’s absolutely not my impression. My impression of WUWT is that it’s a mix of:
          – Greenhouse effect doesn’t exist
          – Anything-but-carbon, no positive claims (probably the majority)
          – The entire thing is a fraud (close second)
          – Oh it’s all so uncertain
          – Various weird minority positions (CO2 increase is natural, ENSO breaks thermodynamics, God promised not to flood the world again).

          I’m kinda surprised to hear the place brought up in this context. It’s one of the least sophisticated skeptic sites, easily.

          • Bruce Beegle says:

            First time poster.

            Of the people who have published several articles on Watts Up With That, I’d say that all of them recognize that the (misnamed) greenhouse gas effect exists, human greenhouse gas emissions have increased the surface temperature, and that the surface is warmer than it was (say) 50 years ago. (Of your other points, some of them may also think that there has been significant fraud, that there is often more uncertainty than is admitted, and/or that some of the CO2 rise is natural.)

            I don’t have numbers (and don’t know of a survey), but my impression of the commenters is that the bulk of them would agree. Some don’t–more than once I have seen a comment saying that the greenhouse effect violates the second law of thermodynamics, but there are usually quite a few people correcting the error.

          • James Picone says:

            Of the people who have published several articles on Watts Up With That, I’d say that all of them recognize that the (misnamed) greenhouse gas effect exists, human greenhouse gas emissions have increased the surface temperature, and that the surface is warmer than it was (say) 50 years ago.

            Tim Ball thinks the greenhouse effect doesn’t exist, he has posted articles on WUWT.

            Willis Eschenbach has recently claimed, on WUWT, that there’s a feedback loop that has limited temperature variation to +-0.3c over the 20th century. Given that the range of temperature variation over the last century is approximately 1.2c, that implies something about his opinion of the current temperature datasets.

            Bob Tisdale broadly thinks that global warming is caused by ENSO, a view which implies exceptionally low or no forcing from CO2. Here’s one example. From down the bottom: “The El Niño events of 1986/87/88 and 1997/98 are shown to be the cause of the rise in sea surface temperatures since November 1981, not anthropogenic greenhouse gases.”

            Steven Goddard posts on WUWT. He’s a CO2-is-saturated greenhouse-denier – see here.

          • Bruce Beegle says:

            Thanks for the information on Tim Ball and “Slaying the Sky Dragon”. My mistake.

            However, I doubt that that view has been expressed on WUWT, at least in the last several years. WUWT’s policy includes:
            For the same reasons as the absurd topics listed above, references to the “Slaying the Sky Dragon” Book and subsequent group “Principia Scientific” which have the misguided idea that the greenhouse effect doesn’t exist, and have elevated that idea into active zealotry, WUWT is a “Slayer Free Zone”. There are other blogs which will discuss this topic, take that commentary there.

            A reviewer of “Slaying the Sky Dragon” on amazon.com ripped the book to shreds. The only good thing he wrote was on Tim Ball’s two chapters: “I found these chapters interesting and informative.” Tim Ball’s promotion of that book is a negative (at least I think so – I’ve got a hold at the library so I can see if the book is as bad as it looks), but I think he writes about other things at WUWT.

            I didn’t see any evidence that the other people you mentioned disagreed with what I wrote above.

          • James Picone says:

            WUWT’s policy includes:

            WUWT’s moderation policy is selectively enforced, mostly against people making mainstream climate statements. I was just there doing a quick survey of a thread for mainstream climate opinions; see my post below directed at Glen Raphael. I saw one post claiming a climate sensitivity of 0.1 to 0.2c, and given that the raw forcing of CO2 is ~1c, that counts as greenhouse denialism in my view.

            Here and here are Tim claiming that the entire IPCC output is propaganda. Direct quote: “IPCC computer climate models are the vehicles of deception for the anthropogenic global warming (AGW) claim that human CO2 is causing global warming.”.

            Steven Goddard explicitly claims that the greenhouse effect at current CO2 levels is tiny. My original reply to you linked to that statement. This is in contradiction with “human greenhouse gas emissions have increased the surface temperature” and arguably “the greenhouse gas effect exists”. I would argue that he also doesn’t think that it’s warmer today than 50 years ago – that’s definitely the case for the US, but he talks about global data much less.

            I linked to Willis Eschenbach claiming temperature variation is limited to +-0.3c over short periods of time. The temperature record has about 1.2c variation. Eschenbach evidently thinks that the temperature record is completely wrong. That is in tension with the surface being warmer.

            I linked to Tisdale claiming that sea surface temperatures are warmer because of ENSO events, not global warming. That contradicts “human greenhouse gases have increased the surface temperature”.

          • Bruce Beegle says:

            WUWT’s moderation policy is selectively enforced, mostly against people making mainstream climate statements.

            I don’t see evidence to support that. I can think of several reasons why people who make “mainstream” climate statements might not post on WUWT.

            However, as I said, “Slaying the Sky Dragon”, which is certainly not “mainstream”, is against WUWT policy.

            Steven Goddard explicitly claims that the greenhouse effect at current CO2 levels is tiny. My original reply to you linked to that statement. This is in contradiction with “human greenhouse gas emissions have increased the surface temperature” and arguably “the greenhouse gas effect exists”.

            I assume you mean “doesn’t exist”.

            Saying something is tiny does not say it has no effect and certainly does not say that it doesn’t exist.

            I linked to Willis Eschenbach claiming temperature variation is limited to +-0.3c over short periods of time. The temperature record has about 1.2c variation. Eschenbach evidently thinks that the temperature record is completely wrong. That is in tension with the surface being warmer.

            I don’t know what you mean by “tension”. I can think of several reasons why Eschenbach might have made that statement, and none of them suggest that the earth is not warmer. One reason might say that the earth is about half as much warmer as you believe, but that is still warmer.

            Also, do you have any evidence that Eschenbach thinks “that the temperature record is completely wrong”? I’ve read most of Eschenbach’s posts, and I don’t think he has ever suggested that.

            I linked to Tisdale claiming that sea surface temperatures are warmer because of ENSO events, not global warming. That contradicts “human greeenhouse gases have increased the surface temperature”.

            I assume you mean anthropogenic global warming. Obviously, Tisdale believes in global warming.

            You linked Tisdale’s article discussing a paper by Foster and Rahmstorf (2011). He is saying that the paper’s finding of a linear anthropogenic signal in sea surface temperatures is erroneous and that the rise is due to ENSO events. I’d guess that if he were asked, he’d say that the rise in sea surface temperatures is slightly affected by AGW but it’s buried in the noise. (Of course, I could be wrong – do you have any evidence that I am?)

          • James Picone says:

            I don’t see evidence to support that. I can think of several reasons why people who make “mainstream” climate statements might not post on WUWT.

            Notice that nobody who questions global warming ever gets posts removed for using a handle instead of their real name. Notice that his policies explicitly include the statement that he’ll doxx people with IP addresses coming from government organisations: “Anonymity is not guaranteed on this blog. Posters that use a government or publicly funded IP address that assume false identities for the purpose of hiding their source of opinion while on the taxpayers dime get preferential treatment for full disclosure, ditto for people that make threats.”

            Notice that people regularly describe the entire climate-science infrastructure as fraudulent, throw the word ‘alarmist’ about with gay abandon, etc., and are never censured for it. Here’s the very first comment on the top post on WUWT as of my comment here. ‘Irrational fears’, ‘Delusional thinkers’, a reference to Scientology, ‘cult’, and ‘by this logic Al Gore is a demigod’. Do you think a similarly content-free screed written from a mainstream climate science point of view would last?

            The rest of the comments are similarly awful.

            Whether or not it’s against WUWT policy, greenhouse effect denial is alive and well in the comments, and occasionally posts on WUWT endorse it. Do you really think that all of the people who talk about how much complete nonsense all of this is and how all the records have been manipulated utterly and are all lies also think that the greenhouse effect part is still real? When dbstealey says things like “The rise and fall of global T happened irregardless of CO2 levels. There is no geologic corellation between icehouse earth, hothouse earth, and CO2 levels.”, do you think he doesn’t actually mean it? When Salvatore del Prete says “CO2 ha zero to do with changing the climate it is an effect of the climate no the cause.”, and doesn’t get any criticism, what do you think that means?

            When Voisin publishes a guest article on WUWT that claims “At an atmospheric concentration of 380ppm and higher the limited long-wave spectral absorption of CO2 is essentially saturated. Consequently, yet more atmospheric CO2 becomes vanishingly less relevant to a greenhouse effect (if at all). And when more atmospheric water vapor is objectively evaluated its net-effect is found to be a negative-feedback rather than a positive one (in direct contradiction to the presumption of the Models).” and “The current spike in atmospheric CO2 is largely natural (~98%). i.e. Of the 100ppm increase we have seen recently (going from 280 to 380ppm), the move from 280 to 378ppm is natural while the last bit from 378 to 380ppm is rightfully anthropogenic.”, how do you justify it? The comments are hardly tearing him to shreds.

            I assume you mean “doesn’t exist”.

            Saying something is tiny does not say it has no effect and certainly does not say that it doesn’t exist.

            Yeah, that was a typo.

            Claiming climate sensitivity is <1c is to claiming the greenhouse effect doesn’t exist at all as intelligent design is to creationism. It’s in the same category. It’s still just plain wrong.

            I don’t know what you mean by “tension”. I can think of several reasons why Eschenbach might have made that statement, and none of them suggest that the earth is not warmer. One reason might say that the earth is about half as much warmer as you believe, but that is still warmer.

            Also, do you have any evidence that Eschenbach thinks “that the temperature record is completely wrong”? I’ve read most of Eschenbach’s posts, and I don’t think he has ever suggested that.

            It’s in ‘tension’ because the claim he’s making is directly contradicted by the temperature record. Yes, he thinks the surface temperature records are fraudulent. See, for example, here. I think it’s telling that nobody called him on it.

            I assume you mean anthropogenic global warming. Obviously, Tisdale believes in global warming.

            You linked Tisdale’s article discussing a paper by Foster and Rahmstorf (2011). He is saying that the paper’s finding of a linear anthropogenic signal in sea surface temperatures is erroneous and that the rise is due to ENSO events. I’d guess that if he was asked, he’s say that the rise in sea surface temperatures is slightly affected by AGW but it’s buried in the noise. (Of course, I could be wrong–do you have any evidence that I am?)

            I quoted him too, he says “anthropogenic greenhouse gases”. He’s explaining global warming, at least for sea surface temperatures, as caused by ENSO with statistically-indistinguishable-from-zero trend from the increase in CO2. So yeah, he’s denying that the greenhouse effect has any effect on sea surface temperatures, and his proposed model violates the second law of thermodynamics.

            EDIT: Oh, and of course here and here are two posts by Goddard on WUWT where he explains that Venus is warm because of atmospheric pressure, not a greenhouse effect.

          • “I saw one post claiming a climate sensitivity of 0.1 to 0.2c, and given that the raw forcing of CO2 is ~1c, that counts as greenhouse denialism in my view. ”

            On theoretical grounds, both positive and negative feedbacks are possible, and I don’t think anyone seriously claims to be able to predict their size a priori. The IPCC models have net positive feedbacks that roughly triple the direct effect from CO2. A model in which net feedbacks are negative gives you sensitivity between 0 and 1. That may or may not be correct, but it doesn’t deny the existence of the greenhouse effect.
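
            (For concreteness, the zero-dimensional feedback algebra behind “roughly triple”: if ΔT0 is the no-feedback response to doubled CO2 – the standard textbook value is about 1.1–1.2 °C – and f is the net feedback factor, the equilibrium response is

                ΔT = ΔT0 / (1 − f)

            With f ≈ 0.6, ΔT ≈ 3 × ΔT0, the tripling mentioned above; any f < 0 gives a sensitivity below ΔT0, i.e. between 0 and ~1.2 °C. A back-of-envelope identity, not anyone’s full model.)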

          • James Picone says:

            @David Friedman:
            ECS is not a number that could be any old thing. Negative feedback just isn’t plausible. How do ice ages happen under a negative-feedback scheme? They’re hard enough to explain at ECS==1.5c! Why is the basic intuition that the Clausius-Clapeyron relationship implies higher temperatures -> more water vapour and water vapour is a potent greenhouse gas, so there’s a very large positive feedback wrong?

            There’s a reason the IPCC draws the ECS probability curve dropping almost to zero before 1c and with a long right tail, and it’s physics.

            Good work disproving your own thesis, though – you claim people with beliefs like yours are being tarred by association with the lunatics, and then you indicate that you think we have literally zero information about ECS!

          • Bruce Beegle says:

            Negative feedback just isn’t plausible. How do ice ages happen under a negative-feedback scheme?

            The same way that ice ages happen under a positive-feedback scheme, except that it takes more “forcing”.

            They’re hard enough to explain at ECS==1.5c!

            That’s hard only if you assume that CO2 causes ice ages. Ice core data show that when the earth starts or ends an interglacial period, temperature changes first and CO2 responds with a time constant of several hundred years. Whatever causes ice ages, it doesn’t appear to be CO2.

            Why is the basic intuition that the Clausius-Clapeyron relationship implies higher temperatures -> more water vapour and water vapour is a potent greenhouse gas, so there’s a very large positive feedback wrong?

            The Clausius-Clapeyron relationship describes the saturation vapor pressure. Naturally, a warmer world would almost certainly have more water vapor in the atmosphere (which would be a positive feedback), but the amount of increase is uncertain. I have heard that many climate models assume that the relative humidity of the atmosphere would be constant in a warmer world–I don’t know if this is the case, but if they do then they are highly likely to be wrong.

            It is also wrong to consider only positive feedbacks. There are also significant negative feedbacks. I think that the total feedback under current conditions is small, with uncertain sign. I also think that total feedback becomes smaller (eventually becoming negative if not already) with increasing temperature and larger (eventually becoming positive if not already) with decreasing temperature.
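
            (For reference, the Clausius–Clapeyron relation under discussion, in its standard textbook form for the saturation vapor pressure e_s over liquid water:

                d(e_s)/dT = L · e_s / (R_v · T²)

            Plugging in L ≈ 2.5×10^6 J/kg, R_v ≈ 461 J/(kg·K), and T ≈ 288 K gives roughly a 6–7% rise in saturation vapor pressure per kelvin of warming. Note this only bounds the possible water vapor increase; turning it into a feedback estimate requires an assumption about relative humidity, which is exactly the modelling assumption being questioned above.)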

          • James Picone says:

            @Bruce Beegle:
            Ice ages are caused by a reduction in solar forcing, not a reduction in CO2 concentration, yes. That’s known.

            The amount by which solar forcing varies over Milankovitch cycles is known pretty well, though, and climate sensitivity doesn’t change much with different forcings. The numbers don’t line up for low climate sensitivity – forcing doesn’t reduce enough to cover the world in ice.

            AFAIK the end of ice ages is thought to include a significant greenhouse component in the form of methane or CO2 building up under ice, but *shrug*

            (p.s.: Careful what you claim. CO2 increases when the world gets warmer? Sounds like somebody has pointed out a positive feedback).

            Care to point out any of those significant negative feedbacks? The only one I know of offhand is increased cloudiness, but AFAIK the magnitude of that one is uncertain because increased cloudiness can also be a positive feedback, depending on cloud height and other things.

            Have a trawl around the scientific literature on climate sensitivity some time, see exactly how many have significant probability mass around 1c. I guarantee it is low single digits, and if you’ve been around the skeptic circuit you’ll recognise the names. And do keep in mind that the planet has warmed ~1c since preindustrial with less than a doubling of CO2, and that’s TCR.

            (https://en.wikipedia.org/wiki/Runaway_greenhouse_effect is a thing as well, but that’s like +10c over preindustrial for Earth and isn’t going to happen. Probably happened on Venus though.)
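
            (A back-of-envelope version of that last sentence, using the standard simplified expression for CO2 forcing; the concentrations are rounded values assumed for illustration, and the calculation ignores non-CO2 forcings, aerosols, and ocean lag:

                import math

                F_2X = 3.7            # W/m^2 per doubling of CO2 (standard value)
                C0, C = 280.0, 400.0  # preindustrial and recent CO2 in ppm (rounded)

                forcing = 5.35 * math.log(C / C0)  # simplified CO2 forcing, W/m^2
                fraction = forcing / F_2X          # ~0.52 of a doubling so far

                warming = 1.0             # approximate observed warming, deg C
                tcr = warming / fraction  # implied transient response per doubling

                print(f"{fraction:.2f} of a doubling -> TCR ~ {tcr:.1f} C")

            So ~1 °C of warming at about half a doubling naively implies a transient response near 2 °C per doubling, which is the arithmetic behind the remark above.)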

          • James Picone says:

            Correction: Runaway is closer to 30c over preindustrial on Earth than 10c. So yeah, definitely not happening. But it is a thing, and it demonstrates rather neatly that the feedbacks do not reduce until you are boiling the oceans off. And there’s some evidence it happened on Venus.

      • Glen Raphael says:

        Because from where I’m standing, the median and modal global warming skeptic doesn’t think that it actually has gotten warmer since 1970

        From where I’m standing, that view isn’t even a weakman, it’s a strawman. Albeit one that is extremely popular with the skepticalscience crowd. Can you even name ONE prominent global warming skeptic who thinks it hasn’t gotten warmer since 1970? There’s been lots of argument over whether recent warming trends have been exaggerated and over whether the warmists and related doomsayers have been claiming an inappropriate degree of certainty in their predictions or cherrypicking to exaggerate or emphasize the badness of every conceivable trend, but I honestly haven’t noticed anybody claiming unchallenged in skeptic spaces that there’s been no warming since 1970. Or no warming in the 1900s generally. Some warming happened. There’s healthy disagreement over how much more is likely to happen in the future, whether we should care about that amount of warming, if we do care whether it’s worth trying to do anything about it, and if so whether any particular response is worthwhile.

        [My relevant sample space includes: ClimateAudit, BishopHill, JudithCurry and WattsUpWithThat. Oh, and from the other side: RealClimate and Tamino’s blog.]

        • James Picone says:

          Can you even name ONE prominent global warming skeptic who thinks it hasn’t gotten warmer since 1970?

          Steven Goddard. Who posts on WUWT. And then there are the ones who think that the greenhouse effect doesn’t exist or is somehow saturated and 100% of the recent warming is natural, which is similarly insane – Ball and Tisdale.

          Also, the people claiming the warming has been ‘exaggerated’ are playing the same game. Which includes Monckton, Watts, Eschenbach, etc. Notice Watts’ reaction to BEST, for example. He clearly thought it was going to demonstrate that the trend was much lower than it actually is.

          My relevant sample space includes: approximately every argument I’ve had with someone about this, with a few notable exceptions. I mostly don’t read skeptic blogs; my experiences with them are mostly when they’ve ventured to places like RC or Tamino’s blog, and on the local news website.

          Curry and McIntyre are probably the most reasonable skeptics I’m familiar with, but Curry’s entire argument is obviously fatally flawed (uncertainty doesn’t only go in one direction) and McIntyre is isolated demands for rigor incarnate.

          Principled holding of the position Friedman is talking about? I could count on one hand the number of times I’ve run into that, and most of them would be here. People adopting that position after their more extreme claims have been attacked, because Anything But Carbon? Sure, but that’s a different animal.

          • James Picone says:

            Other relevant examples: Murray Salby claims that the CO2 rise is natural and not at all anthropogenic. I don’t know if he’s appeared on WUWT, but he has been boosted by Bishop Hill.

            Andrew Montford and Doug Keenan play the what-if-climate-were-a-random-walk game, claiming that the warming over the instrumental record isn’t statistically significant, and with interesting implications about the behaviour of the climate system (for example, it would imply that thermodynamics is wrong). That’s Bishop Hill, of course.
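
            (To make the statistical dispute concrete: whether a warming trend comes out “statistically significant” depends on the noise model you assume, which is the crux here. A minimal sketch with synthetic data – not real temperatures – showing how an AR(1) assumption inflates the trend’s standard error relative to white noise:

                import numpy as np

                rng = np.random.default_rng(0)
                n = 120  # e.g. 120 years of annual data
                t = np.arange(n)

                # Synthetic series: a genuine trend plus persistent AR(1) noise.
                trend = 0.008 * t  # 0.8 C per century
                noise = np.zeros(n)
                for i in range(1, n):
                    noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.1)
                y = trend + noise

                # OLS slope and its naive (white-noise) standard error.
                slope, intercept = np.polyfit(t, y, 1)
                resid = y - (slope * t + intercept)
                sxx = np.sum((t - t.mean()) ** 2)
                se_white = np.sqrt(np.sum(resid ** 2) / (n - 2) / sxx)

                # Standard AR(1) adjustment: shrink the effective sample size.
                r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
                se_ar1 = se_white * np.sqrt((1 + r1) / (1 - r1))

                print(f"t-stat, white noise: {slope / se_white:.1f}")
                print(f"t-stat, AR(1)-adjusted: {slope / se_ar1:.1f}")

            Under AR(1) the bar rises but a century-scale trend can still clear it; under a driftless random-walk assumption, variance grows without bound and essentially no finite trend is ever “significant” – which is the physically odd behaviour being objected to here.)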

          • Glen Raphael says:

            Can you point to an example of any of these people specifically claiming it’s not warmer today than in 1970? I’m interested in that specific claim. Finding random people who say things like “temperature is a random walk” or “greenhouse forcing is saturated” or “greenhouse warming accounts for less than half of recent warming” does not establish even that those very people think “it hasn’t gotten warmer since 1970”, much less that this is the median or mean position of skeptics in general, which was what you claimed.

            WRT Goddard: Do you have a link on that? (Note that claiming 1934 might have been the warmest year on record is talking about the U.S. record – it doesn’t tell us what he thinks about the worldwide trend. )

            WRT Tim Ball: in the link you posted he describes his own views as quite different from that of skeptics. He portrays warmists as people who say CO2 is 90% responsible for recent warming and skeptics as people who think CO2 is much less than 90% responsible for recent warming, but in his own view both sides accept that CO2 is a greenhouse gas. He is trying to convince people toward his own view, which is clearly not mainstream warmist OR mainstream skeptic. (And he thinks he’s winning converts, though I think he’s wrong about this. Curry and her commenters regularly make fun of the Sky Dragon contingent, as do occasional ClimateAudit commenters.) So Tim Ball (1) isn’t a typical “skeptic”, and (2) even if he were, you haven’t established that he thinks there’s been no warming since 1970.

            I get the sense that your view of skeptics has been overly filtered by the internet outrage-generation machinery. If a skeptic says something moderately sensible that comment won’t get featured on Tamino’s blog or at RC so you’ll never see it – you only get to notice the out-there stuff that was worth passing on in order to scoff at it. In addition to following their native inclination as to what’s worth discussing in the main posts, Tamino and RC (and John Cook) also have some history of simply deleting any good points skeptics make in their comment sections while leaving up that which is easiest to demonize. (Though RC got MUCH BETTER after Climategate happened, the early history made smart skeptics inclined to give up on trying to set the record straight there.)

          • James Picone says:

            Can you point to an example of any of these people specifically claiming it’s not warmer today than in 1970?

            Steven Goddard mostly talks about US temperatures, which makes a gotcha money-quote tricky. Here he claims that “All US warming over the past century is due to USHCN adjustments.”. Here he says “As you can see, most of their land data is fake. Then these fraudsters declare it to be the hottest month on record by a couple of hundredths of a degree.”. Here he claims that “Climate experts claim that increasing CO2 causes more hot weather, because they are paid to lie about the climate for government propaganda purposes.”. Do you really need the i’s dotted and the t’s crossed here?

            Andrew Montford and Doug Keenan are claiming that there is no statistically significant warming over the entire instrumental period. See here for example. “Until research to choose an appropriate assumption is done, no conclusion about the significance of temperature changes can be drawn”. and “This method shows that the alternative is so much better than the IPCC assumption, that we conclude the IPCC assumption is insupportable. … Under the alternative assumption, the increase in global temperatures is not significant. We do not know, however, whether the alternative assumption itself is reasonable—other assumptions might be even better.”. That’s a bit bet-hedgingy, but I think you’re hair-splitting if you claim that this isn’t the same cluster I’m talking about.

            Tim Ball is not mainstream; I agree. He’s apparently serious enough for WUWT to feature him occasionally, though, which seems telling. If that’s not outside WUWT’s Overton window, while mainstream climate science is (I can link you to arbitrarily large numbers of people claiming their comments were censored or moderated, or that they were harassed and/or subject to doxxing attempts by Watts or moderation staff, merely because they politely expressed a mainstream climate-science opinion in the comments), that tells you something. I’ve seen critical comments last on Tamino’s blog or RC. Do they ever last on WUWT? I just grabbed the recent Eschenbach thermoregulator post (here). There are no comments that are unambiguously coming from a mainstream climate position. All I could find was:
            – Mosher making a snide remark in one comment
            – Wickedwenchfan saying “Heat capacity of water. Takes longer to warm up, takes longer to cool down. Regulates weather extremes. Does nothing for average temperatures”, which is difficult to interpret, but could be a “this is dumb” comment.
            – Someone named ‘stevenreincarnated’ dumping a chunk of the abstract of a paper about sea surface temperatures not changing with changes to ocean transport, which I don’t see the relevance of, ambiguous.
            – And there was apparently a deleted comment that included a line along the lines of ” … tell us …how your hypothesis accounts for/allows glaciation… ” that some commenters speculated was John Cook. Note that literally no commenters pointed out that there is more than 0.6c of variation in the last century’s climate.

            I suspect you’d get similar results from WUWT posts selected at random. I don’t really want to do the experiment – categorising WUWT comments is not my idea of a good time.

            Most of the skeptics I run into are on an unrelated-to-climate-affairs news site popular in Australia, which has quite a high percentage of skeptics relative to most other countries. I’ve seen single-digit numbers of greenhouse-effect deniers there, and single-digit-to-zero numbers holding Friedman’s/your position; the vast majority were somewhere between “it’s entirely fraudulent” and “they’re completely incompetent and the data is wrong”, with individual issues being decided by whatever was convenient at the time (e.g., “climate is insensitive”, up until paleoclimate is brought up, at which point solar sensitivity is massive; no feedbacks exist until ice cores are mentioned, at which point CO2 is suddenly being released into the atmosphere when it’s warm; etc.). Most of them definitely read WUWT; you could tell because they would be very synchronised in bringing up the most recent Big Thing that popped up on WUWT.

          • Bruce Beegle says:

            I repeat Glen Raphael’s question:
            Can you point to an example of any of these people specifically claiming it’s not warmer today than in 1970? You gave lots of examples of people saying things that you disagree with, but (as far as I can see) nothing to back that up.

            Eschenbach thermoregulator post

            Note that literally no commenters pointed out that there is more than 0.6c of variation in the last century’s climate.

            I noticed that (not the first time I saw it, probably the second) and didn’t comment on it because it was basically irrelevant to the article. Does it matter whether the thermoregulator kept the earth’s temperature to within 1/10% or 1/5%? Not to me, anyway.

            If I had to guess (and it’s purely a guess) I’d say that Eschenbach goofed, and took half of the increase in temperature of the entire century instead of half of the difference between the high and low points (which is why I missed it the first time).

            If it means a lot to you, why didn’t you post a question on WUWT? As far as I can tell, Eschenbach is good at responding to questions and has several times admitted to an error and corrected his post. Of course, you should probably clean up your language, since this is also part of the WUWT policy:

            Trolls, flame-bait, personal attacks, thread-jacking, sockpuppetry, name-calling such as “denialist,” “denier,” and other detritus that add nothing to further the discussion may get deleted;

          • James Picone says:

            I repeat Glen Raphael’s question:
            Can you point to an example of any of these people specifically claiming it’s not warmer today than in 1970? You gave lots of examples of people saying things that you disagree with, but (as far as I can see) nothing to back that up.

            Goddard claiming that “All US warming over the past century is due to USHCN adjustments” isn’t enough for you? Keenan explicitly saying that no conclusion can be drawn about whether the recent warming is significant? (Claiming that it isn’t statistically significant is the same as claiming it hasn’t warmed).

            I hope you apply similar skepticism whenever one of WUWT’s ‘Claim: ‘ posts gets made, but I rather doubt it.

            I noticed that (not the first time I saw it, probably the second) and didn’t comment on it because it was basically irrelevant to the article. Does it matter whether the thermoregulator kept the earth’s temperature to within 1/10% or 1/5%? Not to me, anyway.

            If I had to guess (and it’s purely a guess) I’d say that Eschenbach goofed, and took half of the increase in temperature of the entire century instead of half of the difference between the high and low points (which is why I missed it the first time).

            Yes, because it’s a trivial mistake that Blog Science is apparently unable to detect, and because it gives us a sense of exactly what Eschenbach thinks has actually happened over the last century.

            If it means a lot to you, why didn’t you post a question on WUWT? As far as I can tell, Eschenbach is good at responding to questions and has several times admitted to an error and corrected his post. Of course, you should probably clean up your language, since this is also part of the WUWT policy:

            So many reasons:
            – I don’t think Eschenbach is arguing in good faith.
            – I expect to receive personal attacks and name-calling for making that criticism, and I don’t want to play that game.
            – I don’t think ‘people who want to learn things’ is a significant fraction of WUWT’s readers.
            – I don’t look forward to Anthony looking up my email address or place of work and making vague threats.

          • Glen Raphael says:

            James: I don’t want to follow you on a Gish Gallop through random climate-related claims made by random people you disagree with. I just wanted you to identify some specific person who has specifically claimed there’s been no warming since 1970…and you couldn’t do it. If the “median and mean skeptic position” included that claim, you ought to be able to find at least one person willing to say it. You weren’t able to. You apparently reached that conclusion by “reading between the lines”, but I’m looking at the same lines you are and just not seeing it – and of the two of us I’m the “skeptic” so I ought to be seeing it if it’s really there.

            Goddard fails on multiple counts to be an example. First, he’s only talking about the US trend. Second, LOOK AT HIS CHART – the unadjusted US trend as HE shows it hits some high points in the 1930s, noisily drops until the 1970s, then noisily warms again from the 1970s through the 1990s. So even if you asked Goddard just about the US trend he’d obviously say it got warmer since the 1970s. LIKE EVERYBODY ELSE DOES.

            My impression is that the WUWT crowd thinks there’s been warming since the 1970s (both in the US and worldwide) but thinks there has been somewhat less warming than is commonly believed – perhaps as little as half as much. As for what percentage of the warming seen has been human caused, I don’t think that group has a consensus on that. Wide range of views, wider range of confidence levels.

            (I don’t read WUWT regularly – it has too low a signal/noise ratio. I much prefer ClimateAudit.)

          • James Picone says:

            James: I don’t want to follow you on a Gish Gallop through random climate-related claims made by random people you disagree with. I just wanted you to identify some specific person who has specifically claimed there’s been no warming since 1970…and you couldn’t do it. If the “median and mean skeptic position” included that claim, you ought to be able to find at least one person willing to say it. You weren’t able to. You apparently reached that conclusion by “reading between the lines”, but I’m looking at the same lines you are and just not seeing it – and of the two of us I’m the “skeptic” so I ought to be seeing it if it’s really there.

            This isn’t quite being maximally uncharitable to me, but it’s getting there.

            I say that the median and modal skeptic “doesn’t think that it actually has gotten warmer since 1970”. You challenge me, I produce two skeptics who think that the US warming trend over the last century is zero and that the warming trend over the last century cannot be said to be statistically significant, respectively.

            You’re interpreting their remarks as charitably as possible by not reading between the lines or following their chain of logic. You are interpreting my remarks uncharitably by, as far as I can tell, taking the interpretation “they think 1970 is a maximum for the period 1970->present”.

            “Warming cannot be said to be statistically significant” is “doesn’t think it has actually gotten warmer”. “Thinks that the US trend over the last century is zero, rarely mentions global trends, accuses everybody involved in everything of fraud” is “doesn’t think it has actually gotten warmer”.

          • Glen Raphael says:

            Here are three claims:
            (a) “there has been no significant warming since the 1970s, worldwide”
            (b) “there has been no significant warming in the unadjusted US record over the last century”
            (c) “although a linear fit to the overall worldwide temperature trend is clearly positive, the movement of temperature over time fails certain statistical tests if we choose to analyze it in a certain way, which should lead (those who like that way of doing the statistics) to conclude that the overall increase is not very significant.”

            (a) is what you claimed most skeptics think. (b) is what Goddard claims (and his claim seems plausible) but we don’t know if he believes (a). (c) is what your Keenan link says. Believing (c) is not inconsistent with believing (a) but does not quite establish (a) either.

            To me, the essence of skepticism includes the belief that the literal truth of specific claims matters. If you say “most skeptics think there’s been no warming since the 1970s” and in discussion I leave that unchallenged, other people who read it might think that’s what skeptics actually think. Since *all* of the temperature trend estimates (including satellite-based ones and various older/less-“corrected” versions of the current ones) show warming since the 1970s, letting that claim stand would make it look like skeptics are either ignorant of the data or too stupid to evaluate it properly. I don’t know if that’s what you believe, but that is the implication I was responding to when I tried (perhaps too aggressively) to pin down what you actually mean when you assign claim (a) to “the median skeptic”.

            Had you said something more defensible such as, say, “most skeptics think there’s been some warming since 1970 but there was cooling before then and the overall trend for the century might resemble a random walk” I would not have called that a strawman. That would be a weakman at best, since one can find skeptics who would say something along those lines. Had you made that claim, pointing at Keenan would answer the challenge. (But then, had you made that claim, you wouldn’t have gotten a “name ONE” challenge to begin with.)

            Incidentally, I disagree with this:

            “Warming cannot be said to be statistically significant” is “doesn’t think it has actually gotten warmer”.

            That is, it is perfectly reasonable to believe BOTH “it has actually gotten warmer” AND “the amount of warmer that it has gotten is (based on some model somebody likes, over some period somebody has specified) not statistically significant” – one claim does not really deny the other.

          • James Picone says:

            @Glen Raphael:
            Now you know what I meant: I meant people like Keenan and Goddard. There’s an entirely reasonable interpretation of “doesn’t think it’s actually warmer now than in 1970” that fits what they think (Do you think a person who claims that the US land temperature record is fabricated for propaganda purposes is going to believe that the global temperature record is meaningful?). I did actually have both the references-to-“adjustments”-with-scare-quotes and random-walk variants in mind when I originally made the claim.

            (FWIW I actually wouldn’t be surprised if the US land record has a substantially lower trend prior to adjustments, maybe even one statistically indistinguishable from zero. AFAIK there’s a significant time-of-observation shift in the US land record which causes a very large cooling bias; combine that with the fact that a regional land-only record has more temperature variability than the global record, and that could beat out the warming trend. The part where Goddard goes “and therefore propaganda” is the relevant part here)

          • “I saw one post claiming a climate sensitivity of 0.1 to 0.2c, and given that the raw forcing of CO2 is ~1c, that counts as greenhouse denialism in my view. ”

            On theoretical grounds, both positive and negative feedbacks are possible, and I don’t think anyone seriously claims to be able to predict their size a priori. The IPCC models have net positive feedbacks that roughly triple the direct effect from CO2. A model in which net feedbacks are negative gives you sensitivity between 0 and 1. That may or may not be correct, but it doesn’t deny the existence of the greenhouse effect.
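
            To make the arithmetic concrete, here is a minimal sketch in Python of the standard linear-feedback relation this reasoning implicitly uses; the ~1c direct effect is the figure quoted above, and the feedback factors are purely illustrative:

                # Equilibrium sensitivity under a net linear feedback factor f (f < 1):
                # sensitivity = direct_effect / (1 - f)
                def sensitivity(direct_effect_c, feedback_factor):
                    return direct_effect_c / (1.0 - feedback_factor)

                direct = 1.0                        # ~1c direct (no-feedback) effect of doubled CO2
                print(sensitivity(direct, 2.0/3))   # ~3.0 -- net positive feedback tripling the direct effect
                print(sensitivity(direct, -1.0))    # 0.5 -- net negative feedback: less warming, but not zero
                print(sensitivity(direct, -9.0))    # 0.1 -- the low end of the disputed post's range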

          • James Picone writes:

            “Claiming that it isn’t statistically significant is the same as claiming it hasn’t warmed”

            That is not correct. Claiming that it isn’t statistically significant is claiming that we can’t be confident it has warmed, which is quite different from claiming that we can be confident it hasn’t.

            You have a coin that you suspect may be double headed. You flip it three times, get heads each time. You correctly report that the pattern is not statistically significant evidence that it is double headed. You are not claiming that it isn’t.
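
            A minimal sketch of that coin example in Python (the conventional 0.05 threshold is my choice of illustration, not part of the argument):

                # Under the null hypothesis of a fair coin, the probability of
                # three heads in three flips is 0.5**3.
                p_value = 0.5 ** 3
                print(p_value)          # 0.125
                print(p_value < 0.05)   # False: not statistically significant --
                                        # which is not the same as concluding the coin is fair.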

  80. Deiseach says:

    I think you’re very moderate on Tumblr, and when you’re not quite so moderate, it’s generally more of a “Hey, stop poking me with a sharp stick!” reaction.

    Unlike me: if people think I’ve been mean to the vegans on here, they should see how I treat my own family.

    Content/Trigger Warning: if animal carcasses make you feel woozy, don’t click on the link below.

    My vegan/animal rights brother posted a link to this on Facebook as per his usual evangelising. My response was along the lines of “Yes, the difference is if you rub your eyes after chopping peppers, you’ll be in trouble but there’s no problem if you rub your eyes after chopping carcasses”.

    See, I must love you all, I treat you the same as my own blood-kin! 🙂

    • zz says:

      Turns out, a lot of people find “this is disgusting” to be a morally relevant argument. Like, I remember when I was 13 or so and being charitable to pro-lifers, and one of them said something to the effect of “watch this video of an abortion and there’s no way you’ll be able to think it’s morally okay.” So, I watched the video and, well, was mostly bored; to me, it was about as morally relevant as this video of a lizard scaring a cat, because I don’t find disgust morally relevant*, even though it was, by itself, a sufficient condition to reject abortion for the pro-lifer. Several years later, I watched a Jonathan Haidt video on YouTube, realized that I’m in a minority because I find how disgusting something is to be morally irrelevant, and then a bunch of nonsensical arguments I’d heard previously made sense.

      I suspect this is review for most people here, but I hope that someone was one of today’s lucky 10,000.

      *Subsequent to watching the Haidt video, I read Interlude 11H, which finally managed to induce my first (and, so far, only) “disgust is morally relevant” reaction.

      • Nita says:

        Perhaps they were hoping you would feel sympathy / pity for the fetus, not just disgust?

      • Cauê says:

        Well, if you were bored, you were probably not disgusted, so you didn’t really need to reject your disgust as irrelevant. Your stated reaction to Interlude 11H is more relevant to understanding what you (we) want to ask of people when we try to shape institutions in ways that don’t conform to their moral intuitions.

        In my experience almost everyone equates being intuitively repulsed by something with “this is morally wrong”, but the triggers are not always the same.

    • Baby Beluga says:

      I love you too, Deiseach!

  81. stargirl says:

    I like this post a lot. But I think it is probably absolutely terrible politics to write this publicly. People can now say “Scott Alexander openly tries to punish people who criticize EA/LW.” This looks really bad!

    My personal opinion is that one should never be open about being “political.” Never goes well.

    • Nita says:

      Wait, where’s the punishment part?

      • stargirl says:

        I was not saying the perception was accurate. But this line “make the insults and mistruths reputationally costly enough that people think at least a little before doing them” in context sounds a lot like he is saying he was trying to punish Hallquist.

        • Nita says:

          But… isn’t pretty much everyone on board with making “insults and mistruths” reputationally costly? I thought the disagreement was over whether Hallquist’s post contains such or not. So, Scott is positioning himself as either a virtuous defender of truth or a virtuous, but unfortunately mistaken, defender of truth.

          Unless, of course, you’re suggesting that Scott’s opponents are horrible, toxic people who will deliberately twist straightforward points out of shape and should be shunned by polite society 😉

          • stargirl says:

            I am very pro-SSC, and that line sounded bad to me. I imagine a lot of critics of rationality will hear it the way I did.

    • Dust says:

      I hope I’m not the only one to recognize the irony of the possibility that Scott may suffer reputational consequences for suggesting that there are cases where people should suffer reputational consequences.

  82. anon says:

    Matthews’ bit about there being too many “autistic white men” in EA came as somewhat of a shock to me. I know a lot of SJ and feminist “inclusiveness” is really about driving out nerds to make room for normies, but it’s rare to see anyone admit it openly.

  83. Anonymous` says:

    This seems like a very appropriate time to link one of my favorite Enlightenment hero stories.

  84. John Wentworth says:

    One alternative strategy would be to form a proper conspiracy, pretend that the entire rationalist community doesn’t exist, and then anyone criticizing us would look like a total loony crackpot.

    Actually, now I think about it, that wouldn’t even require an ACTUAL conspiracy, it would just require a bunch of loonies who SAY there’s a conspiracy, and then our critics would get lumped in with them. It would be sort of like the Masons: they don’t need any actual conspiracy to make their critics sound like crackpots.

    So maybe what we really need is for a few community members to form a group nominally against the “rationalist conspiracy.” Ironically, these few members would be the only people who need to do anything vaguely conspiratorial. They would try to rouse rabble and generally complain loudly about the rationalist conspiracy, thereby helping to discredit our opponents.

    • It would be sort of like the Masons: they don’t need any actual conspiracy to make their critics sound like crackpots.

      Unfortunately, the Freemasons (once a major influential group in the U.S.) have long since gone over the event horizon themselves, and are on the verge of disappearing from the American scene. Hardly anyone without a historical bent takes them seriously any more.

      Imagine (as a resident of academia or the upper middle class) telling your friends and family that you had become a Mason. They would be aghast. It would be about like you had decided to join the Amish or the French Foreign Legion.

      • BBA says:

        I imagine their reaction would be closer to “They’re still around?” or “What’s a Mason?”

        • Loquat says:

          My employer has a contract to provide insurance to a Masonic group, and “They’re still around?” was basically my reaction when I found out.

      • Adam says:

        It seemed to me like just about every high-ranking enlisted black person I met in the Army was a Prince Hall Freemason, so they at least still seem to exist.

  85. Alyssa Vance says:

    “In quite a number of the most toxic and hated groups around, I feel like I can trace a history where the group once had some pretty good points and pretty good people, until they were destroyed from the outside by precisely this process.”

    I can think of a number of groups (which I won’t name, so as to avoid flamewars) where it seems like the opposite happened. A group starts off as moderate and reasonable, and I might even support it. Then they win their first victory, and a few of the moderates say “cool, great job, let’s go get pizza” and leave. And then they win their second victory, and a few more moderates leave. And this continues until the group is composed of radical extremists, and is so powerful that everyone gives them whatever they want, and all the old moderates still passively support them because of halo effect from earlier victories.

    • Alyssa Vance says:

      To illustrate what I mean without making anyone mad, I’ll use a historical example, which I think people have mostly forgotten about. In the 1920s, New York had very few state parks, and when Robert Moses got appointed as Park Commissioner, everyone supported him. After all, who doesn’t like parks? And, initially, he did a great job at making large, easily-accessible parks that everyone enjoyed. But this allowed him to get more and more powerful, until a few decades later, he was effectively a dictator who could confiscate people’s buildings at will, bulldoze them, and use them to make “parks” which were really just eight-lane superhighways.

      • Eggo says:

        That’s a nice technique for using examples! Well, until you inevitably get into an argument with someone who’s passionate about the early 20th century parks movement, and has a framed picture of Robert Moses in a shrine…

        What you’re talking about seems to happen a lot more to groups with vague goals rather than achievable, concrete ones. Naming your organization something like “Freedom To Marry” makes it much easier to say “Great work, lads! Clear out your desks: job’s over!”
        As opposed to the Woman’s Christian Temperance Union, which will apparently be sticking around until everyone is 100% temperate at all times.

        • Tracy W says:

          In NZ the WCTU wound up leading the fight for women’s suffrage – and succeeding only a couple of elections after universal male suffrage was achieved.

          • Eggo says:

            Looks like they did plenty of other damage, too, especially to wine growers. Had heard about the very narrow loss of national prohibition, but didn’t know they’d had so much local success.

        • Tarrou says:

          Successful organizations almost never self-cancel in response to unmitigated victory. The March of Dimes formed to combat polio, so it “succeeded” about sixty years ago in this country. So did it say “well thanks everybody, America is now polio-free, we’re shutting down!”? Of course not; they had donors, political clout and vast popularity. So they expanded the mandate. Now it’s about “women’s health” issues, which gives them an excuse to keep their cushy jobs and moral superiority and rake in millions. It will never end unless someone is stupid enough to support something very unpopular, and it loses the halo effect.

  86. Jimmy says:

    I think it’s interesting that people have been reading your posts as “defensive”.

    How one goes about defending seems to be the important thing here – not just whether there is a defense. One way people will do it is to feel hurt and unable to defend themselves, but instead of saying “I feel hurt and am unable to defend this”, they’ll gloss over their motivations and resort to increasingly dishonest/nonsensical arguments/attacks in order to put up *some* defense. This is obviously not so great.

    However, it’s much different when you don’t actually feel unable to provide a valid defense, so you accept valid criticism and make good arguments matter-of-factly. This seems like the obvious Right Thing.

    To me, SSC looks more like the latter than the former. There’s also a bit of “I don’t like having to defend against dishonest criticism” coming through, but it does not overpower the actual arguments, and Scott is also very upfront about it.

    Basically, I don’t see a problem.

  87. Machine Interface says:

    It is interesting that the infamy of being compared to a “cult” is itself a meta-demonstration of this phenomenon, since “cults” themselves went from being regarded as anecdotal minority religions to being seen as dangerous, sectarian, isolationist factions somewhere in the 70s, along with unsubstantiated accusations of brainwashing and other forms of manipulation.

    • Scott Alexander says:

      I mean, there seem to be some groups like Scientology which are genuinely very bad? I don’t know how common they are among small religions, but I do feel like we need a word to describe them?

      Ozy was trying to come up with an “ideological abuse” distinction on Tumblr, but then Topher jumped in and said that Less Wrong was a good example of “ideological abuse”, so so much for that clearing anything up.

      • Sniffnoy says:

        Well, is anyone actually agreeing with him about it? At this point to me it just looks like Topher being cranky about LW. I have no idea how he’d actually defend that statement if challenged. There may be some at-first-glance-apparently-cult-like things about LW (that fall apart on closer inspection), but I can’t even think of what would be at-first-glance-apparently-abusive.

        • haishan says:

          “Nah, my opponent is just being cranky. No true Scotsman would call LW abusive!”

          He did defend it. I actually agree with his points there, although I think he’s failing to make the distinction between “Eliezer is a cult leader/jerk” and “LW is a cult” — in practice Eliezer pattern-matches to a cult leader way better than most locally prominent LWers pattern-match to cult members.

          • Ilya Shpitser says:

            Nah, some of the more hero-worshipping, somewhat clueless younger LWers used to pattern-match onto cult members pretty well. I suppose these are not prominent, but I’m not sure why that distinction matters here. Cult rank and file are not prominent by definition.

            Incidentally, at least one prominent academic’s impression is that LW is “one compound away from a cult.” (Will not name, sorry, but was personal communication.) These impressions are persistent for a reason.

          • haishan says:

            You’d know better than I would — I’m reasonably familiar with the LW diaspora but I never really went on lesswrong.com or anything.

            I do suspect that EY is most of the reason LW gets compared to a cult; most of the criticism definitely focuses on him (his overconfidence, his particular weird beliefs, his maybe-abusiveness, etc.) Most critics aren’t going to bother digging for the rank and file’s reactions, especially since a good proportion of them are not very culty. And like I said, he does pattern match onto “cult leader” remarkably well.

          • nydwracu says:

            Does Yudkowsky pattern-match to ‘cult leader’? When people provide evidence and I click through, I see ‘some guy from Usenet’. My impression of Usenet (though it was before my time) is that the culture there was unusually accepting of the particular style of arrogance that comes out in Yudkowsky’s “what you’re actually asking is, ‘given that I’m not smart enough to understand that MWI is obviously true, what’s the proper epistemic position for me to have on it?'” and so forth.

            (Personally, I don’t care about that stuff — I’d rather be called an idiot than a cockroach.)

          • Sniffnoy says:

            Ah, I hadn’t seen that, thanks; not sure if it was up when I posted. But this seems pretty weak — his first example I’m not familiar with, but (just going by the description) it seems to lack any ideological component. As for the second, well, Eliezer was downvoted to -3 for it. So even if you want to call it an attempt at ideological abuse, the base wasn’t going along with it, in which case I don’t really see how it could be very successful.

          • TheAncientGeek says:

            There’s an important difference between are and were. Even if there are not now, there once were cult-like things about LW that did not fall apart on inspection. They fell apart under criticism. MIRI was criticised for phygishness, and responded by becoming less of a phyg and more of a credible research organisation. Despite the framing of the OP, it’s not all about defense; sometimes you have to accept the point.

          • Viliam says:

            Yudkowsky using his critics’ health problems to strong-arm them into deleting their criticism of him.

            Well, this removes like 99% of the context.

            If we are talking about the same story, there was a guy who spent years compulsively criticizing LW and Yudkowsky on various websites. When you found an accusation of LW being a cult, there was more than 50% probability that it originated from him.

            And how was this connected to health issues? A few years later, the guy came to LW and said that whenever he thinks about LW, his pulse increases to dangerous levels, and he is afraid that he could die from obsessing about LW too much. Unfortunately, whenever he publishes another criticism of LW, someone from LW publishes a refutation, which forces him to publish another response… creating a circle he cannot stop. Is there any way we could stop this together? For example, he would stop criticizing LW, LW would stop criticizing him, and both sides would agree to forget that the other side exists.

            Yudkowsky said that as long as the attacks on him remain publicly visible (even if the guy stopped writing new articles, there would still be new people finding the old ones, then quoting them in their blogs, or even in newspapers), he will have to keep defending. So any “peace treaty” must include removing the old articles.

            Does this still feel to you like Yudkowsky is a jerk for saying this?

            Yeah, we do have an intuition that when our opponents have health problems, we should treat them gently. But if it is a person who spent previous years trying to ruin your career, single-handedly responsible for most of the defamation people find online when googling your name, should you just shrug and forgive them? Or are you allowed to ask them to remove the offending materials first?

          • Eli says:

            In defense of that one academic, hobbyist groups and social movements don’t usually run group houses devoted to their group. That’s more the sort of thing Chabad-Lubavitch does. But there actually are “rationality houses”.

            That’s some weirdness right there.

          • Nornagest says:

            Yudkowsky said that as long as the attacks on him remain publicly visible, he will have to keep defending. […] So any “peace treaty” must include removing the old articles. Does this still feel to you like Yudkowsky is a jerk for saying this?

            I wouldn’t say he’s being a jerk. I would say he’s being clueless.

            If you’re even locally significant online — and I’ve seen this happen in a circle of less than a hundred people, so you don’t have to be very significant — you’re going to attract critics. Some of them are going to do things like compare you to a cult leader or Hitler. I’ve been called both, and I’ve never been near as prominent as Eliezer is.

            Trying to quash this is futile: it demotivates you, eats up time that you could be using to write real content, and tends to be interpreted by observers as an abuse of power. You can engage with substantive criticism, or quietly adjust your approach to look less Hitlery; either one can be a good move. But you also need to figure out how to get your job done in the presence of some low background level of Hitler accusations, because aggressively defending yourself against some rando that no one’s ever heard of makes you look more like Hitler to people that’ve just come across your stuff, and doing your job makes you look less.

            Eliezer doesn’t get this, and it’s one of the main things that makes him bad at PR. He’s a smart guy, and he’s not bad at outreach as long as he’s making abstract arguments (or, say, writing fanfic) rather than trying to engage with individual people, but he seems to go on tilt as soon as he sees personal opposition.

          • houseboatonstyx says:

            @ Nornagest
            But you also need to figure out how to get your job done in the presence of some low background level of Hitler accusations, because aggressively defending yourself against some rando that no one’s ever heard of makes you look more like Hitler to people that’ve just come across your stuff, and doing your job makes you look less.

            Another approach was mentioned earlier: to let your friends answer the critics while you ignore them.

            Back to Scott Alexander’s situation (and Scott Aaronson’s) — someone with a solid establishment position being attacked on a shallow ad hominem level — maybe we could look at what Peter Singer has done or not done, and how it has worked or not worked for him.

      • ozymandias says:

        You completely missed the point.

        Every sufficiently large ideology contains at least one instance of ideological abuse; the difference is only whether it is a few people or everyone. Less Wrong is going to contain instances of ideological abuse, at least on the friendship/meetup group level, if it hasn’t already; what we have control over is how defensive we get about it when it happens.

        • Nyx says:

          I think Scott was maybe just irritated that it seemed like Topher was trying to load LW with negative affect? Like, he could have made that point in a way that didn’t load negative affect (e.g. the way you did it here, by first noting that this happens in almost every group).

        • Scott Alexander says:

          I’m on a weird computer that won’t let me link to the post involved, but take a look at the hallq tumblr post I’m talking about and tell me if that is the most plausible way to read his comment.

        • Adam says:

          Clicking through all those links to the source brings up this very weird comment:

          “You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down.”

          For someone pulling all of his ideas from economics, has he never been exposed to modern portfolio theory? ‘Put all your eggs into one tremendously uncertain basket’ is the exact opposite of what you should do.

          • Adam:

            The argument for a diverse portfolio comes out of the declining marginal utility of money, aka risk aversion. Von Neumann utility has risk aversion already built into the utility function. The statement “outcome A has twice the utility of outcome B” means that you are indifferent between a certainty of B and a .5 chance of A.

            The utility of a bundle of expenditures on different charities is the expected value of the utilities of the improvement in the outcome lottery it generates, and if that is lower than the expected utility of spending all your money on one charity you should do the latter. So the person you are quoting (Eliezer?) had his economics right.

            I hope that wasn’t too elliptical.
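
            A toy version in Python, with made-up numbers: if each charity’s marginal expected utility per dollar is constant over the range of your donation, then the expected utility of any allocation is linear in it, and a linear function is maximized at a corner.

                # Von Neumann-Morgenstern utilities already have risk aversion built in,
                # so the expected utility of an allocation is just a weighted sum.
                def expected_utility(allocation, marginal_eu_per_dollar):
                    return sum(x * u for x, u in zip(allocation, marginal_eu_per_dollar))

                budget = 5000.0
                mu = [0.8, 1.0]   # hypothetical marginal expected utilities per dollar

                print(expected_utility([0.0, budget], mu))          # 5000.0 -- all in the best basket
                print(expected_utility([budget/2, budget/2], mu))   # 4500.0 -- diversifying strictly loses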

    • Ilya Shpitser says:

      Unsubstantiated?

      • Machine Interface says:

        Wikipedia on the subject:

        “James Richardson observes that if the new religious movements (NRMs) had access to powerful brainwashing techniques, one would expect that NRMs would have high growth rates, yet in fact most have not had notable success in recruitment. Most adherents participate for only a short time, and the success in retaining members is limited.[46] For this and other reasons, sociologists of religion including David Bromley and Anson Shupe consider the idea that “cults” are brainwashing American youth to be “implausible.”[47] In addition, Thomas Robbins, Massimo Introvigne, Lorne Dawson, Gordon Melton, Marc Galanter, and Saul Levine, amongst other scholars researching NRMs, have argued and established to the satisfaction of courts, relevant professional associations and scientific communities that there exists no generally accepted scientific theory, based upon methodologically sound research, that supports the brainwashing theories as advanced by the anti-cult movement.[48]

        […]

        Eileen Barker criticizes mind control theories because they function to justify costly interventions such as deprogramming or exit counseling.[citation needed] She has also criticized some mental health professionals, including Singer, for accepting expert witness jobs in court cases involving NRMs.[55] Her 1984 book, The Making of a Moonie: Choice or Brainwashing?[56] describes the religious conversion process to the Unification Church (whose members are sometimes informally referred to as “Moonies”) which had been one of the best known groups said to practice brainwashing.[57][58] Barker spent close to seven years studying Unification Church members. She interviewed in depth and/or gave probing questionnaires to church members, ex-members, “non-joiners,” and control groups of uninvolved people from similar backgrounds, as well as parents, spouses, and friends of members. She also attended numerous Unification Church workshops and communal facilities.[59] Barker writes that she rejects the “brainwashing” theory as an explanation for conversion to the Unification Church, because, as she wrote, it explains neither the many people who attended a recruitment meeting and did not become members, nor the voluntary disaffiliation of members.[60][61][62][63]”

        The “deprogramming” mentioned in the second paragraph is a purported form of counter-brainwashing which in practice gives family members a free pass to kidnap and psychologically abuse (and in one documented case, commit corrective rape against) one of their own, if they can make a sufficient allegation that this person is in “a cult” (where “being in a cult” can here mean things like “being a member of a leftist/activist organization” or “being a lesbian”). It also doesn’t work: people who are victims of deprogramming say what their captors want to hear, and as soon as they’re free they flee and go back to their “cult”.

        The whole theory of brainwashing feeds the dynamic of dehumanization: “there’s no way anyone would join these people of their own free will; they must be mind-controlled”.

        • Ilya Shpitser says:

          I agree that the way cults operate does not resemble the caricature on the Simpsons.

          If you say that “brainwashing” is as you describe, then I am fairly happy to concede your point. But your sentence contained a conjunction: “brainwashing and other forms of manipulation.” Do you deny that cults manipulate people at all? Cults manipulate people; it is essential to how they operate.

          The way scientology keeps people in is well-documented on clambake.org. I am less familiar with the moonies, but I know a few other (lesser-known) examples, e.g. the fellowship of friends in California. The latter in particular preys on the relatively well-off and educated.

          In some ways cults work by systematizing one-to-one abusive relationships (which are certainly a thing) and exploiting the powerful human need for hierarchy and “special inclusion” (see e.g. C.S. Lewis’ “inner ring” essay).

        • Sylocat says:

          I think James Richardson is using the Hollywood sci-fi definition of “Brainwashing.”

  88. Anon22 says:

    Thanks Scott. I’ve been feeling pretty down lately after reading some other people’s thoughts about these communities, and this post has helped me understand why, and now I don’t feel so bad anymore. I couldn’t put it into words, but you did. You’re very skilled like that. Thanks. 🙂

    Anyways, in other communities I often feel like the Christians you describe. I just don’t mention LW/EA/SSC because everyone there more or less agrees we are dumb people who deserve to be mocked. So I just hear what they say and keep my mouth shut. I recently came across this comment (rot13 because it made me feel bad and I figure other people might not want to read it for the same reason).

    V nz abg n ivbyrag zna. Nsgre univat ernq gung, V jnag gb chapu rirelbar vaibyirq jvgu ‘Rssrpgvir Nygehvfz’ va gur snpr, [v]ercrngrqyl[/v].

    Wrfhf jrcg, ‘Jryc, uhznagvgl vf pyrneyl tbvat gb or nebhaq sbe 50 zvyyvba lrnef. Pyrneyl gung’f [v]thnenagrrq[/v]. Fb jr qba’g arrq gb jbeel nobhg, Ahpyrne jne, Tybony Jnezvat, be vaqrrq gur ovyyvbaf bs crbcyr yvivat va cbiregl gbqnl.’ :fhvpvqr:

    ‘Bu jung’f gung lbh fnl ng gur onpx? Lbh xabj n ivyyntr gung pbhyq ernyyl hfr n fbhepr bs pyrna jngre jvgubhg nyy gung Ovyunemvn va vg. Fbeel, guvf zbarl vf tbvat gb fnir 10 Dhnqevyyvba ulcbgurgvpny shgher crbcyr. Fgbc orvat fb frysvfu’.

    It made me sick. And I wanted to say something. But I didn’t. There are just too many people saying stuff like this and I don’t want a reputation for being “That LW Guy”, and I just can’t handle internet arguments anyways.

    As for solutions? I have no idea. These are people who read your out group post and come away with “This is 10000 word essay on ‘Why won’t you tolerate my intolerance?'”, I think Troy Rex and Raemon had good ideas though.

    • Nyx says:

      I mean, in social situations I think you can totally get away with being devil’s advocate as long as you imply or say that you, of course, know that “they” are wrong before you defend them. In particular, play the status game where you take the *I have a better and more charitable perspective* position and try to understand why they think the way they think. (e.g. “I know god totally exists, but I don’t think you’re being charitable enough to the other side. Look at it honestly from their perspective…. [but imply that it’s still totally wrong]”)

      Maybe I’m not describing it well, but I’ve had success with this on everyone from evangelical christians to social justice liberals. I managed to convince the extreme christian that faith is bad(!) insofar as it means believing something without a reason, because that heuristic can justify anything: in particular, it is what animates ISIS, which evangelicals don’t like at all, but which is totally understandable if you believe murder gets you to heaven – a belief that is impossible to reject once you accept faith. The key is agreeing with them first, but then doing a more rational analysis after you have assured them that they are correct / the ingroup.

      Basically, if you want to defend the other side, lie and don’t reveal that you are on that side (or somehow clearly establish that you are not a ???-sympathizer). Then, patronizingly analyze the other side and conclude “they are still wrong and dumb, but for slightly more understandable reasons/ mostly not evil, just dumb”. I think you can almost always get away with this (this being “I don’t think that you are being particularly charitable/intelligent, they actually probably believe *this* [which is still wrong tho haha]”)

      • Earthly Knight says:

        Feminists have evolved an immune response to this technique, which is a combination of accusing anyone who looks like they might be doing it of being a concern troll and a limitless tolerance for type I errors (“shotgun lupus”). Most changes in language, I have heard, originate with teenage girls, so you are going to have to mutate awfully fast to keep up with the tumblr set’s antibody swarm. You might still enjoy some success in aged and infirm communities, Christians being a prime example.

        (not a criticism of feminism– note who is the pathogen in this scenario)

        • Cauê says:

          note who is the pathogen in this scenario

          …the guy saying that faith is bad because it can justify anything and equally rejects all that opposes it, truth and falsehood alike.

          The defense mechanism you describe also suffers from this insensitivity.

        • Nyx says:

          I think you might be right online; I was thinking more from the perspective of a small group meeting (e.g. a 15-person club). Or even better, 1-on-1.

          Though it might work online too. I see this happen occasionally (where someone basically yanks everyone out of the circlejerk by saying something to the effect of “do you honestly think that’s what they believe?…” followed by “They probably really think like…[insert perspective you’re trying to defend]”). I don’t think this ever loses you much status, though it may be ignored. It very rarely looks bad to say “do you honestly think that’s what our opponents believe?” and often even gains you status.

      • Deiseach says:

        You didn’t convince your “extreme Christian” that faith qua faith was bad, you convinced them that blind faith was bad, which they probably agreed with in the first place – have you really never encountered the common Evangelical verse from 1 Peter 3:15 “But in your hearts revere Christ as Lord. Always be prepared to give an answer to everyone who asks you to give the reason for the hope that you have. But do this with gentleness and respect”, as well as various verses from the Epistles of St Paul?

        Christians generally believe they have reasons for faith; indeed, some of us hold that fideism is almost heretical. You were arguing against fideism as far as your Christian friend was concerned, not against faith itself, so your triumph may be less than you think.

        • Adam says:

          You obviously know more about Christianity than most people here, so speaking for most people here, no, I’d never heard that one. The one that comes to mind first is ‘faith is the substance of things hoped for, the evidence of things unseen.’

          • Deiseach says:

            Yes, but note the “evidence” there, and “substance”. People have hope – in what? What is their hope? Here is the answer to that because of what God has done. You have an intimation of something beyond or more than the workaday world? “There must be something more than this”? Then here it is, not just a fantasy.

            Or at least, that was what St Paul was telling his original audience 🙂

        • Nyx says:

          This is true if you asked them directly if they think they believe for no reason, but I think I broke down a compartmentalization in which he did, in fact, believe that faith without evidence or reason was justified, even if he never would have said such. I think a lot of religious people have a similar compartmentalization (having been an extremely religious person myself through high school…). Basically, religious people motte-and-bailey with faith all the time (if asked or challenged directly, they will say ~what you did here. But then later they will use faith the way I did, where it acts as a very general justification for whatever beliefs without evidence).

          In particular, his behavior changed such that he now talks about how the bible confirms things that are independently verified outside (e.g. avoiding drugs, porn, etc is good because there is evidence those things are harmful to people). Where before that wouldn’t have been part of the discussion (it would have been more like “look at these things that look bad because I load them with negative images, etc”)

          Similarly, he’s now of the opinion that people of many faiths, even atheists, can go to heaven, taking the circumstances into account, because he recognizes that even if they’ve heard of christianity, they may not have good enough reason to convert. Whereas before the thought of “from their perspective, could I see good reason to believe christianity is true vs other religions or atheism?” wouldn’t have even occurred. Or would have been carefully avoided/ motte-and-bailey-ed around.

          Obviously, I came/come into these discussions putting on the face of a relatively devout christian, when I’m really not. Which is what I’m recommending. (Though it does require you to pass something like an ideological turing test, albeit a much easier version)

  89. HeelBearCub says:

    Call me crazy, but isn’t letting your Id argue sort of the opposite of what you want to do? In other words, much of what you’re pointing at is that people are making emotional, not rational, arguments. If you have seemed defensive in your writing (enough to be asked about it and then to recognize that as a true statement), it seems like you must have been doing that which you do not want to be done.

    Also, and part of me doesn’t think this is quite fair so please take it with a grain of salt, but doesn’t this characterize much of the argumentative approach being taken against feminism? Essentially statements of “those people are toxic, you don’t want to be a feminist”. On a 30-year timeline, I think the number of people who identify as feminist is falling, and for precisely the process you describe here.

    I also have to wonder who some of these “most toxic” organizations that had a point are. When I think of the most toxic organizations, it’s the KKK, neo-nazis, etc. that come to mind. I am not sure what you are trying to say here, but it, unfortunately, seems ripe to be misinterpreted.

    • Mirzhan Irkegulov says:

      On a 30 year timeline, I think the number of people who identify as feminist is falling

      Anything to back this up? I’m not asking specifically for peer-reviewed studies in top-notch journals (only demagogues demand such for everything they disagree with, even things that are either trivial or for which no studies exist). I’m only asking for anything that could serve as stronger evidence than just your impression. My impression is the reverse, but I wouldn’t bet money on either statement.

  90. Muga Sofer says:

    I have a feeling that “why are you being defensive” and “why are you defending yourself” are two different points.

    I’m way too tired to come up with a proper analysis here, but they feel like different ideas to me, with my subjective speaking-English sense. “Defensiveness” is not “defending yourself”; it’s more like … attacking anything with even a hint that it might be attacking you, far more than is actually the case in reality.

    Yelling at someone because they insulted you publicly is defending yourself. Yelling at somebody because you think they were maybe implying something is being defensive. That’s the only way I can put it.

    (And on the scale of movements, yelling at people because they’re attacking you is defending yourself, and yelling at a movement because one of their members disagrees with you is being defensive? I guess?)

    • HeelBearCub says:

      @muga sofer:

      I think this is adjacent to what I was getting at above, when I pointed out that letting one’s Id argue is counter-productive.

    • Eggo says:

      Not really. Do you have to put up with people wandering into random forums to spread rumours about you, like Scott does?

    • gattsuru says:

      There need not only be giants for Don Quixote to be sane; they need be standing near where others see windmills. At an even deeper level, there need not only be giants in place, but you need a method to actually demonstrate them to people who’d care to learn about those giants.

      Apologies if this sounds trite, but I’ve been called defensive and suffering from a persecution complex, in response to specifically claiming certain insults toward my ilk were tolerated.

  91. anon85 says:

    I’m not sure how to answer your question, but let me at least try to get you to see it from my perspective (as a potential attacker).

    The AI risk people seem to me to be trying to convince regular, ‘ordinary’ EAs who worry about saving children to stop spending money on African lives and start spending money on “AI risk research”, which in practice means giving money to Eliezer and his friends. This seems like a moral monstrosity to me, much more so than people who ask for donations but target non-EAs (e.g. the make a wish foundation).

    Of course, I try to argue fairly, without any ad hominems, but I’m not sure how to engage AI risk people in a conversation: I don’t know too many of them, and the ones I do know are a bit like you, which means that they retreat from “AI risk is a concern we should donate money to (instead of AMF)” to “we can’t be sure about anything, and research never hurts” as soon as I start making good points. This is frustrating, and sometimes my frustration comes out in the form of unfair attacks (though I try to minimize this). Again, I find the current situation, with singularity people going to the EA summit to fundraise, to be a moral outrage; it makes me feel like yelling at someone.

    • I work at MIRI (on the operations side); if you email me at rob@intelligence.org sometime, I’d be happy to answer at least a few questions about why we think there’s a good EA case for us.

      If there’s currently a huge difference between the humanitarian impact of the best EA intervention and the second-best intervention, then the stakes are huge for picking the right cause. The choice between giving $5000 to the Humane League and giving it to the Against Malaria Foundation, for example, is almost certainly of massive moral significance. Yet I’m pretty confident that EAs who think this way about their favorite intervention shouldn’t be accusing every EA who disagrees with them of “moral monstrosity”; that would reduce the quality or the diversity of the discussion within EA, or both. This is an especially bad idea if we haven’t identified the best intervention with great confidence (or haven’t even thought up the best intervention yet).

      • anon85 says:

        Thanks for your reply, and I will consider emailing you soon (I don’t think I can change your mind, but perhaps I will find out if I have any misconceptions about your position; on the other hand, I’m busy, and if I start a conversation I’ll feel compelled to continue it).

        To clarify, I was not accusing people who disagree with me of being monsters themselves; it is their actions that are monstrous (their intentions are good, I’m sure). I also agree that it is not productive of me to accuse people of monstrous actions, but I was trying to explain my gut feeling on the subject, not to enter into a debate.

        The reason AI risk is different from other EA causes is that I feel like it’s trying to trick people into extreme conclusions using dubious reasoning. A simple example of this is Bostrom’s argument that Scott recently defended, which amounts to Pascal’s mugging; this is on the one hand hard to argue against, and on the other hand obviously fallacious (since I can use the same argument to mug you). Of course, these days MIRI proponents avoid using Pascal’s mugging, but their other arguments strike me as equally suspect.

        • OK, I understand what you were saying better now!

          Could you say which Bostrom argument you’re pointing to, or which Scott post? I don’t think MIRI or Bostrom endorsed Pascal-type arguments at any point in the past, because we think AI risk is high-stakes and high-probability, whereas the distinctive feature of Pascal’s mugging is that the risk is high-stakes and low-probability.

          I do often hear people outside MIRI say ‘even if the probability of AI risk were low, it would still be important.’ And I really wish they’d abandon that argument, or else clarify that by ‘low’ they mean something like ‘20%’ (and not, say, ‘0.00001%’).

          • TheAncientGeek says:

            Where’s the calculation?

          • Jeff Kaufman says:

            My memory is that Bostrom made exactly this “probability of success is very low but value of success is extremely high” argument during the EA Global AI risk panel. This was the question where they were asking what the probability of making a difference was. (Where Soares was only willing to give a probability conditional on AI not destroying the world, which was a bit of a cop-out.)

          • Professor Frink says:

            Bostrom explicitly made a Pascal’s mugging argument at EA Global. He is perhaps the most prominent advocate, and this was perhaps the most prominent venue, for the AI-risk argument.

          • anon85 says:

            As other commenters pointed out, Bostrom uses Pascal’s mugging a lot. As for Scott’s post defending him, I was referring to this:
            https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/

            To be fair, Scott does say this argument is on “shaky philosophical ground”. But right after saying this, he spends the rest of the article defending the argument. Later on he even says “you have to really really stretch your own credulity to come up with numbers where Bostrom’s x-risk argument doesn’t work.” This seems a little dishonest to me, because it doesn’t mention that Bostrom’s x-risk argument could just as easily be used to mug you, so there must be a problem with it somewhere.

          • It sounds like there are a lot of different independent misconceptions about this particular issue. (Maybe this demands another blog post?) To clear up a bunch of things at once:

            1. Pascal’s Mugging is any argument of the form “Doing [X] has a [superexponentially infinitesimal number] probability of yielding an outcome with [superexponentially vast number, larger than the infinitesimal number is small] utility. Therefore do [X].” (Where “superexponential numbers” include e.g. 3↑↑↑↑3 and 10↑googolplex, and don’t include “small large” numbers like 10↑80.)

            2. Eliezer Yudkowsky came up with the idea of “Pascal’s Mugging,” and Nick Bostrom has written about it in multiple academic publications. Both Yudkowsky and Bostrom always cite Pascal’s Mugging as an example of a fallacious argument. This is obvious from seeing that “[X]” above is arbitrary: if you buy the argument in the case where a stranger tells you that putting a bucket on your head will keep the world from being destroyed by AI, then you’ll also need to buy the argument in the case where a stranger tells you that putting a bucket on your head will cause an AI apocalypse. Bostrom and Yudkowsky are interested in Pascal’s Mugging because it’s obviously defective reasoning, and because it’s unclear what the best and most principled formal way of avoiding this reasoning is; so studying this topic may give us a better understanding of how to implement a practical decision theory.

            3. Bostrom briefly mentioned Pascal’s Mugging in his EA Global talk (before the panel) as a possible objection to the general idea of using expected utility calculations. Bostrom was not endorsing Pascal’s Mugging, and he wasn’t saying that global catastrophic risk arguments relied on Pascal’s Mugging. If I recall correctly, he was saying that Pascal’s Mugging, in an indirect way, suggests there may be systematic problems with expected-utility calculations, and until we know the best solution to this bug we can’t rule out the possibility that the best solution will radically undermine the entire project of basing decisions on utilities multiplied by probabilities even in the case of probabilities like ‘1%’ or ‘30%’.

            4. The idea that Bostrom was ‘endorsing’ Pascal’s Mugging probably stems from the end of the AI panel. My recollection is that Dewey asked whether the panelists thought their odds of preventing an AI catastrophe were low, medium, or high. Bostrom said “low”. “Low” in this context might mean 20%, or 5%, or 0.01%, or 0.000001%; I have no idea what order of magnitude he had in mind. Since Bostrom has always rejected Pascal’s Mugging (and treated it as a possible reductio ad absurdum of his views in his earlier talk, rather than as a possible reason to think global catastrophic risk is important), we can be confident that by “low” he didn’t mean “superexponentially low.”

            5. The question of whether MIRI and FHI’s odds of success are low is distinct from the question of whether the odds of a smarter-than-human AI system producing an existential catastrophe are low. If the former odds are very low and the latter odds are very high, the right conclusion is probably ‘we should put a lot of effort into looking for better options than MIRI and FHI for influencing AI outcomes,’ which is very different from ‘AI isn’t a big deal, or it’s only a big deal in a positive way.’

            6. If Dylan Matthews’ argument is just that a marginal dollar only nudges probability estimates about big multi-part projects a tiny amount, then his argument seems to prove too much. Advancing the frontier of AI alignment research is difficult, but well-targeted effort represents incremental progress that other actors can build on — more like setting aside $x of each month’s paycheck for a child’s college fund than like buying a series of once-off $x lottery tickets. The fact that a particular $100 is unlikely to make a large once-off impact on your child’s career prospects isn’t of immediate practical import, any more than is the difficulty of calculating numerical probabilities for the consequences of donating or not donating $100. Large projects where the benefit comes from chipping away at a highly important problem aren’t automatically dominated by less ambitious projects that get an immediate high-confidence reward for every $100 that goes in.

            For example: giving $100 to an ambitious $100,000 project that’s intended to ban factory farming throughout the U.S. isn’t automatically a bad idea if you can directly save a chicken’s life for <$100. Whether it’s a good idea depends chiefly on how likely the $100,000 project is to achieve its goal, how valuable the goal is (compared to alternative uses of $100,000), and how much the $100 helps at making that goal likelier to be achieved (compared to alternative uses of $100). Trying to precisely compute the number of animal lives saved by giving $100 (or $10, or $1) is less useful than getting a good rough idea of the value of the project as a whole. This also includes effects like ‘there’s a 50% chance that if Project A hits its $100,000 goal it still fails to get factory farming banned, but the inroads Project A would make in that failure scenario increase the odds that Project B will get factory farming banned 2 years later from 10% to 30%’.

            Eliezer Yudkowsky has also written about this topic a number of times. Quoting one instance:

            For so long as I can remember, I have rejected Pascal’s Wager in all its forms on sheerly practical grounds: anyone who tries to plan out their life by chasing a 1 in 10,000 chance of a huge payoff is almost certainly doomed in practice. […]

            I once again state that I abjure, refute, and disclaim all forms of Pascalian reasoning and multiplying tiny probabilities by large impacts when it comes to existential risk. We live on a planet with upcoming prospects of, among other things, human intelligence enhancement, molecular nanotechnology, sufficiently advanced biotechnology, brain-computer interfaces, and of course Artificial Intelligence in several guises. If something has only a tiny chance of impacting the fate of the world, there should be something with a larger probability of an equally huge impact to worry about instead. You cannot justifiably trade off tiny probabilities of x-risk improvement against efforts that do not effectuate a happy intergalactic civilization, but there is nonetheless no need to go on tracking tiny probabilities when you’d expect there to be medium-sized probabilities of x-risk reduction. […]

            To clarify, “Don’t multiply tiny probabilities by large impacts” is something that I apply to large-scale projects and lines of historical probability. On a very large scale, if you think FAI stands a serious chance of saving the world, then humanity should dump a bunch of effort into it, and if nobody’s dumping effort into it then you should dump more effort than currently into it. On a smaller scale, to compare two x-risk mitigation projects in demand of money, you need to estimate something about marginal impacts of the next added effort (where the common currency of utilons should probably not be lives saved, but “probability of an ok outcome”, i.e., the probability of ending up with a happy intergalactic civilization). In this case the average marginal added dollar can only account for a very tiny slice of probability, but this is not Pascal’s Wager. Large efforts with a success-or-failure criterion are rightly, justly, and unavoidably going to end up with small marginally increased probabilities of success per added small unit of effort. It would only be Pascal’s Wager if the whole route-to-an-OK-outcome were assigned a tiny probability, and then a large payoff used to shut down further discussion of whether the next unit of effort should go there or to a different x-risk.

          • anon85 says:

            Rob, who exactly are you arguing against here? We all agree that Eliezer doesn’t use Pascal’s mugging, right?

            We also all agree that Scott Alexander defended Bostrom’s multiplication of a very large utility by a very small probability (while at the same time saying that Pascal’s mugging is on shaky ground for reasons independent of Dylan’s attack). Correct?

            So the only point of disagreement is that you think Bostrom never used Pascal’s mugging. Is that it?

            I wasn’t at the EA conference myself, but commenters here understood Bostrom’s argument to be Pascalian. If you disagree with them, then at the very least it means Bostrom is very bad at getting his point across.

            Also, Dylan quotes Bostrom directly. He writes:

            >Even if we give this 10^54 estimate “a mere 1% chance of being correct,” Bostrom writes, “we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.”

            This argument sure looks like Pascal’s mugging to me: it includes a 10^-20 number.
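
            To spell out the arithmetic (using only the numbers in the quote):

            \[
            \underbrace{10^{-9} \times 10^{-9} \times 10^{-2}}_{\text{a billionth of a billionth of a percentage point}} = 10^{-20}, \qquad 10^{54} \times \underbrace{10^{-2}}_{\text{1\% credence}} \times 10^{-20} = 10^{32}.
            \]

            A 10^-20 probability attached to a payoff that still comes out at 10^32 (in whatever units the 10^54 estimate is denominated) is exactly the tiny-probability, astronomical-payoff shape that pattern-matches to a mugging.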

            I’m not really sure why we’re even talking about this; the conclusion is pretty clearly that Eliezer and MIRI try to avoid Pascal’s mugging, but Bostrom and others occasionally say things that sound fairly Pascalian.

          • Paul Crowley says:

            Rob, please do put this into a blog post or something I can link to, this misconception seems very widespread. Thanks!

          • Deiseach says:

            It would only be Pascal’s Wager if the whole route-to-an-OK-outcome were assigned a tiny probability, and then a large payoff used to shut down further discussion of whether the next unit of effort should go there or to a different x-risk.

            But the trouble with the Unfriendly AI outcome is that the proponents are chaining together a long series of assumptions to get to their end result of “We should be working on this problem NOW or else!!”

            (a) That we develop some form of AI and relatively soon (that is, it’s not going to take 100 years or more)
            (b) That we develop AI that is at least human level
            (c) That this human-level AI can either improve itself or we work on improving it to above-human level, and that this improvement again happens very quickly
            (d) That this exalted AI has the ability or is given the power to make such large changes in human society that it becomes de facto the world government or some such power – it needn’t officially be Emperor of the Globe if it is controlling all our power generation and can switch it off, for instance
            (e) That the AI is or can be deemed ‘unfriendly’ because it has goals which clash with ours, or we are simply an inconvenience in the way to achieving its goals, like a vast civil engineering project being held up because of a colony of voles on one particular patch of land. We might not kill the voles, but the AI might kill us if it thinks its goals are superior for the outcome of the universe (or something).

            I know Scott gave the example of the tornado-meteorite-terrorist-election tie-lottery numbers improbability when dismissing Matthews’ assessment of x-risk donations, and I’m not saying the same level of improbability is at work here, but there are a whole slew of assumptions that “Once A, then B, C, D will fall into place like a chain of dominoes being knocked over”.

            I’ll give some credence to the initial step: the development of some form of AI. However, I think (1) it’s going to take a while to get there, though I’m again willing to concede this is measurable on a 10-20 year timescale, and (2) it will be a simple form of intelligence; think “lab rat level” or the like.

            Getting from that to human level is going to take longer, and part of that problem will be the whole difficulty of defining “What is intelligence anyway?” The machine can calculate incredibly fast? That doesn’t count. It can talk to you? Does anyone say Siri is really an AI right now, or simply the first faltering steps towards that? It kinda seems like it might be making its own choices? We’ve got philosophers of consciousness denying human beings can make choices or have any kind of free will!

            And then to get from that to “god-level really smarter than us because it can hack its own code” – yes, and the first ninety machines that try that will give themselves the computer equivalent of a lobotomy, because it’s the very definition of sawing away the branch you’re sitting on – unless the AI is smart enough to make a copy and try experimenting on it, and if the “ensmartening” treatment works on the copy, the AI will have created its own rival and supplanter, and will it be stupid enough to do that? Well, if we’re dumb enough to create our own successor, maybe our silicon child will take after its carbon parents.

            I’m much more convinced that a real global catastrophe, on the level of what can be classified as a risk to the continuance of the human species, or at least of civilisation (as we know it), will be something arising out of either global economic collapse on a worse scale than anything we’ve yet seen, or some stupid war (like Putin’s posturing re: the Ukraine) dragging us all into a gigantic mess, the way a Serbian student revolutionary managed to ignite the war that paved the way for the fall of the old European empires. Or some pandemic. But not Our New Computer Overlord.

          • One of the more interesting points in Bostrom’s Superintelligence is that the differences between various mammalian levels of intelligence, or between different humans’ levels of intelligence, might not be too huge in the grand scheme of things. On a human scale there seems to be a wide gulf between Einstein and average, but on the spectrum of intelligence from bacteria to human beings, the gap is actually quite small. Bostrom makes the point that once we’re able to crack the nut of say, rat level intelligence, it could be a very short hop to human and then post-human levels of intelligence.

          • Deiseach says:

            Bostrom makes the point that once we’re able to crack the nut of say, rat level intelligence, it could be a very short hop to human and then post-human levels of intelligence.

            James, that reminds me of the chirpy pop-science articles about “Humans share 50% of DNA with bananas!” or even “Humans and chimpanzees share 98% of DNA”.

            That 2% makes a heck of a difference. The jump from rat-to-human intelligence has only happened once, with us, and evolution has had a couple of million years to mess around with it. For whatever reason, our cousin human species died off or intermingled and we’re all that there is so far. Yes, gorillas, octopuses (looking like our best bet for nearest non-human intelligent Earth species), all the rest of it; but right now I’m not sitting here typing this message in discussion with an octopus commenter, am I?

            It’s easy to say “ONCE we’ve cracked it, it SHOULD be easy to make the next step”. Well, I’ll believe it when I see it. I think part of our longing for AI, and part of the whole animal-rights rhetoric of “they are too sapient and sentient!” and “chickens are living, loving, experiencing beings”, is that we’re lonely.

            Hundreds of fellow species of insects, birds, even a handful of primates. But at our level, only us. We want a companion species, something or some entity to be our equal, our make. If the Neanderthals or the Denisovans had survived as separate human species, and there were two or three human species on the planet, I think matters would be different.

            As it stands, I think we want intellectual children and AI is that hope: like us, but different, so we’re no longer alone in a world full of non-human animals. It’s the same impulse that makes people treat their pets as children, as blood members of their family, and unironically refer to themselves as “mommy and daddy” to a dog or a cat that is their “baby”.

            I’ve seen pictures of snakes dressed up in tiny costumes, and if ever I thought people should not be permitted to own other living creatures on the basis of animal cruelty… breeding lizards to have big dark eyes and what looks like smiling mouths, to create living versions of cartoon animals that we can then project imaginary human-level emotions on (she really loves her mommy! No, it’s a reptile, that conformation of the jaw has been bred into it so it looks like a smile but it’s not smiling at you).

            People are horrified at the cruelty of puppy farms, but the reason the damn things exist is precisely because of this cutesy-poo anthropomorphism: if we treated animals as animals and not pseudo-humans in fur suits, not as love-bags that give us unconditional adoration and have ’emotions’ about us, if we let them have their own lives and not exploit them for cuteness and emotional support, if we maintained a distance between them and us so they were seen as discrete and separate entities with their own instincts and lives, we’d do away with a lot more cruelty in the world.

          • anon85, see my first point above:

            Pascal’s Mugging is any argument of the form “Doing [X] has a [superexponentially infinitesimal number] probability of yielding an outcome with [superexponentially vast number, larger than the infinitesimal number is small] utility. Therefore do [X].” (Where “superexponential numbers” include e.g. 3↑↑↑↑3 and 10↑googolplex, and don’t include “small large” numbers like 10↑80.)

            10^-20 isn’t a superexponential number, so it doesn’t raise any formal problems for decision theory — it’s not big (/small) enough to be used in Pascal’s Mugging, and you can reason with it more or less the same as you’d reason with a number like 10^-7 or 10^-3. You want to be careful that your estimate is robust, that you don’t waste your life chasing lottery tickets, etc.; but that’s true for lotteries you have a 1/10,000,000 chance of winning too, and for (low-payoff) lotteries where you have a 1/1,000 of winning, not just for lotteries where you have a 1/100,000,000,000,000,000,000 chance of winning.
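
            To make the gap between “small large” and superexponential numbers concrete (my illustration, using a shorter arrow chain than 3↑↑↑↑3): the two-arrow tower starts below 10^80, but its very next step blows past it:

            \[
            3\uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} \approx 7.6 \times 10^{12}, \qquad 3\uparrow\uparrow 4 = 3^{3^{27}} \approx 10^{3.6 \times 10^{12}} \ggg 10^{80},
            \]

            and each further arrow is another leap of that kind. 10^-20 sits on the near side of that divide, where ordinary expected-value reasoning still behaves.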

            Rather than speculate, I just went and asked Bostrom whether he was giving a Pascalian argument when he said that the odds of making a difference for AI risk were “low” rather than “medium” or “high”. Bostrom replied:

            The point I was making in the EA Global comment was that the probability that you (for any ‘you’ in the audience) will save the world from an AI catastrophe is very small, not that the probability of AI catastrophe is very small. Thus working on AI risk is similar to volunteering for a presidential election campaign.

            This seems reasonable to me. Making a difference in a presidential election campaign is hard, though not impossible — large-ish campaigns can do it, and individuals in an especially good position (e.g., people with relevant expertise, people living in a critical swing state…) can expect to make a nontrivial difference. The analogy isn’t particularly close, though. I think of present-day AI risk research as more like ‘trying to make a big difference in early global-warming-relevant research in the 1970s’.

          • Jaskologist says:

            We want a companion species, something or some entity to be our equal, our make.

            [tangent]
            I think we’re even worse off than that. We’re definitely lonely, but it’s because we’re in denial about the potential companions that already exist.

            What if the Neanderthals had survived to the present? Since they’re able to interbreed with humans, aren’t they really just another race? What difference would it make to have Neanderthals in addition to whites, blacks, Asians, etc.? We already don’t know how to deal with having all these races which are visibly different (and here I’m only speaking to the skin level, never mind any deeper differences). How much worse would we be if Elves showed up who were our superiors in every measurable way?

            There at least, it is somewhat feasible for us to live apart and avoid the issue altogether. The truly alien minds are much closer to home: men and women. We are quite different, sometimes in large ways and sometimes in subtle ones, and we cannot do without the other. Currently, we respond to this with denial, to the point where it took me years of living with a woman to unlearn the idea that we were just different skins slapped on the same mind.

            So yeah, we’re lonely, in the same way that Burke and Wills were starving; we’re misusing the resources that are there to fit the need. The trouble is that our current conception of “equal” does not allow room for “different,” so we just deny difference entirely. After all, in any reductionist materialistic view, a different thing is not equal. The Pauline view of equality as “valuable, yet different, parts of one body” is much more workable.

            A babied toy dog is a clear case of parental instinct gone haywire because it has no baby to lavish its attention on. Encourage everybody to delay marriage and treat women as men-with-boobs, and is it any surprise that the craving for interaction with someone truly other manifests in more unusual ways?

            [/tangent]

          • Nita says:

            it took me years of living with a woman to unlearn the idea that we were just different skins slapped on the same mind

            Funny — I’ve lived with a man for a while, and he seems less and less like an alien with every passing year. I’d say he and I are more similar than two women or two men picked at random.

      • Sam says:

        What do you say to a person who wants to donate, say, $10M to an X-risk cause and is trying to decide between MIRI and the B612 foundation?

        (I’m not trolling, I honestly want to know how MIRI employees who are sympathetic to EA philosophy conceptualize AI risk within the context of other x-risks.)

        • Eggo says:

          I’d also love to hear about this. We seem to have much better numbers for predicting collision risks, and a looong historical record to back it up as a serious threat.

        • John Schilling says:

          B612 is specifically focused on undiscovered near-Earth asteroids. The search for such asteroids to date has essentially ruled out the possibility of an impact sufficient to credibly threaten human extinction in the next million years or so even if we do nothing. To the extent that there is an X-risk from impacts, it comes from long-period comets. That risk is very small, <1E-8 per year, but it would be very expensive (tens of gigabucks) to do anything about it and nobody is seriously trying.

          B612 is working to pin down the threat of lesser impacts that could kill millions of people. It trades against mosquito nets for Africans, not against MIRI. I doubt it trades favorably against mosquito nets, but I haven’t done the math.

        • Scott Alexander says:

          Suppose we only have to worry about asteroids for another 100 years – after that we’ll either be dead or high-tech enough that we can bat them away without difficulty. If you don’t believe this about 100 years, change the number until you do; it won’t affect the argument much.

          The last really big asteroid to hit Earth was 65 million years ago; the last moderately big one was 3 million years ago. The really big one is an x-risk, the moderately big one I’m not sure, anything smaller probably not. So let’s compromise and say one x-risk asteroid every 10 million years.

          So the chance of x-risk from asteroids in our 100 year window is about 1/100,000.
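
          Made explicit, that estimate is just an exposure-window ratio:

          \[
          P(\text{x-risk asteroid in the next century}) \approx \frac{100\ \text{years}}{10^{7}\ \text{years per x-risk asteroid}} = 10^{-5}.
          \]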

          Surveys of field experts estimate the chance of a superintelligent AI in the next hundred years at about 50%.

          So, which is it more urgent to prepare for – the thing with a chance of 1/100,000, or the thing with a chance of 50%?

          • Though keep in mind that top-cited AI researchers in the same survey assigned a 72% probability to human-equivalent AI having a neutral or moderately good/bad long-term impact on humanity. It’s arguably only the 20% who thought AI’s impact would be “extremely good” and the 8% who thought it would be “extremely bad” who could plausibly be thinking in terms of remotely Bostrom-like scenarios.

            If we multiply the 50% by the 8%, we only get a 4% chance of disaster in the next century. (… Though that’s probably the wrong number to look at if we’re deciding whether to fund safety work, since some researchers may be giving disaster that low of a probability because they’re assuming large amounts of safety research will in fact get funded.)

            Regardless, the numbers are surprising and interesting, since this group does generally expect us to achieve human-level AI in the next 20-60 years, and expects “machine intelligence that greatly surpasses the performance of every human in most professions” within 30 years of that. I wonder what the before/after estimates would look like if we got a random sample of AI experts to read Superintelligence or Smarter Than Us.

        • HeelBearCub says:

          Scott,

          The chance of a super-intelligent x-risk AI is 50%? Really?

          • TheAncientGeek says:

            Of course, you need a string of other assumptions as well: fast takeoff, failure of boxing, ill intentions…

          • Saal says:

            @HeelBearCub

            I’m fairly certain it’s being taken as a given that any superintelligent AI poses an x-risk unless properly aligned. Not saying I necessarily agree with that, but there it is.

          • HeelBearCub says:

            That 50% isn’t even “super intelligent AI” but “high level machine intelligence” that can do “most” professions better than humans.

            So “most” could be as little as 50.1%, and it definitely doesn’t include any of the professions that require the highest-level thinking. It also doesn’t say how professions are defined, or whether the number of folks doing them has any weight. Basically, it could be interpreted as a question that asks “Is AI the next cotton gin?”.

            They also left out the 29 respondents who said “Never”. They didn’t even uncharitably interpret them as giving an answer like “2x the highest anyone else gave”. They just left them out of the analysis.

          • Deiseach says:

            Oh well, if it’s 50% chance then we’re all already fucked, so we may as well blow all our donation cash on eating, drinking and being merry!

            Honestly, I’d sooner believe the latest loopy “New interpretation of Nostradamus’ quatrains proves Putin is the Anti-Christ” than the AI stuff. And I’m a skiffy fan from way back, so if the hard-selling is turning me off, I don’t know what it’s doing to mundanes.

            Apologies in advance, HeelBearCub, but I’m going to be doing some more insulting-comparisons here. This kind of “donate now because this is the big coming threat” urgency reminds me of nothing so much as the disgraced televangelist Jim Bakker and his new gimmick:

            A few months ago we mentioned that Jim Bakker is still “ministering” these days. The venue? A TV show, natch. And the focus of his “ministry”? Convincing viewers that some sort of apocalypse is right around the corner, and therefore you should buy his prepper food. Lots of his prepper food. Like, tons and tons of his prepper food. But you can only go to the ISIS/gays/terrorism well so often. Ya gotta get some new threats. So this week Jim had on Rick Wiles, who asserted with scientific exactitude that “the earth has a 206 year cycle” and we have just come out of a global warming cycle, but are now headed into a 206 year cooling cycle that will usher in a new ice age, starting in November of 2015. Bakker boldly interpreted the meaning of this for us: “New York, Chicago, all of your big cities, will be Hell. The gangs will take what they want. They will kill to take what they want. Then then they will start eating bodies of the people they kill.” So there you have it. A new ice age and cannibalism run amuck. Better order that 6 gallon pail of creamy potato soup from Jimbo RIGHT NOW! That way “You can have parties when the world is coming apart.”

          • Scott Alexander says:

            I think that number is too high, but you’re still going to have a hard time getting to 1/100,000.

          • anon85 says:

            There’s nothing we can do now that will substantially reduce AI risk. We don’t even understand the AI risk problem very well.

            Then again, I don’t know of anything we can do now that will substantially reduce x-risk from asteroids, so donating to that seems even worse.

          • Jiro says:

            Then again, I don’t know of anything we can do now that will substantially reduce x-risk from asteroids

            I do. Assuming that we’ll figure out how to deflect asteroids in the future and will then no longer be vulnerable, making that day come faster reduces the time interval in which we are vulnerable. So any generalized spending on space travel should do it. If we end up being able to deflect asteroids in, say, 225 years instead of 250, that would be a 10% reduction in risk.
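
            Making the implicit assumption explicit: if the per-year impact hazard is roughly constant, cumulative risk is proportional to the length of the vulnerable window, so

            \[
            \frac{250 - 225}{250} = 10\% \text{ less exposure} \;\Rightarrow\; \text{roughly } 10\% \text{ less risk}.
            \]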

    • “Again, I find the current situation, with singularity people going to the EA summit to fundraise, to be a moral outrage”

      Why a moral outrage? Different people have different beliefs about what the most important problems are. As long as the singularity people are honestly describing their views, and leaving it up to others to agree and donate or not, where is the moral problem?

      I am skeptical about the orthodox position on global warming–while I think it is real and probably due to human action, I don’t see any good reason to expect its effects to be catastrophic or even on net negative. I find it at least mildly immoral when people on the other side make deliberately dishonest arguments—but even then, I can understand the temptation, given that they are confident of the conclusion and believe persuading people is vitally important. But the fact that people who believe in CAGW want to persuade other people to believe in it and do things to prevent it doesn’t strike me as in the least immoral, even if they are wrong.

      • anon85 says:

        I perceive the AI risk people as directly taking money that would have otherwise gone to other EA causes. I perceive other charities as taking money that would have otherwise gone to non-important causes. I realize that this perception is unfair, but I can’t shake the feeling that this is how it works.

        • Vadim Kosoy says:

          The reason MIRI competes with other EA causes is precisely *because* they use arguments that appeal to rational thinkers rather than warm fuzzies. I understand and respect your disagreement with these arguments (although I disagree with it 🙂), but please take the outside view and consider that punishing causes for using arguments appealing to rationality is a really bad strategy.

          I think that it is crucial to the success of the EA movement that each of us respects those that support other causes. Let’s focus on emphasizing the strengths of our causes rather than attacking the weaknesses of the other causes. If we *have* to attack the weaknesses of the other causes, let’s do it in a respectful and balanced way, avoiding ad hominem and tribal battle cries.

          • anon85 says:

            I don’t think the AI-risk people primarily use rational arguments. Their primary appeal is to authority: it’s Eliezer and Elon etc. saying things like “I can’t see how anyone can be unconcerned about AI”, and making people who are unconcerned feel dumb.

            I don’t punish people for making rational arguments. But I can’t help but feel like in this case, the arguments aren’t particularly strong, and the stakes are unusually high (people’s lives are literally on the line). So it does make me angry when AI risk is given a seat to promote itself at an EA conference, convincing people who are not AI experts (and who therefore aren’t well-equipped to evaluate the arguments) that there’s some imminent danger.

          • Kaj Sotala says:

            May I suggest my review paper about the various arguments around AI risk as one that hopefully offers rational arguments for AI risk being real? http://iopscience.iop.org/1402-4896/90/1/018001/pdf/1402-4896_90_1_018001.pdf

          • John Schilling says:

            If MIRI is going to appeal to truly rational thinkers, within the context of EA, don’t they kind of have to present – with real math – the numerical value for the probability that a marginal dollar to MIRI will prevent an AI apocalypse?

            I don’t recall MIRI having done this. Without this, all you’ve got is one of the classic non-rational fuzzy arguments: “Something must be done. This is something. Therefore this must be done”. And they’re going up against people who have done the math and shown that they really can save human lives here and now at a higher lives-per-dollar ratio than anyone else.

          • AlphaCeph says:

            John Schilling says: “present – with real math – the numerical value for the probability that a marginal dollar to MIRI will prevent an AI apocalypse”

            If you’re being Bayesian about it, then that probability will depend on your prior. You could use a “reasonable” prior, put in some analogous problems that succeeded or didn’t succeed (like the Apollo program or the Manhattan Project), fudge it a bit for “AI safety is probably harder than nukes”, and you’d get some result of the form “if you put in about $10 billion you get a 25% AI-x-risk reduction, with p ~ K*log(money) where K = 0.25/10”, because $10 billion is roughly what the Manhattan Project cost, i.e. it’s the amount of money required to pay a large team of highly skilled people to work full-time on a project for many years with another large team doing practical work at scale. More money than this doesn’t really help.

            So d/dx(K*log(x)) = K/x = 0.025/(current money in AI risk) = 0.025/(about 10 million) = 2.5*10^(-9) per $. Let’s just ignore that factor of 2.5 and call it 10^(-9) per dollar.

            So each dollar in AI risk mitigation reduces the probability by about 1 in 1 billion.

            Since a reasonable time-discounted estimate of a post-singularity human population is, say, 10^14 – 10^20, you could argue that you are saving 10^5 – 10^11 lives per dollar. A post-singularity life-year will probably come with a higher quality factor than a pre-singularity one, so that to some extent offsets time discounting and number discounting (i.e. the idea that for increasing n, the nth person is not as intrinsically valuable). If you time-discount less aggressively and count uploads and forks of people, which are likely post-singularity, then the numbers get bigger, and we start approaching the crazy 10^54-type numbers.
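
            Here’s a minimal runnable sketch of that back-of-envelope model. Every constant in it is an assumption from this comment (the Manhattan-Project calibration, the ~$10M current funding level, the 10^14 – 10^20 population range), not an established figure:

            import math

            # Assumed calibration: $10B of total funding buys a 25% reduction in
            # AI x-risk, with returns logarithmic in money (base 10, dollars).
            K = 0.25 / 10  # chosen so that p(1e10) = K * log10(1e10) = 0.25

            def risk_reduction(funding_dollars):
                """Assumed probability of averting AI catastrophe at a funding level."""
                return K * math.log10(funding_dollars)

            print(f"p at $10B: {risk_reduction(1e10):.2f}")  # 0.25 by construction

            current_funding = 10e6  # assumed current money in AI risk: ~$10 million
            # Marginal value of a dollar: d/dx of K*log(x) is K/x (with the log-base
            # fudge used above; the comment then rounds 2.5e-9 down to 1e-9).
            marginal = K / current_funding
            print(f"risk reduction per marginal dollar: {marginal:.1e}")  # ~2.5e-09

            # Lives saved per dollar over the assumed range of time-discounted
            # post-singularity populations, using the rounded 1e-9 figure.
            for population in (1e14, 1e20):
                print(f"population {population:.0e}: ~{1e-9 * population:.0e} lives/dollar")

            The code just makes the multiplication auditable; the calibration constant and the population figure are doing all the real work.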

          • John Schilling says:

            Things you pulled out of the air without justification, that I’m not buying:

            1. The p = k log ($) formulation. By that bit of math, the first AI researcher to spend fifteen minutes thinking about the problem had a ~3% chance of solving it for all time, but getting to even a 50% chance of success will require one hundred quintillion dollars. Or, applied to actual cislunar spaceflight: if I spend ten thousand dollars to build an amateur rocket in my backyard, I have a 40% chance of reaching the moon. (The arithmetic behind these figures is spelled out at the end of this comment.)

            2. The Apollo program analogy in general. You all don’t have any quantitative data on cost vs. results from the actual field of computer science to build from?

            3. Mistaking the victory conditions. Since you use Apollo as an example, I’ll note that Apollo only had to succeed once. MIRI can’t just defeat Skynet; it has to tame every AI ever. It has to, continuing the analogy, build an enduring civilization throughout the solar system.

            4. Ignoring the possibility of doing harm. MIRI’s underfunded efforts could provide Skynet with the equivalent of a low-dose antibiotic regimen to develop a resistance to, where otherwise the AI would emerge overconfident and engage in a premature attempt at world conquest that could be stopped by the serious professionals. Since you brought up Apollo, I will point out that we’ve discussed this here before and the premature expenditure of ~$1E10 on Apollo has reduced, not increased, the probability that the human race will establish an extraterrestrial civilization.

            5. On the other side of the equation, pulling the 1E20 human lives MIRI is going to save out of a hat, asserting without evidence that their existence is inevitable or even likely if only Skynet can be defeated, and that MIRI will be at all relevant to their ongoing security even after 1E19 humans have been poking around at AI issues.

            And you all are comparing this to the rigor with which the mosquito net crowd is defending their claim to save an actual non-hypothetical human life for every thousand-ish dollars donated. They win, you lose. Your only converts will be the people who only give lip service to the idea that you are actually supposed to do the math before making the donation.

            This really does look like Pascal’s Mugging dressed up with a bit of mathematical legerdemain. Look, I got this big number by multiplying five other numbers together! That’s math! And you can check my multiplication to see that I did the math right! Pay no attention to the giant hat behind the curtain, from which I pulled the five other numbers!
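
            For the curious, the arithmetic behind the reductios in point 1, using AlphaCeph’s calibration p(x) = 0.025·log10(x) for AI risk and, for the rocket, an Apollo-style calibration p(x) = 0.1·log10(x) (i.e. certainty at the ~$10^10 Apollo cost; valuing fifteen minutes of thought at roughly $15 is my own assumption):

            \[
            0.025 \log_{10}(15) \approx 3\%, \qquad 0.025 \log_{10}\!\left(10^{20}\right) = 50\%, \qquad 0.1 \log_{10}\!\left(10^{4}\right) = 40\%.
            \]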

          • anon85 says:

            Kaj: thanks for the link, but there’s not much there that I haven’t already seen. As a suggestion, the part that needs the most work is the part where you argue that current research can have any impact on AI safety at all. It mostly sounds like by AI safety research you mean “let’s brainstorm some ideas”.

            I also have problems with the rest of the arguments, but this is probably not the best place for a rebuttal.

          • Kaj Sotala says:

            anon85: In that case, I’m confused. You say that you’re already familiar with the arguments in our paper, but in the comment I responded to, you said that AI risk people weren’t offering rational arguments. Do you mean that the paper didn’t offer any rational arguments? Or that while these *are* rational arguments made by AI risk people, AI risk people still *predominantly* argue by means other than rational argumentation?

            As a suggestion, the part that needs the most work is the part where you argue that current research can have any impact on AI safety at all.

            Thanks for the suggestion! I agree that this could be discussed more.

          • anon85 says:

            Kaj, sorry, I should have clarified. I wasn’t saying AI risk people don’t have rational arguments. I said they don’t *primarily* use rational arguments to convert people. This is true of most other beliefs as well – people like to change their minds due to social pressure, not due to rational arguments.

            Rationalists are a bit better at this than the general population, but not by that much. It can actually be quite rational to follow the belief of people you respect, even without fully understanding the reasoning. Thus a perfect rationalist would still have their beliefs strongly affected by their social group.

            Now, do AI people have rational arguments? Of course! But these arguments are to a large extent wishy-washy. For example, I think I can make a good case that MIRI hurts our x-risk chances (Possible mechanism: suppose MIRI keeps yelling that AI is dangerous; by 2030, politicians take it seriously, and take some weak actions towards it. By 2050, strong AI still didn’t happen, and AI risk people are starting to look ridiculous. By 2070, no one pays attention to AI risk due to boy-who-cried-wolf effects and due to it being a mindkilling political issue. By 2100, a strong AI kills us all.)

            So who is right? Does funding MIRI now help or hurt? This is a question that each person will answer according to his or her subjective biases. We’re outside the realm of rationality and entering wishy-washy speculation.

          • Kaj Sotala says:

            anon85: Thanks for the clarification!

            I agree that the question of whether supporting MIRI in particular is helpful for AI risk is subject to reasonable disagreement. For what it’s worth, Nate Soares recently published a post on what sets them apart from other people working on AI safety issues, but of course these too are suggestive arguments only: https://intelligence.org/2015/08/14/what-sets-miri-apart/

            That said, I feel that the claim of “each person will answer according to his or her subjective biases” is one that ultimately applies to *all* EA causes, not just the AI risk one. Yes, we’ve got some areas like poverty reduction where there exists some number of controlled studies, but even GiveWell’s reports on their very top charities are riddled with uncertainties, assumptions and caveats. (Highlighting the high amount of uncertainty is to GiveWell’s credit, of course.) And there are some very serious and valid questions you could raise about the GW approach, like “why should we assume that the areas where impact is the easiest to measure would be the ones that could also be expected to be highest-impact” – it’s not hard for someone to think that yes, we’ve got relatively controlled studies suggesting that these particular interventions are the best out of the ones studied, but the interventions that are amenable to controlled studies are not the most effective ones.

            Eventually the decision between choosing *any* EA cause comes down to subjective biases (or, to put it in a form that sounds more charitable, differing priors).

            But if someone accepts the argument for AI risk being worth working on as a major cause in the first place, then that seems like a sufficient reason to let MIRI-folk attend EA events and make their case, even if only to expose EAs to the AI risk view in general and make some of them come up with a better approach than MIRI has.

          • Steven says:

            “And there are some very serious and valid questions you could raise about the GW approach, like ‘why should we assume that the areas where impact is the easiest to measure would be the ones that could also be expected to be highest-impact’ – it’s not hard for someone to think that yes, we’ve got relatively controlled studies suggesting that these particular interventions are the best out of the ones studied, but the interventions that are amenable to controlled studies are not the most effective ones.”

            I know of one good reason to suspect that the best-measured interventions are likely to be the highest-impact interventions: measuring what works is how high-impact interventions are built. The less feedback we get about an intervention, the less we’re able to learn and adjust. A complex intervention like crafting friendly AI without high-quality progress measurement is about as likely to be successful as scientific theorizing without experimentation or observation.

      • Oliver Cromwell says:

        I suspect people who are morally outraged by the AI crowd think that the AI crowd are being dishonest: not really believing that AI risk is the most important cause, or (more probably) not really believing that their work is likely to significantly mitigate AI risk, but knowing that they can persuade people to give them money for it anyway. Money that they then spend on salaries for themselves and on expanding organisations that they control.

        I am not saying that this is necessarily what is happening, but it is much harder to be confident that it isn’t in comparison to listening to someone advocating giving money to an organisation that they don’t control, that doesn’t pay them a salary, that is recognised as effective by a lot of other people who are not members/employees of that organisation, and that is pursuing a goal you already believed to be important.

      • Eli says:

        You don’t think the already-occurring mass extinction of nonhuman life and the poleward shift in temperate biomes are harmful?

    • Part of the point of EA is that it’s a bunch of people with a bunch of different causes attempting to cooperate/pool resources on the task of educating people to the point where they can take *any* of the causes seriously.

      See also: http://lesswrong.com/lw/66/rationality_common_interest_of_many_causes/

      The AI folks I know talk up EA constantly when talking to non-EAs, without much regard for whether the non-EAs are going to give to AI or other EA causes.

      Obviously everyone tries to make the best case they can for the cause they think is most important, but I’ve never observed AI folks directly exerting pressure on someone doing, say, global poverty, to abandon it and work on AI instead. In particular, again, as far as I know, Luke received no ill will for leaving MIRI to work at GiveWell.

      • Benito says:

        It’s only just struck me how useful the example of Luke is for dispelling the above commenter’s (and others’) narrative about a distinction between EA and MIRI.

        • Eli says:

          Except that Luke moved from MIRI to EA, which would seem to say something about his beliefs. Let’s hope it’s that he believes Nate can do better for MIRI than he can.

      • anon85 says:

        I agree that it’s important to convince non-EAs to take EA seriously, and that AI-risk people try to do this. But AI-risk people also try to convert EAs to AI-risk, which I find problematic.

        Do you disagree that AI-risk people try to do this? It sort of sounds like you do, but I’m not sure.

        • I think that AI-risk people are trying to engage in positive-sum trade with global poverty people and animal rights people, leaving everyone involved better off.

          Like, let X be the number of people actively donating to/working on global poverty who originally got involved with EA after reading LW or MoR, and let Y be the number of people who were going to donate to/work on global poverty who were convinced to give to MIRI instead. I would be utterly shocked if X didn’t outnumber Y by a factor of 5. Kinda surprised if it wasn’t 10, actually.

          • anon85 says:

            Eliezer did a good job promoting rationality, and rationality is related to EA. That’s great. But note that promoting rationality and promoting AI-risk are almost entirely separate. Why can’t I be supportive of one but opposed to the other?

            As a side note: would EA have much less of a following if Eliezer’s writing on rationality never existed? This is possible, but I think you’re overconfident; it seems reasonable to expect that if LW never existed, the community would simply form somewhere else. And note that EY hardly ever promoted EA directly anyway – hpmor doesn’t mention it, and the popular LW sequences don’t either.

          • Jeff Kaufman says:

            The popular sequences and hpmor don’t mention EA by name because the name hadn’t been coined yet. But the idea that you should find the most valuable thing to do and go do it (or fund it) is one brought up many times in both and generally implicit as a background to the rest.

        • To more directly answer your question, my impression is that AI-risk folk put materials out into EA spaces saying “we think AI-risk is the most important thing, here’s why, ask us if you want to learn more”. I *don’t* think that AI-risk people are like walking up to individual people involved in global poverty and, I don’t know, telling them they’re wasting their lives and that they should stop and do AI-risk stuff instead.

          Cooperating across epistemic divides is kinda hard, but honestly I think we’re doing a decent job at it.

          • anon85 says:

            >To more directly answer your question, my impression is that AI-risk folk put materials out into EA spaces saying “we think AI-risk is the most important thing, here’s why, ask us if you want to learn more”.

            Good, this was my point. The next point that we can both probably agree on is that doing this kills people in Africa. I assume you agree?

            So then the question is whether the reasons people become convinced that AI-risk is more important are valid. I strongly believe that they are not. I guess this gets us down to the object level. But to stay on the meta level just a little bit longer, I think the arguments AI-risk people use are not easily quantifiable and depend on a lot of subjective guessing and gut feelings. It feels wrong to let people die based on such reasoning.

          • GMHowe says:

            @anon85
            >The next point that we can both probably agree on is that doing this kills people in Africa. I assume you agree?

            This is true of every charity that does not focus on saving lives in Africa. In fact, it is true of almost every spending priority whether charitable or not. Why the isolated ire for those who want to work on AI risk?

          • anon85 says:

            @GMHowe: isolated ire only for those charities who focus their fundraising on the EA community. Only those ones kill people.

          • AlphaCeph says:

            @anon85: “doing this kills people in Africa”

            Arguing, as you are, that AI risk is not the most important cause, kills many more people in the future.

          • onyomi says:

            Again, why can’t we distinguish between “killing people” and “failing to save people”? There is a very big difference.

          • TheAncientGeek says:

            But your ethics is not the One True Ethics!

          • anon85 says:

            @AlphaCeph: I addressed this when I said “So then the question is whether the reasons people become convinced that AI-risk is more important are valid…”

            Of course if AI-risk people are right, then not donating to AI risk kills future people. I just don’t think they’re right. This whole thing started from me trying to explain my gut anger at the AI crowd.

            @onyomi: again, I’m referring to killing people, not failing to save them; there are people who would live if it weren’t for EAs becoming convinced to donate less to malaria nets than they were going to.

          • onyomi says:

            Getting people to donate to something else instead of malaria prevention is not killing people, it’s convincing people not to save people. But by their lights, I’m sure they think they’re saving more people, so it’s not even bad on consequentialist grounds.

            To take a more clear-cut example: are people who advocate against vaccines baby murderers? I don’t think we can say that they are, even if their actions result in net baby deaths. Because THEY think they’re saving babies, however misguided they may be.

            Now I do think it is still possible to judge them ethically, because, in most cases, they are irresponsibly un-informed. In today’s world it is easy to read medical studies and it is also easy to know when you are not qualified to have an opinion. To go around strongly promoting a medical opinion without doing much research or having any relevant expertise is dangerously irresponsible, and, therefore, arguably unethical. But if the vaccine opponent actually has some (on their view) relevant background, and has spent a long time researching and thinking about the issue and has honestly come to the conclusion that vaccines are harmful, then I don’t see how they could be moral monsters for expounding that view.

            Now I’m assuming most of these MIRI people have done a lot of research and spent a long time thinking about this AI issue and genuinely believe it’s a very serious, if not the most serious risk. So one can’t accuse them of bad intentions or being irresponsibly ill-informed.

            Intentions matter even if they aren’t the only thing that matters. Otherwise, accidentally killing your child by feeding him a food to which you were unaware he was highly allergic and intentionally poisoning your child would be ethically equivalent.

          • anon85 says:

            @onyomi: Intention does matter, which is why I never used the word “murder”. Only “kill”.

          • onyomi says:

            Is unintentionally killing people even when you were well-informed and had the best of intentions morally monstrous?

          • anon85 says:

            An action that kills people is a monstrous action. A person may perform monstrous actions without being a monster (if they had good intentions).

            Anyway, let’s stop this discussion, we’ve boiled it down to semantics and it’s no longer useful.

    • onyomi says:

      Is it more of a moral outrage than the billions of people who currently aren’t donating any money to any charity at all? Otherwise, this seems like a Copenhagen interpretation.

      Not that I can’t sympathize: a bad movie which had potential to be great annoys me more than a bad movie which had no chance of being good. But still, I think the Copenhagen interpretation of ethics is a destructive bias.

      • anon85 says:

        >Is it more of a moral outrage than the billions of people who currently aren’t donating any money to any charity at all? Otherwise, this seems like a Copenhagen interpretation.

        Well, no, of course not, but it’s hard for me to control my gut feelings.

        Still, I’m not sure if this is a Copenhagen interpretation, because convincing EAs to stop donating to malaria nets (and donate to MIRI instead) is a net negative in my view. It would be a Copenhagen interpretation if it were an action I viewed as net positive, but then I got mad because it wasn’t positive enough.

        • onyomi says:

          But do you view their donations to MIRI as a negative if you consider that they might have otherwise spent that money not on malaria nets, but on a new BMW? And if so, are all BMW owners who never think about donating to MIRI or malaria nets equally monstrous?

      • John Schilling says:

        The billions of people who aren’t donating any money to any charity at all, are mostly very poor people who directly share their meager resources to help other very poor people who live next door to them. They are counted as “not donating any money to any charity at all” because, first world problem, we define “charity” as money donated to a formal organization that promises to help complete strangers.

        If it’s selfish rich first-worlders who don’t so much as buy a box of girl scout cookies you are talking about, there are only millions of those.

        • onyomi says:

          Okay, but I still can’t wrap my head around “donating to less important thing” as morally monstrous just because you could have donated it to more important thing. If the MIRI people are actually saying “DON’T donate to malaria prevention; THIS is more important”, then that is arguably irresponsible, depending on which of the two you think is really more important, but hardly monstrous. Charity is all in the realm of the supererogatory. I don’t think anyone should be called monstrous for doing the wrong supererogatory thing.

          • anon85 says:

            If your existence causes many more people to die than if you didn’t exist, then your actions are monstrous (your intentions may still be good, of course).

            People who go around trying to convince EAs to donate to AI-risk have exactly this property: their existence causes death.

          • Linch says:

            Only some people believe that charity is supererogatory. Peter Singer does not. I’m on the fence about whether it should apply to the modal person of my SES, but at least for me personally I think it’s obligatory to devote some of my (incredibly privileged background as a male neurotypical person living in the first world) resources to improving the world. I’m not sure where to draw the line between obligatory and supererogatory, but it’s almost certainly above what I currently do.

            At the very minimum, it costs resources for me to exist, so I should at least be aiming for net zero.

            I have problems with Less Wrong, but the one thing they got right was the name. I wish we could do something similar in EA, for the sake of intellectual honesty, but “less defective egotists” just doesn’t have the same ring to it.

        • Deiseach says:

          I can’t buy Girl Scout cookies because they don’t sell them over here in Ireland. Obviously, I am a terrible person for not flying to America so I can buy a box 🙂

    • Sylocat says:

I wonder if part of the reason “Does AI research get too much of a spotlight within EA?” is an issue is that MIRI is the only charity that is run by a very visible proponent of EA.

      If someone just says, “Hey, donate to my charity!” your mind thinks, “Okay, sales pitch. Is it good?” and you know to keep that in mind when evaluating the claims. But if someone says, “Hey, here’s an algorithm that smart people can use to figure out which charities do the most good in the world. You want to be (thought of as) a smart person and a charitable person, right?,” and then goes to a conference of people who use this algorithm and says, “Here’s why this algorithm shows that Smart and Charitable and Diligent people will donate to this charity that I personally benefit from,” well, a lot of people are going to think, “Hmm, that seems a little too convenient for him.” And, whether he wanted it or not, Eliezer Yudkowsky is one of the people most frequently associated with Effective Altruism, since it’s pretty heavily tied to the capital-R Rationalist movement in general, and Yudkowsky has written articles for the websites of a bunch of the Charity Navigators used by it.

And thus we find ourselves in a Type 1 Error versus Type 2 Error situation, because it often is a bad sign; there frequently are ulterior motives there, because, as Dilbert put it, “All forms of motivation breed the wrong behaviors.” So, if you become sufficiently vigilant against buttered coffee or chocolate weight-loss pills, your pattern-recognition filter might flag Effective Altruism as “A scam to get donations to the LessWrong crowd.”

      (which isn’t helped by the already-prevalent perception that MIRI are just some fearmongering cranks telling campfire stories to gullible nerds about how Ultron and Skynet will get you if you don’t eat your vegetables donate money to that guy who wrote that Harry Potter Marty Stu fanfic)

      So, some people might not want “AI Research” to have any prominence at all in EA circles merely because they think it makes them look like they have a conflict of interest, regardless of whether or not they actually have one.

      • Earthly Knight says:

        Yep. If you’re also independently inclined to think that Yudkowsky is a bit of a charlatan, his involvement and attendant conflict of interest will probably sound the death knell for your participation in effective altruism. “You know who I’m not going to give money to? A charity strongly associated with that guy, which, by the way, personally enriches him and feeds into his bizarre cult of personality.”

        This may not describe more than about twelve people, though.

        • Sylocat says:

          Thing is, I’m actually quite sympathetic to the views of the AI Risk people. I can easily believe that AI Foom might happen with catastrophic consequences, and that it really is a good idea to spend at least SOME time and effort researching ways to prevent that from occurring.

But, though the RationalWiki article on Yudkowsky is unfairly mean in spots, it’s 100% correct when it points out things like how MIRI has yet to actually publish any worthwhile results in the field (which is not, in itself, a sure sign of being unqualified, but isn’t exactly reassuring), and that all of his examples of decision theory are computationally intractable* (as I’ve mentioned in the past, my own programming skillz are somewhere between “Moderately Talented Amateur” and “This Guy,” and even I know that). In fact, most of the AI Apocalypse scenarios I’ve ever seen, from Yudkowsky or any of his friends, seem to depend on computers working the way they do in bad 90s sci-fi movies (or worse, on C.S.I.).

          * (of course, the RationalWiki article then goes on to, in the same paragraph, ignore the fact that it is possible… indeed quite easy… to code a program that can alter some sections of its own code and not others, which kind of undermines their point. I should bring that up on the Talk page and see how it goes over)
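
          (For what it’s worth, a minimal sketch of that footnote’s point, in Python. Everything here is invented for the example – the marker comments, the “strategy” function, the random rewrite – but it shows a program rewriting only a marked region of its own source while leaving the rest byte-for-byte intact.)

              # Toy self-modification: only the marked section may be rewritten.
              import random
              import re
              import textwrap

              SOURCE = textwrap.dedent('''
                  def fixed():
                      return "invariant"   # protected section: never touched

                  # BEGIN MUTABLE
                  def strategy(x):
                      return x + 1
                  # END MUTABLE
              ''')

              def mutate(src):
                  """Rewrite the mutable section only; everything else is untouched."""
                  new_body = "def strategy(x):\n    return x + %d\n" % random.randint(2, 9)
                  return re.sub(r"(?s)(# BEGIN MUTABLE\n).*?(?=# END MUTABLE)",
                                lambda m: m.group(1) + new_body, src)

              ns = {}
              exec(mutate(SOURCE), ns)
              print(ns["fixed"]())        # "invariant" – the protected section is unchanged
              print(ns["strategy"](10))   # somewhere in 12..19 – the mutable section changed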

      • Deiseach says:

        Step Two (“Hey, here’s an algorithm that smart people can use to figure out which charities do the most good in the world”) isn’t bad in and of itself. I do some minor eye-rolling at it because what gets tagged on to it then is “And if you don’t use this to find out which charity does most good in the world, but instead give to charities based on blind sentiment, you are a horrific evil monster directly responsible for the suffering and death of millions”. Oh, cry me a river!

The trouble, as you say, comes in at Step Three: “My wonderful completely objective unbiased it’s mathematical so it must be the most right thing in the universe algorithm says you should donate to my pet cause because it’s the most vital thing in the world, no don’t mind the existing suffering millions, my pet cause will mean as yet non-existent millions of billions will live in a happy intragalactic* civilisation.”

        That does sound like special pleading at best, huckstering at worst, Heaven’s Gate style skiffy cultishness at very worst (“Post-Singularity, when we’ve solved the problem of Friendly AI, we will all upload our consciousnesses into new machine energy-state existences, leaving our limited physical material bodies behind, and colonise the galaxy”).

        (*Let’s work on a galaxy-wide civilisation first before starting to mess around with other galaxies in an intergalactic civilisation, hmmmm? Walk before we start running!)

        • Sylocat says:

          I do some minor eye-rolling at it because what gets tagged on to it then is “And if you don’t use this to find out which charity does most good in the world, but instead give to charities based on blind sentiment, you are a horrific evil monster directly responsible for the suffering and death of millions”. Oh, cry me a river!

          Which is especially silly because of the whole Kierkegaard/Lessing’s-Ditch dilemma. Even if you use Rationalism™ to figure out which charities are the most effective to donate to by some complicated Utilitarian algorithm, the very act of donating to charity (and the very premise of Utilitarianism in the first place) is rooted in, *gasp*, sentiment!

          I don’t know whether people actually think morality can be derived from first principles, or if they just wanted the film Inside Out to have a character named “Reason” whose job is to keep the emotions in check and prevent them from making Riley make suboptimal decisions.

    • sam says:

I’m kind of with you in the urge to yell. I only occasionally stick my head into these communities, so I’ve just recently learned that AI risk is seriously considered under the EA umbrella. I was pretty disheartened that there are people who start out by thinking “How can my money do the most good?” and eventually conclude “Donate it to a thought experiment”. At a certain point, it’s time to question the initial assumptions that led to that conclusion. I’ve always considered the point of altruism to be the alleviation of some sort of suffering, and it’s going to be hard to convince me that reducing AI risk does that effectively.

Any of the math I’ve seen to defend x-risk causes as important depends on two things I don’t agree with. The first is the assumption that increasing the number of future human lives is somehow a virtue in its own right. And the second is a tendency to assign the life of a theoretical future person (who may or may not exist) a value equal, or nearly equal, to that of a real, live, currently existing person. I can’t get on board with either of those things, but the math doesn’t really work out without them.

      • Montfort says:

Take, as an extremely inconvenient hypothetical world, this case: we detect a large, mass-extinction-event-level asteroid coming towards earth. It will most assuredly hit us and destroy all human life unless somehow redirected or destroyed. Luckily for us, we have a whole 500 years to fix it, though it quickly becomes clear we will need to devote much of that 500 years to spending large amounts of money on researching how to deflect it, starting right now. Exactly what part of your argument breaks down here, so that it doesn’t say “this is a waste of our money, we could be using it to save people now!”? Or is this a bullet you will willingly bite?

I’m specifically referring to your dispute that increasing the number of future humans is good, and your view that future humans should be discounted more heavily than x-riskers discount them. If you find this insufficiently inconvenient, you may also assume there’s a massive plague going on that will depopulate [continent(s) of your choice, but not the whole world] if not stopped by funds that would need to go to asteroid deflection.

        (edit: I accidentally a phrase)

        • Adam says:

          This is more along the lines of the hypothetical she’s considering:

          Imagine you are presented by God with a button you can push that will end all suffering forever and bring about heaven on earth, but this generation of humans will also be the last. They’ll live out happy, blissful lives and die painlessly, no plague, no asteroid, but after them, no more humans.

          Don’t push the button and you get a further 100 million years of humans, but all of it filled with continuing wars, poverty, disease, crime, all the bad things that make it suck for a lot of people.

          Which of these is the more valuable future? Her impression, and mine, is that the crowd heavily invested in x-risk prevention is saying the second is better. She and I think the first is better. There is no inherent value in just living longer. Value comes from the quality of the life.

          All the future people who are around when that asteroid hits in your hypothetical, by all means, they still count, even though they’re in the future. What doesn’t count is the people who never come after. It’s not a tragedy for people to never come into existence.

          • Montfort says:

            Good example, I see what you’re saying. I haven’t really ever come across a reasoning system about hypothetical humans that worked even in the easily apparent edge-cases like x-risk and extreme overpopulation. And that does include yours (by my judgement, of course), but also the more conventional(?) model used by our hypothetical x-risker.

            As current x-risk propositions are not nearly so pressing, this doesn’t apply to the original question, but concerning your hypothetical – is it a key part that you’re assured the next 100M years will see no large improvement in quality of life?
            E.g. push the button, everyone is perfectly happy, no kids, die, don’t push and this generation lives as normal, but the next 100 generations live perfectly blissfully, then die… You would also push in that situation, because the greater amount of bliss only belongs to hypothetical people?

          • Adam says:

            No, that’s not really a key component. The only really key component is the worth of people who exist versus people who don’t exist. When they exist doesn’t matter. Even if you prevent transgalactic future utopia but completely guarantee perfect bliss until blissful death for everyone who currently exists, I take that. The key is that suffering is evil. Non-existence is not evil. I’m completely indifferent to how many humans ever exist in the future or whether humans continue to exist at all (noting that, in the long run, extinction is guaranteed regardless of anything we do). I’m not at all indifferent to what any person’s existence is like once they come into existence.

          • Nornagest says:

            Are we talking present-day levels of wars, poverty, disease, crime, etc., or some higher level? Because if it’s the former, then I think there’s a utilitarian case for it even if you don’t value life per se.

            If memory serves, about 100 billion people have ever lived. Most of their lives were miserable by our standards: let’s quantify their quality of life relative to ours at about .5, to pull a number out of a dark and smelly place. Let’s further assume that eliminating war and poverty etc. for the current generation will double their quality of life. Now, it’s hard for me to estimate how many people will end up living over the 100 million years of your second future, because it depends on a hell of a lot of assumptions, but with generations of 25 years and 7 billion humans at a time (to stick with the status quo theme), it comes out to somewhere around seven quadrillion.

Average quality of life in the Terrestrial Kingdom scenario is (100 billion * .5 + 7 billion * 2) / 107 billion = 0.598. Average quality of life in the grim status quo of the far future is (100 billion * .5 + 7 quadrillion * 1) / (7 quadrillion + 100 billion) = 0.999992. Exact numbers vary with your assumptions, but it’d take a very blissful Kingdom indeed to outweigh 100 million years of business as usual.
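
            (If anyone wants to fiddle with the assumptions, here is the same back-of-the-envelope arithmetic as a small Python sketch. Every constant in it is one of the made-up numbers above, not data; quality-of-life weights are relative to ours = 1.)

                # Back-of-the-envelope average quality of life in the two futures.
                PAST_PEOPLE   = 100e9  # rough count of humans who have ever lived
                PAST_QOL      = 0.5    # assumed quality of their lives, relative to ours
                PRESENT       = 7e9    # people alive for the final generation
                KINGDOM_QOL   = 2.0    # assumed QoL once war, poverty, etc. are gone
                FUTURE_PEOPLE = 7e15   # ~7 quadrillion over 100M years of status quo

                kingdom = (PAST_PEOPLE * PAST_QOL + PRESENT * KINGDOM_QOL) \
                          / (PAST_PEOPLE + PRESENT)
                status_quo = (PAST_PEOPLE * PAST_QOL + FUTURE_PEOPLE * 1.0) \
                             / (PAST_PEOPLE + FUTURE_PEOPLE)

                print(kingdom)     # ≈ 0.598
                print(status_quo)  # ≈ 0.99999 – barely distinguishable from 1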

          • John Schilling says:

            Most of their lives were miserable by our standards; let’s quantify their quality of life relative to ours at about .5

            Is this a fair thing to do to people whose lives were happy by their own standards? I get that you’re pulling numbers out of a hat and that it doesn’t really change the end result. But I can’t help dissenting whenever it is suggested that prior generations can’t have enjoyed life as much as we do because they didn’t have as many toys as we do.

          • Nornagest says:

            Well, it’s a fair point that the hedonic treadmill is a real thing, and I’ll leave it to others to square it with the concept of quality of life; that’s a subtle and nasty problem that I’m not really equipped to tackle at the moment.

            But on the other hand, self-reported happiness does internationally correlate (albeit noisily) with economic and technological development, so I don’t think my line of thinking up there is totally bankrupt. Personally I suspect this has less to do with TV and consumer electronics and more to do with the presence of shoes and penicillin and indoor plumbing and the absence of feuding warlords. That seems like it’d apply intertemporally, too.

        • sam says:

          Well, let’s think of it another way: if your goal is to simply maximize the number of potential human lives that exist between now and humanity’s inevitable extinction, then you shouldn’t donate money anywhere. You should instead use your resources to have and raise as many children as possible. After all, if you don’t have kids, not only are you preventing your descendants from existing, but THEIR descendants, and so on. Assuming a pretty conservative birth rate, it doesn’t take that many generations for that number of potential descendants to become so huge it dwarfs the amount of lives you could save by earning to give.
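
          (A quick sketch of that arithmetic, in Python. Both constants are invented for illustration: three children per person stands in for the “pretty conservative birth rate”, and the lifetime earning-to-give figure is a deliberately generous guess.)

              # Potential descendants per generation vs. a lifetime of earning to give.
              CHILDREN_PER_PERSON = 3     # assumed fertility, for illustration only
              LIFETIME_LIVES_SAVED = 500  # generous guess for one earning-to-give career

              for g in range(1, 11):
                  descendants = CHILDREN_PER_PERSON ** g   # 3, 9, 27, ...
                  print(g, descendants)

              # By about generation 6 (3**6 = 729) the potential-descendant count
              # already exceeds the ~500 lives a donating career might save, and it
              # keeps growing exponentially from there.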

Putting it that way, it sounds immoral not only to not have kids, but immoral not to have as many kids as possible. My selfish refusal to have any kids could cost humanity millions of potential future lives.

          If that sounds absurd, it’s because it kind of IS absurd. The premise that “more humans = better” doesn’t really make sense to me. It says nothing about the quality of those lives. Plus, all those lives are just theoretical at this point. We don’t know if they’ll end up coming into existence or not. What we DO know is that there are TONS of real, live, currently existing people that are suffering and/or dying. That is not theoretical. I’m going to prioritize saving the life of the living person, presuming they want to keep living. A person that never comes into existence doesn’t suffer at all from not existing in the first place.

    • Dust says:

I’ve already noticed that people in the EA movement are afraid to suggest any causes except for the standard established ones as being worth EA attention. I suppose comments like this one, where advocating a view of what’s effectively altruistic that one disagrees with is considered a “moral outrage”, will make this problem even worse :/

> which in practice means giving money to Eliezer and his friends.

      In any emerging cause area people working on the cause are likely to be friends with one another. So maybe in the future arguing that EAs should give money to any weird emerging cause will be considered a “moral outrage”. That would be a real shame, because I’d expect the most neglected causes to be weird ones (neglectedness is of course a core principle of EA).

  92. Getting dragged over the event horizon doesn’t happen to most groups, ideas, or individuals, so we probably have more to learn from studying negative examples of this trend (i.e., the successes and the non-catastrophic failures) than positive examples. Which groups, ideas, and individuals don’t tend to spring to mind when we think about this issue, because they aren’t self-destructively memetic, emotion-evoking, etc.? And can we steal their tricks?

    Why have Objectivism and secular humanism followed such different paths? Or: anarchism, the Green Party, Catholicism, UBI, anti-obesity regulation, veganism, the ‘is bisexuality a thing?’ debate, the Big Bang, genome sequencing, string theory, the mind/body problem in philosophy, Carl Sagan, and Europe/EU policies are controversial and maligned in certain contexts; but why have they been so much less controversial and maligned and heatedly debated in the U.S. than libertarianism, feminism, MRA, socialism, creationism/evolution, eugenics, the nature/nurture debate, anti-drug regulation, New Atheism, Richard Dawkins, China, etc.? (I have a bunch of disorganized pairwise comparisons in mind here; not saying every single member of Group 1 is less contentious than every single member of Group 2. I considered adding Steven Pinker to both groups, for different comparisons.)

    Part of what makes this an unusually difficult battle for rationalists is that the thing we’re trying to cultivate and promote is epistemic honesty — and useful strategies like ‘picking one’s battles’ or ‘finding ways to avoid pissing off / catching the eye of the most dangerous potential critics’ can sometimes mean compromising honest discourse. (Or it can feel that way, even when your ‘spin’ isn’t false; or it can just not feel like a default option because it’s not part of our identity to be wily or charismatic.)

    Generally, I’d suggest something more like Strategy 2, assuming our goals would be better served by not having every idea associated with epistemic rationalism get sucked past the event horizon.

    One way to avoid the hazards of Strategy 2 is to keep it mostly on your tumblr, or ask other people to do it; you’re one of the most valuable parts of this community, and in many ways it makes more sense for platoons of us to be taking hits to protect you than for you to be taking hits to protect us. 🙂 In general, match the prominence of your response to the prominence of the criticism, so you don’t give criticisms a big signal boost (unless they’re especially high-quality, or your response is especially compelling and accessible). This includes simply ignoring most criticisms.

    Another way to avoid the Strategy 2 hazards is to respond to criticism with humor, patience, and goodwill. You’re quite good at this in many cases; if you’re worried you can’t strike that tone on an issue (but ought to), that’s a good time to delegate topics, co-write, or run drafts by editors.

    Keep in mind that you’re one of the ‘front pages’ for rationalism and EA, so in many cases you’ll be the first place anyone hears about these issues (and in many cases they’ll never hear about them at all if you don’t mention them). The social media universe centered on you probably looks very different from the social media universe centered on your average reader. (In some scary ways, but also some anti-scary ways.)

    This sounds overly personalized to Scott’s situation, but the conditional advice applies to a lot of people, I think.

    • HeelBearCub says:

      Another way to avoid the Strategy 2 hazards is to respond to criticism with humor, patience, and goodwill. You’re quite good at this in many cases; if you’re worried you can’t strike that tone on an issue (but ought to), that’s a good time to delegate topics, co-write, or run drafts by editors.

      This seems like something that is very hard to do all the time, but a key strategy.

      • houseboatonstyx says:

Yes. In some cases might a blogger keep zis top entries on zis own subjects, and put a short rebuttal as a footnote, or as a comment to zis own OP, with a link to it in the OP? Or, as I think others may have suggested, a link to some other site, with a policy of directing comments on the attack’s issues to that site and removing any that are posted on zis main site?

        Making the link sound very dull to the average reader, but including dog-whistles to those who are already following the attack.

    • Scott Alexander says:

      Keep in mind there is also a mixed strategy of responding to most criticism well but occasional criticism with extreme countermeasures. That way you’re not pulling out the countermeasures too often, but everybody at least knows there’s a risk.

      Cf. the Finnish strategy during the Winter War of always shooting the third Russian in a formation. The Russians didn’t keep formations very well after that.

      • Yes. There’s also a strategy of responding to criticism with humor, patience, a semblance of goodwill/courtesy, and acerbic, savage wit. Christopher Hitchens is a good example of someone who rarely comes across as “defensive,” but also rarely comes across as nice — and it works for him, though it’s a hard strategy to pull off.

        Usually my advice errs in the direction of encouraging niceness, because the failure modes of being too nice are generally a lot better than the failure modes of being too mean. But I don’t rule out its playing a role for people who are really skilled writers and really skilled strategists.

      • rossry says:

        Cf. the Finnish strategy during the Winter War of always shooting the third Russian in a formation. The Russians didn’t keep formations very well after that.

        …source? My Google-fu fails.

      • Decius says:

        If using this strategy, is it important that the potential attackers know that there is a chance of severe response?

        Actual discussion of course would have to be free from the extreme countermeasures, which I think I can do. I just don’t see how to give attackers the information they need to be afraid of response without explicitly telling them so, which becomes a target in itself.

    • LCL says:

      Keep in mind that you’re one of the ‘front pages’ for rationalism and EA, so in many cases you’ll be the first place anyone hears about these issues (and in many cases they’ll never hear about them at all if you don’t mention them)

      Confirming example: I was referred here to read some of Scott’s meta-level political analysis and found it so insightful I decided on the spot to read everything he had written (little suspecting he’s some kind of astonishing writing-productivity outlier).

      Forget just rationality, EA, etc. I didn’t know the phrase “Social Justice Movement” until I learned it from Scott. The kids passing out fliers in the quad have some kind of national movement, and it’s supposed to be a big deal? News to me. My former rubric would have been “lefty campus activism, like usual.”

So you can imagine my reaction to a defense against, say, “new atheist” critics. Or if you can’t: it was “wait, who? what happened to the old atheism?”

      I should add that I have since referred several people to read the meta-political stuff here, all of whom are likely to be as clueless about the rest of this as I was. So yes, “front page” would definitely apply.

    • Shenpen says:

Why is almost everything more hotly debated in the US than in the EU? My impression is that Americans seem to think the opinion of the people actually matters, that it actually affects things, i.e. that there is a real democracy, and thus it is important that as many Average Joes as possible have the right views and express them clearly. I think in Europe people gave up this illusion long ago. Why have an opinion at all when the elites don’t listen anyway? That is the general view.

  93. Anonymous says:

    This heavily reminds me of that page on RationalWiki where a bunch of people declared that rationalism had been shown to be a cult because a leadership figure was insufficiently nice to someone who had started a hateblog about him. Once a group of people dislikes you, any attempt to defend yourself will be regarded as an unprovoked attack. (If you don’t matter as a moral entity, attacking you isn’t a relevant action.) The fact is that when you engage with people who insult you, your writing suffers. You’re still being dogged by some turns of phrase in “untitled” the better part of a year later, and that “punishing defectors” paragraph about Taubes will probably suffer the same fate. The fact is, I strongly doubt that rationalism will go the way you imagine it, because there is no counter-lobby. Nobody’s going to take the time and effort to thoroughly smear rationalism’s name in the public square. Nobody’s even heard of rationalism in the public square.

    • Scott Alexander says:

      RationalWiki’s whole spiel is looking for groups near the event horizon and giving them an extra push.

      • Faradn says:

        I’m only an occasional LW reader, but it’s obvious to me that Rationalwiki’s LW article is disgraceful.

        • Professor Frink says:

          So I’ve read almost all the sequences and follow a lot of rationalist blogs and I actually think the rationalwiki article on LW is surprisingly accurate and covers most of the criticisms that crop up even from within the community.

          I often hear how terrible rationalwiki is, but it honestly seems like a pretty decent summary.

          • Linch says:

            Maybe it’s my own tribal identifications speaking, but the RW page on EA is *significantly* worse than their page on LW.

            http://www.donotlink.com/edhp

I’ve criticized their neutrality on the talk page and have not yet gotten a response.

          • Faradn says:

            I think I’m mixing up the LW article with the Yudkowsky article. Also my view is tainted a bit by having read the editing discussion of the LW article, and seeing the attitudes of some of the more outspoken people there.

          • Professor Frink says:

I’m not sure it’s fair to judge an article by its talk page, so let’s talk about the articles themselves.

            I just read the main Yudkowsky article which seems maybe slightly more biased than the Less Wrong article (which as I said I found pretty decent) but still not awful. It repeats a lot of the in-group criticism I’ve seen. So what are the specific complaints? Where does it go wrong?

I’ve had this conversation so many times, where someone says the rationalwiki article is terrible, I ask why, and then they say “oh, I meant some other article” or the like.

          • Samuel Skinner says:

            I just compare the communism versus fascism articles. I find it a useful barometer for bias.

          • Daniel H says:

            I am looking at the Yudkowsky article now. The problems I see with it are that it gives an uncharitable rendering of his views (it gives several reasons that AI foom is unlikely, many of which are either false or based on a poor understanding of AI foom; it says he dislikes mainstream science without clarifying that he thinks it does a better job than most ways people could try to learn but that better is possible; it goes into far more detail on the things it has criticisms for than the things that are praiseworthy), it gives an unfair look at his accomplishments (no, duh, of course he hasn’t released AI code; that’s not the problem he’s working on), and it conflates LessWrong with Eliezer Yudkowsky (implying that he agrees with anything “his followers” say).

            There are other problems with it too, but they mainly fall into one of those categories. The things that don’t include condemning him for holding views that are controversial, condemning him for working on problems that are thus far unsolved, and exaggerating things to the point of being ridiculous while still making the sentences technically true.

          • anonymous says:

            So I click on that “donotlink” link above, and I get a dialog box that says: “Most readers think that this page is nonsense. What do you think?”

            …that’s kind of the problem we’re talking about here.

          • Viliam says:

            Most readers think that this page is nonsense. What do you think?

            I think this is an example of selection bias. People who disagree with the page are more likely to use donotlink than people who think the page is okay.

            Unfortunately, “selection bias” wasn’t among the provided options. 🙁

        • TheAncientGeek says:

People: RationalWiki isn’t supposed to be unbiased. It’s not The Other Wiki. Think of it as political journalism[*]. Political journalism is supposed to have a POV. It’s allowed to be snarky and to cherry-pick. What it’s not allowed to do is make things up wholesale.

          [*] Particularly UK and Australian political journalism, particularly Private Eye. [**]

[**] Essentially this comes down to the UK libel laws, if anyone’s interested.

          • Jiro says:

            It’s not the wiki part which makes people think it shouldn’t be biased, it’s the “rational” part. Deliberately cherrypicking is not rational (in the sense of using rational arguments to convince people).

          • TheAncientGeek says:

            “Rational” here means not conservative or religious.

          • John Schilling says:

            People who assert that “rational” means “not conservative or religious”, should be mocked, ridiculed, or otherwise put on the defensive at every reasonable opportunity. This seems like a reasonable opportunity.

          • Shenpen says:

            >“Rational” here means not conservative or religious.

            … may Zeus fuck them around with a really large trout for that.

I don’t know why, but such moves make me really angry. I am OK with people saying their views are those of the smart, high-prestige people and their opponents’ views are those of the village idiots. It is a normal human strategy.

But when people basically say that it is obvious, goes without saying, and hardly needs mentioning that their views are those of the smart, high-prestige people, and it is obvious, goes without saying, and hardly needs mentioning that their opponents’ views are those of the village idiots, then I get so mad that I see red.

I don’t know why. It triggers a deep sense of social insecurity, namely that I never had any idea who the popular kids in the class were, or what the popular kids thought – it is entirely possible that I listened to the lowest-prestige bands all through high school without even knowing it, because I was unable to read social cues.

            Is this just me, or can someone relate to that? Is it possible that this emotional reaction to this strategy really comes from there?

          • houseboatonstyx says:

            Shenpen

            >>“Rational” here means not conservative or religious.

            > … may Zeus fuck them around with a really large trout for that.
            > I don’t know why, but such moves make me really angry.

            I think yours is a very rational reaction. I also am angry — well, flabbergasted — to hear such statements, both because they are so ridiculous I don’t know where to start picking them apart (so I recoil as from spoiled food), and because when they are stated so clearly and consciously, it indicates that the speaker knows zie is stealing a key word and making it meaningless. (In this case the word is ‘rational/irrational’; elsewhere it’s ‘racist’. What was a neutral tool for … well, rational … discussion for both sides, is used as a bludgeon and broken.)

          • nydwracu says:

            “Rational” here means not conservative or religious.

            ah yes, pol pot, paragon of rationality,

          • TheAncientGeek says:

If it’s OK for one group to overload “Rational” with particular beliefs about theology, quantum mechanics, and probability, is it not OK to overload “Rationality” with particular beliefs about theology and politics?

      • TheAncientGeek says:

Which isn’t necessarily a bad thing.

  94. Ever An Anon says:

    Supposedly SEO experts recommend countering critical blog posts / reviews / comments by burying them under tons and tons of positive ones you make yourself. I’m not sure how true it is but people seem to buy it: look into any multi-level marketing scam and you’ll see a battery of eerily similar articles with titles like “[MLM Name] helped me get started” or “Is [MLM Name] a scam? No!”

    Actual political movements and even militaries are known to pull similar stunts. Add an extra digit to everyone’s Party Number to make our group look bigger. Only hire freakishly tall men, dress them in giant hats, and march them all over the country. Anything to make yourself look bigger and more intimidating.

I think that a pufferfish kind of strategy, intentionally inflating your group’s perceived wealth, size, and influence, can probably help in avoiding the black hole. It won’t stop a determined attacker but it might help on the margins.

  95. E. Harding says:

    “To dehumanize them is to say their ideas don’t count, they can’t be reasoned with, they no longer have a place at the table of rational discussion.”
    -Ah, but, Scott, there are millions of people out there who can’t be reasoned with (on both the Left and Right, and everywhere in between), at least on certain issues. Is this a “concentrate on the sin, not the sinner” post/thing, or is there something more to it?

    Also, I would have preferred more concrete, actual cases to hypotheticals in this post. The concrete cases are much more useful to examine, as they’re not arguments from fictional evidence.

    • There are lots of people who cannot be reasoned with. But there are few groups that contain only people who cannot be reasoned with, so when someone takes the position “there is no point trying to reason with these people,” the most likely explanation is that he doesn’t want to be reasoned with.

      • Seth says:

        I’d contend the major problem is not literally “there are zero people (in the group) who can be reasoned with”, but more along the lines of “the structural incentives give enormous power to those of the group who can’t be reasoned with, which will be used to attack the people outside trying to reason with the group, and also to keep in line people inside the group who may do such reasoning with outsiders”. Similarly, the people who can be reasoned with may have very little power overall in the group.

        Relatively noncontroversial example: There are some Creationists who can be educated as to why it’s not science. But Creationism as a movement has much more to do with overall religious politics than pure education in isolation. Attempting to defuse it solely by individually educating all Creationists is a fool’s errand.

        • Zorgon says:

          I agree with you about the dynamics in question, but I disagree about the intentions of the people claiming “there are no reasonable people in Group X”.

I think a lot of people in !X, particularly those adjacent to X, are aware of the phenomena you’re talking about; but when someone issues an absolute negative about communicating with a group, my immediate assumption is always the same as Scott’s above – they are engaging in group politics and wish to attack The Enemy (and raise their own status in so doing) in the most efficient manner possible. The phenomenon you mention merely acts as a convenient excuse.

      • Murphy says:

I’d say some groups attract more of the kind who can’t be reasoned with, and not just a few.

        Non-trivial groups of people who will insist that the only reason their powers of telepathy aren’t working is that you personally are broadcasting too much negativity in the room.

        Non-trivial groups of people who will insist that crystals thrice blessed by a druid make plants grow better and heal wounds faster but will accuse you of “western style thinking” if you ask whether they’ve ever tried testing it with crystals in some plant pots but not others.

Hell, there are non-trivial groups of people who will insist that the very concepts of “reasoning” and “facts” are oppressive tools of their outgroup, created only to oppress them.

There are groups who are genuinely anti-reason. Not in a horrible way; many of their members can be lovely people, but they just don’t have much of an overlapping way of thinking.

    • Scott Alexander says:

      I specifically avoided concrete examples because all of them are mind-killing, would have been met with “how dare you suggest this group doesn’t deserve dehumanization”, or would have turned the comments into being about those movements rather than my point.

      • Nita says:

        Do you really see no difference between “this group deserves to lose some members because its ideas are bad” and “the members of this group deserve to be dehumanized”?

        The OP also struck me as speaking about groups almost as if they were people. No one’s hurling people into black holes. I don’t understand why I should always consider a group becoming less popular a morally bad outcome, regardless of its object-level beliefs or effects.

        • FacelessCraven says:

          @Nita – “Do you really see no difference between “this group deserves to lose some members because its ideas are bad” and “the members of this group deserve to be dehumanized”?”

          …In the spirit of our exchange in the last thread, I think part of the difference in question is between a group that alienates the public through its own actions, and a group whose alienation is helped along by its adversaries highlighting the worst in it and concealing the best, and generally doing what they can to make it impossible for it to get a fair hearing.

          If I listen to someone’s views, conclude they are repugnant, and lose respect for them, that’s one thing. If I then write a polemic about why that person is evil and that no one intelligent should waste their time listening to them, that’s something else. The latter is certainly an effective tactic if your goal is to defeat your ideological opponents. It also pretty much inevitably leads to massive abuse.

          Some views don’t seem to be worth respecting. Some views are sufficiently monstrous that holding them makes you a monster. I don’t think there are any views that make a monster sufficiently monstrous that, provided they’re willing to attempt to frame their ideas in a civil manner, responding in kind makes you a monster too.

          “I don’t understand why I should always consider a group becoming less popular a morally bad outcome, regardless of its object-level beliefs or effects.”

          Becoming less popular isn’t the bad outcome. It’s how that happens that matters. If your group can be made revolting to people who have never encountered it first-hand, that’s generally a bad sign.

          “No one’s hurling people into black holes.”

          At least three examples spring immediately to mind, but I think Scott would rather we not discuss specifics. I guess it depends what you mean by “hurling into a black hole.” How would you define it?

          • sweeneyrod says:

            “If your group can be made revolting to people who have never encountered it first-hand, that’s generally a bad sign.”

            I’ve not met any members of the KKK and I don’t really want to. I don’t think that’s irrational.

          • Nita says:

            I agree that it’s possible to vilify a group unfairly, and that this is bad behaviour. On the other hand, it’s also possible to honestly criticize a group’s ideas and get labeled An Evil Enemy Trying To Destroy Us.

            Is it OK to publish any criticism at all, or should it be forbidden, to let every person form their opinion of every group solely from personal experience?

            (I’m not sure how one would “conceal the best” in some group, btw — is it a thing people do?)

            And you talk of someone calling people evil monsters, but that doesn’t match the instances that Scott got defensive about. E.g., Christopher Hallquist didn’t write “LW-rationalists are evil monsters!”, he accused them of undervaluing the mainstream scientific process to make their own advice shine brighter. Whether he is right or wrong, he’s certainly treating rationalists like human beings.

          • MicaiahC says:

            Sweeney: Signal boosting this article because it’s amazing.

If you find it surprising that they can be reasoned with, and that Davis could have made it out alive and mostly unharmed, I think you should acknowledge one of: 1) the KKK aren’t as dangerous as you perceive, 2) talking to them, no matter how unpleasant, might have net positive effects in limited circumstances, or 3) you just don’t like them very much for what they think, and basically don’t wish to associate with them because they’re gross.

            I’m being a bit flippant here, there are less absurd graduations of my points 1, 2 and 3 here as well as points I haven’t covered. But really, why don’t you want to meet a member, and does an actual account of meeting one change your mind?

          • FacelessCraven says:

            @SweeneyRod – “I’ve not met any members of the KKK and I don’t really want to. I don’t think that’s irrational.”

            As far as I know, I’ve never met a member of the KKK either. I have encountered things that their members have written or drawn, and the signs they wave at their marches, and based on those I am comfortable drawing conclusions about their movement. If some member of the KKK wishes to explain why they’re deeply misunderstood and are actually a force of good in their communities, let them. It’s going to be a hard sell, though.

            @Nita – “Is it OK to publish any criticism at all, or should it be forbidden, to let every person form their opinion of every group solely from personal experience?”

            Criticism: “this guy is dead wrong, and here’s why.”
            What We Are Talking About Here: “This guy isn’t just wrong, he’s evil, and you’d be a fool to listen to anything he says.”

            One is engaging in debate, the other is attempting to close off the possibility of debate. This doesn’t seem like a difficult distinction to make.

            “(I’m not sure how one would “conceal the best” in some group, btw — is it a thing people do?)”
            It is a thing people do. For an example that is probably safe:
            https://slatestarcodex.com/2013/12/23/we-are-all-msscribe/
            See also: “A lie gets halfway around the world before the truth gets its boots on.”
            Make the first impression your audience gets of a target a ridiculous straw-man caricature. Reach your audience before the target can, get your narrative in place before they even reach the field, and no one will listen to anything they have to say, even if they’re telling the truth and you are lying.

            You’ve honestly never seen this happen?

            “And you talk of someone calling people evil monsters, but that doesn’t match the instances that Scott got defensive about.”

            That’s the conversation from the last post leaking into this one. It seems to me that both discussions are about the same principle. Namely, that using reasoned argument to advocate evil is more acceptable than “fighting dirty” to stop people from advocating evil.

          • Nita says:

            I don’t think saying “those ideas are evil” is always fighting dirty. If anything, to me

            A: “don’t listen to those people because their ideas are bad”

            seems less damaging to the collective search for good ideas than

            B: “don’t listen to those ideas because their people are bad”.

            I don’t like how Scott seems to conflate the two and accuse people of saying both even when they’ve only been saying A.

            (Examples of A: “Listening to Marxists / neoreactionaries / Christians is a waste of time because they’re advocating for waging class war / punishing gay people / worshipping an Unfriendly Supernatural Intelligence.”)

            (Examples of B: “How can you defend LW-rationalism / feminism / mens rights when Eliezer / Amanda Marcotte / Paul Elam is such a jerk?!”)

          • FacelessCraven says:

@Nita –
            A: “don’t listen to those people because their ideas are bad”
            B: “don’t listen to those ideas because their people are bad”.

            Not listening to them and not listening to their ideas are the same thing, aren’t they? And a person with evil beliefs and an evil person are pretty much the same thing, yes?

            Example A is straw men, and example B is ad hominems. Why would one be preferable to the other? Both insulate your audience from anything the person in question says, so both are open to abuse on the MsScribe model. One is unfair to a group, the other is unfair to a person.

            …I’m really not getting the difference. Both seem pretty bad, and pretty similar.

          • HeelBearCub says:

            @FacelessCraven:
But when they were thrown into the black hole, the KKK was an organization that did actual, physical violence to black people, existed for the purpose of terrifying them, and had been that way since inception.

Davis is a hero, and took the opportunity to humanize himself to his enemy. We should never forget that the people in the KKK are humans.

            But, if we don’t get to call what the KKK did “evil” then evil doesn’t exist.

          • Nita says:

            @ FacelessCraven

            a person with evil beliefs and an evil person are pretty much the same thing, yes?

            No, not at all! E.g., if neoreactionaries ever start putting their grand ideas into action, they will (most likely) become evil people. For now, they are merely people who hold and popularize evil beliefs. Similarly, while Lenin was writing pamphlets for his underground revolutionary organization, he probably had some evil beliefs. But when he started the Red Terror, he became an evil person.

            Example A is straw men, and example B is ad homin