Fear And Loathing At Effective Altruism Global 2017

San Francisco in the middle sixties was a very special time and place to be a part of. Maybe it meant something. Maybe not, in the long run – but no explanation, no mix of words or music or memories can touch that sense of knowing that you were there and alive in that corner of time and the world…. There was a fantastic universal sense that whatever we were doing was right, that we were winning. And that, I think, was the handle—that sense of inevitable victory over the forces of Old and Evil.

— Hunter S. Thompson

Effective altruism is the movement devoted to finding the highest-impact ways to help other people and the world. Philosopher William MacAskill described it as “doing for the pursuit of good what the Scientific Revolution did for the pursuit of truth”. The movement holds an annual global conference to touch base and discuss strategy. This year it was in the Palace of Fine Arts in San Francisco, and I got a chance to check it out.


The lake-fringed monumental neoclassical architecture represents ‘utilitarian distribution of limited resources’

The official conference theme was “Doing Good Together”. The official conference interaction style was “earnest”. The official conference effectiveness level was “very”. And it was impossible to walk away from some of the talks without being impressed.

Saturday afternoon there was a talk by some senior research analysts at GiveWell, which researches global development charities. They’ve evaluated dozens of organizations and moved $260 million to the most effective ones, mostly charities fighting malaria and parasitic infections. Next were other senior research analysts from the Open Philanthropy Project, who have done their own detailed effectiveness investigations and moved about $200 million.

The parade went on. More senior research analysts. More nine-digit sums of money. More organizations, all with names that kind of blended together. The Center for Effective Altruism. The Center For Effective Global Action. Raising For Effective Giving. Effecting Effective Effectiveness. Or maybe not, I think I was hallucinating pretty hard by the end.

I figured the speaker named “Cashdollar” was a hallucination, but she’s right there on the website

One of the breakout rooms had all-day career coaching sessions with 80,000 Hours (motto: “You have 80,000 hours in your career. Make the right career choices, and you can help solve the world’s most pressing problems”). A steady stream of confused altruistic college students went in, chatted with a group of coaches, and came out knowing that the latest analyses show that management consulting is a useful path to build charity-leading-relevant skills, but practicing law and donating the money to charity is probably less useful than previously believed. In their inevitable effectiveness self-report, they record having convinced 188 people to change their career plans as of April 2015.

(I had been avoiding the 80,000 Hours people out of embarrassment after their career analyses discovered that being a doctor was low-impact, but by bad luck I ended up sharing a ride home with one of them. I sheepishly introduced myself as a doctor, and he said “Oh, so am I!” I felt relieved until he added that he had stopped practicing medicine after he learned how low-impact it was, and gone to work for 80,000 Hours instead.)

The theater hosted a “fireside chat” with Bruce Friedrich, director of the pro-vegetarian Good Food Institute. I’d heard he was a former vice-president of PETA, so I went in with some stereotypes. They were wrong. Friedrich started by admitting that realistically most people are going to keep eating meat, and that yelling at them isn’t a very effective way to help animals. His tactic was to advance research into plant-based and vat-grown meat alternatives, which he predicted would taste identical to regular meat at a fraction of the cost, and which would put all existing factory farms out of business. Afterwards a bunch of us walked to a restaurant a few blocks down the street to taste an Impossible Burger, the vanguard of this brave new meatless future.

The people behind this ad are all PETA card-carrying vegetarians. And the future belongs to them, and they know it.

The whole conference was flawlessly managed, from laser-fast registration to polished-sounding speakers to friendly unobtrusive reminders to use the seventeen different apps that would keep track of your conference-related affairs for you. And of course the venue, which really was amazing.

The full-size model of the Apollo 11 lander represents ‘utilitarian distribution of limited resources’

But walk a little bit outside of the perfectly-scheduled talks, or linger in the common areas a little bit after the colorfully-arranged vegetarian lunches, and you run into the shadow side of all of this, the hidden underbelly of the movement.

William MacAskill wanted a “scientific revolution in doing good”. But the Scientific Revolution progressed from “I wonder why apples fall down” to “huh, every particle is in an infinite number of places simultaneously, and also cats can be dead and alive at the same time”. The effective altruists’ revolution started with “I wonder if some charities work better than others”. But even at this early stage, it’s gotten to some pretty weird places.

I got to talk to some people from Wild Animal Suffering Research. They start with the standard EA animal rights argument – if you think animals have moral relevance, you can save zillions of them for almost no cost. A campaign for cage-free eggs, minimal in the grand scheme of things, got most major corporations to change their policies and gave two hundred million chickens an improved quality of life. But WASR points out that even this isn’t the most neglected cause. There are up to a trillion reptiles, ten quintillion insects, and maybe a sextillion zooplankton. And as nasty as factory farms are, life in the state of nature is nasty, brutish, short, and prone to having parasitic wasps paralyze you so that their larvae can eat your organs from the inside out while you are still alive. WASR researches ways we can alleviate wild animal suffering, from euthanizing elderly elephants (probably not high-impact) to using more humane insecticides (recommended as an ‘interim solution’) to neutralizing predator species in order to relieve the suffering of prey (still has some thorny issues that need to be resolved).

Wild Animal Suffering Research was nowhere near the weirdest people at Effective Altruism Global.

I got to talk to people from the Qualia Research Institute, who point out that everyone else is missing something big: the hedonic treadmill. People have a certain baseline amount of happiness. Fix their problems, and they’ll be happy for a while, then go back to baseline. The only solution is to hack consciousness directly, to figure out what exactly happiness is – unpack what we’re looking for when we describe some mental states as having higher positive valence than others – and then add that on to every other mental state directly. This isn’t quite the dreaded wireheading, the widely-feared technology that will make everyone so doped up on techno-super-heroin (or direct electrical stimulation of the brain’s pleasure centers) that they never do anything else. It’s a rewiring of the brain that creates a “perpetual but varied bliss” that “reengineers the network of transition probabilities between emotions” while retaining the capability to do economically useful work. Partly this last criterion is to prevent society from collapsing, but the ultimate goal is:

…the possibility of a full-fledged qualia economy: when people have spare resources and are interested in new states of consciousness, anyone good at mining the state-space for precious gems will have an economic advantage. In principle the whole economy may eventually be entirely based on exploring the state-space of consciousness and trading information about the most valuable contents discovered doing so.

If you’re wondering whether these people’s research involves taking huge amounts of drugs – well, read their blog. My particular favorites are this essay on psychedelic cryptography, i.e. creating messages that only people on certain drugs can read, and this essay on hyperbolic geometry in DMT experiences.

The guy on the right also works for MealSquares, a likely beneficiary of technology that hacks directly into people’s brains and adds artificial positive valence to unpleasant experiences.

The Qualia Research Institute was nowhere near the weirdest people at Effective Altruism Global.

I got to talk to some people researching suffering in fundamental physics. The idea goes like this: the universe is really really big. So if suffering made up an important part of the structure of the universe, this would be so tremendously outrageously unconscionably bad that we can’t even conceive of how bad it could be. So the most important cause might be to worry about whether fundamental physical particles are capable of suffering – and, if so, how to destroy physics. From their writeup:

Speculative scenarios to change the long-run future of physics may dominate any concrete work to affect the welfare of intelligent computations — at least within the fraction of our brain’s moral parliament that cares about fundamental physics. The main value (or disvalue) of intelligence would be to explore physics further and seek out tricks by which its long-term character could be transformed. For instance, if false-vacuum decay did look beneficial with respect to reducing suffering in physics, civilization could wait until its lifetime was almost over anyway (letting those who want to create lots of happy and meaningful intelligent beings run their eudaimonic computations) and then try to ignite a false-vacuum decay for the benefit of the remainder of the universe (assuming this wouldn’t impinge on distant aliens whose time wasn’t yet up). Triggering such a decay might require extremely high-energy collisions — presumably more than a million times those found in current particle accelerators — but it might be possible. On the other hand, such decay may happen on its own within billions of years, suggesting little benefit to starting early relative to the cosmic scales at stake. In any case, I’m not suggesting vacuum decay as the solution — just that there may be many opportunities like it waiting to be found, and that these possibilities may dwarf anything else that happens with intelligent life.


This talk was called ‘Christians In Effective Altruism’. It recommended reaching out to churches, because deep down the EA movement and people of faith share the same core charitable values and beliefs.

The thing is, Lovecraft was right. He wrote:

We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

Morality wasn’t supposed to be like this. Most of the effective altruists I met were nonrealist utilitarians. They don’t believe in some objective moral law imposed by an outside Power. They just think that we should pursue our own human-parochial moral values effectively. If there was ever a recipe for a safe and milquetoast ethical system, that should be it. And yet once you start thinking about what morality is – really thinking, the kind where you try to use mathematical models and formal logic – it opens up into these dark eldritch vistas of infinities and contradictions. The effective altruists started out wanting to do good. And they did: whole nine-digit sums’ worth of good, spreadsheets full of lives saved and diseases cured and disasters averted. But if you really want to understand what you’re doing – get past the point where you can catch falling apples, to the point where you have a complete theory of gravitation – you end up with something as remote from normal human tenderheartedness as the conference lunches were from normal human food.

Born too late to eat meat guilt-free, born too early to get the technology that hacks directly into my brain and adds artificial positive valence to unpleasant experiences.

But I worry I’m painting a misleading picture here. It isn’t that effective altruism is divided into two types of people: the boring effective suits, and the wacky explorers of bizarre ethical theories. I mean, there’s always going to be some division. But by and large these were the same people, or at least you couldn’t predict who was who. They would go up and give a talk about curing river blindness in Nigeria, and then you’d catch them later and learn that they were worried that maybe the most effective thing was preventing synthetic biology from taking over the ecosystem. Or you would hear someone give their screed, think “what a weirdo”, and then learn they were a Harvard professor who served on a bunch of Fortune 500 company boards. Maybe the right analogy would be physics. A lot of physicists work on practical things like solar panels and rechargeable batteries. A tiny minority work on stranger things like wormholes and alternate universes. But it’s not like these are two different factions in physics that hate each other. And every so often a solar panel engineer might look into the math behind alternate universes, or a wormhole theorist might have opinions on battery design. They’re doing really different stuff, but it’s within the same tradition.

The movement’s unofficial leader is William MacAskill. He’s a pretty typical overachiever – became an Oxford philosophy professor at age 28 (!), founded three successful non-profits, and goes around hobnobbing with rich people trying to get them to donate money (he himself has pledged to give away everything he earns above $36,000). I had always assumed he was just a random dignified suit-wearing person who was slightly exasperated at having to put up with the rest of the movement. But I got a chance to talk to him – just for a few minutes, before he had to run off and achieve something – and I was shocked at how much he knew about all the weirdest aspects of the community, and how protective he felt of them. And in his closing speech, he urged the attendees to “keep EA weird”, giving examples of times when seemingly bizarre ideas won out and became accepted by the mainstream.

His PowerPoint slide for this topic was this picture of Eliezer Yudkowsky. Really. I’m not joking about this part.

If it were just the senior research analysts at their spreadsheets, we could dismiss them as the usual Ivy League lizard people and move on. If it were just the fringes ranting about cyber-neuro-metaphilosophy, we could dismiss them as loonies and forget about it. And if it were just the two groups, separate and doing their own thing, we could end National Geographic-style, intoning in our best David Attenborough voice that “Effective Altruism truly is a land of contrasts”. But it’s more than that. Some animating spirit gives rise to the whole thing, some unifying aesthetic that can switch to either pole and back again on a whim. After a lot of thought, I have only one guess about what it might be.

I think the effective altruists are genuinely good people.

Over lunch, a friend told me about his meeting with an EA philosopher who hadn’t been able to make it to the conference. As the two of them had been walking together, the philosopher had stopped to pick up worms writhing on the sidewalk and put them back in the moist dirt.

And this story struck me, because I had taken a walk with one of the speakers earlier, and seen her do the same thing. She had been apologetic, said she knew it was a waste of her time and mine. She’d wondered if it was pathological, whether maybe she needed to be checked for obsessive compulsive disorder. But when I asked her whether she wanted to stop doing it, she’d thought about it a little, and then – finally – saved the worm.

And there was a story about the late great moral philosopher Derek Parfit, himself a member of the effective altruist movement. This is from Larissa MacFarquhar:

As for his various eccentricities, I don’t think they add anything to an understanding of his philosophy, but I find him very moving as a person. When I was interviewing him for the first time, for instance, we were in the middle of a conversation and suddenly he burst into tears. It was completely unexpected, because we were not talking about anything emotional or personal, as I would define those things. I was quite startled, and as he cried I sat there rewinding our conversation in my head, trying to figure out what had upset him. Later, I asked him about it. It turned out that what had made him cry was the idea of suffering. We had been talking about suffering in the abstract. I found that very striking.

Now, I don’t think any professional philosopher is going to make this mistake, but nonprofessionals might think that utilitarianism, for instance (Parfit is a utilitarian), or certain other philosophical ways of thinking about morality, are quite unemotional, quite calculating, quite cold; and so because I am writing mostly for nonphilosophers, it seemed like a good corrective to know that for someone like Parfit these issues are extremely emotional, even in the abstract.

The weird thing was that the same thing happened again with a philosophy graduate student whom I was interviewing some months later. Now you’re going to start thinking it’s me, but I was interviewing a philosophy graduate student who, like Parfit, had a very unemotional demeanor; we started talking about suffering in the abstract, and he burst into tears. I don’t quite know what to make of all this but I do think that insofar as one is interested in the relationship of ideas to people who think about them, and not just in the ideas themselves, those small events are moving and important.

I imagine some of those effective altruists, picking up worms, and I can see them here too. I can see them sitting down and crying at the idea of suffering, at allowing it to exist.

Larissa MacFarquhar says she doesn’t know what to make of this. I think I sort of do. I’m not much of an effective altruist – at least, I’ve managed to evade the 80,000 Hours coaches long enough to stay in medicine. But every so often, I can see the world as they have to. Where the very existence of suffering, any suffering at all, is an immense cosmic wrongness, an intolerable gash in the world, distressing and enraging. Where a single human lifetime seems frighteningly inadequate compared to the magnitude of the problem. Where all the normal interpersonal squabbles look trivial in the face of a colossal war against suffering itself, one that requires a soldier’s discipline and a general’s eye for strategy.

All of these Effecting Effective Effectiveness people don’t obsess over efficiency out of bloodlessness. They obsess because the struggle is so desperate, and the resources so few. Their efficiency is military efficiency. Their cooperation is military discipline. Their unity is the unity of people facing a common enemy. And they are winning. Very slowly, WWI trench-warfare-style. But they really are.

Sources and commentary here

And I write this partly because…well, it hasn’t been a great couple of weeks. The culture wars are reaching a fever pitch, protesters are getting run over by neo-Nazis, North Korea is threatening nuclear catastrophe. The world is a shitshow, nobody’s going to argue with that – and the people who are supposed to be leading us and telling us what to do are just about the shittiest of all.

And this is usually a pretty cynical blog. I’m cynical about academia and I’m cynical about medicine and goodness knows I’m cynical about politics. But Byron wrote:

I have not loved the world, nor the world me
But let us part fair foes; I do believe,
Though I have found them not, that there may be
Words which are things,—hopes which will not deceive,
And virtues which are merciful, nor weave
Snares for the failing: I would also deem
O’er others’ griefs that some sincerely grieve;
That two, or one, are almost what they seem,
That goodness is no name, and happiness no dream.

This seems like a good time to remember that there are some really good people. And who knows? Maybe they’ll win.

And one more story.

I got in a chat with one of the volunteers running the conference, and told him pretty much what I’ve said here: the effective altruists seemed like great people, and I felt kind of guilty for not doing more.

He responded with the official party line, the one I’ve so egregiously failed to push in this blog post. That effective altruism is a movement of ordinary people. That its yoke is mild and it accepts everyone. That not everyone has to be a vegan or a career researcher. That a commitment could be something more like just giving a couple of dollars to an effective-seeming charity, or taking the Giving What We Can pledge, or signing up for the online newsletter, or just going to a local effective altruism meetup group and contributing to discussions.

And I said yeah, but still, everyone here seems so committed to being a good person – and then here’s me, constantly looking over my shoulder to stay one step ahead of the 80,000 Hours coaching team, so I can stay in my low-impact career that I happen to like.

And he said – no, absolutely, stay in your career right now. In fact, his philosophy was that you should do exactly what you feel like all the time, and not worry about altruism at all, because eventually you’ll work through your own problems, and figure yourself out, and then you’ll just naturally become an effective altruist.

And I tried to convince him that no, people weren’t actually like that, practically nobody was like that, maybe he was like that but if so he might be the only person like that in the entire world. That there were billions of humans who just started selfish, and stayed selfish, and never declared total war against suffering itself at all.

And he didn’t believe me, and we argued about it for ten minutes, and then we had to stop because we were missing the “Developing Intuition For The Importance Of Causes” workshop.

Rationality means believing what is true, not what makes you feel good. But the world has been really shitty this week, so I am going to give myself a one-time exemption. I am going to believe that convention volunteer’s theory of humanity. Credo quia absurdum; certum est, quia impossibile. Everyone everywhere is just working through their problems. Once we figure ourselves out, we’ll all become bodhisattvas and/or senior research analysts.


514 Responses to Fear And Loathing At Effective Altruism Global 2017

  1. andrewflicker says:

    Thank you, for a bright spot in a bad week.

  2. Dedicating Ruckus says:

    I’m entirely willing to believe that EAs are, fundamentally, working from deep moral convictions. And many of their initiatives seem to have been good ones; getting money to the anti-malaria charities is very praiseworthy, and I’m happy about humane farming practices moving forward.

    But bear in mind that people acting from deep moral conviction can still be wrong, and can act on wrong information to do enormous harm. And any ethics that leads you to “tile the universe with hedonium to maximize utility”, or alternatively “destroy the universe to minimize suffering”, is really obviously running right down that path. I’m tempted to reference Eliezer’s post on cognitive trope therapy: if your proposed program sounds like something the Rebel Alliance has to fly a daring fighter raid to blow up before it can be activated, consider you might be the bad guy.

    (Even in less dramatic ideas… I would actually assassinate people before I would let them follow through on a program to destroy all predator species. Currently this is exceedingly unlikely to succeed, and we should be nice until we can coordinate meanness, and so forth, and so it’s not a choice that’s put before us. But that is definitely the lesser of two evils there.)

    You wrote a bit ago about not being sure that rationality could have prevented you from signing on to Communism, if you came across it before it was actually committing its atrocities. Considering the similarities (mostly their shared apparent tendency to bulldoze Chesterton’s Fence the first moment it poses an inconvenience), perhaps the list of suspects for “potential next Communism” should include EA.

    • kokotajlod@gmail.com says:

      “…perhaps the list of suspects for “potential next Communism” should include EA.”

      I’m a committed EA and I endorse this message.

      I think it’s the #1 thing I worry about in the category of “ways the EA movement could turn out to be bad.” The good news–which thus far has convinced me to stay in the movement–is that when I raise these concerns to people they seem very open to considering them. There’s lots of work by EAs on moral uncertainty & moral cooperation, and in my personal experience they take both ideas very seriously, such that they *don’t* do drastic and Chesterton’s-fence-smashing things even if it seems like a good idea, and they *do* assign weight to cooperating with/respecting other value systems. And then also I think EAs are often unusually open to changing their minds on ethical matters, so I have hope that we’ll converge on the truth eventually. 🙂

    • Scott Alexander says:

      >> Considering the similarities (mostly their shared apparent tendency to bulldoze Chesterton’s Fence the first moment it poses an inconvenience), perhaps the list of suspects for “potential next Communism” should include EA.

      See https://slatestarcodex.com/2015/09/22/beware-systemic-change/

      • nestorr says:

        https://seat14c.com/future_ideas/37D

        Relevant sci-fi short story by Peter Watts

        suddenly Malika no longer feels that hardwired heartfelt empathy for the lone six-year-old in need of a new liver. She no longer shrugs at news of another million starving refugees. Suddenly, those two feelings have switched places.

        • Deiseach says:

          Yeah, unconvincing since I wanted to give both the story and the author a good kicking when I started reading it (and abandoned it quite a short way in). Lazy writing as seems to be my experience with Watts; sets up his hobbyhorses as “and the only obvious and plain conclusion you can possibly draw is…” and relies on cardboard stereotypes instead of characterisation (Malika is Sassy Black Woman from the get-go and the interaction with the White Boy With A Gun is just stupid. If you didn’t already know Watts was a liberal white guy, you certainly would from that single instance).

          To be fair and admit to my biases, when he started in on the “yeah nobody is really empathetic, Mother Teresa was an evil hag as proven by Christopher Hitchens”, that’s when I bailed (and my ‘author needs a good kick up the backside’ readings were at maximum). Gore your own ox if you want to convince me; pick one of your own side’s heroes or admired figures and deconstruct them if you want me to believe you really mean the point you’re making, and not just taking a cheap pot-shot at a boo-figure representative of the other side that you’re sure all your readers will be nodding in approval when you do it.

          suddenly Malika no longer feels that hardwired heartfelt empathy for the lone six-year-old in need of a new liver

          So I guess when Malika is walking past the drowning child it will now be “Sorry kid, these are Louboutins, you can’t expect me to wade into a muddy pond with these on!” and then when she gets to the café as the sounds of the final gurgling bubbles fade in the distance, she will be in streams of tears reading the news story about how cute big-eyed orphans in Faroffistan will not have any new teddybears this Winterval? Nice going, Watts, I think the other Peter might like a word with you 🙂

          • Iain says:

            You should finish reading the story.

          • Witness says:

            I’m with Iain on this one.

          • Deiseach says:

            Well, trouble is, I can’t bring myself to finish reading the story because of the overwhelming “I wanna punch the author in the snoot”, which is down to the author because if he hadn’t spent his “why should I give this story the time of day” points on “hur hur we all agree religious types are Teh Evul, right?” (and not even original at that, recognisably lifted wholesale from Christopher Hitchens who had various axes to grind) then I might be less inclined to go “I wanna punch the author in the snoot”.

            Imagine that, in order to make his point about Sassy Black Women and White Guys With Guns, he had used (to adopt the language of SJ) racial slurs; that the agent on the plane had grabbed her by the throat and snarled “Get up, you [n-word] bitch!” and so on and so forth (this is part of why I’m rolling my eyes at the ‘leaning just so to show off his gun’ bit because it just doesn’t work. The agent is nervous and deferential, to the point that Sassy Mally remarks on it, and the whole point is that she’s an expert they really need to help solve their problems. In which case, the guy would instead produce his badge or other sigil of authority and very politely be all “Excuse me, Dr Sassy, would you mind terribly accompanying me?” I don’t think Watts is a good enough writer to be giving us clues that Something Is Not Right by deliberately setting up a dissonance between Sassy Mally’s expectations of how White Authority Figures treat Sassy Black Women and the situation as it is unfolding in actuality, so I do think it’s explained by lazy writing, using stereotypes, and showing off his liberal Canadian cred).

            I think some readers might bail out at that point and go “Look, I get enough of this in my real life, I don’t need some liberal white boy trying to show off his chops by turning my experiences into Edgy McEdgerson lazy writing”, and I think that would be a reasonable explanation as to why “No, I don’t want to read this story”.

            If he’d set up the scenario with Bridget Murphy, daughter of Famed War of Independence Hero, being accosted by the Brutal British and her going “Faix an’ begorrah, sure isn’t this how de Brits always treated us poor Irish with the brute force, evictions, starvation and cholera, wirra wirra!”, I would have bailed out then as well. And I think he’s a bad enough writer that that is exactly how he’d imagine the scenario to go.

            Besides, if I want uninformed Catholic bashing in the service of “ain’t we freethinkers so superior?”, I can get that wholesale onna innertubes any day 🙂

            As to the starving orphans, again that’s part of his bad writing – he describes the feelings as having “switched places” which means that the compassion and concern Malika feels for the six year old liver transplant patient has been transferred to the starving refugees but also that the indifference and lack of involvement she felt about the refugees has now been transferred to the littlest patient; that works out in effect to the reversal of the Drowning Child parable, which relies for its force on “if you saw a child in front of you (etc) – then you should feel the same concern for those you can’t see in front of you”. Switched Emotions Malika would now, therefore, walk past a drowning child in front of her and instead only feel empathy for the unseen millions far away. Malika-in-this-moment has had her reactions switched and if we plonked her down before that muddy pond with the drowning child, she would shrug her shoulders and move on (as she shrugs her shoulders and turns the page when reading about the million starving refugees).

            A better way would be to say that the empathy and compassion had now expanded to or been extended to or incorporated the million refugees, then Malika would still feel compassion for both the near and far groups. But he didn’t do that, he switched – which is to say, swapped, exchanged, traded, put one in the place of the other – her emotional reactions. So now she feels compassion for the far and indifference to the near. Probably not what he intended but how his sloppy writing works out.

          • Bugmaster says:

            I’m with the other commenters — you should really, really finish the story. Trust me on this one.

          • rlms says:

            Fourthed.

          • Deiseach says:

            I’m with the other commenters — you should really, really finish the story.

            Sorry, lads, not if you paid me (and I had an income reduction this past two weeks so that’s a genuine offer).

            I don’t care if the ending is so fantabadoosie that it will make me three inches taller, five stone lighter, knock ten years off my age, bump up my IQ 20 points and give me the winning numbers in tomorrow night’s lottery. He gratuitously pissed me off at the start and even if the ending is an amazing twist “And then the Lord God Almighty sent a vision that the Pope is indeed His Vicar on earth and everyone should convert to Roman Catholicism” ending, I do not care. I will not read the rest of that story because as I think I’ve said before, Watts’ prose style is like chewing cardboard.

            Besides, I think I can guess where it’s going and what the Big Reveal will be and again, I do not care.

            Maybe that means I’m missing out on a really good skiffy story. Oh woe is me, wirra wirrastrue! I will just have to bear it as best I can 🙂

          • Standing in the Shadows says:

            Besides, I think I can guess where it’s going and what the Big Reveal will be and again, I do not care.

            You’re not wrong. The ending was predictable Watts. In fact, it’s a doubly predictable Watts, in that it’s nothing more than a vicious little revenge fantasy by a viciously angry “woke” liberal Canadian white boy.

          • Hyzenthlay says:

            I’m with the other commenters — you should really, really finish the story. Trust me on this one.

            If the ending involves a brilliant switcheroo of the reader’s expectations, just tell her what it is (with spoiler alerts for anyone who doesn’t want to be spoiled).

          • Deiseach says:

            Those who advised me to finish reading the story may regret that advice.

            EDIT: I do quote directly from the story, so I suppose I should give a spoiler alert, though I think the only real warning needed is “Step in dirt, you’ll need to wipe off your shoes”.

            Okay. So over at the sub-reddit this is introduced as “The new Peter Watts short story is fun, full of ideas and references paperclip maximization”.

            Sounds relevant to our interests, correct? And so I gave it a go, with the results as you can see in my original comments.

            And then I was told “No, you gotta read to the end, trust us on this!” So I did.

            In conclusion: Fuck Peter Watts. Non-consensually. With a monster, unlubricated, and splintered dildo (and definitely not one of the fun Rochester dildos). Without adequate preparation first. Continuously, with no breaks, for hours.

            (What? That’s a fun, idea-packed comment right there, just like the story! No paperclips though, you’re right).

            Yes, the story went precisely where I thought it would go, and then some.

            She goes in through the Emergency Broadcast System: a pipe into the head of every God-fearing Murrickan, to be used only in times of National Crisis (or, on rare occasions, when some congressperson wants to know whether their spouse is cheating on them).

            Now, I’m not a God-fearing Murrickan, but I am a bad Catholic. What is there in that paragraph with its meaningless little swipe at religion (Malika being Black means that she is much, much more likely to come from precisely that God-fearing background of Black churches), or in that entire story, other than auto-fellation? What is the author doing but “hur hur ain’t I so clever?” with his “yeah let’s have the Sassy Black Woman kill all the White Guys and Gals, with or without Guns”.

            Culturally appropriative, sexually and racially stereotyping, revenge-fantasy porn about Trump voters. Yeah, great story, guys. And you wonder why I think the Sad Puppies have a point about modern SF?

            And if the sub-reddit thinks a story about wiping out white Americans for the crimes of being God-fearing and voting Republican is “fun”, I’m glad I deleted my account there. Imagine re-writing this story in exactly the same words, only swapping round races and politics: Malika is white, the God-fearing Murrickans are Blue Tribe, etc. You’d be destroyed and if anybody recommended it as a fun, idea-packed story about AI risk, they too would be destroyed.

          • Hyzenthlay says:

            Murrickan

            Well, that sums it up.

            (I read about half the story and also found it pretty insufferable.)

          • rlms says:

            Did you read Watts as intending to portray Malika as laudable? That wasn’t my interpretation. I thought the whole mass murder thing was meant to be obviously disgusting, especially since she was doing it purely in pursuit of “a warm fuzzy feeling that… verges on the orgasmic”. But even before that part, I don’t think she was intended to be likeable.

          • Hyzenthlay says:

            If she was intended to be unlikable from the beginning I may have misinterpreted it, but I didn’t really get that impression. To me she felt like a vehicle for what the author wanted to say.

          • Deiseach says:

            Did you read Watts as intending to portray Malika as laudable?

            OH, yeah, very definitely. From her introduction as Sassy Black Woman with a quick intimation of “This is how YT has been keepin’ the coloured folks down” via the “Malika knows how interactions with White Guys with Guns go for people like her” scene (which as I’ve said makes no sense except as stacking the deck to get us sympathetic to her/reinforce our white liberal guilt as to ‘yes, even Black professors are subjected to this‘) to the snarking about the names of Tami and George to the zinger about altruism (religious variety) to everything, basically. Malika is Da Bomb and if we don’t like her it’s our fault for being racist sexist you-name-its who probably like to think of ourselves as Good Murricans. (Or, I beg his pardon, Murrikans. Goodness, it’s been a while since I’ve seen someone refer to Amerika – or even AmeriKKKa). You think Malika is intended to be unlikeable? You’re another one of the privileged white systemic racists who police Black people for being too loud!

            I do think Malika is intended to be perceived as heroic, even a martyr for her principles: when she carries out the deed, no-one who knows where she is will be left alive so there will be no-one to bring her food, and we’re meant to conclude she will starve to death – but she does it anyway for the sake of “the have-nots, the refugees, the poor unprivileged who never got the latest Sony augment for their sixth birthday”.

            I think it’s pretty plain that Watts doesn’t much like humanity as is, and he likes to express in his writings “Now, if I were doing it, this is how I’d improve on God’s frankly shoddy handiwork” e.g. doing away with consciousness. Malika’s wiping out all the basket of deplorables and giving the huddled masses the chance of a do-over is right in his wheelhouse. Though I can well imagine Peter likes – or at least can tolerate – the section of humanity that agrees with his principles, and I could imagine him and Margaret sitting down for a nice cup of tea and sighing delicately over the problematic neighbour to the south.

            Me, I’d be classed by them as one of the deplorables Malika is pushing to kill themselves, and given I already have a head full of suicidal ideation, thank you very fucking much, Mr Watts, for your cutesy solution to the problem of people like me. Maybe even your Final Solution, shall we say?

          • Witness says:

            “I do think Malika is intended to be perceived as heroic, even a martyr for her principles.”

            My read of the ending is that Malika fails, for lots of reasons. I guess I could go into spoiler territory if it helps.

            (P.S. Apologies for adding to your mental distress on this. Also for multiple edits.)

          • Deiseach says:

            Apologies for adding to your mental distress on this.

            Nah, we’re cool, bro. I was a bit… upset… when I got to that part, but it rapidly transmuted to anger. I have no beef with you, but Peter Watts, on the other hand – I’d like to dangle him by his ankles over a precipice while screaming ARE WE HAVING FUN YET, PETER? HEY, SUPPOSE MY BRAIN GETS HACKED BY SOME BIO-ENGINEERING AND I THINK IT’S A GOOD IDEA TO LET GO OF YOUR ANKLES! WOULDN’T THAT BE COOL? SPEAK MORE DISTINCTLY, I CAN’T DISCERN AN ANSWER TO THAT THROUGH THE YELLING FOR HELP!

          • Bugmaster says:

            @Deiseach:

            FWIW, my impression of the story is the exact opposite of yours. It’s not a prescription for a better future, but a cautionary tale. Malika is very much the villain of the narrative. She’s the evil genie, and the civil servants who enable her are the unwitting fools who unleash the genie with nary a second thought. The “Murrikan” part is very much in character: this is not Peter Watts being condescending at you, this is Malika being condescending at humanity.

            On a separate note, call me privileged if you like, but I don’t see why villains in literature shouldn’t be black, or Muslim, or gay, or any other demographic. We are all human, and evil is part of that.

          • Bugmaster says:

            I should also add that my perception of Peter Watts in general is very different from yours. His writing is extremely dystopian, because not only does he believe that humanity is deeply flawed at its core (as you’ve correctly pointed out), but also that it is ultimately irredeemable. There is no clever hack or mind-trick that can solve the problem; the entire project was doomed from the start, and every attempt at fixing it is bound to end up in disaster. Peter Watts’ books are a handy catalogue of such potential disasters.

          • Jules says:

            Malika begins to see a psychiatric solution.

            Maybe it’s her own bias, a hammer’s tendency to see every problem as a nail. Maybe the bias is in MAGI, or the data sets that programmed it; machine learning is as corruptible as any other kind.

            Maybe it’s just the best answer. It certainly feels good.

            Not only is Malika not admirable in her sacrifice, since she’s been hardwired to enjoy those, but she’s implied to have possibly killed off a significant portion of humanity without it being actually necessary.

            She does create a better world, though.

          • Standing in the Shadows says:

            She does create a better world, though.

            How do you know? Because the author’s viewpoint character, whom some people here are trying to libsplain away as the villain of the story in order to run defense for Watts, says that she will?

            At best, it creates a *different* world, inhabited by beings with the surface appearance of humans, but with very alien minds. And if they do survive, and do create a technological civilization that can hold at bay the ugly, brutal, and short nature of life without technological civilization, they will just get more alien.

            I have zero attachment or interest in them.

            Which invokes the number one killer of literature: “I don’t care what happens to these people”.

    • Mary says:

      Oddly enough, I was just reading Aye, Robot by Robert Kroese in which an evil plot to make everyone happy is central.

      (Reading Starship Grifters first is wise.)

    • Deiseach says:

      “destroy the universe to minimize suffering”

      But you must admit, it would work!

    • Ozy Frantz says:

      The thing about random intellectuals starting movements about improving the world is that sometimes you get Communism and sometimes you get the scientific method or the Enlightenment. If you want “hey guys we just solved religious war” then you have to put up with a certain amount of “…um yeah we just killed several million people through famine, sorry about that.”

      Personally, I’m a big fan of reversibility: don’t do something unless you can easily change it back before it kills several million people. I feel like a lot of weird EA proposals are pretty easy to put into this framework, and ones that aren’t (AI) have everyone really really freaked out about how irreversible they are. There are some exceptions in the weird-EA space (habitat destruction), which bother me too.

      One thing that’s really good about weird!EA as a whole is that it tends to be aggressively morally pluralistic. I suggest reading the Foundational Research Institute’s papers on cooperation and moral trade. (Naturally, being FRI, they justify it with arguments about the benefit of superrationality in an infinite multiverse.) Moral pluralism is a huge protective factor: don’t do things that are really hostile to other people’s value sets, even if you’d capture a lot of value from your own POV.

      • Dedicating Ruckus says:

        This and other replies have ameliorated my initial wary reaction quite a bit.

        As I mentioned, it seems (from my not-particularly-researched perspective) that about everything EA has concretely done has been good, or at worst a waste of effort (though I still do feel better knowing someone is spending time thinking about AI risk, despite being reasonably sure it’s not an issue). I’ve probably got echoes of culture war in my head making me instinctively assume the worst.

        • I’m very supportive of people thinking through these weird edge cases, because occasionally there’s a very convincing demonstration that they are right – and the rest of the time, it continues to get mostly ignored. (See: US Research during WWII into Atomic weapons vs. Post-WWII research into Psychic powers.)

          It’s still a bit unclear to the general public which side various existential risks are on, and so funding a bunch of research to double check seems reasonable – while also eliminating malaria and giving money to the world’s poorest instead of funding badly run aid programs. But EA is diverse, and they are working on a lot of the weird edge cases WHILE giving money to eliminate malaria, and encouraging many others to do the same.

      • Progressive Reformation says:

        Was the Scientific Method really a “movement” in the way that we’re discussing? It wasn’t a reorganization of society along more “humane” or “scientific” lines; it was a bunch of improvements to a very niche human activity that didn’t have any direct relation to the way that human societies organize. In the earlier post Scott linked to above, he divides altruistic efforts into the categories of “man vs. nature” (e.g. fighting disease) and “man vs. man” (e.g. fighting capitalism). The Scientific Method seems more like the first kind, and the dangerous things more like the second kind.

        [As for whether the Enlightenment was even a good thing, I have to go with what Zhou Enlai supposedly said about the French Revolution: “It’s too soon to say.”]

        • Eli says:

          Was the Scientific Method really a “movement” in the way that we’re discussing? It wasn’t a reorganization of society along more “humane” or “scientific” lines; it was a bunch of improvements to a very niche human activity that didn’t have any direct relation to the way that human societies organize.

          You know, if you read what Francis Bacon had to say about the scientific revolution back when he was first inventing it, this is the total opposite of the truth. It was planned as a revolution that would lead to humanity achieving a true, action-guiding understanding of the world around us. Just read:

          Further, it will not be amiss to distinguish the three kinds and, as it were, grades of ambition in mankind. The first is of those who desire to extend their own power in their native country, a vulgar and degenerate kind. The second is of those who labor to extend the power and dominion of their country among men. This certainly has more dignity, though not less covetousness. But if a man endeavor to establish and extend the power and dominion of the human race itself over the universe, his ambition (if ambition it can be called) is without doubt both a more wholesome and a more noble thing than the other two. Now the empire of man over things depends wholly on the arts and sciences. For we cannot command nature except by obeying her.

          Science was never meant to be a tame genie kept stuffed in its bottle, to occasionally come out and grant us a useful technology. It was meant to sweep away all fog and confusion, giving humankind a clear view of our existence and a stark revelation of our real choices in life.

          • Deiseach says:

            Bacon’s idea was the extension of empery over Nature; by means of science (or the scientific method), Man would now understand and control those forces to which he had hitherto been subject.

            The AGW crisis is now the fruit of that empery, where our control and exploitation of Nature has come back to show us that we are still subject to those forces and have to work in harmony rather than exert power as an absolute demand for obedience and service.

          • Bugmaster says:

            @Deiseach:
            I think that’s a needlessly animistic way to look at things (but then, as an atheist, I have a pretty low threshold for animism tolerance).

            When Samsung decided to save a few bucks on their phone batteries for the Note 7, the unintended consequence was that their batteries started a-sploding, costing the company billions of dollars. One way to look at it is to say that their actions have angered the Machine Spirits; in trying to make batteries cheaper, they have committed a sin, and the wages of sin are death. Another way to look at it is that they failed to anticipate the negative side-effects of their business strategy.

            These two interpretations suggest somewhat different solutions. One way to solve the problem would be to recognize that harmony with the Machine Spirits must be maintained at all times, and to humble oneself before the Omnissiah. Seeking to make batteries cheaper is a sin, so repent ! Another solution would be to try and understand why their business department failed to listen to R&D on this issue, and to change their management practices so that it doesn’t happen in the future — while fighting the fallout from the exploding batteries as much as possible.

            Both solutions will get us fewer exploding batteries, but only one of them could also get us cheaper batteries at some point in the future.

          • Deiseach says:

            Bugmaster, you misunderstand me, I certainly do not hold with any kind of panpsychism or panentheism. What I meant was that Bacon envisaged a method whereby Mankind would be able to harness the lightning and send those forces to do our bidding.

            Well, we have a fair idea of how electricity works, and one thing we found out was that sticking your fingers into the socket is a bad idea.

            You can’t simply and cleanly and easily “understand, tame and control” these universal forces because things are a lot messier and more interconnected. The Industrial Revolution was fantastic but now we are being told that this has led to us pumping a hellacious amount of carbon into the atmosphere and that this has consequences we neither envisaged nor imagined.

            So any “this new enlightenment philosophy is in the spirit of the discovery of the scientific method!” should very much take into account that it is as likely to be just as messy and interconnected and “holy crap who knew that would happen?” We don’t get to repeat Bacon’s happily ignorant optimism that “we’ll know how to make things work and nothing can possibly go wrong when we take control of the reins” because we’ve had the experiences of the gulf between the ideal and the reality since then.

          • We don’t get to repeat Bacon’s happily ignorant optimism that “we’ll know how to make things work and nothing can possibly go wrong when we take control of the reins” because we’ve had the experiences of the gulf between the ideal and the reality since then.

            I don’t think we have. We have pessimistic predictions about global warming, but the predicted catastrophes haven’t yet happened. Crop yields continue to go up. Hurricanes have not become more common.

            Nuclear weapons may be a better candidate for a bad outcome of our attempt to harness nature, but there too all the really bad stuff is possible futures–the number of people killed by nuclear weapons so far is tiny relative to the number killed by conventional means.

            Judged by what’s happened so far, Bacon’s view looks pretty good. Extreme poverty falling sharply, calorie consumption in the poor parts of the world trending up, the contagious diseases that used to kill large numbers of people mostly under control.

          • Bugmaster says:

            @Deiseach:

            Bacon envisaged a method whereby Mankind would be able to harness the lightning and send those forces to do our bidding.

            We are engaging in this argument by means of a global network of machines that enhance thought, powered by lightning and connected by invisible light. I’d say that Bacon had largely succeeded, especially when you consider the fact that many of us wouldn’t be here if it weren’t for our ability to harness the fundamental building blocks of life itself.

            Of course, the Enlightenment did not succeed completely, but what ever does ? And yes, technology brings with it great dangers as well as great rewards, but the answer is to learn from one’s mistakes, not to go back into the caves. I agree that learning is hard, but at least it has the potential to move humanity forward. Ignorance does not.

        • Tibor says:

          That “too soon to say” quote is basically a mistranslation – he was referring to the 1968 left-wing student protests in Paris, not to the actual French Revolution.

      • Jacob says:

        Aka “Don’t call up what you cannot put down”. It’s a good rule.

        Also maybe it’s just me who initially had this problem, but the Foundational Research Institute should not be confused with the Future of Humanity Institute (https://www.fhi.ox.ac.uk/research/research-areas/) which does work in existential risk, notably AI and pandemics.

      • Nabil ad Dajjal says:

        The name confuses a lot of people but the scientific revolution wasn’t a social movement.

        A lot of early scientists, like Newton and one of the Bacons, were really into hermeticism. They believed that alchemy and astrology would unlock the secrets of the universe. And they worked to improve those fields so much that the idea that chemistry and physics might be related to them sounds bizarre.

        There was never a step “and now we totally reorganize human society!” That part happened on its own over the course of centuries as people applied scientific findings to everyday problems one at a time.

      • apollocarmb says:

        Are you talking about capitalism? Far more have starved to death under capitalism.

        • Nornagest says:

          As much as I love relitigating tired old communist talking points, I’d rather do it in the open thread. Or, preferably, not at all.

      • Anon123321 says:

        Neither the scientific method nor the Enlightenment were movements in the way that communists and EA were/are movements. Locke was hardly some visionary trying to improve the world. Kant never left his hometown. And when a group of Enlightenment thinkers actually did band together into something resembling a movement, you got the French Revolution and the Napoleonic Wars.

    • FeepingCreature says:

      Yeah but I mean … you are guaranteed (by anthropic bias) a universe in which you can live. That’s it. That’s all.

      You are not guaranteed that the universe does not contain unimaginable amounts of suffering that you can, somehow, prevent. The universe is allowed to throw situations containing great amounts of grief at you. There’s nothing preventing it.

      Predators may be a net good, or they may be a net evil, but you can’t say that just because they exist it’s better than if they didn’t exist. The universe is, in fact, permitted to throw net negative things at you. You are only owed existence. You are not owed a universe that’s fine as it is.

      • Dedicating Ruckus says:

        This seems to be an interesting argument for… uh, atheist metaphysics, I guess…? …that is nonetheless not really responsive to what I initially wrote.

        That said… many of my moral axioms come from my being a theist, which implies 1. I hold them very strongly, 2. ~nobody here is going to be convinced by them. In this case, I assign significant positive value to the existence of nature as it was created; the proposition of mutilating that, across the entire planet, to the degree that destroying all predator species would imply, is an evil at least on the same order of magnitude as killing all humans.

      • Randy M says:

        You are also not guaranteed a world where you can know whether removing predators does or doesn’t cause biospheres to crash in unforeseen ways.

      • KeatsianPermanence says:

        Actually, there is good data on the effects of removing predators from ecosystems. For example, predators (e.g. wolves) were systematically removed from the Midwest starting in the mid-1800s, such that many native predators were on the brink of extinction around the 1980s. At that point prey populations – namely deer, as their numbers are actively monitored – began to balloon and exceed the carrying capacity of the environment. Up until the mid-2000s it was fairly common to find starving fawns in the early spring, as head-level leaf buds were just enough to support most fawns at a starvation level. In response, hunting campaigns through increased tag limits have helped alleviate some of the environmental pressure. However, this is just an example of replacing a non-human predator with a human predator. The predator niche provides a positive check on prey populations to avoid a sort of localized Malthusian Catastrophe.

        It seems like the anti-predator movement is exceptionally naïve. Every basic ecology course covers the necessity of predators in the predator-prey dynamic (see the sketch below). So, what do they expect will happen once the predators have been eliminated? All the prey animals ravaging ecosystems in pursuit of a starvation subsistence, or a gradual fade into nirvanal non-existence? Surely no utilitarian would want to trade isolated instances of short suffering for widespread long-term suffering?
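
        A minimal predator–prey sketch makes that dynamic concrete. This is a toy model with purely illustrative parameters, not fitted to any real wolf or deer data: switch the predation term off and the prey population climbs to the habitat’s carrying capacity, i.e. the starvation margin described above.

        # Toy model: logistic prey growth plus a Lotka-Volterra-style
        # predation term. All parameters are illustrative assumptions.
        def simulate(prey0, pred0, steps=2000, dt=0.01,
                     r=1.0, K=100.0, a=0.02, b=0.01, d=0.4):
            """Euler-integrate prey/predator densities.

            Prey grows logistically (rate r, carrying capacity K) and is
            eaten at rate a*prey*pred; predators convert kills into births
            at rate b*prey*pred and die off at rate d."""
            prey, pred = prey0, pred0
            for _ in range(steps):
                dprey = r * prey * (1 - prey / K) - a * prey * pred
                dpred = b * prey * pred - d * pred
                prey = max(prey + dprey * dt, 0.0)
                pred = max(pred + dpred * dt, 0.0)
            return prey, pred

        # With predators the prey equilibrium is d/b = 40, well below K;
        # without them the prey climbs to the carrying capacity K = 100.
        print(simulate(prey0=40, pred0=30))  # ~ (40, 30)
        print(simulate(prey0=40, pred0=0))   # ~ (100, 0)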

        • Tibor says:

          I find the anti-predator idea silly, but if I were to play the devil’s advocate I’d argue that replacing non-human predators with human predators can reduce suffering. A bullet in the head or the heart is a lot less painful than being slowly digested alive in a snake’s stomach, or having your organs liquefied by a spider’s venom and then sucked out.

          • Nornagest says:

            To be fair, your average snake’s prey probably dies of suffocation long before it feels any noticeable effects from digestive juices. And I’m not sure that spiders’ typical prey can actually feel pain.

            On the other hand, a depressingly large number of hunters will take shots that lead to unnecessary suffering.

      • Eli says:

        Oh, look who thinks he’s owed existence!

        • Dragon God says:

          You are; as per the anthropic principle, the universe that we observe must be a universe that can support intelligent life to do the observing.

          • Jules says:

            We necessarily exist, since we exist, yes.
            “Owe” implies a moral obligation that does not follow.

            We’re not even guaranteed that the universe won’t blink out of existence in the next instant…

    • tmk says:

      Strangely, this is how I feel about Rationalists as a whole. I love reading the blogs, and I think I am rationalist-adjacent, but I would not put these people in charge of the world. We’d all be dead within a year.

    • careful says:

      “Considering the similarities (mostly their shared apparent tendency to bulldoze Chesterton’s Fence the first moment it poses an inconvenience), perhaps the list of suspects for “potential next Communism” should include EA.”

      Bingo.

      Compassion must be balanced by personal responsibility. The moment you allow EA to become centered around inequality, of groups or identities, the path to tyranny stretches out before you.

      How do you keep the individuals of a ‘movement’ individual?

      • Scott Alexander says:

        This isn’t my impression of them. I don’t really see any Chesterton’s-Fence-bulldozing. There’s a lot of discussion of crazy options, but the discussion usually ends with “and here are the hundred reasons we’re not going to do them”.

        Even the destroy-physics guy says we should destroy physics after the Universe decays to a point where it can’t sustain intelligent life.

        • Deiseach says:

          Even the destroy-physics guy says we should destroy physics after the Universe decays to a point where it can’t sustain intelligent life.

          The Universe is going to come to an end on its own eventually, so leave it alone. No need to destroy physics. Okay, as a fun topic for discussion in the bar, sure, it’s great – but anything remotely serious? Please leave that for the universe next door and let our one alone.

        • careful says:

          We will see.

          There are many groups I thought would be impenetrable to SJW/Nu-Progressive/Communist thought, but those movements have shown time and time again that their identity-politics tactics work well for successful entryism and eventual hijack.

          • vV_Vv says:

            If I recall correctly, during the presidential election there were many EAs who seriously argued that the EA movement should be turned to political advocacy, because Trump was going to cause lots of dis-utility.

            And some time before, some EAs were crusading to kick Scott out of the movement for his anti-SJW writings.

            So there is definitely leftist entryism going on, which may eventually result in hijacking.

    • vV_Vv says:

      You wrote a bit ago about not being sure that rationality could have prevented you from signing on to Communism, if you came across it before it was actually committing its atrocities. Considering the similarities (mostly their shared apparent tendency to bulldoze Chesterton’s Fence the first moment it poses an inconvenience), perhaps the list of suspects for “potential next Communism” should include EA.

      Humanity seems to have a general tendency to produce “do-good” ideologies that quickly degenerate into fanaticism and totalitarianism and go on to commit atrocities. Communists wanted to do good, and they were largely consequentialists: they weren’t particularly shy about killing a few people, or a hundred million, in order to achieve their utopia.

      If you go back in time (or stay in the present and go to Syria), you can find fundamentalist religions doing the same: they want to do good by saving the souls of mankind, and if this entails doing some killing, the end justifies the means. SJWs are pretty much in the same boat (e.g.), and I’m pretty confident that they would send people to the gulag if they could.

      I’m not claiming that EA = ISIS = SJW = Khmer Rouge, but we should be very aware of this catastrophic failure mode of ideologies that want to “save the world”.

      Maybe I’m overly cynical, but I tend to be wary of the “good people”: if you just behave like a regular, partially nice and partially selfish human, with regular human motives, then I know what to expect from you and how to deal with you. But if you put up a big show to signal how virtuous you are, whether by picking up worms, eating fake burgers, or patronizing me about QALYs/Jesus/Allah/Marxism/etc., then I think that you are trying to con me, or that you really are a fanatic, or both.

      • Hyzenthlay says:

        But if you put up a big show to signal how virtuous you are, whether by picking up worms

        I don’t know if the “picking up worms” thing is virtue signalling or a genuine compulsion. I guess it would depend on whether the individual always does it or only when other people are watching. I do kind of feel an urge to save worms whenever I see them after a rainstorm, regardless of who’s around (though that impulse wars with the “but they’d feel gross to touch” impulse and sometimes there are just so many worms that I don’t bother).

        It may be less trying to impress others and more a kind of moral OCD.

        • ConnGator says:

          Been saving sidewalk worms for decades, never thought of it as signaling. Just seemed like the right thing to do.

          Of course, every time I move one I imagine a being as superior to me as I am to an earthworm doing the same thing for humans, which makes me happy.

        • anonymousskimmer says:

          I’ve historically avoided picking up worms/snails when other people are around, due to embarrassment. Then I metaphorically kick myself for letting my own social embarrassment cost them their lives.

          People act for reasons opposite to virtue signaling. They sometimes refrain from acting in order to avoid signaling weirdness.

    • Ghatanathoah says:

      I love it when someone else has already said the thing I logged in to say. It reminds me that I’m not the only person thinking about stuff like this. I tend to freak out when I see people who want to radically change human nature/the nature of consciousness. My brain starts screaming “this person wants to kill you and everything you love!” I’m supportive of transhumanism, but I’ve also really internalized Yudkowsky’s whole “Value is Fragile” message and am really scared of transhumanism going wrong.

      But I really like a lot of the replies people have been making to your post. I am pretty convinced at this point that EA probably won’t destroy everything valuable about the human race, or become the next communism.

      One thing people haven’t mentioned that makes me feel better is that there seems to be a lot of overlap between rationalists and libertarian/neoliberal economists. I got into rationalism through reading Overcoming Bias, which I was linked to by a Bryan Caplan article on Econlog. I don’t think my path is unique, and there are probably some people who went in the other direction. So I think a good portion of rationalist/EA people are already familiar with all the Hayekian critiques of central planning and defenses of Chesterton’s fence. I think that libertarian economics appeals to rationalist types, and that they’ve absorbed a lot of its respect for personal autonomy and Chesterton fences.

    • Eli says:

      But bear in mind that people acting from deep moral conviction can still be wrong, and can act on wrong information to do enormous harm. And any ethics that leads you to “tile the universe with hedonium to maximize utility”, or alternatively “destroy the universe to minimize suffering”, is really obviously running right down that path. I’m tempted to reference Eliezer’s post on cognitive trope therapy: if your proposed program sounds like something the Rebel Alliance has to fly a daring fighter raid to blow up before it can be activated, consider you might be the bad guy.

      Seconded. I think it’s important to ask: how come people’s theories of morality aren’t adding up to normality? And if we don’t require moral theories to add up to some kind of normality at the normative or meta-ethical level, then what are they for, and why should we follow them?

  3. Said Achmiz says:

    And I tried to convince him that no, people weren’t actually like that, practically nobody was like that, maybe he was like that but if so he might be the only person like that in the entire world. That there were billions of humans who just started selfish, and stayed selfish, and never declared total war against suffering itself at all.

    And he didn’t believe me, and we argued about it for ten minutes, and then we had to stop because we were missing the “Developing Intuition For The Importance Of Causes” workshop.

    Rationality means believing what is true, not what makes you feel good. But the world has been really shitty this week, so I am going to give myself a one-time exemption. I am going to believe that convention volunteer’s theory of humanity. Credo quia absurdum; certum est, quia impossibile. Everyone everywhere is just working through their problems. Once we figure ourselves out, we’ll all become bodhisattvas and/or senior research analysts.

    Ok, but… this isn’t actually true. Like, funny comment about exemptions from rationality aside… clearly, you know that this isn’t true, right?

    That said—I think that the EA folks should definitely go right on believing that it’s true. I enthusiastically endorse the EA movement adopting a policy of “don’t try to convince anyone, they’ll come around eventually anyway”. I think this is a good policy, and I’d love to see it become the official EA party line.

    • baconbacon says:

      I think it is mostly true of people like Scott. If you are at an EA meeting and genuinely worrying or feeling guilty about how much you give, you will eventually do more, even if only at the margin: as raises come in, giving 10% of your salary means more donations every year.

    • antilles says:

      I’m not sure. I don’t think it’s right to believe that most people are fundamentally unreachable. Most people possess a capacity for empathy, which they apply inconsistently and mostly to people in their immediate circle, and a capacity for reason, which they apply inconsistently and mostly to issues in their immediate interest. I think most people have the capacity to see the world like EAs do, which is not to say something like “everyone will come around eventually.” But maybe everyone *could* come around, and by focusing on that possibility, we will discover some good techniques for recruiting more people.

      • Deiseach says:

        I don’t think it’s necessarily inconsistent to apply empathy preferentially to those nearest to you; until relatively recently, that’s all you could do. You might never hear of great tragedies far away or if you did, there was nothing you could do practically to assist or alleviate them.

        (1) The above holding true for much of human history, the only ones you could help being those near to you, it is not irrational to care more about the suffering you see than the suffering of which you are unaware, or are aware of but cannot affect

        (2) You can’t help everyone in the world simultaneously, so yes, you are going to pick and choose; else you end up with twenty competing tragedies going on at any one time and, if you treat them all equally, you end up splitting your donation into “a negligible amount for each”, which has no meaningful impact – so guilt-tripping people about not caring about the big-eyed orphans in Faroffistan as something they should be doing instead of giving food to that homeless guy they see on the street every morning as they go to work (e.g. the Drowning Child parable) runs the risk of coming off as smugness and holier-than-thou: “you’re a rube who only cares about the white Christian babies in the village orphanage where you live; I’m a sophisticated careful thinker who treats all humans equally regardless of race, creed or status”

        (3) EA itself makes such distinctions with the EFFECTIVE part of Effective Altruism: here is a tool, a method, a way of discerning which intervention is the most effective, does the most good, has the best outcome. That necessarily means the third, fifth or tenth cause on the list gets less attention, which is too bad for the big-eyed orphans of Faroffistan if they’re number three on the list. Yet EAs are not going around grabbing one another by the lapels and yelling “You only care about the people in Foreignitania, why not Faroffistan?”

        (4) Compassion fatigue. If you are demanding that I care about everyone equally, see point two above, which means I’m as likely to end up going “To hell with Faroffistan and that homeless guy” and not help anyone, ingroup or not, near or not

        (5) EA itself is moving on to that next stage; it seems to be going from “You too can make a real difference to saving children’s lives even with a small regular donation!” to “Scratch that, your measly whatever donation is functionally meaningless, what we’re now thinking is that you little people can serve as living examples to entice the really big-bucks guys to fork over large amounts of moolah in really big – did we mention massive? humongous is what we’re thinking – amounts that will do actual good. But eh, if you’re not a multi-billionaire yourself, go work for the most money-grubbing industry you can find and get the biggest salary going, so your measly donation will be a bit less measly”. Realistically, the majority of the population will not end up in big jobs pulling down a quarter of a mill or more per year in salary, so either you relegate them to never donating or doing anything charitable, or you accept that they are going to continue to donate and be involved in charity work outside of the EA movement and that means they will donate to interest groups of their own

        • Vigil says:

          your measly whatever donation is functionally meaningless

          On a relative scale, I suppose. On an absolute scale, while you might rather have a Norman Borlaug’s worth of impact than an Oskar Schindler’s worth, the latter is still pretty goddamn brilliant.

          • Deiseach says:

            Oh, I’m on the side of the Widow’s Mite, but I have the horrible feeling the current tendency in the thinking would be to rewrite the parable to have Jesus praising the rich Pharisees instead: “See? That’s what you need to do. Get into the circles of power and influence and mix with the movers and shakers! No good even if you throw in your entire life’s savings, that’s way too little to make any kind of impact. Be a big noise in the administration and get on good terms with the occupying power that is the functional government, you can do way more good by becoming a really good pal of the Emperor and talking to him about things like large-scale irrigation projects. That aqueduct project Procurator Pilate ordered, now; that’s the kind of useful thing I mean, taking the corban money just lying around in the Temple treasury and putting it to real practical work!”

    • FeepingCreature says:

      I think people are mostly first-level good and second-level indifferent. People will follow their morality, but they don’t, as far as I can tell, care much about maintaining, in either sense, their morality. They’ll take opportunities to do good if they come across them, but they will not take opportunities to do good better, because those opportunities won’t set off their sense of moral goodness. They won’t harm people but they’ll consume drugs that’ll make them more likely to harm people, knowingly. They are not reflectively consistent, or interested in becoming so, or cognizant of why it would matter.

      They are following morality as a routine, not a goal.

      • Amused Muser says:

        ” …or cognizant of why it would matter.”

        Not to put too fine a point on things, but, well… why would it matter?

        I tend to think of your description as being very accurate, but also very reasonable — why would people want to treat morality as a goal, when treating it as a routine is so often more convenient?

        • Desertopa says:

          From a societal standpoint, people are almost certainly worse off for not trying to implement their morality more seriously. There’s a free-rider dilemma here, because it makes sense for people to settle for treating morality as a routine themselves, since it’s less demanding. But in terms of outcomes for everyone involved, it probably does, in fact, matter.

          • FeepingCreature says:

            Of course, this only matters if people actually care strategically about outcomes. I feel like you’re modelling routine morality as an optimization to avoid effort; that’s not what I was saying. What I was saying was that morality is primarily a routine, which some few people, for some reason or other, reverse-engineer into a goal.

    • I think it’s probably true, at least to some extent. I never gave a fig about philanthropy until I was in a position where I had a) some degree of financial independence, and b) the time to learn and think carefully about life. Maybe I’m an anomaly, but it strikes me as entirely reasonable that once people take care of the basic necessities, lots of them will inevitably keep climbing Maslow’s pyramid to the self-actualisation peak.

      That’s why my main focus right now is on championing the virtues of frugality and putting people on the path to financial independence. I see it as a natural pathway to EA (and related ideas). People are initially driven by self-interest, but they quickly find out that having heaps of cash and spare time isn’t actually particularly satisfying or fun, and start finding other ways to create meaning.

      I like Jordan Peterson’s mantra: Sort yourself out first. There are lots of responsibilities directly within your own grasp. Every day, you can tackle trivial tasks to make the domain of your immediate experience a little bit less chaotic. Keep slaying bigger and bigger dragons. Once you’ve put your house in order, then you can start to work in the community. By that time you’ll actually have some power, and some self-confidence, and some competence. (You’ll also be much less likely to hit something complicated with a stick and claim that you fixed it.)

      • sconn says:

        Thanks for justifying my life choices. 😉

        My husband and I had a disagreement about charity. I wanted to start giving right now, he wanted to get out of debt first. Our life is overwhelming in many ways and I feel guilty for getting a pint of ice cream because people are starving …. but he pointed out, reasonably enough, that if I try to give what I don’t really have (both cash and emotional resources) I’m not going to give as much as if I get myself sorted first.

        I caved, but I really hope that once we’ve gotten out of debt and invested the money and so forth, that we still want to give it away. Given how many of my past self’s goals I abandoned, I don’t trust my future self very much.

        • Ha, you’re welcome. The strength of your intentions is pretty impressive! I had no debt and six figures in savings, and I still had to fight hard to convince myself to start giving now, rather than wait until I’m fully financially independent.

          I think you and your husband definitely made the right choice though. Most people abandon their goals because they weren’t well thought-out, or they pivoted to focus on better opportunities, or they didn’t have the systems in place to achieve them. As long as your underlying beliefs and values are consistent over time (i.e., EA isn’t just a whim or a flight of fancy), you’ll surely end up helping in some fashion.

          All the best with your financial journey!

      • scoop712 says:

        As the person in question: this is basically what I meant. Very aware it may in fact be baloney, but it does honestly seem true to me.

        • I hope so. I wonder if anyone has any relevant research to share? Anecdotally, people in the communities I’m part of (early retirement, personal finance) have a higher-than-average awareness of EA, and seem to be receptive to the idea.

          • Dragon God says:

            Seems overly optimistic:
            According to EY (or someone else on LessWrong), optimism is ordering your preferences over future states of the world, then assigning probabilities according to that preference ordering.

            I think you prefer a world in which people desire to do good, and then base your predictions of reality on that preference.

      • jakubsimek says:

        I disagree and think some heuristic like tithing is better. I understand the math behind “pay your debts first” or “invest in order to give more in the future”, but these strategies are rational without being meta-rational, in the sense of Newcomb’s paradox and “one-boxing.” Maybe a simpler explanation would be to think about the notions of antifragility and black swans – rare but extremely impactful events: one can pay off all debts and get hit by a car right on the day one wanted to make one’s first donation. Tithing or some similar rule is a paradox and not as “rational” as paying debts faster – but it also eliminates some risks and is less fragile.

    • Yosarian2 says:

      I think it’s true of most people. I think most people basically want to do good, and that usually when they do not it’s due to their own “problems”, either internally or in terms of the situation they find themselves in. All people have the capability to be incredibly coldly pragmatic about getting what they need and the capability to do horrible things when they feel they need to, but despite that, in my experience most people seem to tend towards being good when given the space and freedom to do so.

      • Dragon God says:

        Seems overly optimistic:
        According to EY (or someone else on LessWrong), optimism is ordering your preferences over future states of the world, then assigning probabilities according to that preference ordering.

        I think you prefer a world in which people desire to do good, and then base your predictions of reality on that preference.

  4. fortybot says:

    > to driving predator species extinct in order to relieve the suffering of prey (yet to be analyzed for effectiveness).

    Wouldn’t that have a hugely disruptive effect on the ecosystem, likely resulting in massive famines (as all the primary consumers eat everything)? Is (number of suffering individuals)*(magnitude of suffering) even a useful metric for measuring suffering?

    • Mary says:

      For instance, being killed by a wolf or hawk or cat is MUCH faster than death by starvation.

      • fortybot says:

        In the article they link a “difficult to watch” clip of a lion killing a water buffalo. There are at most 3 seconds of fear and 2 seconds of suffering. I don’t think that really justifies the extinction of lions in the wild.

    • Ozy Frantz says:

      I promise that if you can think of a criticism in thirty seconds we are probably also going to think of it and put it in our paper, especially since people keep telling it to us. (Not speaking for my employer.)

      • onyomi says:

        What about “big cats are awesome”?

        • Ozy Frantz says:

          I for one have no objection to maintaining a breeding population of charismatic megafauna in zoos.

      • fortybot says:

        I’m sure those are some good reasons. Wanna share?

      • HeelBearCub says:

        Ozy, this is bug nuts.

        Seriously, my admiration for EA dropped an exponent.

        • Deiseach says:

          My problem with this is, in part, that it takes “animals have as much moral worth as humans and so humans have no right to exploit them, including for food or pleasure” then ramps it up to eleven million by making humans the ultimate arbiters of what animals, if any, live or die, if any animals survive at all, if we do permit certain animals to exist we will micro-manage their lives to a nanometer with monitoring and control of their environment, and finally animals have to engage in a beauty contest to be “charismatic” enough to win our approval for their continued existence (oh yes, we won’t eliminate lions as a predator species because I loved the Lion King as a toddler, so we’ll keep a few prime specimens in zoos).

          I mean, this is exerting moral influence with our thumb on the scales making us the only moral agents worth considering to the absolute maximum and it makes keeping battery farm chickens for egg-laying look like nothing in comparison. Humans should not kill animals because suffering so we’re going to kill all the animals because suffering. Sorry, I am plainly of too low an IQ to grok the logic of this.

          • rlms says:

            I think it is just taking the utilitarian focus on the value of happiness/suffering in life, and disregard for the intrinsic badness of death/non-life, to an extreme. It’s the opposite of the perspective that views euthanasia/abortion/etc. as wrong based on a focus on the badness of death/non-life over the badness of suffering in life.

          • Deiseach says:

            It is quite a consistent position to be against suffering initiated specifically by humans.

            I can see that. Wild animal suffering is not, however, initiated by humans. And taking the step to decide who lives and who dies is surrendering the point that humans are not just another animal, but one with the most moral worth/weight, the least like other animals (because there are no concerned whale or octopus groups discussing ‘what about the suffering predators cause?’) and the ones with ultimate control over the world (in a strictly non-religious sense).

            So if you give in on that, I think you do not have any argument about “humans should not eat meat” or “humans should not raise and kill food animals” from a moral basis. You can argue that it’s bad for health etc., and you most assuredly can argue about unnecessary cruelty and suffering in intensive stock rearing, but you can’t argue “humans have no right to kill other animals” when you’ve just been discussing “should we wipe out all predator species”.

          • Dragon God says:

            I reject the concept of animals having moral worth.

          • anonymousskimmer says:

            @Dragon God

            No entity has “moral worth”; the only things we (the creators of value judgments) can assign moral worth to are our relationships to other entities. We can’t assign a moral value to the entity itself. Period. We lack the godly power necessary to imbue value in an entity itself (such that all would see the inherent value now imbued within that entity).

            The absolute closest we can come to that is being the proximal cause of an entity’s existence, but even that isn’t creating any inherent moral value *in* that entity, it’s merely enabling various morally valued relationships to exist.

            People are free to assign value to their relationship with any entity they wish, even a fictitious spaghetti monster or a real bacterium. That’s the sole power of having the ability to make moral judgments, and it’s an absolute and inalienable (save through death) power embodied directly in the moral judger themself.

        • Jiro says:

          Let me say it a little bluntly: people who claim to be concerned about reducing animal suffering usually aren’t, and are virtue signalling. Even the ones who aren’t are doing it for irrational reasons. They don’t take it seriously. Actually being concerned about animal suffering leads to some extremely bizarre results.

          The wrong lesson to learn from this is that you should accept the bizarre results; the right lesson to learn from this is that concern for animal suffering has bizarre implications and should be avoided.

          Remember the idea that you should prefer whale meat to beef and prefer beef to chicken because fewer animal lives are lost per serving of meat? That’s another example of taking animal worth seriously with a bizarre result–and unlike the bizarre result of killing predators, this one is perfectly safe to implement. Yet most vegetarians would still be horrified at it, because it may be logical, but they aren’t.

          • Tibor says:

            Once again playing the role of the (sort of cuddly and smelling of weed rather than sulfur) devil’s advocate – it is quite a consistent position to be against suffering initiated specifically by humans. If humans stop killing animals and let animals kill each other as usual – particularly if they do it by replacing all the good stuff that comes from killing animals (meat) with artificial and possibly cheaper means of production – then we have no net loss of utility on the one hand, and no unforeseen, possibly dramatic consequences on the other.

            Perhaps more importantly, you don’t have to fall into the trap of quantifying utility of existence of various animals. You don’t have to ask whether a lion’s life is worth more or less than the life of an antelope. If you transition to vat-grown meat (most realistically because it is cheaper), you actually dial back the human interference in the ecosystem. If you eliminate predators and replace them with human culling of populations (something that often has to be implemented anyway, particularly in Europe where we have successfully exterminated wolves, bears etc in most places…although this proposed idea would mean doing it on a whole different scale), you increase the human involvement.

          • Jiro says:

            It is quite a consistent position to be against suffering initiated specifically by humans.

            That’s the point of the whale meat example: It’s another bizarre consequence of taking animal suffering seriously, but this time the objection “I only care about suffering initiated by humans” doesn’t apply. But vegetarians don’t like that one either.

            Perhaps more importantly, you don’t have to fall into the trap of quantifying utility of existence of various animals

            That makes the whale meat example even worse. You can no longer say that you consider a whale to be worth more than 150 chickens.

          • Tibor says:

            @Jiro: I don’t think you need to compare whales and chicken. You just transition to vat-grown meat entirely.

            Btw, my personal view is something like this – if you can provide the chicken with a life worth living, i.e. if it’d prefer its short existence to nonexistence, then raising it for meat is a moral good (it has costs independent of the chicken’s own utility, but those are mostly reflected in the price of the meat). Killing a whale for meat does not come with the corresponding creation of utility for the whale (it would have lived anyway… unless you can come up with some sort of a whale farm). The conclusion is that raising animals for meat is, or at least can be, morally a net good, whereas killing wildlife isn’t (unless it is necessary for culling reasons due to the lack of natural predators).

            How do you decide whether a chicken’s or a cow’s life is “worth living”? My heuristic is to observe what those animals normally do on their own and if you provide them with most of that then you can be reasonably sure they get a good deal out of it – better than in the wild (which is not that high a standard of living, at least not on average…you might of course be lucky whereas the life on the farm is very egalitarian). By those standards a lot of farming is a net good.

          • Jiro says:

            You can’t farm whales, but you can still farm cows, so the weirdness still happens between cows and chickens. If one cow has 20 chickens worth of meat, and you don’t think cows are vastly more morally important than chickens, you should consider it to be a lot less immoral to eat cows than to eat chickens.
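
            To put rough numbers on that (the carcass yields below are my own ballpark assumptions, purely for illustration):

            # Rough lives-per-kilogram arithmetic for the cow/chicken point.
            # Yield figures are ballpark assumptions, not sourced data.
            meat_yield_kg = {"cow": 220.0, "chicken": 1.5}

            for animal, kg in meat_yield_kg.items():
                print(f"{animal}: ~{100.0 / kg:.1f} lives per 100 kg of meat")

            # cow: ~0.5 lives per 100 kg; chicken: ~66.7 lives per 100 kg.
            # Unless one cow matters ~150x more than one chicken, eating beef
            # costs far fewer animal lives per serving than eating chicken.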

          • Dragon God says:

            @Tibor: Chickens have no preference over living or not living; their cognitive abilities are too underdeveloped for that.

        • Dragon God says:

          This made me commit not to support EA—I’ll devote my resources towards other more pragmatic causes (like Science education).

      • raemon777 says:

        Sure, but I would feel a lot more comfortable if, when this came up in conversation, the “humans have tried this before and fucked it up” objection were always put front and center, along with “and we’re addressing it somehow, or we’re not going to do it if we can’t find a way around that”, ideally in such a way that it’s hard to forget and/or quote out of context.

        (This is mostly a PR concern rather than a “do I even trust the Wild Animal Suffering EAs?” thing, although it’s a bit of the latter too)

        • Ozy Frantz says:

          If anyone is saying anything about reducing wild animal suffering other than “it is important that we do more research because we don’t know what interventions to recommend yet” then at-message me and I will yell at them. Talking about the merits of predator control can wait until someone has actually written a paper summing up the costs and benefits of predator control.

          • Douglas Knight says:

            Here is someone making a recommendation.

          • Deiseach says:

            Ozy, if we’re thinking about fucking around with predator control (even just thinking about it), then what does that say about the moral rights of other animals? Because predators are not part of our food sources, we don’t raise them for meat and other goods, we don’t exploit them in that fashion and yet we’re still talking about reducing numbers/exterminating them. And this time from a “disinterested” perspective rather than “they are in areas where we are developing for resources, they attack our livestock and attack us, they are under pressure from our expansion” which is not disinterested but I think is more colourable precisely because it comes from a place of interest as with any other competition for a niche in the environment between species.

            That does kind of blow the “humans have no right to kill animals for their own use or interests” out of the water, does it not? And if it does not, how not? The suffering argument still depends on “we have the right to intervene because…” Well, because what? Because we’re the ones who can, and we’re the ones who can because we are self-aware, intelligent, and miles ahead in development of any other species on this planet. We are not talking the same moral equivalence between us and them. Even if we switch out “right” for “duty to intervene”, then whence comes that duty – except from our superior moral status?

          • briantomasik says:

            @Deiseach , normal adult humans are “miles ahead” of people born with severe and permanent intellectual disabilities. Why care about their wellbeing?

          • Dragon God says:

            Because it is better for all humans to care about the wellbeing of all humans than of a select few.

          • Deiseach says:

            Why care about their wellbeing?

            Brian Tomasik, I care about other humans because they are humans and I believe humans have more moral worth than, and are sufficiently distinct from, other animals for that to be a meaningful distinction.

            Were I to adopt the values of the pro-choice movement, I would not care; the actual overrides the potential, the severely brain-damaged or otherwise impaired are only potential lives, and it’s perfectly fine to put an end to them:

            Helga Sol Olafsdottir counsels women who are considering ending their pregnancy over a foetal abnormality.

            She says she tells mothers: “This is your life. You have the right to choose how your life will look like.”

            She told a reporter: “We don’t look at abortion as a murder. We look at it as a thing that we ended.

            “We ended a possible life that may have had a huge complication… preventing suffering for the child and for the family. And I think that is more right than seeing it as a murder — that’s so black and white.

            “Life isn’t black and white. Life is grey.”

            By that logic, there is a difference in moral worth between the more and the less developed entity. By that logic, it is my right to kill another entity for my convenience. (You may call it “utility” or “hedons” or what you will if you find “convenience” an unpalatable term, but the basic idea is that permitting a child with a disability to be born and raising it will be massively more inconvenient for/reduce the utility of the life of the parent(s) because it imposes great burdens on them, hence the right to abort such a flawed life). By that logic, I can kill a cow for meat if meat eating fits in with how I choose my life will look like.

            However, by the logic of the animal rights people, there are no meaningful differences between humans and animals. That being so, it means humans have no right to decide what animals live or die or interfere with their lives in any way, since we have no basis to make decisions for others. If we’re reducing animal suffering because it makes us feel better and the idea of suffering distresses us, that is something that applies only to humans. We are not considering the wishes of the animals in this scenario, nobody is saying “We’ll ask wolves if they think they should be rendered sterile”.

            So if I’m not distressed by animal suffering, you cannot impose your distress on me. You can’t tell me not to eat meat if you feel disgusted by the very notion of eating meat and I don’t. We both of us are putting our values and feelings and beliefs ahead of those of the animals, we both are judging the weight of our principles by our belief systems and whether it’s on the side of “keeping chickens for eggs and meat” or “letting prey species die off”, it’s our feelings we are concerned with, not those of the animals.

            If we are claiming the right to kill animals to prevent suffering, we are claiming the right to kill animals, full stop.

      • [Thing] says:

        I promise that if you can think of a criticism in thirty seconds we are probably also going to think of it and put it in our paper, especially since people keep telling it to us. (Not speaking for my employer.)

        Oh good, then I assume you’ve already stopped to consider which predator species is currently causing the most suffering at this point in history …

        • Jiro says:

          Malaria parasite?

        • Iain says:

          To be fair, they are working hard on converting said predator species into herbivores…

          • vV_Vv says:

            But wait! How do we know that the plants don’t suffer?

            brb detonating the false vacuum decay device…

          • sconn says:

            ….”shedding, as he put it, the green blood of the silent animals, and in the future man would live on nothing but salt. Then a pamphlet was put out titled ‘Why Should Salt Suffer?’ and there were more problems.”

            –GKC, The Napoleon of Notting Hill (misquoted from memory since I don’t own the book)

      • ventablackbear says:

        I’m getting a strong feeling of disgust at the idea of extincting a species because it’s a predator.

        Does this idea imply that only strictly herbivorous animals will be allowed to live in the wild?

        Every human has to go vegan and stop hunting (lest ye be killed), right? Since we’re concerned about the suffering of prey, we also have to consider other ways in which human society causes suffering to prey animals aside from directly killing them for food, right?

        What about the suffering of the predators while they are being killed? I guess this is weighted against all the present and future suffering of prey that will be killed at their claws?

        It strikes me as something I would never attempt or endorse because of how fragile and interconnected I perceive the ecosystem to be. Can we say for certain that there will be no catastrophic side-effects of eliminating all predator species? If we can’t I feel like that’s enough to not try it.

        I would also like to point out that, if a superior alien species existed and knew about Earth and wanted to implement this idea, we would be extincted first, which doesn’t really feel nice.

        • Ghatanathoah says:

          Does this idea imply that only strictly herbivorous animals will be allowed to live in the wild?

          I doubt even that would really be allowed. There are tons of ways for them to suffer without predators, including injury, starvation, and disease.

          Every human has to go vegan and stop hunting (lest ye be killed), right?

            I’m not sure about this. If you got rid of all the predators you’d need something to cull the population, or else the animals would suffer from starvation and overcrowding. Hunters trained to kill quickly and humanely would be a necessity.

          In the long run I bet the best thing to do would be to cook up a different motivational system besides pain/pleasure. Like maybe pleasure/more pleasure. Then genetically engineer it into the prey animals and gradually replace the natural animals with the genetically engineered ones.

          • Naclador says:

            Or, while you are at it, just engineer the prey animals to feel pleasure at being killed by a predator. This way you could keep the ecosystems intact and still increase pleasure and overall utility. Think of the cow in “The Restaurant at the End of the Universe”.

            In my opinion, this whole story about predator control stinks to high heaven of human hubris and megalomania.

          • Nornagest says:

            Doesn’t that lead more or less immediately to an extinct prey population and very fat predators?

          • Naclador says:

            Not necessarily – you could keep the prey’s instinct to flee and be scared of predators, but still make the process of being killed more enjoyable by a sudden, massive endorphin spike. This way hunting would still take effort, but the suffering of prey would be minimized.

            Aside from that, I thought the irony on my side was obvious.

      • vV_Vv says:

        The same way that the Book of Job addressed the Problem of Evil?

      • jakubsimek says:

        I would suggest you read Antifragile by Taleb – he uses exactly these metaphors: modern people are like zoo animals – very fragile. I don’t agree with everything he says and don’t like his troll persona, but I like his ways of thinking, and the fact that if he could name one adversary and opponent in the world it would be Ray Kurzweil. Kurzweil reportedly takes 100 pills a day of various drugs; Taleb is all for “via negativa” – just remove some habits, objects, or foods for a while to get the change, variance and challenge. Antifragility is about “growing your muscles” through damaging them (up to a point). So pain is inevitable at times, and pain and feelings in general are just a way for our body to make probabilistic calculations (Harari writes a lot about this). So the idea of putting predators into zoos is, through the lens of antifragility and antifragilizing systems, very suboptimal, I would say, and wrong. It’s like Eliezer’s AI joke about “getting a grandma out of a burning house by blowing it up”: after humans put themselves in a zoo cage, they want to put all the animals there as well and annihilate the wild ones. Sad!

    • Loquat says:

      I’ve heard that type of theorizing before, though the version I heard involved either re-engineering the predators to eat plants themselves or feeding them the (insert predator species here) version of mealsquares. It also seemed to involve extremely in-depth human or AI management of nature, as obviously someone has to restrict the birthrate of the prey species to prevent massive boom-bust famine cycles.

      Also, I shudder to think what they’d do if they ever become convinced plants, fungi, or bacteria can suffer.

      • fortybot says:

        > Also, I shudder to think what they’d do if they ever become convinced plants, fungi, or bacteria can suffer.

        And what happens when we discover life which is entirely alien to us (extraterrestrial or otherwise)? How do we define pain for non-sentient beings? If we define it as some sort of response to damage or threat of damage, then almost all forms of life experience pain. At some point it’s just a chemical reaction indistinguishable from other processes of life.

        To me, this proposal sits in an awkward spot where it doesn’t go far enough based on its premise. Why stop at all carnivores and omnivores? Just sterilize all animals completely. That way no animals will even exist to feel pain. Continuing on, we should also eliminate all forms of life, as they do nothing but suffer and cause suffering. Leave living for the robots who can subsist off of solar power or something.

        • There is a significant discussion about whether to accept negative utilitarianism; it seems your analysis assumes it. If you accept it, then yes, the benevolent world-exploder is optimal – which is a large reason people find it silly: https://en.wikipedia.org/wiki/Negative_utilitarianism#The_benevolent_world-exploder

          • fortybot says:

            I think it’s reasonable to assume negative utilitarianism, given that the OP’s argument is for the removal of a significant amount of organisms in order to reduce suffering. And yes, I do find it silly (mainly because I rather want to live, and I don’t care enough about other people to go against that).

          • sconn says:

            Denying negative utilitarianism is the main reason I am not a vegan. If humans didn’t consume beef or milk, we would probably let the cows die out (or nearly), and I *like* cows. If we could just raise them more humanely than we do, then they could have a nice life doing cow-type things and then one bad day (or a bad few seconds) before becoming a tasty steak for me.

            Now humanely raised meat, currently, costs a lot more, but I don’t really feel any guilt at all about eating it. The alternative for that cow isn’t a happy life in the wild (there is no “wild” for cows) but never existing at all. I feel like I’m giving it something.

            (I’m not a purely positive utilitarian either, or I would still be having kids. I feel like balance is desirable.)

      • KeatsianPermanence says:

        It also seemed to involve extremely in-depth human or AI management of nature, as obviously someone has to restrict the birthrate of the prey species to prevent massive boom-bust famine cycles.

        1. Considering AI is human-developed, how can we suppose that some sort of super-sentient AI would comprehend ecosystems at a level beyond that of humans, when it is limited by human observations and perspectives? And, considering the limited information available and the often disastrous outcomes of ecosystem interventions, what reason is there to believe that AI management will be appreciably more successful than passive/natural management?

        2. At what point do we cross the line into imposing a morality that is arbitrarily pleasing to us but ultimately harmful?

    • nimim.k.m. says:

      I’m now thinking about the numerous times humans have managed to disrupt or outright destroy various ecosystems by accident, by unchecked greed, or by attempts to create minor changes that were not very well thought-out and ended up being much larger changes than anticipated.

      A coordinated attempt to gather enough resources to purposefully mess up the whole ecosystem sounds actually quite really frickin’ scary, because with our current technology and resources humankind is more than capable of doing precisely that (unlike creating a super-AI). At least offing all large predators: insects might be more difficult. And removing all the predators from nature because animals eating other animals is icky according to some misguided human notion sounds very much like the kind of crazy supervillain bioterrorist plan that should be shut down sooner rather than later. It would be an existential risk to the biosphere as we know it, and it’s a bit troubling to see people arguing about it in a nonchalant way. (One man’s philosophical argument about nature and environment is another man’s reason to start sending explosive devices to researchers doing wrong things.)

      Clearly we need much stronger norms and ways of sanity-checking proposed ethical systems before some bright folks manage to get too effective about bringing a catastrophe (or ‘systematic change’) that can’t be canceled. Why worry about possible superhuman AIs being out of tune with human morality in the future, if it’s a plausible risk that a bunch of humans who are out of tune with baseline ethics are capable of wreaking havoc today?

      I’ve always regarded the paperclip-maximising super-AI as a distant possibility, but not something one should be really scared about (at least until we have a clearer path to effective self-replicating factory technology on the horizon). However, after reading this report the risk got a bit more real to me, if you take the view that well-functioning powerful human organizations are sufficiently similar to AIs in these kinds of nightmare scenarios. How do we prevent a group of humans from becoming too convinced that their particular rational approach to paperclip maximising – sorry, reducing suffering – has been shown true to their satisfaction, so that they are confident about dismissing all the complaints and proceeding with e.g. removing all predator species from nature?

      • Aponymouse says:

        > offing all large predators
        I would imagine small predators cause comparatively more suffering since they’re more numerous (assuming comparable suffering coefficients for small and large prey).

        Also, removing predators would necessarily mean introducing total population control for the prey that does not involve killing it (otherwise it would obviously multiply too much, eat up all the food, and starve). I’m sure they’re thinking about it, of course, but I _really_ doubt there’s a feasible solution to that problem at the current technological level.

      • Unirt says:

        I don’t know, maybe it’s not so horrible or undoable at all. After all, large carnivores have been purposely kept at minimum numbers in many countries for centuries; the environment as we know it hasn’t really collapsed as a result. The numbers of herbivores are being controlled by hunting. Most people seem to agree it’s a good arrangement if you compare it to alternatives (large carnivores being so numerous that they steal children from humans; herbivores being so numerous they suffer famines and epidemics). I wouldn’t be at all surprised if it was less awful to die from a gunshot than to get torn to pieces by wolves. I also don’t think most people would object to reducing the cat populations (currently the most numerous predator in our woods), and cats are perhaps the most horrible predators of them all, toying with their prey and all. If the reduction could be managed humanely, like catching the cats and sterilizing them, it would be even more popular.

        • Loquat says:

          It’s reasonably doable to control the population of large animals like deer via means we have available today. It’s a lot harder to control the population of small animals like mice in the absence of small predators, and I don’t know how you’d handle insects at all. (Though if insects have to be protected, all the bug-eating rodents and birds will have to be gotten rid of along with the cats and wolves, so that mostly takes care of the small animal population problem)

        • Deiseach says:

          After all, large carnivores have been purposely kept at minimum numbers in many countries for centuries; the environment as we know it hasn’t really collapsed as a result.

          Oh yeah, and I have no problem with that. The extinction or reduction of predator species came about because they were competing with us in a sense; they were threats, they were on territory we were expanding into, they attacked our food animals, etc.

          What I find repugnant about the current proposals is that it does not use that as any kind of consideration at all because it’s not a matter of interest; it’s deciding that we get to be the arbiters of the world and the final authority on the lives and deaths of other species. Which fine, great, but then go all the way with that. If I can decide for myself that a wolf should not be allowed to breed and should be painlessly put to death at a certain point in its life with no regard to anything the wolf may feel on the matter because I judge that wolf’s life to be negative in utils (or whatever), then I equally get the right to decide “No, I like eating real beef steaks” and to continue purchasing meat from butchers who are supplied by farmers raising drystock.

          You can’t have it both ways. If you take the right to yourself to decide on life and death for animals, then you take it all the way. The fact that in one case you want to exercise it to prevent suffering is not very relevant; you only care about the suffering because it upsets you. If it doesn’t upset me or him or them, you can’t force others to feel that upset because you don’t have the right to do so. And you most certainly cannot say “It’s okay for me to pay a professional killer to administer a lethal injection to a wolf on my behalf to end that wolf’s suffering because I think the wolf is suffering and it upsets me and I prefer not to feel upset so I get pleasure from the cessation of that suffering, but it’s not okay for you to pay a professional killer to slaughter a cow on your behalf so you can get food from its carcass because you get pleasure from the taste”.

    • Ketil says:

      We could test this out by introducing species to places where they have no natural predators. Rabbits to Australia, say.

    • briantomasik says:

      My personal opinion (not necessarily endorsed by anyone else) is that the net impact on wild-animal suffering of eliminating a given species from an ecosystem is often unclear, since ecosystems are so complicated. Instead, I favor those human environmental policies that reduce plant growth so that there are fewer animals in total. Environmentalists sometimes agree with reducing plant growth, such as opposing manmade irrigation of deserts, restoring anthropogenically mesotrophic lakes to oligotrophic status, and reducing use of artificial fertilizers in agriculture.

      • Progressive Reformation says:

        I don’t get it. Eliminating a given species will have unclear effects, but reducing plant growth will not have unclear effects? I really don’t want to be rude here, but this is cray cray.

        I’m not sure whether this is even a real question or a rhetorical one by this point, but given what you’ve said, I have to ask. What’s your opinion on Agent Orange?

        • briantomasik says:

          Reducing plant growth has some side effects, such as on carbon storage and so on, but within the ecosystem itself, the general effect is to reduce the amount of food available for higher trophic levels, which generally means fewer animals will exist. This effect is fairly obvious in some cases; for instance, converting a barren sandbar to a forest increases populations of most animals: https://www.mnn.com/earth-matters/wilderness-resources/stories/indian-man-single-handedly-plants-a-1360-acre-forest

          In contrast, changing a single species within an ecosystem has unclear effects because other species will take its place, predator/prey dynamics will shift in complex ways, etc. It’s very unclear if there will be more or fewer wild animals after the change, what they’ll die from, etc.

          Agent Orange, in its most famous uses, causes significant harm to humans, and there are much less objectionable ways to reduce plant growth on the margin (such as land-use change), many of which are done already for human benefit.

          • Progressive Reformation says:

            Oh, so you’re okay with significant ecological changes (different kinds of animals, different ecologies, etc.) as long as there are fewer animals in total? That is, if we turned the rainforest into a desert, you would view that as a success?

            On Agent Orange, I don’t feel that you’ve answered my question. I wasn’t asking whether or not you think there were better alternatives; I was asking whether you think the deforestation of Southeast Asia was overall a good thing (given that it would not have happened without Agent Orange or Operation Ranch Hand).

    • KeatsianPermanence says:

      I posted this in reply to another comment but I think it’s perhaps more relevant here.

      Actually, there is good data on the effects of removing predators from ecosystems. For example, predators (e.g. wolves) were systematically removed from the midwest starting in the mid-1800s, such that many native predators were on the brink of extinction around the 1980s. At that point prey populations – namely deer, as their numbers are actively monitored – began to balloon and exceed the carrying capacity of the environment. Up until the mid-2000s it was fairly common to find starving fawns in the early spring, as head-level leaf buds were just enough to support most fawns at a starvation level. In response, hunting campaigns through increased tag limits have helped alleviate some of the environmental pressure. However, this is just an example of replacing a non-human predator with a human predator. The predator niche provides a positive check on prey populations to avoid a sort of localized Malthusian Catastrophe.

      It seems like the anti-predator movement is exceptionally naïve. Every basic ecology course covers the necessity of predators in the predator-prey dynamic. So, what do they expect will happen once the predators have been eliminated? All the prey animals ravaging ecosystems in pursuit of a starvation subsistence, or a gradual fade into nirvanal non-existence? Surely no utilitarian would want to trade isolated instances of short suffering for widespread long-term suffering?

      • raw says:

        To me the concept of ‘wild animal suffering’ is human hubris, as:
        a) anthropomorphization: How do we know an animal in its natural habitat is suffering? Does a whale suffer when it goes months without food before it returns to polar regions? Or is that just its normal life? Does a gnu suffer more if it gets killed by a lion or if it dies from a parasite infection? Is suffering defined by cortisol levels?

        b) even if we accept that suffering exists and humans could change it, how do we know predators are the largest source of suffering? In many animal species males fight, risking severe injuries and death, for the right to procreate. Maybe alpha males of their own species are their biggest source of suffering? (see for example https://doi.org/10.1371/journal.pbio.0020106 or other works by Robert Sapolsky)

        c) And as pointed out by others, even if predators are the problem, removing them will severely disrupt an ecosystem, leaving humans in charge of keeping the balance. History tells us that we usually fail miserably at this, as ecosystems are really complex.

        Applying human concepts or ideas (e.g. suffering, morality or fairness) to wildlife seems to me very far-fetched, and don’t even get me started on applying these concepts to particles.

        • Jiro says:

          To me the concept of ‘wild animal suffering’ is human hubris

          The concept of wild animal suffering comes up because animal suffering is used to justify vegetarianism. And rationalists of the EA type have a habit of taking ideas seriously, so when vegetarians say they should stop animal suffering, the rationalists actually believe them, and figure out all the weird implications. (The fact that they don’t distinguish between action and inaction doesn’t help, either.)

        • random832 says:

          Is suffering defined by cortisol levels?

          And if so, isn’t the solution animal wireheading?

        • briantomasik says:

          a) Those are good questions, which is why we need more people looking into things like that. 🙂

          b) I don’t know anyone who says predation is clearly the biggest source of wild-animal suffering. We should explore and enumerate many different causes of suffering.

          c) I find it plausible that removal of wolves from Yellowstone reduced total wild-animal suffering by reducing plant growth, which reduces the total number of animals that can exist in the long run. That said, I personally don’t think predator removal is near the top of the list of most promising ways to reduce wild-animal suffering (though other people may disagree).

          • Deiseach says:

            I find it plausible that removal of wolves from Yellowstone reduced total wild-animal suffering by reducing plant growth, which reduces the total number of animals that can exist in the long run.

            Our tenderness for the suffering of animals leads us to kill them so they cannot suffer because they will never have lived. Our tenderness is so inflamed because it is our neurosis, our thin-skinned inability to bear a harsh touch, and the pain from the contemplation of suffering that causes us to burst into tears over the very thought of it then leads us to contemplate massacres so that there will never be visible or notional suffering to twitch our over-sensitive nerves.

            Flannery O’Connor was right.

            When tenderness is detached from the source of tenderness, its logical outcome is terror. It ends in forced-labor camps and in the fumes of the gas chamber.

            No gas chambers in our modern compassion, though; we will reduce plant growth so species can quietly starve to death. Until that seems too cruel, and we turn to ‘humane’ lethal methods, and render the entirety of the natural world an abattoir.

  5. mupetblast says:

    “I figured the speaker named ‘Cashdollar’ was a hallucination, but she’s right there on the website…”

    The interviewer in this convo with an AI org is named Sarah Drinkwater. Perhaps we’re all sharing the same hallucination. Perhaps there’s something…in the water:

    https://www.youtube.com/watch?v=SZpINa6UvpQ

    • pontifex says:

      Perhaps there’s something…in the water

      You mean at the Futurological Congress?

    • Steve Sailer says:

      “Cashdollar Name Meaning: Americanized spelling of German Kirchthaler, from the field name Kirchtal ‘church valley’ + the suffix -er denoting an inhabitant.”

      • Deiseach says:

        from the field name Kirchtal ‘church valley’ + the suffix -er denoting an inhabitant

        With the happy, and etymologically related, coincidence that a thaler was a silver coin, the name of which by the circumlocutions of historical circumstance eventually gave rise to the word “dollar” and so we return where we started 🙂

  6. tomac100 says:

    Has the EA community ever addressed the tradeoffs between donating to charity and the good that money would have done being invested? Donated money, especially in EA charities that do malaria work, is mostly consumption, and doesn’t contribute to capital accumulation, which is how wealth is sustainably built by society. So the question that no one asks is: did Bill Gates do more good as an entrepreneur or as a philanthropist? As an entrepreneur he made just about every company in the world more efficient with his Office Suite, and he built enough wealth to spend on his charitable work. I think the answer is pretty clear.

    • Charles F says:

      There was actually an SSC post related to that a while back, see: here

      Not exactly the same as what you’re asking. I know I have seen some people talking about how much good you could do starting a company and employing people.

    • Deiseach says:

      Has the EA community ever addressed the tradeoffs between donating to charity and the good that money would have done being invested?

      If they’re steering people “Don’t be a doctor, become a management consultant instead!”, I think they’re doing just fine on the “accumulate capital” front.

      Topic for next meeting: workhouses, prisons already effective institutions for dealing with indigent, no need for bleeding-heart do-gooders to ineffectively raise donations to buy festive fruitcake

    • Aido says:

      There are two responses to this you’ll usually hear.

      1. You can generally anticipate the cost of saving a life to go up over time as “low-hanging fruit” is eliminated.
      2. You have to do the EV calculation taking into account the odds that you will in fact happen to be Bill Gates. It’s not enough to zoom into a success case and observe that in the success case it made more sense for the person to be an entrepreneur.
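
      A minimal sketch of the calculation in point 2, with entirely invented numbers (the probability, salary, and pledge figures below are placeholders, not anyone’s actual estimates):

          # Toy expected-value comparison: a long-shot startup path versus
          # steady "earning to give" on a salary. Every input is made up.
          p_big_win = 1e-4            # assumed odds of a Gates-scale outcome
          donation_if_big_win = 50e9  # dollars given away in the success case
          donation_if_bust = 0.0      # assume failed founders donate nothing

          steady_giving = 50_000 * 40 * 0.10  # salary * career years * pledge

          ev_startup = (p_big_win * donation_if_big_win
                        + (1 - p_big_win) * donation_if_bust)

          print(f"EV of startup path:  ${ev_startup:,.0f}")     # $5,000,000
          print(f"EV of steady giving: ${steady_giving:,.0f}")  # $200,000

      On these made-up inputs the startup path wins on raw EV, but the answer is hostage to p_big_win, which nobody knows; zooming in on a realized success case amounts to setting it to 1.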

      • Progressive Reformation says:

        My guess is that the results of a success like Gates are so dramatic that the correct answer is still to try your hand at entrepreneurship (presuming you have a halfway-decent business idea). Something something Saint Petersburg Paradox.

        • Aido says:

          As the EA community has grown in size it’s become less infatuated with surefire ways of doing good such as earning to give, and more willing to encourage its members to work on high-risk, high-upside ventures, so it’s quite possible something like this might be encouraged more as time goes on. Of course, this same willingness to go for raw EV is what is responsible for the drift of the community over time toward things like AI or wild-animal suffering.

          In the same vein, it would probably be important to take into account the consequences for the movement, in terms of perception and movement building, if the community consisted of failed entrepreneurs whose only distinguishing feature from the rest of Silicon Valley is that they talk about how, when they hit it big, they promise they’ll give everything away. Not saying it’s necessarily a bad idea, but I do think it’s probably the wrong choice for the movement in its early stages.

          • Progressive Reformation says:

            Sorry, I wasn’t being sufficiently clear. I meant that the improvement to the world due to the business itself was sufficient to try entrepreneurship.

            The existence of Microsoft and its products made many, many other companies far more efficient, allowed smoother scientific development, etc. which led to improvements downstream, and so forth.

            And potentially winding up with 50 billion dollars to give away can’t hurt either.

            [Of course, one can always consider the counter-factual — maybe Microsoft made the world worse because otherwise people would be using superior Apple products and something something network effects. But maybe not; and I think it’s clear that the average effect of a big tech company is positive.]

    • Ketil says:

      Has the EA community ever addressed the tradeoffs between donating to charity and the good that money would have done being invested?

      As pointed out, very few become Bill Gates. On average, you can set some estimate for return on investment, let’s say it’s four percent. Now, if it is better to invest money now and altruistically spend it in the future, you need a convincing argument for when you will spend it. Something has to change, otherwise you will just keep reinvesting indefinitely, while people die from malaria. And it has to be something disruptive, a definite vaccine against malaria, not just a vague “technology gets better” argument – technology will always get better.

      • sconn says:

        I would like to know if the return on investment I might expect is greater than or less than the increase in the cost of saving a life over time. If I invest $100 today, and 50 years from now I have $1000, that doesn’t do a lot of good if all the $100 ways of saving lives have been done already and now it costs $1000 to save a life.*

        *Numbers totally made up and not correlating at all to investment rates or costs to save a life, since I don’t know either.
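
        A minimal sketch of that race between the two growth rates (the rates and costs below are placeholders, per the footnote):

            # Toy comparison: invest $100 for 50 years versus donating it now,
            # while the cost of saving a life also compounds. Rates invented.
            donation = 100.0
            roi = 0.04          # assumed annual return on investment
            cost_growth = 0.05  # assumed annual growth in cost-per-life-saved
            cost_now = 100.0    # assumed cost to save one life today
            years = 50

            lives_if_donate_now = donation / cost_now
            future_fund = donation * (1 + roi) ** years
            future_cost = cost_now * (1 + cost_growth) ** years
            lives_if_invest = future_fund / future_cost

            print(f"Give now:      {lives_if_donate_now:.2f} lives")
            print(f"Wait 50 years: {lives_if_invest:.2f} lives")
            # Waiting wins iff roi > cost_growth; the ratio of the two
            # outcomes is ((1 + roi) / (1 + cost_growth)) ** years.

        On these invented numbers, waiting yields about 0.62 lives per $100 versus 1.00 now; swap the two rates and waiting wins. The whole question reduces to which rate is higher.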

        • Witness says:

          If all the $100 ways of saving lives are going to be exhausted in 50 years (even without your help!), wouldn’t it be nice if someone had invested wisely enough to come up with an extra $1000 to save another life?

          • Witness says:

            Note that something like this ties back into Scott’s anxiety about being “low-impact”. Even if the marginal value of additional doctors is currently low-impact, it seems obvious that we need some number of doctors, and there’s no reason Scott should be sad to be one of them.

          • sconn says:

            But, I mean, during those 50 years a lot of people are going to die of preventable stuff.

    • Yosarian2 says:

      I actually think that reducing malaria is an incredibly good long-term investment. When you reduce disease load, you make the whole community more economically productive. Children who don’t get malaria do better in school, and learn more. Adults who don’t get malaria are more economically productive at work, or better able to grow food on their farm, etc. These kinds of factors tend to create positive feedback loops that make it easier to deal with other issues.

      Malaria also has some specific features that make reducing it an especially good investment. For example, there’s a nasty side effect where having malaria makes you much more susceptible to being infected by HIV, which may be one of the reasons AIDS is such a huge problem on the African continent. Reduce the number of people infected by malaria, and you likely slow the spread of HIV, which has global impacts of its own.

      Diseases like malaria are an economic drain on the entire society, and when you reduce that drain, you increase the annual GDP growth rate of the entire country. You’re helping to eliminate one of the main “poverty traps” that hold the third world back. Eventually, as more and more of the third world comes out of poverty and is able to contribute more to the global economy, it will tend to speed up the scientific and technological progress of the entire human species.

      • Salem says:

        It’s not at all clear to me that eliminating malaria will noticeably increase the annual GDP growth rate of the region, at least in the long term. Your argument above does little to argue against the naive position that it will be a one-off effect on the GDP level. There may be some small, second-order effects on growth, but honestly, it’s really hard to argue that growth rates in tropical Africa are held back by lack of human capital to invest in. Unless malaria prevention magically fixes the institutional problems, nothing doing. More likely, effective elimination of malaria will only come once the institutional problems of these countries are fixed – meaning that malaria elimination will be more caused by increased growth than causing it.

        But I agree with your overall claim that it looks like a great long-term investment.

      • Dragon God says:

        Meh, living in the third world, I think you’re being too optimistic. I feel there are much more important (and fixable) problems holding the third world back than malaria.

        • rlms says:

          Such as? I’m sure effective altruists would like to know! (Also, reading some of your other comments here, you seem to think that believing that animals don’t have moral relevance is incompatible with EA. It’s not! I know at least one EA person who strongly holds those views, and lots of non-vegetarian EAs who presumably hold the common view that animals have some moral relevance but much less than humans.)

  7. Nancy Lebovitz says:

    “There is a telling story (probably not historically accurate) about George Fox and William Penn. Befitting his station as the son of an admiral, William Penn wore a ceremonial sword. Since he knew the Friends were opposed to warfare, he wondered if wearing his sword was appropriate. George Fox advised Penn to, “Wear it as long as you can.””

    • Scott Alexander says:

      I’m not sure I understand.

      • Nancy Lebovitz says:

        This brought it to mind:

        “And he said – no, absolutely, stay in your career right now. In fact, his philosophy was that you should do exactly what you feel like all the time, and not worry about altruism at all, because eventually you’ll work through your own problems, and figure yourself out, and then you’ll just naturally become an effective altruist.”

      • faoiseam says:

        From Janney’s “The Life of William Penn”

        “When William Penn was convinced of the principles of Friends, and became a frequent attendant at their meetings, he did not immediately relinquish his gay apparel; it is even said that he wore a sword, as was then customary among men of rank and fashion. Being one day in company with George Fox, he asked his advice concerning it, saying that he might, perhaps, appear singular among Friends, but his sword had once been the means of saving his life without injuring his antagonist, and moreover, that Christ has said, “he that hath no sword, let him sell his garment and buy one.” George Fox answered, “I advise thee to wear it as long as thou canst.” Not long after this they met again, when William had no sword, and George said to him, “William, where is thy sword?” “Oh!” said he, “I have taken thy advice; I wore it as long as I could.” This anecdote, derived from reliable tradition,* seems to be characteristic of the men and the times. It shows that the primitive Friends preferred that their proselytes should be led by the principle of divine truth in their own minds, rather than follow the opinions of others without sufficient evidence.
        “It must have been manifest to George Fox that his young friend, while expressing his uneasiness about the sword, was under the influence of religious impressions that would, if attended to, lead him, not only into purity of life, but likewise into that simplicity of apparel which becomes the disciples of a self-denying Saviour.”

        • Dedicating Ruckus says:

          Just to rain on this parade as well, this sounds as much like an effective manipulation tactic designed to take advantage of people with high scrupulosity as it does divine guidance toward pacifism.

          • Jiro says:

            I could say the same thing about EA as a whole (replacing “divine guidance towards pacifism” with “rational guidance towards EA”).

            Actually, now that I think of it, that captures in words one of the things I find disturbing about EA, and I could say it as more than just a quip.

          • sconn says:

            I am really wondering if there is a rational answer to scrupulosity. When I was Catholic I had Catholic guilt, now I’m atheist and have utilitarian guilt. I mean, there is NO WAY that the utility I’m getting from spending five minutes resting/eating a bar of chocolate equals the utility I could obtain for someone else by spending those five minutes or two dollars in a more altruistic way.

            The answer I usually hear is “well you can’t be generous if you don’t take care of yourself,” but I was in a cult and found that I could be massively more generous with my time than most people are, without actually dying. It’s true that I was miserable, but if I’d been spending my time on something other than a cult that was actually useful, would it have mattered that I was miserable compared to other people getting the chance to *not die*?

            I’m afraid “that’s no way to live” or “that’s emotionally unhealthy” are not satisfying answers. What right do I even have to be happy when other people struggle to even stay alive? This question bothers me all the time; I thought it would go away when I left religion but it’s actually gotten more pressing since I don’t believe in an afterlife.

          • Jiro says:

            I think the right answer is to conclude that utilitarianism is producing results that are absurd enough that you should discard (nontrivial forms of) it.

          • Kaj Sotala says:

            What right do I even have to be happy when other people struggle to even stay alive? This question bothers me all the time; I thought it would go away when I left religion but it’s actually gotten more pressing since I don’t believe in an afterlife.

            Speaking from personal experience: I strongly suspect that the true reason behind your feelings isn’t any intellectual structure like theology or utilitarianism, but rather some deep emotional sense of being fundamentally unworthy and undeserving. I didn’t have the part about belonging to a cult, but I did have a (very long) phase where I was explicitly struggling with thoughts like “what right do I even have to be happy when other people struggle to even stay alive”, which I thought was a mostly rational thought derived from a utilitarian framework… but it was rather just a fundamental sense of really disliking myself, manifesting itself in the guise of utilitarian reasoning.

            There were a lot of different things over the years that gradually helped with this. At some point I just hit rock bottom and came to the realization that this wasn’t working, and that spending large amounts of time too anxious to do anything other than lie in my bed wasn’t actually very useful – not even in the terms of a strict utilitarian framework that assigned no special value to my own happiness in particular. So I decided to partially reject that framework, on the theory that not following it would actually lead to better results (in terms of the framework itself) than following it.

            This was sometime around 2014; I wrote a bit about it at the time, in an article called “Two arguments for not thinking about ethics (too much)”. This wasn’t sufficient to make the underlying feelings of unworthiness go away, but it was a useful start to deprogramming the kinds of thought structures I had adopted while trying to rigorously follow utilitarianism, which further maintained my feelings of being miserable.

            There have been a lot of additional steps during the journey, but at their heart, they’ve all been about declaring myself fundamentally worthy and deserving, and replacing questions and thought patterns of “what should I do” with “what do I want to do”. (This hasn’t led to immense egoism and selfishness either, because it turns out that I often want to help people even in the absence of any shoulds.)

            The latest and one of the biggest breakthroughs for me was about two months ago, when I figured out why exactly I had had those feelings of unworthiness and disliking myself in the first place, and fixed those reasons. I wrote about the process in this article. It might or might not be useful for you: I don’t know how much of that preceding work was necessary for the techniques described in my article to actually work for me.

          • mack says:

            What right do I even have to be happy when other people struggle to even stay alive? This question bothers me all the time; I thought it would go away when I left religion but it’s actually gotten more pressing since I don’t believe in an afterlife.

            I might be weird, but as a dedicated EA I have always thought about this in the negative sense. I see the world around me, I know there’s suffering everywhere, and I think “My life is objectively awesome. What right do I have to be unhappy?” And that works for me.

          • sconn says:

            @Kaj Sotala, thanks for the articles, I’ll go read them.

            I do like myself fine. I think I’m as worthy as the next person. And I am not spending all day lying in bed. It’s more that when considering ethical choices, I can’t really work out a rational line between caring about myself and caring about others. Should I care about myself exactly as much as any other human? Because that way lies madness. Yet there’s no rational reason why I’m worth more, or need it more. I tend to draw the line, purely by fiat, as 90% me and my family*, 10% others, but I admit that’s not rational at all.

            *I consider care for one’s closest people to be equivalent to caring for oneself; our happiness is so interconnected that I can’t be happy if my kids aren’t taken care of, and they can’t be happy if I’m an emotional wreck. But it’s not exactly altruistic to care for kin; it’s instinct.

          • Witness says:

            @sconn

            “Should I care about myself exactly as much as any other human? Because that way lies madness. Yet there’s no rational reason why I’m worth more, or need it more.”

            That way lying madness is a reason. Not because you’re worth more, or need it more, but for purely practical reasons: you can (typically) be more effective at improving your own lot (and that of your family and close friends) than you can for others, at least up to a certain point. Also, going mad is likely to be counterproductive.

            (And trying to precisely define some cutoff where you “ought” to shift your attention to broader matters may not be all that effective compared to improving your own lot well beyond your needs and then using the more obvious excess well.)

          • Kaj Sotala says:

            @sconn: Alright, I don’t want to overgeneralize – it’s possible that the specific emotional problem that I had isn’t the cause of your troubles. In particular, if none of the “symptoms” described in my second article ring any bells, then I’d guess that it’s probably something different. I’d be surprised if there wasn’t some underlying emotional issue causing these feelings, though.

          • sconn says:

            @kajsotala, The deep underlying cause is probably that I was in a cult whose main focus was “efficiency” and we were constantly regaled with gems like, “that bite of chocolate is for a second, but eternity is forever” and “with that five minutes you took to look at that sunset, you could have saved one soul by doing sacrifices” and crap like that. The trouble is it’s not really answerable. Switch out souls for QALYs and you still have the same feeling of: you could be doing more.

            The one improvement I’ve had is that I no longer think that my suffering *actually* is instrumental toward saving anything, so just straight-up suffering on purpose is something I was happy to quit doing. But if you replace suffering with units of time or cash, then there’s still this feeling of having to justify everything I do by proving it’s for some altruistic benefit.

            I did find your posts interesting, though, and am saving one of them to share with a friend, because it explains her situation (as she’s described it to me) very well.

        • Hyzenthlay says:

          It’s more that when considering ethical choices, I can’t really work out a rational line between caring about myself and caring about others. Should I care about myself exactly as much as any other human? Because that way lies madness. Yet there’s no rational reason why I’m worth more, or need it more.

          There’s no rational reason to care about other people, either. Or to care about anything at all. Even if you’re a pure utilitarian whose goal is to reduce the total amount of suffering in the universe, it’s hard to articulate a reason for why you should care about suffering in the general sense. It only works, as a philosophy, if you’re someone who already cares about suffering in the general sense.

          In a broader existential sense you’re not worth more than other people, but in that same sense, all of humanity is worth no more than a single amoeba. There is no existential sense in which anything “matters”; things only matter to sentient beings with values.

      • Fuge says:

        May be more of a reference to this:

        “17 Regardless, each one should lead the life that the Lord has assigned to him and to which God has called him. This is what I prescribe in all the churches. 18 Was a man already circumcised when he was called? He should not become uncircumcised. Was a man still uncircumcised when called? He should not be circumcised. 19 Circumcision is nothing and uncircumcision is nothing. Keeping God’s commandments is what matters.

        20 Each one should remain in the situation he was in when he was called. 21 Were you a slave when you were called? Do not let it concern you, but if you can gain your freedom, take the opportunity. 22 For he who was a slave when he was called by the Lord is the Lord’s freedman. Conversely, he who was a free man when he was called is Christ’s slave.

        23 You were bought at a price; do not become slaves of men. 24 Brothers, each one should remain in the situation he was in when God called him.”

        1 Cor 7:17-24

  8. cvxxcvcxbxvcbx says:

    Is everyone sad now? I’m happy, I had a good week. Not to brag or anything. Read something funny, like this maybe: http://oglaf.com/morality/
    (Warning: nudity)

    • ManyCookies says:

      Uh that’s a little bit more than nudity!

    • CatCube says:

      I think this one is the most succinct statement that I’ve ever seen for why being clear about your expectations with your contractors is important: http://oglaf.com/blockade/

      That one is completely safe for work, and I’d post it in my cubicle but for the fact that once people found out where it was from, the rest of the website would probably cost me my job. You probably don’t want that domain in your work browser history.

  9. suntzuanime says:

    It’s good that you recognize that there’s a problem, but what concrete steps are you taking to make the effective altruist community more welcoming to bad people?

  10. Joe says:

    It concerns me how strongly the conclusions for what’s most effectively altruistic seem to depend on particular moral intuitions being taken as given. For example Brian Tomasik’s writings that advocate for destroying and/or preventing various kinds of life, while quite reasonable and well qualified in their arguments, ultimately require his intuition that suffering is so much more bad than pleasure is good. Without this intuition, working hard to prevent life looks like a pretty terrible thing to do.

    I remember seeing someone comment a while back that “Effective Altruism puts so much focus on working out how to be effective, I worry they don’t spend enough time working out how to be altruistic”. Which seems a valid concern. Or to repurpose the phrasing from the title of one of Scott’s recent posts: “Effective Altruism Considered As Unfriendly Superintelligence”.

  11. I see a parade, I rain.

    If EA ever gets big, I think it would become just as ineffective as other, less systematized moral causes. EA is a “movement for all” because it has approximately zero political power. If the stage of action eventually encompasses the resources of the state, then different kinds of utilitarian are going to be battling in just as arbitrary ways as the pre-effective moralists did for control of those resources. An effective morality might be more mathematical and rigorous once you’ve conjured the basic moral units ex nihilo, but before you’ve done that you just have a load of non-arguments about why we should draw the line at humans instead of animals… or fundamental physical particles, apparently.

    The fundamental fact of humans having terminal values all over the place cannot be sidestepped by being “more scientific”. Quantizing things doesn’t help if you’ve quantized the wrong things. Overconfidence in having achieved a more “effective” morality is probably unavoidable, because the concept creep monster has nasty plans for an ideology with a name like that.

    They sound like an affable bunch of goofs with degrees, and I wish them happiness (as much as they can plot on a spreadsheet), but they’ll be the victims of their own success if they get any, mark my words. Maybe they should remain as they are now, for their sake and ours.

    • Scott Alexander says:

      I talked to a guy there who says (don’t know if this position is common) that he wants to avoid EA becoming a mass movement. His idea of “recruitment” is targeting enough rich people to provide funding, enough smart people to analyze research and determine what’s most effective, enough competent people to start new nonprofits and work in lobbying/policy/etc, and enough persuasive people to attract more people in the other categories.

      This isn’t to say that other people aren’t welcome or don’t have anything to contribute, just that the movement has very specific needs and doesn’t have to be obsessed with recruiting everybody all the time.

    • I think he has the right stance, but the question there is whether you can slip into becoming a mass movement by accident, through institutional osmosis. Too much positive magazine coverage and BAM! (I’m only half-joking)

      Conventionally, building movements is hard, but once you’ve got momentum so is stopping. Avoiding becoming a political movement might be something you have to actively try to do. It would be very hard to maintain a sense of what you are once you’ve got all the rich people giving you money, all the smart people from Universities doing analysis for you, and all the NGOs and lobbyists behind you, and the consequent favorable coverage from certain partisan actors polarizing public opinion and politicizing your efforts. I’d like to see EA figures analyzing the political dimension and where EA itself is going as a meta-level that wraps back around to be included in the “effectiveness” criteria. Think about what you’ll do with power in detail before you have power.

    • Itai Bar-Natan says:

      If EA ever gets big, I think it would become just as ineffective as other less systematized moral causes.

      You mean ineffective like the scientific method, democracy, abolitionism, and feminism? Gosh, I sure hope that happens!

      Seriously, not all attempts at moral progress have been failures. I don’t think effective altruism will be “the last social movement we need”*, and I agree with you that it will encounter serious problems if it gets big. If and when it draws people from a more diverse set of values and beliefs, it will encounter more and more internal conflicts, and these conflicts might never be resolved, or might be resolved by establishing a dogma that is false or becomes outdated. Still, I think effective altruism may be a net positive even in this situation. Effective altruism is not just a set of values but also a cluster of principles for achieving the things people value. Ultimately, if these principles were part of the common set of ideas that most people are aware of, in the same way environmentalism is, and were adopted by people with the existing diversity of values, I think the world would be a better place.

      * I don’t mean to imply by that quotation and reference that this opinion is representative of what most other effective altruists believe; it just happens to be a particularly apt way of describing the position I’m denying.

    • phoenixy says:

      Incidentally, this is exactly how I feel about the Libertarian Party

  12. Deiseach says:

    I’ll bite my tongue and not be snarky about this. I’m glad to get reports from the frontiers of things I will never in my entire life get within a donkey’s roar of, and even gladder I’m not there –

    “Scott Alexander – eating vegan Caesar salad so you don’t have to” 🙂

    Okay, I’ll even bolster a few things in this report!

    (a) This talk was called ‘Christians In Effective Altruism’. It recommended reaching out to churches, because deep down the EA movement and people of faith share the same core charitable values and beliefs.

    (b) So if suffering made up an important part of the structure of the universe, this would be so tremendously outrageously unconscionably bad that we can’t even conceive of how bad it could be. So the most important cause might be to worry about whether fundamental physical particles are capable of suffering – and, if so, how to destroy physics.

    (c) Romans 8:22
    22 For we know that the whole creation groaneth and travaileth in pain together until now.

    See? Fundamentally in agreement! 😀

  13. Paul Crowley says:

    Scott, the most effective thing for you to do is whatever makes it easiest for you to write, which is probably staying in medicine.

    Also, come see the eclipse with us.

  14. MrApophenia says:

    Ok but real talk, how was the Impossible Burger? I want to know how my vat-meat future is going to taste.

    • beleester says:

      I also want to know this.

      I’m also wondering, what’s the Jewish stance on vat-grown pork?

      • The halachic question is complicated, depending on where the vat-grown pork came from.

        For a small taste of the issue: if the initial cells come from a pig, it’s very likely subject to a debate closely related to an extant one, namely whether gelatin made from nonkosher animals can be kosher. This is unclear because gelatin is completely unrecognizable as part of the original animal, which may make it no longer nonkosher. That question is a split between different segments of the Orthodox Jewish community, one which is likely to remain unresolved, with each side following a different rule.

    • Chalid says:

      I’ve never had an Impossible Burger, but “Beyond Meat” makes a decent burger that you can get at Whole Foods and various other places.

    • romeostevens says:

      It mostly tastes like a veggie burger but with the same texture/fattiness as meat. They seem to be using something salty-tasting for the umami, as that was a bit strong. I’m hopeful they can iterate.

      • Aido says:

        I work there — can confirm we are iterating. Can also confirm I have heard less than optimal things about Umami. Best place to get it is at Gott’s in SF.

        Also, Impossible isn’t a “Vat-meat” (clean meat; in-vitro meat) company. That’s Memphis Meats and others. Our approach is much less energy intensive.

        • grendelkhan says:

          It’s super cool that you work there! Is there anything nifty you can tell us that wouldn’t violate confidentiality? Anything surprising you’ve learned in terms of meat and meat-related facts, or about the resources or energy involved?

          • Aido says:

            Sure.

            Regarding resource usage, the Impossible Burger uses 75% less water, 95% less land, and emits 87% less GHG. Viewed another way, each time you eat an Impossible Burger instead of a regular burger, it prevents emissions of GHG equivalent to 14 (or 18, depending on your math) miles of driving, saves 75 square feet of land, and saves 10 minutes of shower time. You can find our full sustainability report here.

            Edit: The thing I would keep in mind is that scale is the important factor. Yes, these numbers are nice, but the real impact comes once you start supplanting 1/10th or 1/2 of all beef consumption.
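
            A quick scaler for those per-burger equivalences (it just multiplies out the figures quoted above, using the lower 14-mile driving estimate; the one-burger-a-week input is a made-up example):

                # Scale the quoted per-burger savings up to a yearly habit.
                MILES_DRIVING_PER_BURGER = 14   # lower of the two quoted estimates
                LAND_SQFT_PER_BURGER = 75
                SHOWER_MINUTES_PER_BURGER = 10

                def yearly_savings(burgers_per_week):
                    n = burgers_per_week * 52
                    return {
                        "burgers_swapped": n,
                        "driving_miles_avoided": n * MILES_DRIVING_PER_BURGER,
                        "land_sqft_saved": n * LAND_SQFT_PER_BURGER,
                        "shower_minutes_saved": n * SHOWER_MINUTES_PER_BURGER,
                    }

                # One swap a week avoids roughly 728 miles-of-driving worth
                # of GHG per year, on the quoted equivalences.
                print(yearly_savings(1))

            Which is the edit’s point: the per-unit numbers only matter once multiplied by scale.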

          • Deiseach says:

            saves 10 minutes of shower time

            Only in America would calculations like this be made. Can you turn that into “saves X minutes of washing with a ewer and basin” for those of us who grew up in 19th century conditions? 🙂

  15. caryatis says:

    They are not crying because of the abstract realization that suffering exists. They are crying because of their own past suffering. There is no such thing as a non-selfish human. Read La Rochefoucauld.

    • Ozy Frantz says:

      If there is no such thing as a non-selfish human, I suggest it would be most efficient to redefine the word “selfish” to refer to the kind of selfishness that involves eating cake, and “altruistic” to refer to the kind of selfishness that involves buying malaria nets and/or researching suffering in fundamental physics, a useful distinction we would otherwise have no words for.

      • caryatis says:

        We can use more than one word to discuss those differences.

        • beleester says:

          Why not use “selfish” for that? Everyone already uses it that way, except for people who are trying to be clever and argue that no one really acts altruistically.

          • Naclador says:

            So much this! I think calling people selfish because they only do good to make themselves feel better is the most stupid argument in the history of ethics. They feel better when they do good exactly because they are not being selfish!

            And although I agree that no human being can possibly be entirely devoid of selfishness, no one should dare to take this as an excuse to cheat old ladies out of their savings by selling them shady investment products.

      • [Thing] says:

        Thank you. Does anyone know of a name or a page on Less Wrong or someplace like that for the general form of this particular argument? I.e.:

        Alice: “According to [plausible definition], [word] applies to everything/nothing in [relevant domain]” or “The boundary between [word] and not [word] is nigh-impossible to demarcate, therefore [word] is meaningless.”

        Bob: “And yet, somehow people find it easy enough to use [word] to convey meaningful information in practice, so maybe we shouldn’t define it in a way that renders that logically impossible?”

        It seems to come up a lot, and it would be convenient to have a succinct, general rebuttal.

        • Sniffnoy says:

          You could say “fallacy of gray”, but really I’d just say “go read ’37 ways that words can be wrong’.” 😛

        • JulieK says:

          Didn’t a Supreme Court justice say something like “We may not be able to define twilight, but that doesn’t mean we can’t distinguish between noon and midnight?”

      • Ketil says:

        Maybe we should reject the binary notion of selfish and non-selfish, and accept that every action has multiple causes and effects, and that some of them always benefit the actor in some way?

        People are “altruistic” because it makes them feel good to help others, or because it is a means to social signaling, or because of the salary they receive from working in an NGO, and so on.

      • edralis says:

        I think, on the contrary, that it would be useful to *not* conceive of altruism as being inherently non-selfish behavior, but rather as a sort of qualified selfishness – because I think it is useful to remember that ultimately all people do what “feels right”, and if we desire to change their behavior, one of the options is to change what makes them feel right, i.e. to have them feel the right kind of emotion connected to the right kind of behavior (which amounts to altruism). By keeping altruism selfish (i.e. by choosing the definition of selfishness that keeps altruism selfish), we are (perhaps?) more likely to approach them thinking about the circumstances that made the connection between the emotion and behavior, and can work to change that connection (and make them altruists). I’m not claiming this is a “correct” definition, of course, but perhaps it might be more useful – I think it might be interesting to ponder the psychological consequences of people being reminded that what makes a serial killer and a serial altruist really behave differently is the link, or lack thereof, between “my suffering” and “the suffering of others”.

    • [Thing] says:

      Read La Rochefoucauld.

      Do you have a more specific citation in mind?

      • drachefly says:

        Given the thesis they were attempting to establish, reading the citation itself would probably be as helpful as reading what they were hoping you would read.

  16. publiusvarinius says:

    Over lunch, a friend told me about his meeting with an EA philosopher who hadn’t been able to make it to the conference. As the two of them were walking, the philosopher had stopped to pick up worms writhing on the sidewalk and put them back in the moist dirt.

    Well, I’m cynical about this type of behavior. It’s not like these philosophers ever offer analogous acts of kindness to mosquitoes of the genus Anopheles.

    • anonymousskimmer says:

      You’re wrong. (Personal anecdote – yes, even when they’re sucking blood from me. After reaching the age of “maturity” I’ve only ever breached this on purpose twice: 1) on explicit orders from my father [I was still young enough to live with my parents at the time], which I argued against, before vacuuming termites, and 2) fire ants crawling up my leg [the pain was too much]. Both of these events were two decades ago. I’ve reached the point that I’ll kill for mercy or necessity, but that’s it.)

      • Naclador says:

        Well, you have to draw a line somewhere. Of course you could argue that hand disinfection is mass murder, or imagine the carnage among microbes when you boil your vegetables. Best you kill yourself instantly then. No, wait, that would doom your entire gut flora.

        You see, you cannot by life or death prevent all suffering.

        • alchemy29 says:

          One can draw the line at having a nervous system or not (personally I would not, but it doesn’t seem contradictory).

        • anonymousskimmer says:

          I work in Synbio and only rarely feel slight pangs at lysing or autoclaving bacteria.

          Life will out, to the maximum extent it can, but we lifeforms do as we must also.

      • publiusvarinius says:

        Interesting. What are your opinions about mosquito extermination?

        • anonymousskimmer says:

          I’d rather they be cured of the parasites they spread.

          Parasites, like fire ants crawling up my leg, pass my point of forbearance.

          In the now, if exterminating them is the fastest way to help humans, then I’m willing to give it a pass. But I’d like a reintroduction done as soon as feasible.

          • Naclador says:

            Just as a side note: fire ants are not parasites.

            But independently of that:
            Why should the suffering of parasites (or frustrating their preference for living) be outweighed by mere human inconvenience?

            From a utilitarian perspective I see no justification for that.

          • anonymousskimmer says:

            I realize in retrospect that the use of a comma parenthetical and the word “like” was ambiguous, but it was meant to compare parasites to fire ants, not include one in the other group.

            Straight up gut response. I don’t justify it, I just say my active sympathy ends at this point. http://nautil.us/issue/35/boundaries/no-you-cant-feel-sorry-for-everyone

            I am very happy if other people do extend their sympathy to parasites, as long as I’m not the one who has to suffer them.

  17. eyeballfrog says:

    I think some of these people are forgetting that for something to be effective, you actually have to convince people to implement it. If you can’t explain your idea to an IQ 100 person without them thinking you’re a lunatic, it’s not going to be very effective. Applying this heuristic to the examples given yields

    Malaria prevention — good
    Vat-grown meat — probably doable
    Wild animal suffering — no
    Qualia hacking — god no
    Zero-point suffering — even I think you’re a lunatic

    That last one makes me wonder if there’s some sort of mass hysteria going on here. I’m struggling to imagine a less effective use of your presentation at an EA conference than that.

    • Ozy Frantz says:

      Actually, I think wild-animal suffering is pretty tractable. Lots of people support wildlife rehabs. Having a bird feeder is pretty much the most normie thing possible, and it’s often motivated by a sense that it’s helping the birds to feed them. You’d have to be careful about framing (when I’m talking to normies, I tend to take the tack of “when wildlife managers make decisions about wild animals, they should incorporate the animal’s welfare into decision-making along with conservation and benefit to humans”).

      Also, qualia hacking is ideal for development by scientists funded by rich people, so you really don’t have to explain it to normal people until after it’s invented.

      • eyeballfrog says:

        How’s your success been on getting them on board with the “drive predators to extinction” proposal? That was the part that seemed most likely to lose people to me.

        • HeelBearCub says:

          Oh, I’m worried they’ll have converts.

          There are plenty of people who loathe deer hunters and think they are cruel and barbaric … and still eat meat.

          Fluffy baby bunnies and deer are powerfully attractive.

          • Nornagest says:

            Probably more true for people that watched Bambi once than for people that have actually seen deer up close. They move gracefully and the fawns are cute, but the adults tend to lie somewhere between weird-looking and hideous depending on how lucky the deer is and what parasitic diseases are endemic to the area.

            On the other hand, I’ve always thought pigs didn’t really deserve their reputation for ugliness. Still eat bacon, though.

          • HeelBearCub says:

            I don’t know, we have herds of deer in our neighborhood. They always seemed pretty cool to me.

        • Nancy Lebovitz says:

          If you want to relieve wild animal suffering, don’t you need to end aging for them?

          • Inty says:

            Not necessarily. Most relief takes the form of preventing them from ever having been born.

          • Inty says:

            I’m surprised by how critical many of the comments on SSC are, as well as how quick they are to dismiss the ‘weirder’ aspects of the EA movement. I’d like to present a few arguments in favour of weird causes.

            1) The causes that have found their way into the priorities of EAs are optimising for a bunch of different things. One of those things is likely to be ‘not weird’. I know you might be thinking that it’s the other way around and ‘weird’ has more signalling value and therefore weird causes are over-represented, but if you consider, for example, GiveWell’s top recommendations, or where most of the money goes, or what most members are into, it’s the ‘not weird’ stuff. In my opinion ‘not weird’ is noise, not signal. Goodness should not be related to weirdness, so the causes that have made it in despite being weird are likely better at optimising for the signal of doing good.

            2) Meta-point: Non-utilitarian arguments in favour of weird causes are disproportionately likely to go unmade and unheard even if they exist, because most arguments in this domain are made by utilitarians for utilitarians. Therefore if you are not a utilitarian and you are put off by the utilitarian arguments for things like WAS, consider that what you’ve heard is a biased segment of the ‘argument space’.

            3) In my experience people often end up endorsing weird points not because they lack a conservative assessment of their risk, but because their assessment of the risk of ‘ordinary’ actions is far higher than most people’s is. For example, most people never even *think* about S-Risk (the risk of extraordinary amounts of suffering in the future). It would be shocking if a minority of people considered this concern and then went on to endorse similar actions to what other people endorsed.

            4) Marginal utility is a thing. I don’t think anyone who values qualia hacking wants it to be the only charity; they just think it’s a low-chance high-reward investment. And it’s a much harder argument to deny that the rewards would be very high if it worked than to argue that it probably won’t work.

            It’s really hard to properly convey my reason for favouring weird causes though, because it isn’t any of these points. It’s more a sense that I should resist the impulse to balk at following my convictions to their logical conclusions. That was the impulse that led me to EA in the first place.

        • Ozy Frantz says:

          My next sentence is usually the (correct) “we don’t know what interventions would help yet– in fact, we don’t even know whether a lot of proposed interventions, like predator control or supplemental feeding, would be good or bad– so my research is on trying to understand how to actually benefit wild animals.”

      • Naclador says:

        I think you vastly overestimate our capability for eco-engineering. You also disregard our history of making extremely poor choices when meddling with ecosystems, leading to the greatest extinction wave since the end of the Cretaceous. I don’t think we can easily make up for that by adding some more predatory species to our kill list.

        I think the best way to minimize suffering on earth would be human mass suicide. I hear Kim Jong Un and Trump are already working on that. They might become the most effective altruists of the millennium if they manage to blow humankind off the face of the earth.

    • Inty says:

      I favour qualia hacking as a cause, and I disagree that we need to convince people, or at least that we need to convince many people. Consider cars. To make the automobile a successful innovation, you do not need to convince everyone to drive one at once. You only need to ensure that using one gives an obvious advantage to its owner. Likewise, I believe that if we made qualia hacking possible, the people who don’t employ it at all would notice that they’re missing out on an advantage. The only people you need to convince are funders and researchers.

      • Qualia hacking, if I understand it correctly, consists of altering people so their natural level of happiness is higher. Surely the obvious starting point is to observe people whose natural happiness level is high and figure out why. I’ve known such people.

        • Deiseach says:

          Surely the obvious starting point is to observe people whose natural happiness level is high and figure out why.

          There are folktales about this, and they generally run along these lines (the happiest man in the world turns out to be someone too poor to be able to afford a shirt).

          Or Amyclas, the poor fisherman who was unafraid when Caesar entered his home, because being so poor, he had nothing to lose.

        • Bugmaster says:

          Why not just wirehead everyone? Boom, maximum happiness achieved.

    • Deiseach says:

      I think most people’s idea would be “worry about wild animal suffering after you’ve successfully tackled human suffering”, and if you don’t, it’s because worrying about the deer and the bunny rabbits is a lot easier than solving human misery. (After all, you can always end the problem of animal suffering by exterminating the animals, whether that be the soft solution of making them sterile so they cannot reproduce and die off naturally, or the hard solution of euthanasia. You’re not really allowed to do that with “most of sub-Saharan Africa is sunk in poverty and misery, let’s painlessly and humanely knock off their populations”.)

  18. jrdougan says:

    Who were the grifters? There is no movement with influence over significant amounts of money (or power) that doesn’t have grifters. Obviously they are going to camouflage themselves, but they will be there. So, who were they likely to be?

    • Deiseach says:

      To be fair, I don’t think the grifters have latched onto EA yet. As it gets bigger and more visible they will, especially onto the SF version, because that’s where the Big Money is. London is cute but not worth the trouble for the reward, since Oxford is the official headquarters of the regional movement there (although local chancers will attach themselves when/if London gets big on the local scene), and Australia? Pfft!

      But Silicon Valley, and all that lovely, lovely cash sloshing around in the possession of people who are mega-rich and now looking for a chance to feel good about being obscenely wealthy by doing good? Oh yeah baby, give momma some of that sugar!

      • rlms says:

        On the other hand, Silicon Valley has lots of other puddles of money for grifters to attach to, so EA might get fewer than you would expect.

  19. lil_copter says:

    Scott’s low-impact career keeps him happy and writing. I’d argue that his writing IS his most effective way of giving.

    • Reasoner says:

      +1. As an EA, I’d be very happy with my career if I wrote a blog that had 10% as much impact as Scott’s does.

    • Standing in the Shadows says:

      I think I remember either David Friedman or Milton Friedman saying he didn’t bother to vote, because the time spent writing persuasively was much more effective at producing social change.

  20. Evan says:

    Scott, does nobody ever tell you you’re a hero to effective altruists and rationalists? Every rationalist and effective altruist I know locally in Vancouver reads your blog. People are such zealous fans that when one friend asks another if they’ve read the latest post, they say “no, I really should, though”, as if it’s a moral responsibility. Completely normal people I know, who’ve entirely avoided weird subcultures like EA, have read your blog too. Everyone I know in the Bay Area also reads it. Effective altruists from across the world who don’t know each other yet and have nothing in common break the ice by asking each other if they’ve read the latest SSC post.

    After “reducing meat consumption (if one can)” or “GWWC membership”, the thing the greatest number of effective altruists have in common is probably “reads SSC”. If I were to write a fair list of what this blog has done for effective altruism, it’d take too long. Suffice it to say: however much you feel you don’t put much effort into being consequentially good with your choices, and thus aren’t especially praiseworthy, the unpredicted consequences of your blogging have been so monumental for EA that it’s hard to imagine what the movement would be like if you had never started blogging.

    I’m genuinely surprised you don’t realize that, even if you don’t believe it yourself, lots of people consider you one of the most impactful people in the community through your blogging alone.

    • sconn says:

      It’s certainly where I heard of EA.

    • Scott Alexander says:

      I talk in Part I here about the distinction between how many Utility Points vs. Virtue Points people get.

      Utility Points can sometimes be anticorrelated with Virtue Points. If an incompetent person is lazy, no harm done. If someone who creates a million Utility Points per hour of work is lazy, they’re being pretty callous.

  21. DataPacRat says:

    One of my rules-of-thumb is to pretend that I have a terminal moral goal of “that which maximizes the odds of DataPacRat existing (at least moderately happily) in the indefinitely-distant future”. Thanks to convergent instrumental goals, this generally leads to quite conventional and sociable behaviour, and tends to be compatible with my various other moral goals, both known and unconscious.

    However, if people are seriously proposing destroying the universe to eliminate suffering, because suffering is worse than the benefits of existing… could somebody take a rolled-up newspaper and swat those persons on the head with it, with a firm “No!”?

    • Jiro says:

      That rule of thumb fails when faced with the possibility of self-modifying to not want anything other than EA (or milder but more realistic versions of that).

      • DataPacRat says:

        A less compact version of that rule of thumb runs along the lines of, “I have various values I desire to be fulfilled. The person with the best understanding of the particulars of those values is myself; thus, in order to help maximize the odds of those values being fulfilled, I should try to arrange for my own survival into the indefinitely distant future. Which includes being careful about what counts as ‘me’, likely by tying part of my concept of self-identity to some coherently extrapolatable version of my unedited values.”

        (I’m also hoping to arrange for enough backup copies of myself to allow any direct editing to be reversible, all the way back to my baseline.)

  22. hlynkacg says:

    As others have said, thank you for lightening an otherwise grim mood. That said, I am disappointed that the post did not open with…

    We had two bags of MealSquares, seventy-five pellets of Ritalin, five sheets of high-powered MDMA, a saltshaker full of Adderall, and a whole galaxy of multi-colored nootropics, alongside a quart of tequila, a quart of rum, a case of beer, and half a pint of raw ether…

  23. Eric Zhang says:

    This post was really eye-opening. I just realized that if the Comet King from Unsong went to an EA conference, he’d be considered one of the more normal people attending, despite being a half-Indian half-archangel linguistic wizard who sometimes fights demons.

  24. hopaulius says:

    There is apparently a prior in the EA culture that needs examination, namely: suffering = evil.

    • anonymousskimmer says:

      It’s not just EA culture. A ton of people think susceptibility to pain is the ultimate measure of an entity’s right to exist (arguments for not catching fish in a violent manner, against eating clams, etc…).

  25. J says:

    Hugh Nibley is perhaps the most interesting of the Mormon scholars. I wish I could find a clearer video without the schmaltzy music, but this story always stuck with me. Maybe it’s because Hugh gets choked up about it; you can tell it gets to him.

    There’s a story told in the Midrash. It begins with Abraham sitting in the door of his tent in the plain of Mamre in the heat of the day. … It was a hot day. It said it was a day like the breath of Gehinnom. Like the breath of Hell was coming out … the heat and the dust and the sand … that’s utter desolation.
    And he was worried, of course, because he says some poor stranger might be lost out there. Someone might have lost his way, and be perishing, because you’re not going to last an hour in this. So he sent his faithful servant Eliezer out to look everywhere. He sent him out in all directions and he came back, “No I can’t find anyone anywhere.”
    You have these feelings…so he went out himself, though he was very sick at the time. He was sick and ailing, and old, and he went out into that Hell. And he looked and searched, but he found no one. And at the end of the day he came back exhausted toward his tent. As he approached the tent the three strangers were standing there.
    It was the Lord and the two with Him. … and then it is that He promises him Isaac. As a reward for what he had done. This supreme offering. It’s a very moving story. He’d gone out to look for his fellow man … out in that dusty hell, you see, all alone. Eliezer couldn’t find any, and he said, “I think I can find someone.” Well he found something. He found the answer to the thing he’d prayed for all his life. His son Isaac. It’s a beautiful story.

  26. anonymousskimmer says:

    Over lunch, a friend told me about his meeting with an EA philosopher who hadn’t been able to make it to the conference. This friend had met the philosopher, and as they were walking, the philosopher had stopped to pick up worms writhing on the sidewalk and put them back in the moist dirt.

    And this story struck me, because I had taken a walk with one of the speakers earlier, and seen her do the same thing. She had been apologetic, said she knew it was a waste of her time and mine. She’d wondered if it was pathological, whether maybe she needed to be checked for obsessive compulsive disorder. But when I asked her whether she wanted to stop doing it, she’d thought about it a little, and then – finally – saved the worm.

    Maybe add this as a question in your next survey?

    I was actually diagnosed with OCD years ago. I do not see this as linked in any way to any of my past or current obsessions or compulsions.

    My wife has warned me against picking up snails though, and if you live in the wrong location I warn you against picking them up too (with a bare hand): https://www.theatlantic.com/health/archive/2017/04/when-globalization-brings-brain-invading-worms/522153/

    If you live in the right location then I warn you against picking them up after they have cemented to the ground – it may do traumatic injury to them. And watch where you are stepping when moving them away from the sidewalk. 🙁

    It is possible to “herd” them away from the sidewalk via proper positioning of your feet around the area they are moving. This might take an extra minute or two per snail, but is otherwise effective.

  27. Bugmaster says:

    I hate to sound negative, but still: endeavors such as “hacking qualia” or “killing off all predators” do not sound especially Effective to me. Learning that the EA movement does not merely tolerate, but apparently encourages such efforts, has diminished my trust in EA.

    • Nornagest says:

      Same. I was down with EA when it was about buying malaria nets and funding deworming efforts, but I feel like it’s getting into some pretty fraught territory when it starts coming out with high-concept high-impact altruism which relies for its effect on a moral framework that you have to talk Joe Sixpack’s philosophy-major cousin into, let alone Joe Sixpack himself. Both because it’s a lot more likely to be wrong, and because it’s a lot less likely to get any real-world traction.

      • lkbm says:

        I do think these ideas are alienating to many people, so they might be a net negative in terms of recruitment, but I enjoyed hearing about outlandish ideas for how to improve the world.

        They seem weird and thus probably wrong, but other weird probably wrong ideas included bendy space, wave-particle duality, and the sphericalness of the ground upon which we stand.

        Until we find a more-right solution, I’m going to keep donating malaria nets, but I’m happy that people are thinking about this stuff.

        OTOH, I majored in philosophy, so maybe I’m inured to ridiculous ideas. 🙂

        • Bugmaster says:

          If I wanted to talk about awesome new ideas for improving humanity in some hypothetical way at some point in the distant future, I’d join a LARP. Or possibly a church. But when an organization puts “Effective” in its name, I expect them to focus on achieving tangible results by efficiently allocating limited resources. Malaria nets and middle management are not nearly as sexy as qualia and bodhisattvas, but they work. When I donate money to a cause, the bare minimum they must be able to do is show that my hard-earned cash is going to actually make a tangible difference. In this world, not the next.

          • lkbm says:

            This isn’t a conference asking you to donate to each and every attendee. Yes, donate to bed nets – if you asked the aggregate group of attendees where to donate, I suspect that’s what they would have said too.

            Welcoming people who are thinking about other methods hasn’t prevented the leading organizations in the community from pushing that intervention as the best option.

          • Bugmaster says:

            My problem is that I expect an organization with “Effective” in its name to be laser-focused on efficiency. Their meetings shouldn’t be filled with starry-eyed idealists who spend their time discussing animal qualia. Instead, they should be filled with boring men in suits saying stuff like, “we ran an extensive six-month audit, and were able to cut costs by 10% while increasing overall productivity by 3% annually; please make sure to reference chart 3A on page 31”.

            Instead, it looks like the EA movement is actively encouraging, as well as financing, the kinds of people whose efforts will be completely ineffective at best (and harmful at worst). When I donate money to a charity, the last thing I want is for my hard-earned cash to be flushed down the drain.

    • Jacob says:

      I’m sympathetic. However, keep in mind that EA conferences are where the craziest ideas are going to get pitched. Most will be shit. As long as there is a selection process killing off the bad ideas early and promoting the good ones, the fact that bad ideas existed in the first place is fine. Necessary, in fact, because some of the best ideas (giving kids cowpox to prevent them from getting smallpox, heavier-than-air flight, AI) started off really crazy.

      So keep calling out bad ideas and help with the selection process.

  28. Wrong Species says:

    I don’t understand the utilitarian obsession with suffering. Yes, it’s bad, but how is taking away the lives of a large number of beings better than just allowing suffering? Are they willing to take the extreme view and endorse murdering certain people in the world? Should we go genocide some poor country and replace their populations with those from richer countries? I don’t think many utilitarians would endorse this reasoning, but it has a non-trivial chance of both removing suffering and increasing happiness.

    • shar says:

      While I can’t speak for any utilitarian consensus, presuming one exists, this is clearly an issue utilitarians have considered pretty deeply (if not resolved). I mean, it was Parfit himself who first described the Repugnant Conclusion, and they don’t call it that just because it sounds metal.

      If you care to wade through this review you’ll see that the authors consider a number of utilitarian models that imply something along the lines of “unhappy people should be killed in order to raise the average happiness”, and the academic literature generally seems to treat such an implication as sufficient grounds for rejection, or at least some serious side-eye.

      • shar says:

        Note that I’m by no means an expert, or even a dilettante. I thought that review I linked was interesting, but it and Scott’s writing on the subject account for ~95% of my understanding of utilitarianism.

        • Sam Reuben says:

          A thorough reading of a Stanford Encyclopedia entry is going to give you a pretty excellent understanding of a topic, I would note. The Stanford Encyclopedia is basically the most respectable philosophical source out there, at least on the analytic side.

          There’s always more to learn, but you didn’t pick a remotely spurious or vague first source.

          • shar says:

            Thanks – I bumbled across that particular entry by accident, but on your recommendation I’m going to go back and do some browsing.

      • Wrong Species says:

        I’ve heard of the Repugnant Conclusion but I don’t think they generally talk about straight up murder. Has anyone ever explicitly dealt with whether it’s ok to murder people other than Peter Singer and disabled infants?

        • shar says:

          My understanding is that the process typically goes something like this:

          Alice: Hey, what about… maybe let’s call it “average utilitarianism”? The terminal goal of morality is to increase the average happiness of sentient beings in the universe. Seems like that should sidestep the first-level Repugnant Conclusion, where you can maximize utility by indefinitely adding people who are just barely above suicidal.
          Bob: Doesn’t that imply that if you meet someone with below average happiness, and you can’t make them happier – say, somebody with treatment-resistant depression – you’re morally obligated to kill them?
          Alice: Back to the drawing board.

          That is, most people consider a utilitarian system that implies something that flagrantly contradicts our basic moral intuitions (“don’t murder”) to be a failure, or at least incomplete. Most, not all; there’s a section towards the end of that review about “Accepting the Repugnant Conclusion”, but those philosophers look to be a small minority.
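
          To put rough numbers on that dialogue – a minimal sketch, with all utility values invented purely for illustration – here is how total and average utilitarianism score the two moves Alice and Bob are arguing about:

            # Toy comparison of total vs. average utilitarianism.
            # All utility numbers are invented for illustration.

            def total_utility(population):
                return sum(population)

            def average_utility(population):
                return sum(population) / len(population)

            # Four people; the last has a life worth living (positive utility)
            # but is well below average -- Bob's treatment-resistant case.
            world = [8.0, 7.5, 6.0, 1.0]

            # Move A: add a thousand people whose lives are barely worth living.
            add_marginal_lives = world + [0.1] * 1000

            # Move B: painlessly remove the one below-average person.
            remove_unhappiest = [u for u in world if u != min(world)]

            for name, metric in [("total", total_utility), ("average", average_utility)]:
                print(name,
                      "baseline:", round(metric(world), 2),
                      "| A:", round(metric(add_marginal_lives), 2),
                      "| B:", round(metric(remove_unhappiest), 2))

            # Total utilitarianism endorses A (22.5 -> 122.5): the Repugnant Conclusion.
            # Average utilitarianism endorses B (5.62 -> 7.17): Bob's murder objection.

          Each metric rejects exactly the move the other endorses, which is why patching one failure mode tends to open up the other.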

          My limited understanding of Singer’s arguments is that they’re typically based less on happiness than on sentience as a prerequisite for moral standing. He might be okay with killing certain infants, people in vegetative states, etc., not so much because they’re unhappy, but because they have so little sense of self that it’s not really killing at all:

          Newborn human babies have no sense of their own existence over time. So killing a newborn baby is never equivalent to killing a person, that is, a being who wants to go on living.

          Also, somewhat related, there’s a strand of “negative utilitarianism” that focuses more on minimizing suffering than maximizing happiness. Some NU proponents conclude that suffering will always dominate in this universe*, and so the most moral act would be to painlessly and permanently annihilate all life. Note that this doesn’t necessarily imply an obligation to kill individuals, as that might easily increase the suffering of the world.

          *Apposite quote: “The pain in the world always outweighs the pleasure. If you don’t believe it, compare the respective feelings of two animals, one of which is eating the other.”
          -Schopenhauer

          • Tibor says:

            The quote by Schopenhauer seems to implicitly assume either that zero-sum and negative-sum games (of utility) are more prevalent than positive-sum ones, or that they necessarily have to be. The first might or might not be true (though Schopenhauer provides no supporting argument), and I see no basis for the second.

        • Deiseach says:

          Has anyone ever explicitly dealt with whether it’s ok to murder people other than Peter Singer and disabled infants?

          I think that sentence could use some re-arrangement, else Peter Singer will end up murdered an awful lot of times 🙂

      • Nornagest says:

        Now that you mention it, “Repugnant Conclusion” would fit right in as the title to a Dimmu Borgir song.

      • Joe says:

        Regarding the repugnant conclusion, most of the attempts to avoid it that I’ve seen try to make some distinction between ‘people who are currently alive’ and ‘new people’. But it seems to me the inescapable reality is that this is a fiction: there really is no such distinction. To give one example: our consciousness shuts down for hours at a time whenever we go to sleep. If you already accept the fairly entry-level utilitarian point that action and inaction are morally equivalent, there doesn’t seem to be any meaningful way to distinguish between killing a person in their sleep and not creating a new person.

        While it’s easy enough to just flatly reject any conclusion your moral reasoning produces that you don’t like, I think doing so makes it more difficult to defend other unintuitive conclusions you might want to draw, like “strangers are just as important as your family”.

        (And the repugnant conclusion is vastly overstated anyway, since it ignores opportunity costs. In reality, creating ten thousand barely-happy people is not remotely the most efficient way to generate utility from the resources it takes to create and sustain ten thousand such people.)

        • Ketil says:

          I think the most important fallacy is pretending that the action itself doesn’t affect happiness – total or average, or what have you. Clearly, if I know you will kill me in my sleep, that fact will significantly reduce my happiness. I think this is also the real distinction between “new” and “existing” people – it’s easier to select who gets put into this world than to actively remove those already in it.

          But selection incurs suffering for “new” people too: people with disabilities (Down’s syndrome, for instance) and their relatives tend to react strongly and negatively to the fact that many choose abortion when the fetus has the same condition – even more so when the practice is advertised or recommended.

        • Lirio says:

          It’s not just easy to reject moral conclusions you don’t like, it’s inevitable. No moral system thus far devised will stand up under all scenarios. Ultimately they all require hackish patches to work in anything like an acceptable manner, because most people place more weight on their inherent moral intuition than on whatever moral laws they have drawn up for themselves. Moral law mainly exists to justify and supplement intuition, not to replace it. The few people who do use it as a replacement tend to be seen as closed-minded, dogmatic, and frequently more than a little crazy.

          As it stands, one of the things that makes Utilitarianism so attractive to me is that it’s actually very easy to patch. It’s not a dictate from on high, and it’s not the ultimate proof of right and wrong. It is, by its own acknowledgement, simply a tool that helps us morally evaluate and assess our decisions. If a tool doesn’t work for a given scenario, then we just don’t use it; it’s that simple.

          Not to mention that if you have to come up with a completely artificial scenario to make a moral system fail, then it may be safely regarded as being fairly robust for all practical intents and purposes. Like, even if Utilitarianism says it would be good to add as many people as we can keep alive in a state just above utter misery. Does anyone actually want to do that? No? Then why are we talking about it?

    • phoenixy says:

      Surely this would upset a lot of the non-genocided people, though, lowering their utility.

      “No man is an island entire of itself; every man is a piece of the continent, a part of the main; if a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as any manner of thy friends or of thine own were; any man’s death diminishes me, because I am involved in mankind. And therefore never send to know for whom the bell tolls; it tolls for thee. “

      • Wrong Species says:

        I think this answer is kind of a cop-out. First off, it may or may not be right, but it’s avoiding the question. Anyone doing ethics should be willing to plan for the least convenient possible world.

        Second, this kind of caring is not the historical norm. Most people had no qualms about massacring other populations to benefit themselves. Our intuitions are plastic on this matter, and a justification that invokes the greater good could mean we would be troubled by it even less than we are now.

        Third, I think there’s a few good reasons to believe that it’s not right. Adam Smith had a good point on this:

        Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connection with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befall himself would occasion a more real disturbance. If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own.

        We didn’t evolve to care about humanity as a whole. We evolved to care about people we know. Our sorrow over other people’s misfortunes in distant lands is very abstract. Do you lie awake at night, unable to sleep, because there is a war going on in Syria? You may think it’s a terrible thing, but I doubt it evokes the emotional response that losing your finger would.

        Let’s go ahead and say that it would actually cause enough suffering among the people in the world that it doesn’t pass the cost-benefit test. There’s also the issue of time. Maybe it causes suffering in the short term, but people are especially insensitive to massive suffering that happened a long time ago. Does what happened to the Native Americans especially bother you? It’s actually pretty similar to my proposal above, although it was more about disease and decentralized killing than a coordinated genocide. But from a certain perspective, it worked out pretty well. The United States is one of the richest countries in the world, much richer than its neighbors to the south. And while people lament what happened to the Native Americans, we’re not exactly suffering over it. So if we could do it all again, but with a better understanding of diseases to avoid the travesty, would it be better to just let them all die again? Hell, Native Americans today generally have lower standards of living than others, especially on the reservations. Maybe if we went back in time, it would actually be more “justified” to coordinate the genocide better. Then we wouldn’t have as much suffering in South Dakota, where 90% of the people on this reservation are impoverished.

        At the end of the day, what would cause more happiness/suffering involves empirical work. We can’t know a priori if it would pass a cost-benefit analysis. But I think there’s a prima facie case for taking it seriously as a possibility that can’t be dismissed.

    • Kaj Sotala says:

      Are they willing to take the extreme view and endorse murdering certain people in the world? Should we go genocide some poor country and replace their populations with those from richer countries?

      As someone who works for the Foundational Research Institute and has suffering-focused utilitarianism forming a fair share of his moral system, let me answer this question:

      No.

      • HeelBearCub says:

        Your answer is actually far, far too short.

        You seem to indicate that you are offended by the question, but you need to justify why “humanely eliminate all predators” is different from “humanely eliminate all sub-standard people”.

        If following an idea to its logical end leads you to a certain place, you have to justify why you aren’t going further.

        • Kaj Sotala says:

          This strikes me as an isolated demand for rigor. Indeed, one of the implicit criticisms that people in this comment section seem to be wielding against philosophies such as utilitarianism is that morality is too complex to be distilled into explicit rules that could just be blindly followed. That’s also an argument that I’ve made myself, in the past.

          I could of course try to offer some kind of logical justification for my exact position, but trying to come up with one would be dishonest, because I haven’t formed that position using any 100% logical process. The way I’ve formed it is that there are parts of my mind which just scream “NO, THAT’S WRONG” if one suggests genocide as the right approach, and that’s that.

          If I tried to put this in a slightly more rigorous footing, I’d say that my morality is composed of many different sets of intuitions and preferences (you might notice that my original comment said that s-focused utilitarianism formed a “fair share” of my moral system, not all of it). My overall moral system is something like a moral parliament: if one of my moral systems recommends a certain course of action, and no other moral system seems to have major objections, then I go with that. If there are major objections from other moral systems that I also subscribe to, then I try to go with the kind of approach that all of them agree on.

          For example, a few days back I was considering the issue of whether I want to have children; several parts of my mind subscribed to various ethical theories which held that the idea of having them was a little iffy (since if you create a new mind, then that mind can experience suffering). But then a part of my mind piped up that clearly cared very strongly about the issue, and which had a strong position of “YES. KIDS”. Given that the remaining parts of my mind only had ambivalent or weak preferences on the issue, they decided to let the part with the strongest preference have its way, in order to get its support on other issues.

          So the most honest answer I can offer is “most of my mind disagrees with the idea of committing genocide, in a way that it doesn’t necessarily disagree with some other positions”. (Which might or might not include eliminating predators; I should note that I haven’t given the idea much thought, so I don’t actually have any strong position on that particular issue. I do support reducing wild-animal suffering in the abstract, but our current technological level doesn’t seem to allow for any very feasible ways of doing it; as others in the thread have noted, the naive approach of just killing off the predators wouldn’t lead to the actual results we’d want.)
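
          (A minimal sketch of one way that “moral parliament” rule could be formalized – the faction names, scores, and veto threshold below are all invented for illustration, not Kaj’s actual procedure:)

            # Each internal faction scores an option; a score at or below the
            # veto threshold counts as a "major objection" and kills the option.
            # Among the surviving options, the strongest net preference wins.

            VETO = -5

            def parliament(options):
                viable = {name: scores for name, scores in options.items()
                          if all(s > VETO for s in scores.values())}
                if not viable:
                    return None  # nothing every faction can live with
                return max(viable, key=lambda name: sum(viable[name].values()))

            options = {
                # The utilitarian part is conflicted (weak negative); the
                # part-that-wants-kids has a strong, clear preference.
                "have kids": {"s-focused utilitarian": -1, "wants kids": 9, "other parts": 0},
                "no kids":   {"s-focused utilitarian": 1, "wants kids": -6, "other parts": 0},
            }

            print(parliament(options))  # -> "have kids"; "no kids" draws a major objection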

          • Wrong Species says:

            If I tried to put this in a slightly more rigorous footing, I’d say that my morality is composed of many different sets of intuitions and preferences (you might notice that my original comment said that s-focused utilitarianism formed a “fair share” of my moral system, not all of it). My overall moral system is something like a moral parliament: if one of my moral systems recommends a certain course of action, and no other moral system seems to have major objections, then I go with that. If there are major objections from other moral systems that I also subscribe to, then I try to go with the kind of approach that all of them agree on.

            There’s nothing wrong with this answer. It’s similar to how I think. My ethical thought process can essentially be boiled down to “don’t ever bite the bullet”. But my question was more for utilitarians who dismiss other ethical considerations unrelated to happiness/suffering.

          • Randy M says:

            This strikes me as an isolated demand for rigor.

            I don’t think it is; EA is an out-growth of utilitarianism, isn’t it? A philosophy designed to follow cold logic and numbers to whatever unintuitive conclusions present themselves. In that case, someone saying “doesn’t your logic lead to Y as well as X?” should not be treated as laying a trap, but as genuinely trying to predict what adherents may advocate in the future.

          • Kaj Sotala says:

            EA is an out-growth of utilitarianism, isn’t it?

            Certainly EA is highly compatible with utilitarianism, but I don’t think you need to be a utilitarian to be an EA; I know of at least one deontologist who thinks that “EA is correct”. As long as you endorse some reasoning like “other things being equal, it’s better to help a lot of people a lot than the opposite”, you may end up at EA.

          • Bugmaster says:

            Are you saying that you do employ utilitarianism, but only in those cases that don’t violate your moral intuitions? How is that better than just following your moral intuitions?

          • Kaj Sotala says:

            Utilitarianism is just a formalization of some of my moral intuitions: following utilitarianism is following my moral intuitions.

          • Bugmaster says:

            @Kaj Sotala:
            Ok, so you brought up the example of how Utilitarianism indicated “no kids”, but your base intuitions indicated “YES KIDS”. Why did you decide to honor the latter but not the former?

          • Kaj Sotala says:

            Because the part-of-me-that-wants-kids had a very strong desire to have them, whereas the part-of-me-that-is-utilitarian was conflicted on what to think of having them, so the strong clear preference overrode the muddled preference. (For now at least. Given that I’m not actually romantically involved with anyone at the moment, the decision manifests more as “look for a partner who also wants kids” than as actually having them right away.)

          • Bugmaster says:

            @Kaj Sotala:
            Firstly, how can the Utilitarian part of you be “conflicted”? As far as I understand, the whole point of Utilitarianism (roughly speaking) is to write out (for real, on paper or in Excel, etc.) a detailed cost/benefit analysis of each option, then pick the one with the highest expected value – at least, as far as major life decisions are concerned. It’s not about gut feelings, it’s about multiplying numbers together.

            Secondly, if you always go with the option to which you feel the strongest emotional attachment, then doesn’t Utilitarianism become totally redundant?

          • Kaj Sotala says:

            Firstly, how can the Utilitarian part of you be “conflicted”?

            The utilitarian part was conflicted because if you try to think about the utilitarian value of having children in suffering-focused terms, there’s a whole lot of things to consider: they might suffer during their own lives, but on the other hand if you raise them well then they might by their actions end up reducing suffering more than they experience, but then again there are all kinds of reasons that might prevent them from or make them uninterested in doing so and you have to be willing to accept that, also one has to consider the fact that personality traits are hereditary and if people with altruistic tendencies abstain from having children then that will select against altruism in the population and that would be bad in the long run, also intelligence is hereditary and I’m smarter than average and it’s better to have more smart people in the population, but on the other hand genetic engineering technologies may become widely available within a reasonably short time which will mostly take care of that, though with that in particular there are also all kinds of political and cultural factors that may slow down the adoption, and…

            Etc. etc., a long complicated analysis with lots of factors going in different directions. In theory I could have written down some kind of a breakdown and overall analysis, but there would be so much uncertainty in all the estimates I could come up with for the numbers that I couldn’t put strong faith in any result. It didn’t look like the utilitarian method could have provided any clear result either way.
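
            (To illustrate: here is a toy Monte Carlo version of that analysis, with every distribution invented – the only point being that when each factor is this uncertain, the bottom line straddles zero and no result deserves strong faith:)

              import random

              # Each term stands in for one consideration above; all the
              # distributions are made up purely for illustration.
              def sampled_verdict():
                  childs_own_suffering = -random.uniform(0, 10)
                  # Suffering the child might reduce -- may never materialize.
                  suffering_reduced = random.uniform(0, 20) * random.random()
                  # Value of altruists not selecting themselves out of the gene
                  # pool, discounted by the chance genetic engineering moots it.
                  selection_effect = random.uniform(0, 5) * (1 - random.random())
                  return childs_own_suffering + suffering_reduced + selection_effect

              samples = [sampled_verdict() for _ in range(100_000)]
              share = sum(v > 0 for v in samples) / len(samples)
              print(f"'have kids' comes out positive in {share:.0%} of samples")
              # Hovers around 60% with these made-up numbers: nowhere near the
              # clear, compelling judgment that could override a strong gut call.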

            Secondly, if you always go with the option to which you feel the strongest emotional attachment, then doesn’t Utilitarianism become totally redundant?

            Not when the utilitarian reasoning produces a clear and compelling judgment, causing me to feel the strongest emotional attachment to that.

          • Bugmaster says:

            @Kaj Sotala:

            The utilitarian part was conflicted because if you try to think about the utilitarian value of having children in suffering-focused terms, there’s a whole lot of things to consider…

            That is the case with every major decision. If Utilitarianism cannot cope with such complexity, then it’s useless, and you should just go with your intuitions every time. You say,

            Not when the utilitarian reasoning produces a clear and compelling judgment, causing me to feel the strongest emotional attachment to that.

            But when does that ever happen? Are you just using Utilitarianism for relatively minor and simple decisions, like what to have for lunch? In that case, it probably doesn’t matter much which moral philosophy you end up applying.

          • Kaj Sotala says:

            Mostly utilitarianism comes into play when considering possible career paths: most of the career-related things I’ve devoted considerable effort to in my life are ones that have seemed promising on utilitarian grounds. And likewise, I’ve rejected various possible career paths that seemed otherwise cozy – comfortable, not too difficult to get into, reasonably paying, etc. – in part because they didn’t seem particularly high-impact.

  29. Sam Reuben says:

    The EA movement is rather interesting, and I can see how it can do good things, and I do very much think it’s staffed by good people – and yet, I feel completely disinclined to join. It feels a little like it’s playing around with what’s good, experimenting with and maximizing certain models of goodness, but not ever pressing in towards the heart of it. All that is noble and commendable, but if I were to live that way, I think I’d end up feeling hollow.

    What I mean is something like this: the core of effective altruism is utilitarianism, which (in its naive form) states that goodness is pleasure and evil is pain (the hedonistic thesis, shared by ethical egoism). It goes without saying that they similarly argue that the two are opposites. However, we can easily come up with examples that shake the opposite-thesis: what about a vaccine, which hurts but is good? The response is that the vaccine potentially stops even greater pain down the road, which grants it goodness, but you’ll notice that we’ve subtly shifted the definition of evil = pain to evil = statistically calculated likelihood of pain. If we didn’t, then we’d have to say that in the context of someone who was never exposed to the disease the vaccine was for, the vaccine would be mildly but unmistakably evil. The actuality of the situation doesn’t matter, only the statistical likelihood – which means we’ve already dived head-first into severe abstraction. The problem gets worse when you start looking at cases of toiling towards some goal: the person will experience pain in, say, exercise, but oftentimes they will feel happy while doing it. Or someone who’s hungry can be happy, if they’re really looking forward to a good meal, or… well, let’s not bring masochism into this.

    All of these problems can be answered from within utilitarianism, but that’s no surprise. Any thinker worth their salt can rationalize absolutely anything. What these problems do show is that utilitarianism as a model is pretty bad at figuring out moral cases at the individual level, and instead is good at solving them at the level it was designed for: the societal. (This is why effective altruism can be so effective, incidentally. It uses a powerful tool to do exactly what it was meant to do. Also incidentally, the effective altruists seem to be guided to their cause by a deontological principle of must-optimize-happiness or must-minimize-suffering.) As such, the effective altruists are doing some very useful things, but are really just working with one model of goodness.

    As for myself, I’ll admit I’m captivated by the idea of goodness being wisdom, in the greatest possible sense. That makes me a good philosopher, if we go by etymology. The commitment is, essentially, that true understanding (wisdom) is understanding of what is good, and that understanding what is good will result in one doing good, because it’s simply the obvious right choice. Wisdom, here, is far more specific than semantic knowledge: any child with good parentage knows that fighting is bad, but it takes real experience to truly understand that. If wisdom is goodness, then of course the path to goodness is through gaining wisdom, which has to be done through refinement, contemplation, and discussion. So I go around talking to people, hoping to gain wisdom or share it. It’s the easiest thing in the world to get me to talk about philosophy of any sort, because I really and genuinely believe it’s good.

    But although that’s the kind of life I imagine for myself, the effective altruist life seems quite good. They clearly have some of the same wisdom-commitments, given their attitudes towards learning and refinement, and my wish that they’d move a little further away from utilitarianism (or treat it as having limited use-cases) doesn’t reduce the respectability there. I guess it’d just be nice if they supplemented their talks about utility-optimization with some more general philosophy.

    • rlms says:

      I don’t think effective altruism really requires utilitarianism. You do need some sort of outward-looking and probably vaguely consequentialist moral system (it’s not for people who think their only obligations are to be virtuous themselves, or for nihilists), but although a lot of effective altruists buy into utilitarianism, and specifically the utility-maximisation version, it’s not necessary. I think most of the effective altruists I know personally work on a more intuitiony basis of just “trying to do good”, and I know some who have explicitly non-utilitarian moral systems.

      Basically, I think there are two main ideas in effective altruism. One is that you should try to do good efficiently, and that it’s OK to compare charities in the same way you’d compare other products. The other is that most people should spend a lot more money trying to do good than they do. Choice of moral system is only really relevant to the first part. Someone with a weird (from an EA perspective) moral goal like “convert a lot of people to Christianity” could easily use both ideas.

      • kokotajlod@gmail.com says:

        Agreed. I’m an EA and I’m not 100% a consequentialist and I’m certainly not a utilitarian. There are more good things in the world than happiness & more bad things than pain, and they interact in more interesting ways than “sum it all up.”

  30. Thecommexokid says:

    The guy on the right works for MealSquares, a likely beneficiary of technology that hacks directly into people’s brains and adds artificial positive valence to unpleasant experiences.

    Although I am a MealSquares consumer who first found the product through their advertisement on your site, I nonetheless live in constant joy and amusement at your perpetual ribbing over how bad you apparently think they are.

    • Scott Alexander says:

      I actually think they’re okay, they’re just fun to tease.

      • Ninmesara says:

        I find it awesome that you’ve written this sentence about them even though they pay you money to host their ad 🙂 I’m very undecided on whether this should increase my prior that you’re honest, or whether I should view it as a Machiavellian scheme of feigning honesty for people like me, to increase their priors (and you’ll be hosting the ad for free for the upcoming months to compensate for it).

  31. dank says:

    I think there’s a pretty strong argument that your decisions have a much bigger impact on your own happiness than anyone else’s, so you should first figure out how to be happy (without harming others) if you want to maximize total happiness. In that sense, “Stay at your job if it makes you happy” can be sound EA advice.

  32. Wrong Species says:

    The theater hosted a “fireside chat” with Bruce Friedrich, director of the pro-vegetarian Good Food Institute. I’d heard he was a former vice-president of PETA, so I went in with some stereotypes. They were wrong. Friedrich started by admitting that realistically most people are going to keep eating meat, and that yelling at them isn’t a very effective way to help animals. His tactic was to advance research into plant-based and vat-grown meat alternatives, which he predicted would taste identical to regular meat at a fraction of the cost, and which would put all existing factory farms out of business. Afterwards a bunch of us walked to a restaurant a few blocks down the street to taste an Impossible Burger, the vanguard of this brave new meatless future.

    I’ve had a theory that what we call “moral progress” is, for the most part, really just economic and technological development that allows us to extend our care to a wider circle of beings without it costing us that much. I’m sure someone could think of a counter-example, but it seems to apply in many situations, and it seems Friedrich has a similar thought. It means that we aren’t getting better as people; we just have more stuff with which to “buy morality points”. Abolitionism became prominent in industrialized countries before agrarian ones. Welfare states became prominent among richer countries, and soon enough we’ll probably replace meat with lab-grown meat. Advocates could try to convince millions of people that factory farming is something worth banning and start a large political fight, or they could quietly reduce the costs of a meat replacement until it’s more economically viable.

    • rlms says:

      Expanding the franchise is the counterexample that springs to mind. Ending capital punishment probably also counts, if you view that as progress.

      • Wrong Species says:

        I think capital punishment actually fits the pattern pretty well. A few hundred years ago, societies didn’t have enough money to keep people locked up in prison for a lifetime, so they usually did something like fining people for small transgressions and killing them for more serious offenses. We live in a society that has enough money to lock them up for years on end. Next step? Ankle monitors.

        You could be right on your first point though. I think social justice issues don’t work as well. The easiest argument against my hypothesis is probably gay marriage. What’s the economic development that allowed gay marriage to happen?

        • Deiseach says:

          What’s the economic development that allowed gay marriage to happen?

          Sufficient growth in the number of nice middle-class professional white gay men (white lesbian women are probably secondary in this) to be a bloc with economic (the pink pound/pink dollar) and voting (political campaigns, donations, supporting supportive candidates) clout. A common complaint I’ve seen (in what little I’ve seen online) about the whole gay marriage debate was that it was assimilationism and ‘selling-out’; middle-class gay white guys wanting to have the nice comfortable lifestyle and be accepted as part of cishet society, and throwing the black, other minorities, and trans queer folks who were disproportionately poor and sex workers under the bus in the name of respectability politics, e.g. see the criticism about the whitewashing and trans erasure in the Stonewall movie.

          Looking at my own country, the marriage equality amendment was pushed not alone by the Usual Suspects and the Labour Party but by Fine Gael, the right-wing, business-friendly (Thatcherism-light is a model that has never lost popularity amongst a strong core of both their support and politicians) and formerly strongly socially conservative party. It was publicly popular and didn’t cost them a penny while gaining them a lot of kudos, and the pendulum has swung to where it’s more vote-grabbing to be seen to be standing up to the Church than to be kissing the bishops’ rings, so they were all out there waving the rainbow flag and pushing for it (even though as one of the two parties in power in a coalition government, technically they were supposed to be neutral – that is, not advocating for one outcome over another – when running information campaigns during a constitutional referendum).

          Socially liberal? Yes. Economically liberal? No: they’ve now introduced a scheme for the unemployed that is run by a private for-profit company, a partnership between an Irish recruitment firm and an English company, the English half of which has a dodgy past in providing such services for the British government (allegations of fraud and fiddled figures).

          • spkaca says:

            Looking at my own country, the marriage equality amendment was pushed not alone by the Usual Suspects and the Labour Party but by Fine Gael

            It was the same in the UK, with gay marriage brought in by a Conservative-led coalition IIRC.
            As for the OP, I think this is an under-rated fact of history. It is an error (within certain limits; it is culpable to persist with a bad institution once it’s become clearly anachronistic) to condemn earlier generations for bad institutions that they lacked the means (economic, technical or even ideological) to do without. And I wonder if people in the future will condemn us for our evil ways, without giving us the charity of understanding that we don’t know any better.

        • sconn says:

          Social security?

          I mean, many people throughout history procreated not because they wanted to, but to have someone to take care of them in their old age. As the pressure to procreate lessened (since we have social programs paying for retirement), people started thinking about what they wanted to do rather than what they had to. So we marry for love, we marry later, we marry and stay childless … and at that point banning gay marriage just feels a little silly, since nobody else is doing the marriage-only-for-survival-and-kids thing.

  33. A1987dM says:

    Triggering such a decay might require extremely high-energy collisions — presumably more than a million times those found in current particle accelerators — but it might be possible.

    Collisions with energies more than a million times those found in current particle accelerators occur every day in interactions of cosmic rays with the atmosphere, and no such decay has occurred.

    (Yeah, yeah, I know, anthropic principle, quantum immortality, etc., but still.)

    • Scott Alexander says:

      “More than a million” includes a lot of different numbers.

      • A1987dM says:

        It does, but if somebody said that the market cap of Bitcoin is “more than a million” dollars I’d still guess they have little idea of how much the market cap of Bitcoin is and probably way underestimate it, even though their statement is technically correct.

        (But I was wrong anyway, if the guy meant the energy in the center of mass of the collision rather than in the lab frame: those in interactions of the highest-energy cosmic rays observed with the atmosphere are a “mere” few tens of times those in the LHC. Still not something that could feasibly be achieved in accelerators in the next few decades short of spending a sizable fraction of the world GDP on it, though.)
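
        (For anyone wondering where the “few tens of times” figure comes from: for a cosmic ray hitting an atmospheric nucleon at rest, the energy available in the center-of-mass frame grows only as the square root of the lab-frame energy. Taking the commonly cited highest observed cosmic-ray energy of roughly 3×10^20 eV as a rough sketch:

        $$E_{\mathrm{cm}} \approx \sqrt{2\,E_{\mathrm{lab}}\,m_p c^2} \approx \sqrt{2 \times (3\times 10^{20}\ \mathrm{eV}) \times (9.4\times 10^{8}\ \mathrm{eV})} \approx 7.5\times 10^{14}\ \mathrm{eV} \approx 750\ \mathrm{TeV},$$

        or roughly sixty times the LHC’s 13 TeV.)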

  34. pontifex says:

    One of the things I wonder about sometimes (and I’m sure I’m not the only one) is whether some amount of suffering is necessary, or even good.

    However, I have never accused people who wanted to end suffering of wanting to exterminate species, destroy planets, or alter fundamental physics to end suffering. That always seemed like a strawman position to me. If people are really taking it seriously, then I think it’s very worrying.

    • Scott Alexander says:

      People mostly don’t. The physics thing is just one guy, and the anti-predator people would prefer to just restrict the predators to zoos or something.

      • HeelBearCub says:

        Cats, dogs, and a number of other pets are predators. So they need to go?

        Every shark and snake? All of the spiders? Most fish? Alligators and crocodiles and frankly most reptiles? A huge subset of the birds?

        the anti-predator people would prefer to just restrict the predators to zoos or something.

        I mean, frankly, this is not reassuring.

        • Naclador says:

          Also, have they ever considered that a predator killing a sick animal might actually decrease overall suffering?

        • Ozy Frantz says:

          You should definitely keep your cats inside so they can’t torture birds to death. Death by cat is an unusually unpleasant death. This is also important from a conservation perspective: in many areas cats are a major predator of endangered birds.

          Keeping indoor cats would be perfectly ethical in the Vat Meat Future. I haven’t seen anyone run the numbers on indoor cat ownership in the present, but I’d suspect it’d work out net-positive for people who particularly care about having a cat (whether because of fondness for the species or a love for their current cat). I’d suggest switching to a beef-based cat food if possible and avoiding chicken. For future pets, consider pigs, which have similar personalities to cats (although more intelligent, and they are not a suitable pet for apartments or households with small children).

          (Not speaking for my employer.)

        • rlms says:

          On the one hand, wild animal suffering stuff is weird and I’m skeptical about it on many levels. On the other hand, normal conservationists are also weird! They seem to choose which species must be protected and which must be culled based on a combination of nativeness, cuteness, interaction with other species, and probably some other factors, in a way that I (as an outsider) cannot understand at all. Throwing suffering in there likely wouldn’t make things any more confusing.

          • Randy M says:

            I think the difference is between choosing which species to focus preservation efforts on, and choosing which species to focus elimination efforts on.

          • HeelBearCub says:

            On the other hand, normal conservationists are also weird! They seem to choose which species must be protected and which must be culled based on a combination of nativeness, cuteness, interaction with other species, and probably some other factors, in a way that I (as an outsider) cannot understand at all.

            I don’t think this is fair, as frequently conservationists are lambasted for caring about a rare species of snail, or some other thing. Mostly they are just doing threat assessment and trying to avoid environmental degradation where they can. They have no means of stopping most degradation, so it seems arbitrary to you.

            I think it may look inscrutable because you haven’t cared to research the details.

          • rlms says:

            @HeelBearCub
            “I think it may look inscrutable because you haven’t cared to research the details.”
            Yes, probably (although I think there is some randomness). They do often ignore cuteness in choosing what they want to do, although I think it still plays a part sometimes, but it’s definitely a factor in what public opinion will let them get away with. No-one is going to tell you to do a trap-release program for rats, but announcing you’re going to poison hedgehogs won’t go down well.

          • keranih says:

            No-one is going to tell you to do a trap-release program for rats, but announcing you’re going to poison hedgehogs won’t go down well.

            Oh, you sweet summer child.

          • rlms says:

            @keranih
            Mice are cute! At least, they cause strong disgust reactions less frequently than rats, whose existence the humane society seems to be ignoring.

          • Deiseach says:

            Mice are cute!

            Mice are indeed cute, and I’ve often gone “Aw, look at the widdle paws!” as I disposed of their corpses after the poison I’d put down.

            I will still put down poison, however, whenever I hear scratching in the walls or see their droppings around the house, cute little corpses or no.

        • registrationisdumb says:

          It would also cause environmental disasters of hilarious proportions. Kill off all the predators of deer? Deer population explodes, they decimate the ecosystem, spread stupid amounts of disease, then there is an exponential increase in “suffering.” Repeat this for every prey animal.

          Predators fill an important niche in the environment, and anyone who seeks to remove them is, from a utilitarian standpoint, pure evil.

      • pontifex says:

        I think this highlights one of the biggest problems I have with utilitarianism, namely, that it seems excessively teleological.

        So let’s say Africa has 100 million people, and 50% of them have malaria. Then we can just exterminate everyone there and increase the population of healthy people in some other country. And now we have more total utils, and so it’s “good”. And in fact, not doing it is “bad”. Martian logic, indeed.

        Some people in this thread have been saying that allowing people to have negative total utility causes this problem. But actually, it’s a problem in general even if people are assumed to always have non-negative utility. Rather than helping people in poor countries, why not just tile the world with the maximum number of ultra-rich, ultra-happy first world people?

        I apologize if this seems awfully naive. There must have been a lot of philosophical debate over these points? Surely there must be people who extended utilitarianism to address them?

        • Dragon God says:

          Sorry, who told you that having malaria makes life not valuable?
          I’m from Nigeria (my country alone has 180+ million people, and Africa in general has 1 billion+); if I had malaria, I would still prefer to live. This goes for most Africans, as far as I can tell.

          • pontifex says:

            My point isn’t really about malaria. My point is that if all we consider is “reducing total suffering” then we are led to some very bizarre conclusions. There has to be more to morality.

          • holomanga says:

            It doesn’t, but it makes life less worth living (i.e. you would prefer not having malaria to having malaria, and, if an evil genie forced you into it, you might sacrifice some time from your lifespan in exchange for not having malaria). This one document by the WHO lists malaria episodes with a weight of 0.191, such that 1 year of having malaria is equivalent to 2.3 months of being dead.
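            (To spell out the arithmetic behind that equivalence, assuming the standard DALY convention that a disability weight of 0.191 counts each year lived with the condition as 0.191 years of healthy life lost: 0.191 × 12 months ≈ 2.3 months, which is where the figure above comes from.)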

            If you’re a weird utilitarian then killing all these people and replacing them with happier people might end up scoring more points. This causes everyone else in every community to say “no, your moral system is incorrect”, and then they all think very hard about better moral systems that have their own bizarre problems.

            (One solution is that you should only do overtly bad things if it will result in significantly more good, because unethical financial practices and genocide have hard-to-measure indirect harms that sometimes outweigh whatever good there could be done. Another is that death has a very high upfront utility cost, and that for mumble mumble there isn’t the same cost for not deciding to create a person. The solution most people use is to adopt a deontological ethical system and stop thinking about it too hard.)

            Figuring this out is important for things like allocating healthcare expenditure, deciding whether you should have children, euthanasia, programming general strong AI singletons to kill Moloch, and what to do with life in the next ten billion years.

  35. fugasidhe says:

    I’m glad that people are thinking outside the box about morality. But the phrase that comes to mind regarding a lot of this is “Martian logic”. The logic may be sound, but if the axioms are disconnected from reality, or too myopic in scope, the conclusions can be tantamount to madness. I’ll address one area where I have some expertise: the importance of predators for ecosystem health.

    FWIW, driving predators extinct has been tested. It was the official wildlife management strategy of the US Government until about 80 years ago. The extirpation of wolves and grizzlies from most of the US led to overpopulation and disease among deer, elk, and moose. It also led to a marked decrease in biodiversity among tree species, and very likely other ecological effects, apparent to field biologists and natural historians, that we have not even figured out how to measure yet. Some of those afflictions, such as winter ticks, do not affect us but are now endemic among moose: you can track them in Maine by how they bleed in the snow. Others, such as Lyme disease, do infect a significant number of humans every year and represent a chronic source of healthcare expenditure.

    What’s more, it’s not even effective! There are other aggravating factors in the above, aggressive logging for one, but it is safe to say that far more wild animals die from habitat loss and pollution than from predation in many parts of the US. A recent PNAS study quantified this: even populations of species listed as “Least Concern” are shrinking at present. Killing predators to reduce prey suffering? May as well kill people suspected of being terrorists and do nothing to reduce deaths from heart disease or automobile accidents.

    Finally, the worms they’re picking up and saving on the sidewalk? Almost certainly European red worms, an invasive species that has wreaked havoc on soil quality. I’m not saying what they’re doing is wrong… it’s just very ironic.

    So, to me this sounds a lot like paperclip optimization… it is not a stretch for intelligent people to see that local optimization risks repainting deck chairs on the Titanic. A more moderate stretch is to realize that a lack of experience and familiarity with a complex system almost certainly guarantees that one’s “optimizations” will at best be local. As Feynman said, “Perhaps it is that the horizons are limited which permit such people the delusion that the center of the universe of interest is man.”

    • Deiseach says:

      European red worms, an invasive species that has wreaked havoc on soil quality

      Fascinating! Can you tell me more? I had no idea earthworms were a threat in the USA. I looked it up online and it seems that there are no native earthworms (astounding to me, as over here we are told in school how important earthworms are for soil quality, not that they harm it, quite the contrary), but they also seem to consider that in the colder northerly regions they are no threat; warm temperate California is of course another question.

      • Zodiac says:

        Yeah, I remember as a child digging for worms with some other kids for some neighbours, to put them directly into the plots. That was probably just to keep us busy, but worms were labelled as very beneficial for soil.
        Actually, I even remember some science-for-kids show that explained exactly how worms are good for the soil. I’ll have to try to find that again.

      • anonymousskimmer says:

        There are earthworms native to North America, but the glaciers drove them south. They’ve only slowly been moving northward again, but the boreal biome had already adapted to not having them around. The introduction of non-native species is now changing that boreal environment very fast.

        I read this NPR article a while back about the issue: http://www.npr.org/templates/story/story.php?storyId=9105956

    • Progressive Reformation says:

      May as well kill people suspected of being terrorists and do nothing to reduce deaths by heart disease or automobile accidents.

      Sorry, this is a little bit off topic and I’m indulging one of my many pet peeves, but it always bugs me when people compare terrorism and auto accidents. These risks are nothing alike, because the first is a fat-tailed risk and the second a thin-tailed one (to use Talebian terminology, terrorism is an Extremistan risk and auto accidents are a Mediocristan risk).

      Auto accidents kill a very predictable number each year, and if you take a weighted average of deaths in past years, you get a very accurate picture of the number of total deaths you can expect next year. On the other hand, a single extreme event – 9/11 – accounts for the vast majority of the terror-related deaths in the US since basically forever (2983 of the 3024 deaths from 1975 to 2015). This distinction is important because it’s much easier to assess the risk associated with a predictable phenomenon like auto deaths than with a lumpy, single-event-driven phenomenon like terror; it also means that the “average death toll per year” of terrorism is likely to be a gross underestimate of the future risk. One gigantic event can overshadow decades of “calm”.

      To put it in more concrete terms, it is entirely possible (while unlikely) for terrorists to kill a million people in the US next year (scenario: obtain nuclear weapon, detonate in large city), despite the low average “kills-per-year” in the past; whereas having more than 50,000 auto deaths next year is basically impossible, despite the fact that about 35,000 deaths were recorded in 2016.
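      (A minimal simulation sketch of this thin-vs-fat-tail point – all parameters are made up for illustration, not real statistics:

          import random

          random.seed(0)

          def auto_deaths():
              # Thin-tailed: yearly totals cluster tightly around the mean.
              return max(0, int(random.gauss(35_000, 1_000)))

          def terror_deaths():
              # Fat-tailed: a few attacks per year, with sizes drawn from a
              # heavy-tailed Pareto distribution (illustrative parameters).
              return sum(int(random.paretovariate(1.1))
                         for _ in range(random.randint(0, 3)))

          years = 10_000
          auto = [auto_deaths() for _ in range(years)]
          terror = [terror_deaths() for _ in range(years)]
          for name, xs in (("auto", auto), ("terror", terror)):
              print(name, "mean:", sum(xs) / years, "max:", max(xs))

      The simulated auto maximum lands a few percent above its mean, while the terror maximum dwarfs its mean – which is exactly why a historical average is a decent forecast for the first risk and a gross underestimate for the second.)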

  36. BlindKungFuMaster says:

    “The Righteous Mind” by Jonathan Haidt provides a nice framework for thinking about phenomena like EA. It seems to me that these people effectively have a moral matrix with just one (dominating) dimension: the care/harm dimension. Small wonder they occasionally arrive at insane conclusions. Or break into tears. And no, other people don’t automatically become altruists once they have sorted themselves out. They might value very different things a lot more than reducing suffering.

  37. eelcohoogendoorn says:

    And yet once you start thinking about what morality is – really thinking, the kind where you try to use mathematical models and formal logic

    As a mathematician myself, I hope my cringing at a sentence like this carries at least a little authority. Though it likely has more to do with my staunch belief in ethical emotivism.

    I think it is obvious from the start, without any fancy mathematical modelling, that any kind of utilitarianism is going to lead to silliness like concern about zooplankton. Whatever merits you may see in such a theory (making you look non-threatening to other moral actors is my best guess), accurately modelling how people actually experience this whole morality thing certainly can’t be part of the motivation.

    Morality arises in any system of interacting agents. ‘Utilitarianism’ is just one of the many silly stories they tell each other in an attempt to influence one another; to signal their position and to try and build coalitions around it. If you find that it attracts a lot of weirdos, my guess is this might be because ‘appeasing a magical sky daddy’ sounds like an empirically defensible rallying cry by comparison, in terms of how people actually think and behave.

    Any moral theory that does not explicitly acknowledge the different (often self-interested!) values different agents obviously have is just too silly for me to consider.

    Orwell on Gandhi phrases it well:

    Close friendships, Gandhi says, are dangerous, because “friends react on one another” and through loyalty to a friend one can be led into wrong-doing. This is unquestionably true. Moreover, if one is to love God, or to love humanity as a whole, one cannot give one’s preference to any individual person. This again is true, and it marks the point at which the humanistic and the religious attitude cease to be reconcilable. To an ordinary human being, love means nothing if it does not mean loving some people more than others. The autobiography leaves it uncertain whether Gandhi behaved in an inconsiderate way to his wife and children, but at any rate it makes clear that on three occasions he was willing to let his wife or a child die rather than administer the animal food prescribed by the doctor. It is true that the threatened death never actually occurred, and also that Gandhi — with, one gathers, a good deal of moral pressure in the opposite direction — always gave the patient the choice of staying alive at the price of committing a sin: still, if the decision had been solely his own, he would have forbidden the animal food, whatever the risks might be. There must, he says, be some limit to what we will do in order to remain alive, and the limit is well on this side of chicken broth. This attitude is perhaps a noble one, but, in the sense which — I think — most people would give to the word, it is inhuman.

    I love myself more than zooplankton. I love myself more than chickens. And we are talking orders of magnitude here, by any empirical measure. Moreover, I love myself more than I love you; and that last bit in particular is of course not supposed to be uttered in polite conversations about morality, no matter how empirically unavoidable it is. Yet the number of chickens I eat and the number of people I eat are as little a cosmic accident as they are a consequence of fear of a sky daddy or the result of utilitarian calculation. Take it from the person who makes such decisions every day: it is entirely a consequence of selfish calculation.

    And whatever stories people tell me about morality, I don’t doubt for a second that it works the same in their minds too.

    • Jiro says:

      Moreover, I love myself more than I do love you; and that last bit in particular is of course not supposed to be uttered in polite conversations about morality, no matter how empirically unavoidable it is.

      Perhaps we can model EA as people who have not figured out that they should not take ideas seriously.

    • Sam Reuben says:

      I agree that the highlighted sentence is worth a bit of a wince, but perhaps for different reasons. I understand the “mathematical models and formal logic” part of the sentence to apply to thinking, rather than to morality, and that’s a dangerous (and in many ways ineffective) philosophical road to go down. Mathematics is a highly sanitized and streamlined, and thus blind and overbearing, mode of thought, and to say that it’s “real” thought is to say that the only valid modes of thought involve stripping away as many important elements as one can manage at the get-go so that reality properly fits to the equation. This is the economics bug, and insofar as it’s been addressed, it’s been addressed by using common intuitions and linguistics-oriented thought to critique mathematical models (and then refine them), not by making the models more mathy.

      I do feel like your critique of utilitarianism goes just a little astray, though. You treat utilitarianism as a descriptive model of what we commonly term “moral behavior,” when it’s instead a normative model. I don’t believe that if you took any one person at that conference and asked them whether people all act according to utilitarianism, they would say yes. They would say people ought to be utilitarian, and that that’s why they themselves are utilitarian, but they wouldn’t say that it accurately describes how people behave.

      Utilitarianism is, at its heart, one of those mathematical models I mentioned earlier, and as such it has that one clear weakness: it tries to sanitize all the confusing layers upon layers of rightness and wrongness contained in human ethical intuitions and turn them into a simple linear axis of pleasure versus pain. I believe this is what you’re commenting on, for the most part, and I do think this is a serious limitation. A perfect utilitarian normative model would be quite logical and thorough, but it would not reflect much of what makes moral thought what it is. This is shown by the whole repugnant-conclusion stuff, or the opposition in this thread to destroying the universe. People have an understanding that, even if the math supports killing people or becoming a comic-book villain, there are some things you just don’t do, and this informs utilitarianism rather than being one of the model’s mandates.

      So yeah, I guess my advice would just come down to: try and adjust your criticism away from saying that utilitarianism doesn’t capture how people act, because that wasn’t ever the goal. Instead, it would be helpful to say how it doesn’t capture how people should act, because that is its goal.

      • Jiro says:

        How about “utilitarianism doesn’t capture how people think they should act”?

      • eelcohoogendoorn says:

        You treat utilitarianism as a descriptive model of what we commonly term “moral behavior,” when it’s instead a normative model.

        I know that they are normative claims to most utilitarians; but ones that I have a hard time taking seriously when even its most vocal advocates cannot follow through on its implications. I am personally more interested in trying to understand morality as a phenomenon than in the ought-claim of the day. And insofar as I am going to commit myself to any moral oughts, I’d like to do that from a position of understanding of what morality is.

        it tries to sanitize all the confusing layers upon layers of rightness and wrongness contained in human ethical intuitions and turn them into a simple linear axis of pleasure versus pain. I believe this is what you’re commenting on, for the most part, and I do think this is a serious limitation.

        I think the one-dimensionalization aspect isn’t actually such a terrible thing; at least on an individual level. I bet you can model the morality of most animals pretty well from a purely one-dimensional selfish-gene perspective. Humans might be a little more complex, and could not be further from any evolutionary equilibrium, so they are probably full of notions that will not stand the test of time; but the general idea applies.

        The true problem as I see it is that there are as many different such utility functions as there are actors. Navigating this complexity, and finding a mode of interaction that doesn’t end in mutually assured destruction, is what I see as the hard and interesting part of morality; and also the part that utilitarians have a habit of trying to pretend does not exist.

    • Mr Mind says:

      There has been fruitful mathematical exploration of morality, e.g. in iterated prisoner’s dilemmas and their spectral graph properties. These allowed us to understand, for example, that altruism is what selfishness looks like once you incorporate the externalities of selfish behavior (a toy version is sketched at the end of this comment).
      Utilitarianism is also pretty inescapable, per the VNM rationality theorem.
      So I think math has very interesting things to say about morality.
      Problems arise when trying to extrapolate from those models: instead of saying “every coherent agent has a utility function”, one adds “and this is it.”
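      A minimal sketch of that iterated-dilemma claim (a generic textbook-style toy, not any particular published model; the strategy names and payoffs are the standard ones):

          # Toy iterated prisoner's dilemma with the classic payoff matrix.
          # PAYOFF[(my_move, their_move)] = (my_points, their_points).
          PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
                    ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

          def always_defect(opponent_moves):
              return 'D'

          def tit_for_tat(opponent_moves):
              # Cooperate first, then copy the opponent's last move.
              return opponent_moves[-1] if opponent_moves else 'C'

          def grudger(opponent_moves):
              # Cooperate until the opponent defects once, then defect forever.
              return 'D' if 'D' in opponent_moves else 'C'

          def match(a, b, rounds=200):
              seen_by_a, seen_by_b = [], []  # each side sees the opponent's moves
              score_a = score_b = 0
              for _ in range(rounds):
                  move_a, move_b = a(seen_by_a), b(seen_by_b)
                  pa, pb = PAYOFF[(move_a, move_b)]
                  score_a, score_b = score_a + pa, score_b + pb
                  seen_by_a.append(move_b)
                  seen_by_b.append(move_a)
              return score_a, score_b

          strategies = [always_defect, tit_for_tat, grudger]
          totals = {s.__name__: 0 for s in strategies}
          for a in strategies:
              for b in strategies:
                  if a is not b:
                      score, _ = match(a, b)
                      totals[a.__name__] += score
          print(totals)  # the reciprocators come out well ahead of the defector

      Over repeated interactions the reciprocal strategies outscore unconditional defection – the “altruism from selfishness plus externalities” point in miniature.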

      • eelcohoogendoorn says:

        In 1947, John von Neumann and Oskar Morgenstern proved that any individual whose preferences satisfied four axioms

        I had to look up VNM, but I stopped reading right there.

        Yeah, math and formal logic can be useful in exploring the consequences of certain postulates, and the iterated prisoner’s dilemma is a good example of that; although it does not tell us anything that even creatures without a neural system hadn’t figured out intuitively.

        But broadly speaking, the real sticking points in ethics are rarely a lack of deductive prowess – a failure to deduce the consequences of the facts or hypotheticals.

        Morality isn’t going to be ‘solved’ any more than politics is.

      • Kaj Sotala says:

        Utilitarianism also is pretty inescapable, as per VNM-rationality theorem.

        Utilitarianism as an ethical theory is very different from having a utility function in the VNM sense. You could for instance have a totally selfish egoist who only cared about his own well-being, and who had a VNM utility function which defined exactly what kinds of personal well-being he valued; but nobody would call that person a utilitarian in the ethical sense.

      • publiusvarinius says:

        Utilitarianism also is pretty inescapable, as per VNM-rationality theorem.

        Kaj Sotala pointed out how each agent having a utility function does not imply anything about the moral theory of utilitarianism.

        I will point out something different: The preferences of most animals (including some humans, think religion) on this planet do not satisfy VNM-continuity in any sense! Cows do have preferences, but no significant capability for probabilistic reasoning, and hence no utility functions.
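        (For reference, the continuity axiom being invoked – standard statement, paraphrased from memory: for any lotteries with L ⪯ M ⪯ N, there must exist a probability p ∈ [0, 1] such that the mixture pL + (1 − p)N is exactly as good as M. Supplying that trade-off probability is precisely what an agent with no capability for probabilistic reasoning cannot do.)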

        • briantomasik says:

          > Cows do have preferences, but no significant capability for probabilistic reasoning

          It depends how you define “significant”, but it seems that mammals (and probably many animals) have brain mechanisms for computing probabilities and expected rewards. Here’s one example from “Coding of Reward Probability and Risk by Single Neurons in Animals” (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3190139/):

          Many neurons in the orbitofrontal cortex (OFC) appear to code reward probability independent of other task-relevant information such as future action, sensory information, or other reward-related parameters. […] van Duuren et al. (2009) investigated rat OFC responses by pairing different odors with 0, 50, 75, and 100% chance of receiving a rewarding outcome (a food pellet). During the course of one trial, rats were trained to sample an odor for 1.5 s, then proceed to a reward delivery port where they waited for 1.5 s until the outcome was delivered. A number of neurons coded the probability of the reward during the waiting phase (before food was delivered) with increasing or decreasing firing rates. A small number of neurons were found to respond to reward probability in this manner during the movement from odor sampling to reward delivery ports and also after the reward was delivered.

    • Dragon God says:

      I have a system called preference utilitarianism.
      Some axioms:
      Animals have 0 moral relevance.
      Every human’s capacity to derive utility is normed to the interval -1 <= x <= 1.

      I think it passes your main criticisms.

      • rlms says:

        Every human? A fetus or braindead person in a coma (or, to be pedantic, a generally dead person in a coffin) has the same moral relevance as an intelligent, conscious, moral person?

  38. Kaj Sotala says:

    As someone who works for the Foundational Research Institute, I’m a little worried that people are going to get the takeaway of “FRI are those guys who want to destroy the world”. So to clarify:

    I don’t want to destroy the world. In fact, I have previously actively participated in efforts to prevent the world from being destroyed, and written papers on AI risk. This fall, I’ll also participate in a two-month research program on existential risks.

    I do have – and have had for a long time – the intuition that preventing extreme suffering is the most important priority. To the best that I can tell, much of this intuition can be traced to having suffered from depression and general feelings of crushing hopelessness for large parts of my life, and wanting to save anyone else from experiencing a similar (or worse!) magnitude of suffering. I seem to recall that I was less suffering-focused before I started getting depressed for the first time. I guess you might say that I started caring about suffering as the most important thing, once I properly realized just how bad suffering can be.

    Since then, that intuition has been reinforced by reading up on other suffering-focused works; something like tranquilism feels like a sensible theory to me, especially given some of my own experiences with meditation which are generally compatible with the kind of theory of mind implied by tranquilism. That’s something that has come later, though.

    But none of this means that I would only value suffering prevention: I’d much rather see a universe-wide flourishing civilization full of minds in various states of bliss, than a dead and barren universe. My position is more of a prioritarian one: let’s first take care of everyone who’s experiencing enormous suffering, and make sure none of our descendants are going to be subject to that fate, before we start thinking about colonizing the rest of the universe and filling it with entirely new minds.

    • Jiro says:

      To the best that I can tell, much of this intuition can be traced to having suffered from depression and general feelings of crushing hopelessness for large parts of my life, and wanting to save anyone else from experiencing a similar (or worse!) magnitude of suffering

      I’d think that’s a reason to avoid fighting suffering. Your own experiences have biased you about how bad suffering is. It’s like someone who keeps a year of food in his basement because he had to go without food at times when he was a kid, or who checks where his keys are 20 times a day because he once forgot them.

      • briantomasik says:

        Some of the life experiences that make us unique we choose to keep as intrinsic moral values, while others we disregard. If we didn’t keep any of the “biases” that our development instilled in us, we might be paperclip maximizers instead. My moral biases are what make me me.

        • Joe says:

          Do you not think there’s any argument that could cause you to revise these intuitions, then?

          • briantomasik says:

            It’s possible but maybe not super likely (unless my values drift with age as they often do for many people). I tend to take it as my bedrock starting point that “extreme suffering is more morally important than anything else”, because this feels completely obvious to me at an emotional level (and has felt that way for about a decade now).

            My views can and do change about what exactly “suffering” is, how to do interorganismal comparisons of utility, how much weight to give to different kinds of minds, etc.

      • Kaj Sotala says:

        You could fairly make an argument that my experiences have biased me about something like “how bad the suffering typically encountered by people in their lives is”, but I don’t think it makes sense to say that it has biased me about how bad suffering can be. If I’ve experienced suffering that’s worse than what the typical person ever experiences, doesn’t that make me better informed about the potential seriousness of suffering than the person who hasn’t experienced the same?

        And while I acknowledge that I can’t accurately judge the amount of suffering that happens in a typical person’s life, that’s not really relevant, because I’m specifically focused on preventing suffering that’s as bad or worse than what I experienced. If the typical person doesn’t experience that, then that’s great – but a lot of people do! Severe depression isn’t an exceedingly rare condition by any means, and it’s just one of the many things that can cause intense suffering. (I wouldn’t actually want to call my suffering “extreme suffering”, because even though it was bad, it was still nowhere as bad as it can get. I’ve never lived in a country torn by civil war where my parents would have been murdered so that I could be forcibly conscripted to be a child soldier, for example.)

        So a better analogy than the guy who keeps food in store because he had to go without food once when he was a kid, would be a guy who acknowledges that he’s mostly safe from a famine and so are most other people in the West. But he also acknowledges that a lot of people in the world still aren’t, so he has decided to take up a career in an organization devoted to ending world hunger, so that people in famine-torn countries could eventually be as well off as he is now.

        • Jiro says:

          Your experiences have biased you about how bad suffering is compared to other considerations, including nonexistence (unless you want to trivially define suffering such that all considerations fall under suffering).

          Also, if you’re going to prevent suffering that is as bad as or worse than what you’ve experienced, that biases you on the torture-versus-dust-specks scale, and you may end up reducing the number of people who suffer as badly as you did while increasing the total suffering.

          • Kaj Sotala says:

            How are you distinguishing “bias” from “anything that would allow a person to make any judgment at all on which answer to prefer” (on e.g. questions like torture vs. dust specks)?

    • MuncleUscles says:

      let’s first take care of everyone who’s experiencing enormous suffering, and make sure none of our descendants are going to be subject to that fate, before we start thinking about colonizing the rest of the universe and filling it with entirely new minds.

      Thank you, this is exactly how I feel. It seems to me deeply immoral to keep creating new conscious beings before we can ensure they don’t have to experience the kind of deep abject suffering that depression can be.

    • Progressive Reformation says:

      On the other hand, we might want lots of sentient things so they can feel the joy that life has to offer. I’m not favorably disposed to someone who suffers depression and concludes that “suffering outweighs joy, let’s postpone the creation of lots of sentient minds until we’ve solved suffering”. That line about “reading up on other suffering-focused works” also troubles me, because it sounds like you are reading things that reinforce your existing moral perspective; I’d recommend trying to read things that emphasize the joy of life, since that would be the opposing viewpoint.

      I understand that suffering can be really, unbearably awful; but the flip side is that joy can be absolutely, incredibly wonderful. To experience only one extreme and then use this to agitate for such far-reaching changes (as laid out in your last paragraph) is irresponsible at best. And, I’m sorry but what you (and briantomasik) have written here absolutely reinforces my impression that the Foundational Research Institute are “those guys who want to destroy the world”, or something with a similar moral valence.

      I don’t want to be too harsh on you, briantomasik, or FRI as a whole, since you clearly have good intentions, but it’s very hard to not have an extreme negative response to this stuff (to the point of wondering if there’s some sort of anti-FRI charity I could donate to). So at the very least I want to clearly explain why all of this is poisonous to me. Let me put it like this: if I found out some past incarnation of FRI had tried to prevent my birth (convince my parents not to have me, for example) in order to “save” me from suffering, I’d be really mad at them.

      And a lot of the anger would be because of the reasoning, not just that they tried to prevent my existence. If someone had tried to talk my parents out of having me because she was concerned about their financial position, I wouldn’t be that mad at her. Having a kid is a big decision, and lots of people decide not to for lots of valid reasons. But the idea that they would try to prevent my existence for my own good, because I might hypothetically experience suffering, is appalling to me, and it feels like a very short step from there to Omnicidal Maniac.

      • Kaj Sotala says:

        For what it’s worth, I would never try to convince someone that they shouldn’t have children because there was a chance that the children might suffer. Like I mentioned in another comment, I’m currently feeling that I’d like to have children of my own, one day.

        I fear that I might have given a misleading impression of what I meant with the comment about creating new minds. To be clear, I didn’t mean to say anything about the morality of having children here and now. Rather I was thinking more about papers like this one, which argue that every year by which we delay space colonization represents a massive loss of opportunity, because we’re not creating minds which could be created and which could live meaningful lives.

        And again, it’s not even that I would disagree with the paper entirely. I do agree that once we’re in a position to ensure that those lives will actually be happy and meaningful ones, then yes, let’s pursue space colonization and fill the universe with them. And also yes, let’s work to make sure that humanity won’t be destroyed before that, so that we’ll actually have the opportunity to create all those valuable and worthwhile lives.

        It’s just that… I don’t think it makes sense to say that the creation of a huge number of new lives is an overriding moral good, as long as we don’t have the means for ensuring that considerable numbers of them won’t also be experiencing lives that are full of suffering. This is kind of comparable to how I would expect you to raise an eyebrow at somebody who was claiming that one of the most moral things anyone could do was to have as many children as possible, regardless of one’s ability to support them all. Not because there would be anything wrong with having children by itself, but because it seems a little simplistic to reduce the criteria of moral goodness just to the number of children, with no attention paid to their likely well-being.

        If my position on the urgency of space colonization seems like a rather remote issue to bring up, you’re right that it is. But the reason I bring up such remote concerns is because I’m not sure if my focus on reducing the amount of intense suffering in the world actually leads to that much in the way of substantive moral disagreements in more near-term issues. I still want to have children, I still don’t endorse murdering people, etc.

        It’s just that, when thinking about the kinds of issues that I personally want to focus my energies on, it’s on things that seem most likely to reduce suffering – many of which are things like “developing a better understanding of depression and better treatments for it”, “fostering communities and mindsets that let people achieve their full potential and live the kinds of exciting and fulfilling lives that they would love to live”, etc. For my FRI work, it’s stuff like “thinking about the risks of AI takeovers, how those takeovers could happen, and how we could avoid both risks of extinction and risks of extreme suffering, and rather make sure that the creation of AI leads to a world where everybody gets to live flourishing lives free of extreme suffering”; again not things that I would expect many people with more “mainstream” ethics to disagree with.

        • Deiseach says:

          “fostering communities and mindsets that let people achieve their full potential and live the kinds of exciting and fulfilling lives that they would love to live”

          Which can very easily devolve into pie-in-the-sky ‘let’s have a conference on the topic’, and everyone thinks they’re achieving something by having discussions and setting up sub-committees and having steering committees, and then the participants scold someone who asks “But how about we go down to the local soup kitchen to volunteer?” as being a selfish FirstWorldFirster who prefers catering to their in-group over someone thousands of miles away who would love to be a homeless person on the streets of a First World city, as it would be a vast improvement on their life. And now instead of feeding the hungry in our own city, which would be terribly selfish in-groupness contrary to the global spirit of reducing suffering, we’ll all go eat an ethical vegan burger at a fancy restaurant and talk about how we’re going to improve the possibilities for people living full and enriching lives by getting the chance to go to Burning Man.

          I am a lot more impressed by someone who rolls up their sleeves and goes to clean the house of an elderly, sick neighbour than by someone [GENERAL YOU NOT PARTICULAR YOU, KAJ SOTALA] worrying about space colonisation being delayed and meanwhile my elderly neighbour can slip on her dirty floor and break her hip because it’s none of my beeswax and my theorising is going to create more utility in three hundred years’ time by this mathematical model. Yes, and when you die in thirty years’ time, how much real suffering versus theoretical will you [GENERAL YOU] have alleviated, unlike the ignorant slob who never even thought about space colonies but visited the sick and those in prison?

          • Kaj Sotala says:

            Which can very easily devolve into pie-in-the-sky ‘let’s have a conference on the topic’…

            Definitely! Fortunately, our stuff has been focused on a much more concrete level, with e.g. workshops where people reflect on the things that were emotionally significant (positively or negatively) to them during the last year and figure out what kinds of goals they’d like to have for the next year in light of that, brief tutorials followed by pair practice on the kinds of questions and comments that let you most effectively help your friends make progress on the issues that they’re dealing with, pairing people up with each other so that they can regularly meet/call and keep each other accountable for their life-improvement projects… that kind of thing. (To be clear, this is the kind of thing I do/organize in my free time on a volunteer basis, not an FRI thing.)

            Dunno how effective or efficient it is in the long run, but people do seem to at least like it a lot, and a few have at least claimed to have gotten major benefits. Like you say, there’s the constant nagging question of “will all of this space colonization theorizing actually lead to anything useful”, so stuff like this is what helps me feel assured that once I do die, there’s at least something I’ve done that probably had at least some concrete impact. (Also, a lot of people seem to like my writing: currently I’m hoping that the thing I recently wrote about uncovering some of the original root causes behind my depression will be of help to someone, in case the techniques that worked for me would also happen to work for somebody else.)

          • Bugmaster says:

            The world must truly be coming to an end, because I find myself in complete agreement with my erstwhile epistemological opponents, against my rationalist would-be peers :-/

          • Fuge says:

            Telescopic philanthropy, Deiseach. So many Mrs. Jellybys, even absurder than she was. At least she worried about distant but real people; these folks might make the same sacrifices over the heat death of the universe or something.

          • Deiseach says:

            Oh Fuge, so correct about Mrs Jellyby! We should all be ashamed of ourselves for taking Dickens’ side on this, plainly she was ahead of her time as a proto-Effective Altruist! 😀

          • Deiseach says:

            Fortunately, our stuff has been focused on a much more concrete level, with e.g. workshops where people reflect on the things that were emotionally significant (positively or negatively) to them during the last year

            Workshops for reflecting on “but how did that make me feel?” are your concrete achievements? Well, yes, they’re concrete and in the real world, that is certainly true.

            I hope I’m not coming across as being nasty, that’s not what I want you to take away from this. You all sound like lovely people and at least you’re not out rioting in the streets. Have fun and I hope at your next workshop you can say I feel good 🙂

          • Kaj Sotala says:

            “How do I feel” is not the relevant point; “what do my emotions tell me about the things that are important to me, and how could I bring my life more in line with those things” is. It’s tremendously easy for many people to just get caught up in the daily routines of life without stopping to consider whether those routines actually take them in a happy direction.

            The first step in achieving your dreams, is figuring out what your dreams are in the first place / remembering that you even had them.

            And again, I’m not making any claims of how effective use of a time this is. I didn’t bring it up to suggest that I’d be super-effective; I brought it up to help dispel the notion that I spend most of my time plotting the destruction of the universe and persuading people not to have kids, or whatever impression it was that people were getting.

    • sconn says:

      I understand this. I have priorities of my own: specifically, it seems massively unfair to think of curing old age before we’ve given everyone their threescore and ten. A world where rich people in wealthy countries get to live forever while infants in poor countries have only a 50% chance of reaching adulthood looks like a dystopia to me. That’s partly because I’m a parent, and losing a child sounds like the worst suffering I can possibly imagine. And the thought that some people experience it *more than once* boggles the mind. I want to fix that before anything else. Though I don’t see people with other priorities as enemies; at least they’re doing something rather than nothing.

  39. careful says:

    Suffering is inevitable.

    1) accept suffering

    2) strengthen your person, from as young an age as possible, to face and deal with suffering

    3) try to alleviate suffering otherwise

    You cannot run from it. Do not try to hide, or take refuge in an obsession with happiness as the beginning and end of the solution.

  40. Whatever Happened To Anonymous says:

    I think the EAs are doing a great public service in the form of helping progressive people better understand the conservative mindset.

    • SpeakLittle says:

      Can you elaborate on this comment?

      • Whatever Happened To Anonymous says:

        It was mostly a joke, but the idea is that these proposals are so radical, they’re enough to trigger in normally progressive people the same reaction conservatives get when talking about more normal progressive ideas.

    • Progressive Reformation says:

      Thanks, I chuckled at that.

  41. Eponymous says:

    So, um, I appreciate what you guys are trying to do, but I’m starting to think we need to solve morality to make sure we have Friendly Effective Altruists before you guys turn the universe into non-suffering paperclips or something.

    • pontifex says:

      Hey, we’re working on the hard questions! Like what is the minimum size for a chunk of metal to be considered a paperclip… er, I mean, how to maximize the happiness of the currently extant bags of meat.

  42. JulieK says:

    Saving worms is very nice, but is the same trait manifest in random interactions with other humans, e.g. stopping to give directions or pick up something someone dropped, not gossiping, etc.?

  43. potat0 says:

    Your description of effective altruists reminds me of Kazimierz Dabrowski’s theory of positive disintegration. (https://en.wikipedia.org/wiki/Positive_disintegration) Specifically, the overexcitability aspect. (https://en.wikipedia.org/wiki/Overexcitability) It would also explain the “weird” ideas.

  44. chernavsky says:

    With regard to cage-free eggs: they might sound good in theory, but they’re actually a bad idea. Cage-free facilities are almost as bad as (if not worse than) conventional battery-cage facilities. And cage-free eggs are reported to be more profitable for industry. Animal-rights philosopher/lawyer Gary Francione has made a career out of arguing (convincingly, in my opinion) that welfarist-type measures end up doing more harm than good. Consumers of “humane” animal products end up soothing their consciences, while industry becomes more profitable (and hence more entrenched), and animals continue to suffer. If we are interested in helping animals, we should spend our time engaging in creative, non-violent vegan education.

    By the way, Bruce Friedrich and Gary Francione engaged in a debate a few years back. My thinking on animal rights changed completely after I encountered Francione’s work (which, admittedly, can be a little counter-intuitive at first blush).

    • romeostevens says:

      I strongly agree this is a major problem, and was a bit disappointed to see the cage-free campaign being cited at EAG as a success when we don’t really know the net effects.

    • alchemy29 says:

      That website sets off my bullshit detectors immediately. There is something about the layout and the banner that says “HUMANEMYTH”. I clicked through to the front page and it only got worse.

      I don’t know anything about cage-free eggs, I admit, but I’d bet that the pictures taken and language used are not at all objective. For example, the line about “toxic hydrogen sulfide gas” sounds like a line out of a bad infomercial. Perhaps the author would be interested in research efforts at improving ventilation or reducing production? (I doubt it.)

      Anyways, as I said, I don’t know anything about cage-free eggs. It’s possible they are right; however, that website is the last place I would look for decent information.

      • chernavsky says:

        Here’s an article from the New York Times:

        On Thursday, Direct Action Everywhere, an all-volunteer animal advocacy group, released a video of a stealth visit to a cage-free barn in California that produces eggs sold at Costco under its private label brand, Kirkland. The video shows dead birds on the floor and injured hens pecked by other chickens. One bird had a piece of flesh hanging off its beak.

        The video focuses on a hen that Direct Action rescued and named Ella. When the organization found her in the cage-free barn, she was struggling to pull herself up and had lost most of her feathers. Her back was covered in feces.

        “There were birds rotting on the floor, and there was one dead bird that seemed to have lost her head,” said Wayne Hsiung, who helped make the video for the group, which is better known as DxE. “There were birds attacking birds, and the smell was horrible.”

        Cage-free eggs are essentially a marketing scam designed to make consumers feel better about themselves.

        • Dedicating Ruckus says:

          If chickens are free to interact, they will peck at each other. This happens even in our microfarm of seven birds on a quarter-acre meadow. The term “pecking order” exists for a reason.

          More generally, if you have large numbers of chickens in a small space, the conditions in that space will look gruesome to humans. (I have no information about how much objective suffering the chickens may be undergoing.) Economic incentives will inescapably promote this as long as people eat eggs. And you are never going to get the West to go vegan at high enough rates that the egg demand won’t cause this.

          Research into how to do high-density low-cruelty chicken farming would be a useful thing for EAs to pursue. Hijacking that effort to further your own virtue signal about how we really shouldn’t be eating eggs at all will have a positive effect bounded above by zero.

          • sconn says:

            At a low density, chickens don’t peck at each other much. I keep backyard chickens and if any chicken is getting pecked to the point of bleeding, it’s a problem and most likely means your chickens need more space. (That, or you tried to combine flocks, but I don’t think egg producers do that.)

            Debeaking seems cruel, but it’s intended to prevent this, just as removing tails from pigs keeps them from biting each other’s tails. It sounds cruel but it’s less cruel than the alternative.

            Better yet would be giving chickens a lot more space than standard, but you’re not going to be able to get eggs at 50 cents a dozen that way. You can buy them from a small producer whose conditions you know, but you’ll be paying several dollars a dozen and at that point they aren’t the cheap meat alternative they are otherwise.

            My solution has been to raise my own. Chickens are not hard to care for, and a small flock of 3-4 will be enough for 1-2 people’s egg consumption. Their feed is no more expensive than feeding a different pet, like a dog. And you can ensure they are actually happy. However, I know most rationalists consider living in the city to be preferable, so… this may not be an option for everyone.

    • Deiseach says:

      chernavsky, your comment shows why veganism triggers my bullshit detectors.

      The argument at first is “but suffering! inhumane!”

      Then when something is set up as humane, it’s “not really! they must be lying! anyway it makes it profitable! profitable means it will never end and we want it to end!”

      So the real argument there is a religious one; vegans are making moral decisions and want to convert everyone else to their beliefs on moral grounds. They don’t care if it’s humane or not, they want meat-eating to end, full stop. Talking about suffering is just an easy way to get people to listen to them and help convince them.

      • Wrong Species says:

        I think it’s more of a split between those who care about animals for utilitarian reasons and those who care for humanistic reasons. In the former case, they don’t see a problem with raising an animal to be slaughtered, as long as it doesn’t suffer. But the latter believe the problem comes not from pain but from denying animals agency and exploiting them. I can’t really muster up sympathy for that position, but it does follow from trying to apply universal principles more consistently.

  45. Ash says:

    I’ll ask this here because why not – I am someone who has been trying to get involved in any EA-adjacent work for some time. And I do have qualifications! A master’s in economic development, previous work doing charity meta-analyses, etc., so I’m a good fit for impact evaluations/research. But after over a year of trying post-graduation (and some time beforehand) I have had no success. Does anyone here who works in some form of international development/poverty relief know something that perhaps I don’t, any advice that I am missing? My current strategy is to maybe try to build a “portfolio” of work to showcase my econometrics, but I don’t know if that’s relevant/what would be best to focus on.

    • briantomasik says:

      This isn’t my specialty, but if you’re trying to work at an EA org, one idea could be to write blog posts, EA Forum posts, etc. about insights you think poverty-focused EAs should know. That’s both directly useful and signals your abilities, enthusiasm, etc. Of course, you’d probably want to ask the EA orgs you want to work for first to make sure they’d find that useful.

      For working in more formal orgs like AMF or Poverty Action Lab or something, I don’t know enough to comment.

    • Scott Alexander says:

      You might want to talk to 80,000 Hours – I don’t know if this is the kind of thing they help with, but it might be. I’ll try to see if there are any other good contact people.

      • Ash says:

        I did the 80,000 Hours career coaching session with them – they were quite great, but they tend to focus on “what do you want to do with your career”, which I am already quite certain of, and are not really set up for “how do you gain entry into that career”. Partially because I imagine a lot of the answer to that question is “luck + have friends who will help you”, which is not really something I can fix! But I am hoping I can find other ways where focused effort on my part can actually make a difference.

        But thank you for commenting, if you do think of anyone I am happy to help however I can.

  46. sympathizer says:

    Maybe I am putting too much stock in the title of this post, but since it is “Fear and Loathing”, it seems to me that much of the very earnest handwringing in the comments here, over how wary some people now are regarding these dangerously crazy EA types, might stem from taking the post a little bit too literally serious (or a little bit too seriously literal). Gonzo journalism is not objective reporting of events, and may contain things springing entirely from drug-induced hallucinations of the writer.

    • Deiseach says:

      For me personally, it’s not this particular post; from the first time I heard of Effective Altruism I was a little wary and sceptical, because there hung over it just the slightest aura of smugness, just a hint of congratulatory back-patting – we’re gonna be effective, we’re gonna be rational, we’re not gonna fall for those easy sentimental feel-good activities that traditional charities engage in but which have little to no actual effect in the real world!

      And now we’ve got a conference with about a skadillion speakers, way too many to listen to or attend their talks, and people networking about going to each others’ workshops on getting a job in each others’ companies convincing people like them to get jobs in blood-sucking but very, very wealth-producing industries in order to make the maximum profit so they can then donate to alleviate a small amount of the misery those industries caused extracting and exploiting resources and labour in less advantaged parts of the globe.

      And people worrying about “do fleas have feelings?”

      And the ultimate solution to the problem of pain being “destroy the universe”.

      I’m sure they are all very nice, sincere people who in their private lives literally would not hurt a fly (ironically in view of advocating for mosquito-killing charities) but this is precisely the kind of inward-turning up their own backsides trap I was afraid EA would fall into. I don’t think you could get a more “Stuff White People Like” conference if you tried deliberately to make a parody, up to and including basing it in San Francisco (and I know, because Silicon Valley, but that is also part of it).

      EDIT: Well. The “no snark here” didn’t last very long, did it?

      • andrewflicker says:

        Sounds like aesthetic complaints, Deiseach. Which would you prefer- people being earnest and weird and saving thousands of lives, or people being appropriately humble and friendly and reputable and saving a few dozen?

        • Dedicating Ruckus says:

          Alternatively, would you prefer thousands of lives saved and a 0.001% chance they eventually succeed in destroying the universe to minimize suffering, or dozens of lives saved and no universe-destroying plans?

        • Bugmaster says:

          Out of curiosity, is there any evidence to suggest that earnest weirdos have so far saved more lives than, say, good old boring malaria netting?

          • anonymousskimmer says:

            The brief biographies on Stakman and Borlaug don’t seem to indicate whether they were weirdos, but they certainly were earnest.

          • Bugmaster says:

            Borlaug is a good example of what I’m talking about. He noticed a problem: people were starving. What was his solution? It wasn’t “let’s imagine that we could eliminate the human need for food” or “in the future, we’ll have infinite rice, so let’s make sure it’s agri-Friendly”, or “let’s call a big conference in San Francisco where we can all talk about how great it would be if the laws of physics did not support hunger”.

            Instead, he took actual crops, and actually bred/engineered them for higher yields. He was able to show real, measurable progress every step of the way; his goals were measurable and realistically achievable; and his methods were only a little weird for their time (humans have been breeding plants for thousands of years, after all). He didn’t spend time on rescuing worms, either; or if he did, he didn’t make a big deal out of it.

          • anonymousskimmer says:

            Okay, I’d say that even “weirdos” are actively focused on real-world solutions when their circumstances are appropriate to such actions, and their mental health is conducive to it.

            Whether that’s the case for many in the EA realm at this point is a question. But it wouldn’t surprise me if a minority are.

            Stakman was apparently instrumental in Borlaug’s career trajectory.

        • Fuge says:

          More likely the EAs will save nothing, and enrich a professional class of TED talkers who create an apparatus to give the experience of saving things, but no reality of such. It’s much easier to hold a conference about AI risk than it is to run a soup kitchen, and much more status-bestowing.

        • Deiseach says:

          I’d like the thousands, andrewflicker, but I fear it’s the worst of both worlds: weird and ineffective, so not even dozens get saved because they’ve moved on to “vat grown meat and electron suffering if the AI doesn’t turn us all into paperclips first”. Fuge’s point about the TED talks is apposite here; having rooted out within ourselves the impulse to give locally to causes under our noses, because now we know that’s the irrational, naive, ingroup-favouring, ineffective old-fashioned thing to do, we have turned our attention to the larger world. Since very few of us are actually going to up sticks and move to Tanzania to work on research into mosquito infestations, we will be relying on information about how and where to make donations. And how better to get information than to go to conferences, talks, etc. about these topics? And so we end up spending time and money listening to a talk and feeling virtuous about the work we are putting in, but somehow now we’ve drifted from “mosquito netting” to “do electrons suffer?” and meanwhile the line at the soup kitchen three blocks over gets longer but we know better than to give to local interests only…

          Dedicating Ruckus: NO UNIVERSE DESTROYING AT ALL. We’re only guests here; it’s as rude as walking into someone else’s house and smashing all their ornaments because ugh, porcelain statuettes? so tacky! (here, have some tacky statuettes I’d happily see smashed, but it would be wrong of me to get a hammer and start swinging).

          • andrewflicker says:

            But, I mean… haven’t the EA crowd *already* saved thousands upon thousands? The “earnest weirdos” ARE the ones who’ve bought a ton of bed nets, etc. People are acting like there’s some sort of battle with “boring bed-net buyers” on one side and “EA weirdos” on the other, where the sides are really “EA weirdos that buy bed nets” and “People who give all in their will to the humane society” and “People who never give to charity”.

            (rounding off to make a point, I hope that’s clear)

  47. Salem says:

    Someone said:

    EAs have uncritically accepted a fallacious premise/framing and cut themselves off from the proper feedback loop that would allow them to correct their errors over time. The flip side of taking ideas seriously because they matter in the real world is that the real world sets the standard for and grounds your ideas. If you take “happiness is good, suffering is bad” as your starting point, cutting it off from the concrete questions of whose happiness and why, and from the underlying question of what moral concepts are even for, and why you endorse general benevolence and good will in the first place, then you end up at absurdity. If you then confuse “taking ideas seriously” with “biting every bullet my theory leads me to, evidence be damned”, you end up wallowing in absurdity at EAG.

    Personally, I’d go further. If politics is the public working out of our emotional felt needs, then can we apply the same to an altruistic movement? Sure, it’s Bulveristic, but when we have EAs in this very thread admitting that their beliefs derive from their personal psychological trauma, it becomes worth exploring. At base, the movement is “Effective Neuroticism.”*

    Rather than engage with EA’s obviously broken arguments, I’d rather ask: why are these people so neurotic? Why does their neuroticism get channeled in this way? And how can we help EA as a movement set aside personal neuroses and approach altruism, and the public sphere generally, in a more effective way?

    Lest this seem an attack, I’ll cheerfully concede that EA is probably better than most other charitable causes. That EA has obvious problems doesn’t mean everyone else gets a free pass. To quote someone much wiser than me:

    It is easy for Republicans to see the higher neuroticism of Democrats, and easy for Democrats to see the lesser empathy of Republicans. It is harder for each side to see its own flaws, or to see how the other side recognizes its flaws so accurately.

    Mutatis mutandis, the same applies here.

    * If you think that’s unfair, then let’s set up a bet on the OCEAN characteristics of attendees at the next big EA summit.

    • Scott Alexander says:

      — If you think that’s unfair, then let’s set up a bet on the OCEAN characteristics of attendees at the next big EA summit.

      You lose

      • Salem says:

        Nothing in that post says anything about the Neuroticism of EAs, merely that they aren’t more mentally ill than the average Less Wronger. But perhaps you have extra data that you weren’t sharing there. Care to link it?

        Incidentally, the idea that the control group for EAs should be Less Wrongers is… well, let’s be nice and say it’s not what I had in mind.

        • Deiseach says:

          they aren’t more mentally ill than the average Less Wronger

          I hope y’all are appreciating my heroic restraint here in making no further comment (given that surveys on here of the SSC readership show we’re all as odd as two left feet) 🙂

    • vV_Vv says:

      why are these people so neurotic?

      There is always going to be some fraction of people who are highly neurotic, and they will tend to congregate into movements and ideologies that suit their psychological needs.

  48. Don_Flamingo says:

    “When the King of Persia, in all the insolence of his pride, spread his army over the vast plains and could not grasp its number but simply its measure, he shed copious tears because inside of a hundred years not a man of such a mighty army would be alive.”
    Seneca, On the Shortness of Life XVII
    I always thought Seneca was calling him out as an example of how one becomes weak and womanly (his words) when spoiled by a life of comfort and power. But apparently, he was really warning about that lifestyle turning one into one of those hippy, effective altruist types.

  49. Salentino says:

    I was planning on writing a comment that these people sound almost exactly like bodhisattvas, and then I got to your last line. It is almost hard to believe that there are humans with that much compassion. I mean, how can they even walk down a street? And yet, that is precisely what the Buddhist notion of a bodhisattva is. And the idea of working through your problems until you can then help others is a very profound one, with some nuance that I only started to appreciate as the years passed.

    Thanks for a great article, Scott.

    • Deiseach says:

      It is almost hard to believe that there are humans with that much compassion.

      I suppose the Jains would count as that type of extremely compassionate people? At least in the philosophy; I have no idea how the average Jain lives their life after having had two millennia to make the usual accommodations with the world that affect all religions.

  50. andagain says:

    He responded with the official party line, the one I’ve so egregiously failed to push in this blog post. That effective altruism is a movement of ordinary people.

    Surely if Effective Altruism means anything at all, it means, not necessarily getting people to give more, but getting them to give more efficiently?

    • Scott Alexander says:

      That used to be a big part of the party line, but then some people did research and found that giving money is less effective than working directly (eg going into politics and changing government policy, going into science and discovering world-changing technologies) so now the party line doesn’t really focus on how to give money so much.

  51. Eli says:

    In Life’s name and for Life’s sake,
    I assert that I will employ the Art which is its gift in Life’s service alone, rejecting all other usages.
    I will guard growth and ease pain. I will fight to preserve what grows and lives well in its own way; and I will change no object or creature unless its growth and life, or that of the system of which it is part, are threatened.
    To these ends, in the practice of my Art, I will put aside fear for courage, and death for life, when it is right to do so —
    till Universe’s end.

    Entropy is running.

  52. cuke says:

    Thank you for this post. I needed this after this past week.

    I find it reassuring that the folks doing EA work are kind and empathetic people. I did various kinds of directly-addressing-world-suffering type work for twenty years inside of various non-profits long ago and found that many of the folks doing that work were narcissists, highly defensive, and habitually unkind (not all of them, but way more than I wanted to interact with). I went on to become a psychotherapist partly so I could address suffering one person at a time in an environment where kindness was highly valued.

    Here’s one thing I’m wondering about… a portion of my clients always are people self-described as “highly sensitive” or otherwise are people with anxiety and depression whose anxiety and depression seem highly correlated with their high degree of empathy (and relative lack of defenses). I know research has lots to say about these traits, but for the moment I’m interested in the sociology of how highly sensitive and empathetic introverts can find their tribe.

    These clients’ lives would be so improved by being around the kind of folks you describe in the EA community. And then too, the kind of chronic mental suffering you see with anxiety, OCD, and depression tends to reinforce a focus on oneself (because the self is causing problems), while paradoxically doing things for other people, getting out of the focus on one’s own suffering, is often part of the way out of this mental suffering.

    These highly sensitive clients I work with usually have a couple of really strong latent talents, but because they often experience the world of people around them as hostile and uncaring, they don’t have ways that feel accessible to them to share the gifts they have with the world. And so their suffering turns back on itself even more.

    I’m a Buddhist so when I think of “hacking consciousness” I think mainly of how Buddhist psychology is about that very thing, as well as the way it so centrally addresses how to alleviate suffering. So, reading your post makes me interested in what people might be doing at the intersection of EA and mental suffering/mental health. And it also makes me wonder how the clients I work with who have this highly empathetic and introverted style would flourish if they had more access to both the kindness and the focus on alleviating others’ suffering that you describe at this conference. (I’m not in a major city, but a small city, and I see no EA meetups here)

    If there are EA people focused on mental health/mental suffering, I would like to connect with them.

    • Kaj Sotala says:

      Depends on how you define “EA people focused on mental health/suffering”; my current day job at an EA organization is mostly doing unrelated things, but if you look at my comments elsewhere under this post, you can see that I feel like things like depression are among the most important things in the world to tackle, second mainly to things like risks of extinction. (Also your characterization of sensitive and depression-prone introverts resonates.)

      I’ve also considered studying to become either a more academic psychologist, to help more rigorously evaluate and develop various cures for depression, or a therapist to help people more directly. Though realistically I don’t feel like I would have the energy to both work and study at the same time.

      • cuke says:

        Kaj, thanks for your reply.

        The other thing I wonder about with respect to EA perspectives is how suffering is understood in that framework.

        From my perspective, suffering among humans is a subjective experience. Buddhists take a pretty radical stance on this in saying more or less that all suffering is mental suffering. There may be pain in the body, disease and death, but suffering is about what the mind does with our experience, whatever it is. There’s no consistent correlation between physical states and mental states.

        So to put it cartoonishly, to prioritize extending life if people’s lives are still full of despair may not be meeting the goal, if the goal is reducing suffering. I need to read more about what EA folks are optimizing for when they assess interventions. Adding days or years to life is not an inherent good if the mind of the life being maximized is filled with subjective suffering. How do EA folks think about this?

        (I don’t mean here to argue in any way against mosquito nets or the value of saving lives.)

        There’s no shortage of big problems we have to solve, and my feeling is there’s room for everyone in that. That’s why I cringe a bit reading Scott’s take that his being a psychiatrist could be seen as not an effective way to be an altruist. I think to myself that the measuring stick is broken if that’s a plausible conclusion.

        I spent a lot of time in my 20s and 30s with folks who (and this included me too) took very seriously the idea that our efforts needed to be applied in the MOST strategic way at the BEST scale and to precisely the RIGHT problems. My takeaway from that experience is that this attitude breeds a lot of arrogance and is also not very sustainable for the long haul.

        • Kaj Sotala says:

          How do EA folks think about this?

          Depends on the EA folks. 🙂 Some, like the animal charity people and the Foundational Research Institute, are quite focused on reducing suffering, but not everybody weighs it quite as heavily. Some (like people working on developing-world interventions) use e.g. QALYs for evaluating their interventions, where QALYs give less weight to years of life spent in situations that are thought to reduce one’s quality of life, but can’t usually go negative.
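
          To make the QALY arithmetic concrete, here is a minimal sketch in Python; the 0.5 quality weight is purely illustrative, not a figure from any real QALY table:

              # Toy QALY calculation: years of life scaled by a 0-1 quality weight.
              # The specific weight below is a made-up example, not real data.
              def qalys(years: float, quality_weight: float) -> float:
                  return years * quality_weight

              print(qalys(10, 1.0))  # 10.0 -- ten years in full health
              print(qalys(10, 0.5))  # 5.0  -- same years, discounted by lower quality of life
              # Since the weight is bounded below by 0, the result can't go negative.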

    • Salentino says:

      I really liked your comment. May I ask which Buddhist practice you do, or which group within Buddhism you associate with? I’ve always been drawn to Buddhism, as I see it as a technology of the mind, but the religious aspects have turned me away (and exposés of guru abuse don’t help). The information over at Daniel Ingram’s blog describing the reproducible steps and stages of development is inspiring too: https://www.dharmaoverground.org/web/guest/dharma-wiki/-/wiki/Main/MCTB

      I think what’s been holding me back is finding a community of like-minded people near where I live.

      • cuke says:

        More or less by chance I started in the Vipassana tradition. My sense is that communities of practitioners vary more by local culture, or by who the teachers or leaders happen to be (here in the U.S. anyway), than by lineage. You might steer away from a particular lineage because it seems more religious, but then the teacher/community near you might not be that way at all.

        Some people like to dip their toes into a variety of traditions and others dive head first into one, maybe by going on a longer retreat where they get immersed in the teachings and practices. The way in is just whatever way feels possible for you at any given moment. When you get down to the core Buddhist teachings and practices, they are much the same, or aiming at much the same things.

        Tons of people go towards Buddhism for the same reasons they might go to church (faith beliefs, devotionals, spiritual community) and other folks are just interested in training the mind so as to reduce suffering. In my experience, both kinds of people can get what they want. Either way, getting benefit from it requires commitment like anything else, and so I think most of us hesitate at the door knowing it’s going to require effort and will include some discomfort.

        I wish you well with all that!

      • Kaj Sotala says:

        You didn’t ask me, but I would very much recommend the book The Mind Illuminated, both for its concrete and clear practical meditation instructions and roadmap, and for its entirely secular, drawing-upon-cognitive-psychology theoretical model of what exactly meditation does and why you’d want to practice it. I had read a number of works on meditation before, MCTB among them, but none of them were as clear and concrete on what exactly to do; reading TMI gave me the concrete tools to get over a number of issues that had kept my practice stalled for years.

        E.g. one of the points in the book was that one of the goals of mindfulness is to train the mental processes responsible for maintaining your peripheral awareness – your background sense of everything that is going on around you, but which is not in the focus of your active attention – to observe not only your physical surroundings, but also the processes going on in your mind. By doing so, the mental processes responsible for habit formation start to get more information about what kinds of thought patterns produce pleasure and which kinds produce suffering. Over time this will start reshaping your mind, as patterns which only produce suffering get dropped. (I wrote slightly more about this here.)

        This has been a super-valuable point to focus on and intentionally practice, makes total sense to me in light of the modern science of habit formation, and was something I’d never seen clearly expressed in any of the other meditation texts I’d read.

        • jplewicke says:

          It’s great to see an intersection between the pragmatic dharma community and the rationalist community. I became interested in meditation and Buddhism via a circuitous route that started at Less Wrong. I first got into it from reading David Chapman’s commentaries on rationalism and meaning, and then quickly switched over to being much more interested in his discussions of the value of serious Buddhist practice. Since he was frustratingly vague on actual practice instructions, I then got into MCTB earlier this year and have been keeping a practice log on the Dharma Overground and on the streamentry subreddit, which has been a great practice community.

          I would be extremely curious to know what Scott’s personal take is on pragmatic dharma, MCTB, and the possible neurological things that are going on. He’s alluded to some previous meditation experience in this 2011 Less Wrong post and had a recent post named after MCTB, so it’s clearly something he’s thought about.

    • [Thing] says:

      … a portion of my clients always are people self-described as “highly sensitive” or otherwise are people with anxiety and depression whose anxiety and depression seem highly correlated with their high degree of empathy (and relative lack of defenses). I know research has lots to say about these traits, but for the moment I’m interested in the sociology of how highly sensitive and empathetic introverts can find their tribe.

      These clients’ lives would be so improved by being around the kind of folks you describe in the EA community. And then too, the kind of chronic mental suffering you see with anxiety, OCD, and depression tends to reinforce a focus on oneself (because the self is causing problems), while paradoxically doing things for other people, getting out of the focus on one’s own suffering, is often part of the way out of this mental suffering.

      These highly sensitive clients I work with usually have a couple of really strong latent talents, but because they often experience the world of people around them as hostile and uncaring, they don’t have ways that feel accessible to them to share the gifts they have with the world. And so their suffering turns back on itself even more.

      I’m a Buddhist so when I think of “hacking consciousness” I think mainly of how Buddhist psychology is about that very thing, as well as the way it so centrally addresses how to alleviate suffering. So, reading your post makes me interested in what people might be doing at the intersection of EA and mental suffering/mental health. And it also makes me wonder how the clients I work with who have this highly empathetic and introverted style would flourish if they had more access to both the kindness and the focus on alleviating others’ suffering that you describe at this conference.

      Well, that all sounds like an unreasonably spot-on description of me and the central problem of my existence. I’ve looked into secular Buddhism as a set of self-help strategies (helpful!) and also as a common interest around which to socialize (less helpful!). I haven’t looked into EA specifically to try and find my tribe, but I’ve looked into rationalism more generally (which is how I wound up posting comments here, among other things). I think it’s a little more promising than Buddhism, in terms of the sort of personality it attracts, but the EA connection has, if anything, scared me off a little. I guess that’s mostly because of scrupulosity concerns such as Scott mentioned in the OP in relation to his career choice, as well as my own personal aversion to systematic theories of meta-ethics, which, well, … that just opens up a whole huge philosophical can of worms that I don’t want to have to argue about because it would be so much work.

      Anyway … not sure where I was going with this … I guess it would be fair to say that I sort of model people as the emotional equivalent of electric eels, who will zap me (or be zapped by me) if I get too close. The sort of hyper-intellectual niceness with which people in rationalist-adjacent communities speak to each other definitely helps alleviate my anxiety some (probably by reassuring me that I won’t be the only weirdo doing that) but I’m not sure they can ever be the solution to the “world of people is hostile and uncaring” problem for me.

  53. dianelritter says:

    I went over to effectivealtruists.org and I was very disappointed. You could spend your whole life helping very poor people with their multitudinous problems, but, at the end of the day, all you would have would be a bunch of slightly less badly off poor people. You wouldn’t have moved humanity forward in any way. You would have missed your chance to help build an unmanned space ship to the nearest earth-like planet, or figure out how to end aging, or how to triple the IQ of the average human being, or terraform Mars, or do basic research in any one of a hundred different fields, or even just work to free up information on the web, like working to shorten online copyrights down to only 10 or 15 years!!

    I saw they also linked to some charities that are working to prevent nuclear war, and global warming, and unfriendly AI, but that’s all negative stuff: keeping the world from getting worse. Where is the stuff to make the whole world better off? Stuff that makes the future happen.

    • Scott Alexander says:

      If superintelligent AI isn’t unfriendly, then it changes the game so profoundly (hopefully in a positive way) that it’s not worth worrying about controlling the future ourselves beyond that point.

      Everything EA is doing fits within that context. There are some anti-aging groups, and there are some people trying to triple IQ (though none stupid enough to talk about it in public – do you want them to just put out a welcome mat for every pitchfork-wielding mob that passes their way?). But they’re doing that within the context of there only being a century at most before some kind of unpredictable singularity-like event makes everything hopelessly unpredictable. The anti-aging people think maybe they can stop aging in less than a century, and so save a few people who wouldn’t quite make it to the singularity. The triple-IQ people think that maybe they can triple IQ in less than a century, and that’s going to make the people directing our way into the singularity smarter, in a way that might produce a slightly better chance of avoiding disaster.

      But trying to colonize other stars right now is insane. First of all, it’s insane because the Future of Humanity Institute did a pretty strong study and found it’s statistically near-impossible that there are any life-bearing planets anywhere near us. More important, it’s insane because the amount of time it takes to build a starship is probably longer than the amount of time it takes to build a superintelligent AI, after which it becomes obsolete. And even if you get to Alpha Centauri a few years before the hedonium shockwave (or whatever) begins, it’s just going to take the hedonium shockwave 4.3 years to catch up with you, and who cares whether you’re on Alpha Centauri at that point?

      • Dedicating Ruckus says:

        If you make singularitarianism a prerequisite for EA, you’ll cut off a great number of your potential recruits.

        Personally, I don’t expect a singularity will happen, and consider it entirely rational to continue to make plans with a time horizon of longer than a century. I’m reasonably sure this is the supermajority position among humans.

        • Scott Alexander says:

          It’s not a prerequisite, it’s just what everybody who thinks about it long enough and seriously enough has ended up converging on. Maybe we’re all wrong, in which case you would be welcome to join and do something different.

          • Naclador says:

            I consider it much more likely that we mess up the training of any superintelligent AI we might create so badly that it will do something very far from our best self-interest.

            But I still have hope that global diminishing returns on complexity will bring about the collapse of our current unsustainable economic system (read Tainter) before we reach superintelligent AI, and from that point on we can be happy if we still have solar-powered calculators.

          • vV_Vv says:

            Selection bias.

            False consensus effect.

            People who thought about it long enough and seriously enough and did not converge on what you converged on are not in your social circle. Your social circle is mostly pre-selected on that point.

      • Bugmaster says:

        But trying to colonize other stars right now is insane.

        And trying to hack the qualia / kill all predators / destroy fundamental particles isn’t? If there’s one takeaway I got from your article, it’s that the EA community is rapidly abandoning the “Effective” part of their moniker. So, why not interstellar travel?

        • Scott Alexander says:

          I think there’s an important difference between “believe absurd-sounding things because the reasoning took you there” and “believe absurd-sounding things that aren’t justifiable under any system”.

          • Naclador says:

            “believe absurd-sounding things that aren’t justifiable under any system”

            Which is not true for star colonization. Consider a scenario where we launch our colony ship to Alpha Centauri, and ten or twenty years later humankind on Earth destroys itself by nuclear warfare, or turns Earth to Grey Goo or some such. In such a scenario it would have been entirely reasonable to launch the colony ship.

          • Bugmaster says:

            Sorry, I’m not sure if I understand exactly what you mean. Are you saying that interstellar travel is less realistic than destroying the Universe (or even eliminating all predator species)? That seems a bit backwards; after all, we already have unmanned probes that have traveled outside of our Solar System, and the Universe is still here, and so are coyotes.

            Or are you saying, “these ideas might sound weird but they’re totally plausible”? In this case, I might agree if you could show some justification for such beliefs. For example, you could show that destroying all predator species is not only technically possible, but also likely to lead to positive outcomes. This would require an extraordinary amount of evidence, of course, but in principle it could be done.

            But I don’t want to strawman your position; perhaps you meant something else?

      • Eponymous says:

        I think the original point still stands. If you think a singularity is going to happen in <100 years, then the most effective thing to do is probably to work on whatever you think the most likely/underfunded route to singularity is, to help it be safe and happen as soon as possible. This probably dwarfs most actual EA projects by impact.

        And even if you think a near singularity is likely, you have to be extremely confident in that to completely neglect other positive long-term planning options that matter a lot if you're wrong.

        • Scott Alexander says:

          I think the original point still stands. If you think a singularity is going to happen in less than 100 years, then the most effective thing to do is probably to work on whatever you think the most likely/underfunded route to singularity is, to help it be safe and happen as soon as possible. This probably dwarfs most actual EA projects by impact.

          Here’s a list of some of the talks at the conference:

          – Working In AI
          – Open Philanthropy Project On AI Safety
          – AI Safety
          – AI Safety Panel
          – MIRI
          – MIRI Update Q&A
          – Working On AI x-risk Inside And Outside Academia

          The EAs interested in the far future are really, really working hard on this. The whole movement isn’t literally dropping everything to just focus on AI risk, both because there are [some number of decades] between now and then, and because some people are pretty normal and want to deal with normal-person stuff like poverty. But the “work on the most likely route to singularity” advice isn’t exactly something they haven’t thought about.

          What I don’t think exists is people who are weird enough and long-term focused enough to want to think about starships, but also sufficiently willing to ignore singularitarian concerns to pursue the starship goal directly.

          • Naclador says:

            If you believed superintelligent AI was most likely to be malevolent, it would be reasonable to do everything in your power to frustrate AI development and also put some effort into interstellar colonisation projects.

          • Bugmaster says:

            @Naclador:
            I completely agree, which is why — contingent on Scott’s assessment being accurate — I’m now firmly opposed to the EA movement.

          • kokotajlod@gmail.com says:

            Naclador: No it wouldn’t. This is a common misconception so I’m going to go into some detail to explain why:

            (1) If we get a malevolent AI, it won’t matter how many lightyears away we are. If we get a friendly AI, it won’t matter how many lightyears away we are. Building spaceships is really really stupid unless you think there’s a high probability that AI won’t happen on Earth for thousands of years.

            (2) Frustrating AI development for the sake of prolonging our existence before the singularity is small potatoes if you care about the big picture; if you don’t care about the big picture there are far better things to be doing. Frustrating AI development for the sake of giving us more time to think about AI safety, to increase the probability that AI is non-malevolent… is intractable (because of the massive economic incentives and zero political unification) and might backfire (because if AI safety means throttling AI progress, the entire AI industry and all the AI scientists will turn against AI safety), compared to directly promoting AI safety research which is tractable, neglected, and much less prone to backfiring.

          • rlms says:

            @kokotajlod@gmail.com

            Your first point assumes that malevolent AI would expand in a sphere at approximately light speed. I don’t think it is at all obvious that that would happen, or even that a malevolent AI would want to expand at all.

            In general, I also don’t think it is at all obvious that a singularity in the next century or so is inevitable or even likely.

          • Naclador says:

            @kokotajlod

            Well, our different standpoints here clearly depend (A) on how likely we think superintelligent AI (SIAI for brevity) is, and (B), what SIAI would be able to do (or willing to do).

            I think there is a good chance that humankind eradicates itself on Earth without the help of AI, and before SIAI comes to pass. Therefore, I’d consider it wise to build a second home in a different solar system. For a nuclear-war-devastated Snowball Earth scenario, a nanotech Grey Goo scenario, or a collapse of complex society back to the stone age scenario, a settlement on a planet at Alpha Centauri would prove to have been a rational choice.

            Regarding (B), even a malevolent SIAI might decide to just optimize utility within the solar system. I do not think it is absolutely certain that self-preservation is necessarily the prime interest of any SIAI, so all this “What if there are other SIAIs out there” reasoning might not be mandatory for any imaginable SIAI.

      • vV_Vv says:

        But trying to colonize other stars right now is insane.

        I agree, but I think there are many other productive pursuits that you could put your efforts and resources into.

        I don’t believe that helping individuals in Malthusian societies by giving them cash, malaria nets, etc. is going to really improve the world, because in these societies the population is close to the carrying capacity, and by injecting more resources you’ll increase the carrying capacity; then the population will quickly expand to fill the capacity and you’ll have just created more people living as badly as before, if not worse: the child who didn’t die of malaria grows up and has four children who then die of malaria, because there are always going to be more people than malaria nets. You’d have to be a total utilitarian biting the bullet of the repugnant conclusion in order to say that this is a good thing.

        You would need to apply social engineering in order to make these societies non-Malthusian, but, as you noted, social engineering fails more often than not.

        As for the other EA pursuits that you mentioned, qualia hacking sounds like an excuse to take drugs, and the others are just navel gazing.
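
        To make the carrying-capacity dynamic above concrete, here is a toy logistic-growth sketch in Python; every number in it is made up purely for illustration, not drawn from any demographic data:

            # Toy logistic model: population grows toward the carrying capacity,
            # and raising the capacity just lets the population expand to fill it.
            def step(pop: float, capacity: float, rate: float = 0.1) -> float:
                return pop + rate * pop * (1 - pop / capacity)

            pop, capacity = 900.0, 1000.0
            for _ in range(100):
                pop = step(pop, capacity)
            print(round(pop))  # ~1000: population sits near the old capacity

            capacity = 2000.0  # an outside resource injection raises the capacity...
            for _ in range(200):
                pop = step(pop, capacity)
            print(round(pop))  # ~2000: ...and the population expands to fill it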

        • Unirt says:

          In the short-to-medium term, this may not be a concern, because humans are for some reason not perfect Malthusians. When you reduce child death rates, birth rates tend to go down. When the aided societies become rich enough to start paying retirement pensions, birth rates go down rather strongly.

          Of course, this is probably because people don’t have direct instincts to have as many kids as possible; instead we have “proxy instincts” of sex drive and worrying about our retirement, which were perfectly good for fast reproduction before birth control and retirement pensions. It’s likely that in the long term we’ll evolve more direct instincts to have many children, so the Malthusian future will arrive, but not, I would think, very soon.

    • Deiseach says:

      Yeah, imagine helping someone not get kicked out for non-payment of rent when instead you could be working on colonising Alpha Centauri!

      Damn it, I’m sure I remember that there is a poem somewhere about how time is where we all live and the only place we can.

      We live now. People are in need now. Yes think about the future of the race and all the rest of it, but this is the only place the Drowning Child argument resonates – not that the child is in front of you so that you see their suffering, but that they are drowning now and you can pull them out. Walking past to concentrate on designing a better life buoy is wrong. Not because designing a better life buoy is wrong but because ignoring the need right now in front of you is wrong.

      This is now, this is here, and if you can afford to turn up your nose at “slightly less poor people” then maybe you’ve never been in the position of utter desperation where five quid right now, as a loan or a donation or simple charity, would mean you can eat today, instead of getting the brush-off from someone about “yeah, but I’d be much better occupied working on the Revolution when someday in the future we’ll overthrow the unjust system of capitalism that oppresses you and keeps you poor, and our descendants will have Fully Automated Luxury Space Communism”.

      Do both! Give to the beggar then continue on to work on the Great Day of Revolution! What is stopping you? You don’t have to help everyone, no-one can do that, but helping some people even a little is better than “I helped possible imaginary future people in my imagination and that was way better considered theoretically”.

      Ignoring the actual need now in favour of theorising about possible future need is mental jerking-off and while it may give you a quick burst of pleasure it is ultimately unproductive.

  54. Random Poster says:

    Here is a discussion about EAs trying to reduce the suffering of humanity.

    (OK, so it doesn’t use EA to mean “effective altruist”. However, I think I would be equally horrified by the prospect of being forcibly helped by either kind of EA.)

  55. ruthgracewong says:

    Hey Scott, did you end up attending the panel I moderated with Hayley Cashdollar about Logistics at Scale? Curious to know your thoughts if yes!

    • Scott Alexander says:

      I actually didn’t make it to that one (I saw Hayley’s name in the program). Sorry!

  56. registrationisdumb says:

    Reading about effective altruists kinda makes me think that the most effective way I could improve the world is to rejoin the Church and become an evangelist. Clearly some people need a set of simple, arbitrary rules so that they don’t commit horrible atrocities and ruin this planet and life for the people on it.

    • sconn says:

      Unfortunately I haven’t noticed that being part of a church helps people not commit atrocities. A quick scan of history will tell you the same. If there’s a religion out there that has a 100% no-atrocity rating, that might work. Quakers? Jains?

      • Nabil ad Dajjal says:

        You don’t need a 100% non-atrocity rating, you just need to avoid the 100% atrocity rating of 20th-century secular millenarianism.

        Communism justified itself by similar logic to what most EAs use: utilitarianism plus a coming post-scarcity singularity. And their willingness to explore the effectiveness of destroying the universe doesn’t exactly inspire confidence that they won’t repeat the same mistakes.

        Redirecting that mad certainty into a non-existent metaphysical domain where it can’t hurt anyone might not be a bad thing.

  57. Space Ghost says:

    So…thanks for writing that. I have two questions for the people there:

    1) Do you really not understand why people hate “you” (by which I mean, these sorts of earnest, rationalist nerds)?

    2) Do you think you’re going to win?

    • shar says:

      “All rage comes from a narcissistic injury. So the question, ‘why are these guys so angry?’ should be reframed: ‘what is it about [earnest, rationalist nerds] that is a threat to their identity?’ … When you find yourself hating someone (who did not directly hurt you) with blinding rage, know for certain that it is not the person you hate at all, but rather something about them that threatens your identity. Find that thing. This single piece of advice can turn your life around, I guarantee it.”

      • Nornagest says:

        I would take The Last Psychiatrist with several grains of salt when talking about narcissism. It’s TLP’s hammer, and although maybe not everything in those essays looks like a nail, there does seem to be a suspiciously large number of wooden statues of Hindenburg lying around.

        • shar says:

          I understand, but this particular point happens to ring true for me. I can’t say that I’ve ever been pissed off at a stranger I read about on the internet for any reason I’d care to defend. Maybe extreme cases like your Ariel Castros, but William MacAskill’s no Ariel Castro.

      • Space Ghost says:

        I mean, Scott did a whole post about how early attempts to improve a *forest* totally fucked it up, because people didn’t understand it. But now we’re talking about fixing problems by killing all predators or hacking qualia or destroying the universe. To quote Mitchell & Webb… “Are we the baddies?”

        • Vanzetti says:

          Right, so, people who talk about solutions to difficult problems are the baddies. People who hate and ridicule them are the good guys. Sure.

          • Naclador says:

            No, people who talk about solutions without even being capable of properly defining the problem, much less of seeing the unintended consequences of their ultraradical “solutions”, are the baddies.

            People trying to convince them to maybe try something more locally restricted and potentially reversible before changing the world forever are the goodies.

          • Vanzetti says:

            I was replying to Space Ghost’s post. I don’t know what you are talking about.

          • Naclador says:

            I am talking about the same things as Space Ghost, the mad notion that we might be able to run ecosystems without predators to reduce wild animal suffering.

        • shar says:

          Oh I can easily understand a range of negative responses to EA*: indifference, bemused condescension, annoyance because you’re missing the real issue here-let-me-show-you. But hate, that’s strong medicine.

          Like, open borders is another wacky blue-sky scheme with dramatic failure modes and negligible odds of ever happening, but I’d consider that a strange reason to personally loathe Robin Hanson.

          * Which I’ve never taken part in, though I have done malaria research and so all props to GWWC for their bednet initiative.

        • Unirt says:

          Calm down, people, it’s not like the EAs are going to destroy the universe or the predator species tomorrow; they are just exploring the far-off possibilities theoretically. It’s a perfectly reasonable thing to do. One of those may prove useful, even though most won’t.

      • Deiseach says:

        When you find yourself hating someone (who did not directly hurt you) with blinding rage, know for certain that it is not the person you hate at all, but rather something about them that threatens your identity.

        If someone wants to blow up the universe to alleviate suffering, that’s reason enough for me to think they directly hurt me, or intend to do so. I agree that “hate” is too strong a word to use here, but we’re also not talking “this really makes me want to become one of you”.

        To quote Guardians of the Galaxy:

        Drax the Destroyer: I just saved Quill!
        Peter Quill: We’ve already established that you destroying the ship I’m on is not saving me!

        Rocket Raccoon: Why would you want to save the galaxy?
        Peter Quill: Because I’m one of the idiots who lives in it!

        • shar says:

          I agree that “hate” is too strong a word to use here

          Well, that was my point. Also, not to go too far out on a limb defending the FRI guys but they do explicitly say they’d wait until intelligent life had naturally run its course before blowing up the universe…

    • Bugmaster says:

      FWIW I don’t hate Scott and his EA buddies personally. I just think they are tragically mistaken about their goals, methods, and overall assessment of reality; but that’s no reason to hate someone. I have plenty of theist friends, and I don’t hate them, either. Why should I?

    • careful says:

      Define your terms, please.

      Who is ‘you’, who is ‘people’, who is “these sorts”, ‘win’ what?

  58. Patrick Merchant says:

    Morality wasn’t supposed to be like this. Most of the effective altruists I met were nonrealist utilitarians. They don’t believe in some objective moral law imposed by an outside Power. They just think that we should pursue our own human-parochial moral values effectively.

    Would it be fair to call this a strain of nihilism? I don’t mean that as an insult; it certainly isn’t behavioral nihilism. But it still seems somewhat bleak to me, which might be why I have existential panic attacks whenever I talk to effective altruists for too long.

    My personal philosophy on the matter is a kind of agnosticism, informed by St. Augustine’s story about trying to understand God; doing so is something like trying to fit the ocean into a shallow hole. Likewise, since the universe is immense and human intelligence is comparatively tiny, I can only ever understand limited chunks of the universe at a time. Plus, I’m trapped inside of it, and there’s no reasonable way I could ever look past the veil of the Big Bang and see how it came to be in the first place.

    Taking all of that into consideration, I don’t necessarily accept the model of the universe that paints it as having no underlying cosmic meaning/moral order. Nihilist models of reality might be rationally coherent and self-consistent, but I don’t think they can ever be definitively proven. Conscious life might be intrinsically meaningful, it might not be, but in the absence of a smoking gun, I’ll stick to pursuing human moral values/intuitions (and cross my fingers that they aren’t cosmically meaningless).

    I don’t think that I’m articulating this position very well. It’s similar to what Jordan Peterson believes, but I think he expresses it 1000 times better than I ever could. tl;dr I don’t think Lovecraft was necessarily right.

    Anyway, I hope none of this sounds like I’m dismissing effective altruists based on some abstract disagreement. I think they’re pretty clearly a force for good in the world.

    • careful says:

      “I don’t think that I’m articulating this position very well. It’s similar to what Jordan Peterson believes, but I think he expresses it 1000 times better than I ever could.”

      For other readers, highly recommend his lecture series, ‘The Psychological Significance of the Biblical Stories’. Whatever your religious (non-)convictions, you will get something out of it, agree or not:

      https://www.youtube.com/playlist?list=PL22J3VaeABQD_IZs7y60I3lUrrFTzkpat

    • ADifferentAnonymous says:

      If you like or tolerate EY’s writing, you may be interested in his efforts to un-bleak that worldview here.

  59. apaperperday says:

    Scott Alexander, writing about your shame at being ‘low-impact’ is the most ridiculous thing I’ve read in a while, and it’s been a strange few weeks. I had to make a damn wordpress account to say this — but there are over 300 comments above attesting to the impact. Hell — you have a sponsored blog where you discuss things you think are important or worth thinking about. One which drives conversations across the country. You’ve got a bad case of impostor syndrome, my friend. I know the symptoms very well. Take care of yourself.

    Edit: apparently old comments are above mine, not below.

  60. Naclador says:

    One thing I need to put up for discussion, out of curiosity:

    What about the old idea / theory / prejudice that pleasure / happiness only makes sense in contrast to suffering? It is basically the hedonic treadmill: we have a baseline level of happiness, and if we live in positive conditions that make us happier at first, our happiness will drop back to baseline as we get used to the new conditions. We draw happiness from GETTING better off, not from BEING better off. So my point is: if we minimize suffering for all creatures as much as possible, the net effect on happiness might even be negative, as all our happiness levels drop to baseline.

    What is the EA position towards this notion?

    • careful says:

      The wah-wahing around a Totem of Happiness as far as I can tell was one of many things created to try to deal with suffering, rather than them being intrinsically in opposition.

      One could make a case that becoming at one with your suffering is what baseline happiness IS, and that the wah-wahing around the Totem of Happiness is self-defeating, or at least is trying to push together the same poles of two magnets.

  61. Maznak says:

    Funny thing, the “destroy the Universe via vacuum decay to prevent immeasurable suffering” idea is the theme of a scifi story I am in the middle of writing right now. And I am sure this was really my own idea.

    • anonymousskimmer says:

      Suffering is an emotional state we are capable of being aware of. How is knowledge bad such that anyone would want to destroy the universe (or even take their own life*) to avoid experiencing that knowledge?

      That elimination of experience seems to me to be the ultimate evil.

      Is the sole purpose, the end-point, of life simply to evade suffering? Everything, every experience, every means, subordinate to this end?

      Well then, if it is, wouldn’t it be easier to just eliminate the ability to experience suffering through the appropriate genetic engineering or brain surgery?

      (* Granted, I can understand the impulse to end a perceivably never-ending monotonic emotional state. But these states are always particular to individuals, not populations.)

      • Maznak says:

        Well I agree with you totally – it just seemed like a nice theme for a scifi story – and now I am sort of shocked that someone is considering this seriously. I just hope that they don’t succeed anytime soon.

        • Kaj Sotala says:

          I just hope that they don’t succeed anytime soon.

          Given that they explicitly said that it should only be done once the universe was getting old enough that it couldn’t support any intelligent life anymore, I’d say that there’s still a while.

          • rlms says:

            Once you jump from normal morality to “destroy the universe to stop electrons suffering when intelligent life stops existing”, I think the distance to “destroy the universe to stop electrons suffering immediately” is very small.

  62. The Big Red Scary says:

    I looked at the Impossible Burger webpage, which addresses the question of safety for human consumption. The idea is quite interesting, but I am curious if they have addressed the question of safety to the environment at large in case their genetically modified yeast gets out into the wild. Are there good reasons to expect either 1) the yeast will never get out into the wild or 2) if it did get out into the wild, it wouldn’t become yet another invasive species wreaking havoc on the ecosystem?

    Many years ago I had a little experience with the reverse problem, raising non-native plants in a hothouse, which required sanitization procedures to prevent native fungi from infecting the hothouse. Inevitably and eventually the sanitization procedures would fail and we’d have to dispose of all of the hothouse plants and use fungicide to clean out the hothouse before starting all over again.

    • anonymousskimmer says:

      I don’t know anything about their particular yeast strain but it is technologically easy in multiple ways to cripple an organism such that it is incapable of competing in the wild. (Engineering them to have multiple auxotrophies is a very simple way, but by no means the only way.)

      Your last paragraph demonstrates how hard it is to introduce a foreign species into an already robust Darwinian biome (and the microflora is a robust Darwinian biome).

      Even if they did get out into the wild they’d be directly competing with their own unengineered cousins. The engineering was done to produce a product at no benefit to themselves; the cousins can reproduce without having to produce that product, and are thus at an energy advantage (as well as being far more prevalent in sheer number of organisms – they are already maximally [+/-] filling the available niche, leaving no room for the potential invader).

      Please note that it is possible to engineer an organism to have a fitness advantage over its wild-type cousins, but doing so would take effort and would likely not help in using it to create the product. So it’s highly unlikely that this yeast strain has been so engineered.

      • The Big Red Scary says:

        Thanks. I didn’t know the word auxotrophy. Are there any known examples of organisms “engineered” for fitness getting out into the wild? I am almost completely ignorant of microbiology, which is why I am asking, but am generally worried about unforeseen consequences in any intervention. It is good to hear that people do think these things through.

        (About my example: the fungus didn’t cause problems for the thriving of the plants; it was more of a regulatory issue for export.)

        • anonymousskimmer says:

          Glad to shed some light on my discipline wherever I can!

          Outside of domesticated and semi-domesticated animals, not that I’m aware of; but I just work in Synbio, and I don’t study this issue.

          The easiest way I can think of this happening is by modifying commensal organisms (which create products that directly aid another organism, which then aids them) into being even more commensal (and thus preferentially helped by the crop plant). This is something I worry about with respect to bacteria/crop engineering. But even this has various tricks which can be used to minimize the chance of escape.

          It is always good to be wary of unforeseen consequences.

  63. Edward Scizorhands says:

    I’ve skipped a lot, but on artificial meat:

    1. I’m willing to buy it if I can buy it easily and will pay a small (but not significant) premium. “Buy easily” means at my normal grocery store, not at Whole Foods. Or online, if such a thing is possible. (I’ve never done anything like Omaha Steaks.) What’s the state of the art on this?

    2. My wife isn’t going to put up with a bunch of nonsense so you really really need to make it indistinguishable from meat, especially in the cooking part.

    3. I wouldn’t mind if initial versions were 2/3 real meat and 1/3 non-meat in order to satisfy points 1 and 2. If the end goal is really to get meat-eaters to reduce their impact, this should be an acceptable compromise, especially to get things started, but I strongly suspect the market is bifurcated into two extremes that despise each other.

    3.a. I can easily see how to have a mix of meat for ground meats or sandwich meats, but would it work for things like pork chops, chicken breasts, or steaks?

    • rlms says:

      Option 3 already exists for e.g. cheap sliced meat (I remember seeing some “chicken” that was 40% chicken and 60% pea starch IIRC). According to this possibly dubious source, the same goes for Subway’s chicken (the rest is soy).

  64. Dragon God says:

    Well, this has only cemented in me the notion that I don’t want to be associated with that EA movement.

    Maybe I’ll just take a pledge to donate (10-25)% of my income to Science and Education causes (once I start earning).

    If EA involves debating whether predators should be killed to save prey, saving worms on the road, etc, then I want absolutely nothing to do with EA—this is not even considering the stranger branches within EA.

    I am fundamentally opposed to animal moral relevance, animal rights and animal personhood; I’ll also be against having my money used to reduce animal suffering.

    • Vigil says:

      EA is a big tent. Not everyone in it thinks animal suffering is a problem. And you can certainly decide what to do with your money :).

  65. Christopher Hazell says:

    I’ve seen people touch on this but I want to get there directly, and ask this question:

    “What the fuck does any of this have to do with me?”

    Believe it or not, I’m not talking about the question of whether subatomic particles can suffer; I actually want to shunt all that to the side. The pitch I see here, the one used to sell EA to the rest of us, is basically “EA is about Silicon Valley venture capitalists curing malarial African children.”

    Great. Good for them. I’m not a member of either of those groups.

    I am not a malarial African child, and, in fact, in any sane moral calculus, I am better off than an African child suffering from malaria. I am also not a Silicon Valley venture capitalist. I can’t help noticing that the 80,000 Hours website starts its pitch with “So you’re somebody with a type A personality who is well on their way to graduating near the top of the class in an Ivy League school”. Is there a site about earning to give that starts with “So you’re six credits short of graduating from art school and you often spend entire days in your room dealing with depression in between shifts at your part-time, minimum wage job…”?

    It’s not that I feel guilty about not being able to earn to give, or, at least that’s not the main feeling I want to talk about.

    It’s more that people like me are all over the world; not secure or smart enough to feel like we can be big name, Bill Gates types, not in such dire straits that we really seem like helping us is that important in utilitarian terms.

    One thing that concerns me about EA, and about the broader Rationalist Community, is that it seems like an enclave of powerful people discussing how best to manage me and mine. One of the reasons for modern anomie is the feeling that you are being helplessly buffeted by historical forces over which you have very little control. I want to raise this question: What if that feeling remains even when the historical forces have a logical justification? If, for example, you’re a trucker who is clearly going to be disrupted right out of your job in the very near future and replaced with a robot truck, perhaps the problem isn’t just that you lost your job, but that it suddenly seems like you have no control over, or relevance to, modern life.

    I see very little evidence of EA even considering this problem. “Bill Gates will pay for the research to make really good Soma so that you don’t care if you’re irrelevant to modern life!” is not at all comforting. “We’ll help you learn to code” actually IS somewhat comforting, if you have good programs and can get past the “I’m too old and stupid to understand computers” part.

    As the middle class deteriorates, depending on it as a source of charity is going to get harder and harder, because the classic charitable pitch is “Since you, like most of your community, are living a life of stability and prosperity, you should share your wealth with people who don’t have it!”

    If you in fact lack stability and prosperity, that pitch can seem hectoring, or even cruel. But more important, even if you buy it, it means that if you want to build up stability and prosperity, you must go somewhere else.

    EA seems to be heading towards a consensus that a small group of philosophers should be directing a small group of billionaires to spend money in “effective” ways. But what if one of the problems is that people don’t like to be managed by distant bureaucracies whose inner workings they can’t understand and whose goals they did not participate in choosing? What if that remains an incredibly disconcerting, humiliating position to find yourself in even if the bureaucracy is carefully run by good people?

    If you’re a normal, 100 IQ person (or slightly below), do you have anything to contribute to the world or your community? Or is your proper destiny to simply let Silicon Valley manage your life until genetic engineering can cure your sad condition?

    • Christopher Hazell says:

      PS – The focus on AI above exacerbates those concerns for me. What if the problem is that being that closely managed causes humans to feel infantilized, and then leads to all kinds of negative psychological effects, independent of how well you are being micromanaged?

    • lkbm says:

      There’s a lot to your comment, and I’m not sure what the core thesis is. I think I agree with a lot of it, but it doesn’t seem to be about EA as much as the dire economic situation in the US right now. The less wealthy echelons of American society are struggling, true. But that’s not EA’s doing. (And EA tends to acknowledge that people who can’t give shouldn’t.)

      SV has decided the poorer echelons are irrelevant? Seems somewhat true, and is a problem, but also doesn’t feel EA-related to me. SV is eating the economy? Sure, but I’m buying bed nets, not Silicon Valley global domination.

      When I started donating to GiveWell’s recommended charities, I was working for a non-profit, at a school. My brother, another EA, still works in the public schools. Richer people will give more, sure, but being able to find the best way to improve the world with what I have to give is just as relevant to me, and is what EA is centrally about.

      I haven’t had any discussions with other EAs that made me feel as if my meagre contributions are irrelevant because they’re small. I get that exclusively from the non-EA skeptics, who try to convince me that donating my money isn’t worthwhile. EA is about convincing me that yes, my little bit can make a difference, cheering me on with every additional bit I can contribute, and telling me how to get the most bang for my buck.

      I’m not sure what use EA is for people who aren’t able to contribute, but I also don’t know what use they *could* be. It seems like objecting to Firefox because it doesn’t matter to people with no Internet access.

      Sorry if my response missed the mark on what you’re saying.

      • Christopher Hazell says:

        You’ve touched on one issue that I have, which is that there has been some stuff I’ve seen here on SSC where it seems like one branch of EA thinking is tending towards a conclusion along the lines of “As wealth becomes increasingly concentrated in fewer hands, it makes the most sense to try to curry the favor of a few ultra-rich people and direct them towards beneficial projects.”

        I think that’s not great, but even I don’t see that being the main conclusion of EA, so much as a pitfall to be avoided.

        I’m not sure what use EA is for people who aren’t able to contribute, but I also don’t know what use they *could* be.

        Well, I think it’s worth asking how a statement like that fits into the “We’re going to save the universe!” tenor of the conference Mr. Alexander was reporting on.

        But I’m actually not really talking about that, either. It’s more that I find that EA’s assumption about how charity works is very… unidirectional. It sort of works on the following assumption: You find a group of people who have really satisfied the lower portions of Maslow’s pyramid, and who thus have a surplus of energy and a need for more emotional/spiritual/whatever satisfaction. You help them learn to give their surplus resources away to people who are in very desperate trouble.

        The thing is, as the middle class shrinks, so does the number of people who have reliable, secure access to those lower portions of the Maslow pyramid; but at the same time, things are not so catastrophically awful in the US that the struggling middle has reached the point of needing, or in many cases even wanting, charity.

        What I’m saying is that, more and more you are going to get people saying things like “I’d really like to come work in that soup kitchen, but I can’t afford a babysitter.” Or, perhaps more abstractly, “I donate 10% of my income to charity, and I know they’re good ones, but I still feel really rootless and depressed.”

        Basically, once you get away from the upper middle class and the rich, you start to hit groups of people who have both surpluses of some things and needs or desires they’re finding it difficult to meet on their own. It’s fine to say, “If you can’t come to the soup kitchen, don’t worry about it,” but it would be better to be able to say, “Hey, we’ll find a way to arrange reliable free babysitting for you once a week so you can come volunteer.”

        I don’t think you’re going to be able to save the world if you don’t recognize or work at that issue. Career coaching, for example, is one way to do this, but like I said, the first time I looked at that career website and their pitch my immediate thought was “Oh, this is for, like, really smart, driven people who can already get whatever they want. It’s not for a depressed art student with high school level math knowledge and a minimum wage income.”

        • Vigil says:

          Two points I would make:

          1) Your 10% (or 5, or 1, or 0.1!) really can make a huge difference to people’s lives in absolute terms. This was true at the start of EA and is still true now.

          2) It seems like you’re looking for, or proposing the need for, a community that can help support you in your charitable giving. I agree! Some EA groups in areas with a decent density of EAs act as social communities; some don’t. I think EA should do more of this. In other places it’s harder due to the sparsity of EAs. But it’s certainly a worthy goal.

    • Deiseach says:

      I can’t help noticing that the 80,000 Hours website starts its pitch with “So you’re somebody with a type A personality who is well on their way to graduating near the top of the class in an Ivy League school”.

      If I wasn’t already depressed… ._.

      It does make it seem more and more that EA is not for “ordinary” people.

      I’m not sure what use EA is for people who aren’t able to contribute, but I also don’t know what use they *could* be.

      That is part of the problem; ordinary Joe and Mary might like to help improve the lot of humanity, might be concerned, might welcome having a guide to the best way to give and to spread the good word, and then they get the message “well that’s nice but you’re way too poor to be any use”.

      Not to be doing nothing but carping, I do think it is great that something like EA exists and that it gets well-off people (and let’s not beat about the bush here, if Joe and Mary Ordinary are too poor to be any good, you guys are well-off by comparison to the majority of your fellow-citizens, even if you are only starting on your high-earning career) thinking about how to improve the lot of common humanity. That’s very laudable.

    • Naclador says:

      Hi Christopher,

      I think you raise some very important issues here. In fact, I did not realize how heavily elitist the EA movement really is until I read your post.

      But I think this is really the point that is going to shape most of the US’ near future. Trump won the election not because many people honestly believed he would make a good president, but because “the Elites” (TM) (which is media, academia, Democrats, even celebrities) told the electorate what to do, and the people thought they’d had enough of being bullied into making bad election decisions. And as much as Scott portrayed Hillary as the less risky choice, I cannot at all agree with that. Hillary has been warmongering for all her political career, while Trump said he wanted to normalize relations with Russia, which should be a high-priority project for any sane person given the dire consequences of nuclear war. While top-level Democrats and Silicon Valley people tell the former working class (and those are the people who produced the resources to get Silicon Valley running in the first place, don’t forget that) that they are not needed any more, that they are too stupid and backwards to decide for themselves how they want to live, Trump said he wanted to get back their manufacturing jobs and told them they were important to him (I do not say that he meant it, just saying what got him elected). Sanders would have been a very good choice in my opinion, but apparently the DNC cheated him out of the candidacy.

      And now I am going to break a taboo: I say it is not the best way to improve the world to let a tiny minority of the population get rich as f***, and then beg them to give a few crumbs to a good cause. EA’s main goal should be to decrease inequality to the point that every person is significant again. I don’t mean a communist “the same to everyone (except party members)” equality; there can still be millionaires (but possibly no more billionaires). I think a so-called democracy should not enslave itself to the whims of a perversely rich oligarchy, then go begging them to do a little good with all their money now and then. How did they get that rich in the first place? Why do you Americans always think they deserve the wealth they have accumulated? This seems to be a superstition deeply rooted in the American mindset. I’ll tell you something: no one person deserves to command over a billion dollars, no matter how clever and hard-working he or she might be.

      You can go on telling the unemployed class that they are unimportant and of no use to anyone, risking civil war; you can stick them into FEMA camps or shoot and bury them; but I do not think this is what most people would call an altruistic approach. Or you can put them to work again, give them a fair share of the economic cake, and let them decide where their future should lead them. And if their main goal is not eradicating all predators or destroying physics, their word should count as much as yours, even if you have all the fancy academic titles and they have only their gut feeling. That is what democracy is all about. Otherwise, stop the pretense and call it an oligarchy ruled by the rich and counselled by the smart.

      • Vigil says:

        You can go on telling the unemployed class that they are unimportant and of no use to anyone, risking civil war; you can stick them into FEMA camps or shoot and bury them; but I do not think this is what most people would call an altruistic approach.

        EA does not do this. Maybe society as a whole does. Even at its most elitist, EA is about moving money from the wealthiest in order to benefit everyone.

        Is this in some sense a “band-aid”? Maybe. EA could redirect itself towards lobbying for egalitarianism, but the movement tries hard to stay out of politics, so as not to dissuade half the population from engaging in work that a wide variety of people across the political spectrum agree is good (a half that would likely quickly become 1/4, 1/8, etc., if EA took many political stances).

        Working towards a more egalitarian society is, in my view, a noble and necessary goal, though. I just don’t agree it’s the place of EA to make it happen.

        • Christopher Hazell says:

          I don’t know if this conversation is still current, but:

          “Even at its most elitist, EA is about moving money from the wealthiest in order to benefit everyone.”

          I’m trying to explain why I think that’s a problem. Basically, you have a paradigm where the world is divided into two groups, who mostly don’t interact: The givers, who provide the money, expertise and leg-work, and the receivers, who get the benefit of what the givers give.

          On a tactical, project-by-project basis, these people have to interact, but at the strategic level I would say EA seems to be coming to the conclusion that there is really no need to move people from the receiver to the giver category. Concentration of wealth is not really a particular problem, because EA is about finding ways for concentrations of wealth to be directed towards charity. Who has the wealth doesn’t really matter, as long as it’s well managed and well directed. Bill Gates giving $1 billion to charities is just as good as ten million people giving $100 to the same charity. Maybe better, because you don’t have the chaos of ten million different goals and moral systems cluttering up your spending.

          What I would argue is that one reason for modern malaise is the sense that your life is being managed by incomprehensible forces that you can’t possibly control, petition, or even meaningfully participate in.

          I don’t think EA really grasps this. Getting the babysitting together so a housewife or janitor can go help in a soup kitchen isn’t just about the people they help, it’s also about giving that janitor or housewife a sense of control, of community, of participation, of communication, of being connected to and mattering in the world.

          If you get them the babysitting and then just go, “We don’t really need your help in the soup kitchen. Honestly we don’t really need your help with anything; you make $30,000 a year and you only have a GED. So I guess go see a movie or something?”

          Then that’s, well, bad.

          I also have two other objections:

          1. Messaging: Reading about how people believe they’re on the verge of creating AIs powerful enough to destroy life in, well, the universe, and then having them go “Whoa, what do you think I am, some kinda miracle worker?” when you ask how low-income people can get babysitting or learn to code… It’s not good optics.

          2. Strategic: AI risk is apparently a big deal in the EA community. Mr. Alexander has been explaining that all the best AI researchers take it super seriously. So… there’s no problem, right? The best minds in AI will make sure that what they build is carefully designed and not deployed before its time.

          Oh, what’s that? They’re having trouble getting funding, and those AI researchers aren’t actually the ones making decisions about what gets funded or where research money is directed?

          The concentration of decision making power into fewer and fewer hands is interfering with a lot of your broader strategic goals. The same kind of dynamic happened with nuclear weapons and global warming, and if you keep ignoring it it is going to keep happening.

          I don’t think this issue is too big or unachievable compared to, say, overturning the laws of physics.

  66. keranih says:

    His tactic was to advance research into plant-based and vat-grown meat alternatives, which he predicted would taste identical to regular meat at a fraction of the cost, and which would put all existing factory farms out of business.

    Well, vat-protein might put “all existing factory farms out of business”, once it is available at a fraction of the cost.

    But… before it does so, it will put all the “non-factory farms” out of business, particularly the inefficient, costly organic farms that are making their money off near-vegan altruists.

    Leaving entirely aside the ongoing lack of a definition of “factory farm”, much less a basic understanding of the comparative rates of domestic animal suffering under various production systems – people have a variety of preferences in goods. Cost continues to be an overriding driver. Even in America, which provides food at unbelievably low cost by global and historical standards, with constant supply and high safety, cost is the primary driver of consumer choices in food. (Convenience is a close second.) This is not true for all subgroups of the population, which is why the EA/rationalist types can imagine that anything near a plurality of the nation shares their priorities.

    Right now, “vat-ethical” food is neither cheap enough nor good enough to be a replacement for much of anything. Given how much turkey continues to struggle to replace pork (which tofu hasn’t even come close to doing), we can expect a long slog towards a cheaper & better substitute for animal protein. While I agree that there is a tipping point where the average consumer will drop natural meat and go for the manufactured product, based just on price, I don’t think this tipping point will come fast. Before the product is accepted widely, it will have to be accepted among the target demographic (EAs and the like) who are currently keeping organic meat afloat at grossly inflated prices. If “vat-ethical” food can compete with conventional food, it can certainly undercut organic.

    Which means that the conventional “Factory Farm” products will remain competitive long after the organic farms have largely given up and sold out to become houses and suburbs.

    Leaving all that aside, I am deeply disappointed in Deiseach, who has failed to bring up Isaiah 11:6-7 in any form in response to the idea of deconflicting predators and prey. Because I think a meditation on those verses is extremely important for an understanding of the “bring an end to physics” concept and why it’s not inherently stupid.

    Conflict, competition, revulsion, and predation are folded into the fabric of Creation. We see this in the motion of electrons, in bio-webs, and even in our own bodies. There is a molecule – transferrin – which is highly conserved across mammals and which is activated by increased heat. The role of this molecule is to bind up free iron in the bloodstream, so as to make iron unavailable to bacteria attempting to multiply. In a fever, this molecule becomes more active, starving the pathogens of needed nutrients. Forget the inability of leopards to chew cud (which is what goes on when young goats recline together); the bear and the cow grazing together is only a gross, mega-fauna example of the deeply present competition between all organisms.

    The passage in Isaiah speaks of the effects of the coming of the Kingdom of God, when “They will neither harm nor destroy on my holy mountain; for the earth will be full of the knowledge of the LORD, as the waters cover the sea.” The transformation – the removal of conflicts, of repulsions, and of predation – will come about *after* a complete penetration of the whole of the world with “the knowledge of the Lord”.

    This is the effect that ‘weird EA’ is trying to bring about. And it will take as complete a re-writing of Creation as indicated in scripture.

    I think the socially polite thing to do in this situation is to wish them luck.

    • engleberg says:

      @keranih – ‘we can expect a long slog towards a cheaper & better substitute for animal protein.’

      Yes, seems likely. But we’ve been making cheaper substitutes for real food as long as we’ve been preparing food. A synthetic grease Ducks Unlimited can’t tell from duck fat would sell great. Bakers have been putting sawdust in bread since the Egyptians. Pink slime, enough said. Kentucky fried tofu with creatine powder would sell. Isaiah might bitch, but the Preacher would understand.

  67. Steve Sailer says:

    “The lake-fringed monumental neoclassical architecture represents ‘utilitiarian distribution of limited resources’”

    In “I Am Charlotte Simmons,” Tom Wolfe stops the flow of narrative at one point to deliver a Jovian thunderbolt of an insight:

    “He stood in the lobby, just stood there, looking up at the ceiling and taking in its wonders one by one, as if he had never laid eyes on them before, the vaulted ceiling, all the ribs, the covert way spotlights, floodlights, and wall washers had been added … It was so calming … but why? … He thought of every possible reason except for the real one, which was that the existence of conspicuous consumption one has rightful access to — as a student had rightful access to the fabulous Dupont Memorial Library — creates a sense of well-being.”

    I get a sense of well-being just remembering that I could go to San Francisco and visit those buildings in Golden Gate Park.

    • Steve Sailer says:

      The funny thing is that the very existence of the Palace of Fine Arts probably couldn’t be justified rationally based on the calculus of effective altruism, but just thinking about it cheers me up, even though I probably haven’t seen it in person since I went to see the Queen in 1983.

      When I arrived at the outlandish Hyatt Regency in San Francisco

      https://94xd213ur0icvxo-zippykid.netdna-ssl.com/wp-content/uploads/2013/06/fig2-atrium.jpg

      for a business trip in March 1983, I saw on the news that the President of the United States was hosting the Queen of England at a state dinner in Golden Gate Park. Feeling a sense of well-being from the extreme conspicuous consumption of the Hyatt Regency, to which I was rightfully entitled, I ran downstairs and told the cabdriver, like the hero of “Pussycat, Pussycat”, to “Take me to see the Queen!”

      “Which one?” he replied. “This town is full of them.”

      I explained that I wanted to see the Queen of England in Golden Gate Park. So he dropped me off at the edge of the park in a crowd of IRA protestors. Eventually, a motorcade appeared, and excitement mounted. Finally, Queen Elizabeth and Prince Philip roared by doing that weird vertical rotating wave they do. The crowd went nuts. The IRA protestors jumped up and down cheering in excitement at the hated monarch, then looked shamefacedly around at their cohorts.

  68. englishwithelliot says:

    Fantastic writeup. Thanks so much. You’ve found a new follower.