A Maximally Lazy Guide To Giving To Charity In 2019

[Sorry for the interruption; we will return to our regularly scheduled Adversarial Collaboration Contest tomorrow.]
[Epistemic status: I’m linking evaluations made by people I mostly trust, but there are many people who don’t trust these, I haven’t 100% evaluated them perfectly, and if your assumptions differ even a little from those of the people involved these might not be very helpful. If you don’t know what effective altruism is, you might want to find out before supporting it. Like I said, this is for maximally lazy people and everyone else might want to investigate further.]

If you’re like me, you resolved to donate money to charity this year, and are just now realizing that the year is going to end soon and you should probably get around to doing it. Also, you support effective altruism. Also, you are very lazy. This guide is for you.

The maximally lazy way to donate to effective charity is probably to donate to EA Funds. This is a group of funds run by the Center for Effective Altruism where they get experts to figure out what are the best charities to give your money to each year. The four funds are Global Health, Animal Welfare, Long-Term Future, and Effective Altruism Meta/Community. If you are truly maximally lazy, you can just donate an equal amount to all four of them; if you have enough energy to shift a set of little sliders, you can decide which ones get more or less.


If you have a little more time and energy, you might want to look at the charities suggested by some charity-evaluating organizations and see which ones you like best.

GiveWell tries to rigorously evaluate charities that can be rigorously evaluated, which usually means global health. They admit that they have to exclude whole categories of charity that try to change society in vague ways, because those charities can’t be evaluated as rigorously. But they do a good job of what they do. Most of their top charities fight malaria and parasitic worms; this latter cause is interesting because these worms semipermanently lower school performance, concentration, and general health, suggesting that treating them could permanently improve economic growth. You can donate directly to GiveWell (to be divided up among their top charities at their discretion) here, or you can look at their list of top recommended charities for 2019 here.

Animal Charity Evaluators is the same thing, but for charities that try to help animals, usually by fighting factory farming. You can donate to ACE’s Recommended Charity Fund, again to be divided up among their top charities at their discretion, here, or see their list of top recommended charities for 2019 here.

AI Alignment Literature Review And Charity Comparison is a report posted by LW user Larks going over all the major players in AI safety, what they’ve been doing the past year, and which ones need more funding. If you just want to know which ones they like best, CTRL+F “conclusions” and run it through rot13. Or if you’re too lazy to do that and you just want me to link you their top recommended charity’s donation page, it’s here.

Vox’s report on the best charities for climate change lists ones that claim to be able to prevent one ton of carbon emissions for between $0.12 and $1, compared to the $10 you would pay on normal offset sites. Their top choice is Coalition For Rainforest Nations (but see criticism here), and their second choice is Clean Air Task Force.

You might also want to check out ImpactMatters (a version of GiveWell focused on literal First World problems), Let’s Fund (a site that highlights charities, mostly in science and technology, and runs campaigns for them), this post on the Effective Altruism forum about which charities people are donating to this year, and this list of what charities the charity selection experts at the Open Philanthropy Project are donating to.


And if you’re not actually lazy at all, you might want to check out some interesting individual charities that have been making appeals around here recently (others can add their appeals in the comments if they want).

The Center For Election Science tries to convince US cities (and presumably plans to eventually work up to larger areas) to use approval voting, a form of voting where third party candidates don’t “split the vote” and you can vote for whoever you want with a clear conscience. They argue this will make compromise easier and moderate candidates more likely to win. They’ve already succeeded in changing the ballot in Fargo, North Dakota, and as the old saying goes, “as Fargo, North Dakota goes, so goes the world.”

Happier Lives Institute wants to work directly on making people happier, but they realize nobody really knows what that means, so they’re doing a lot of meta-research on what happiness is and what the best way to measure it is. Aside from that, they seem to be working on cheap mental health interventions in Third World countries.

Machine Intelligence Research Institute works on a different aspect of AI alignment than most other groups; this comic explains the technicalities better than most sources. They are secretive and don’t talk a lot about their work or give a lot for people to evaluate them on, so whether or not you donate will probably be based on whether they’ve won social trust with you (they have with me).

Charter Cities Institute is trying to work with investors and Third World governments to create charter cities, autonomous cities with better institutions that can supercharge growth in the Third World. For example, a corrupt Third World country where doing business is near-impossible might designate one of their cities to be administered by foreign judges under an open-source law code, so that enterprise can take off. Think of it as a seastead, except on land, and with the host country’s consent (they’re hoping to profit off the tax revenue). David Friedman’s son Patri is leading another effort in this direction.


Finally, if you’re really skeptical and don’t believe any charity can accomplish much, you might want to consider GiveDirectly, which just gives your money directly to very poor people in Africa to do whatever they want with.


181 Responses to A Maximally Lazy Guide To Giving To Charity In 2019

  1. SolveIt says:

    Typo thread: “GiveWell tries to rigorously charities that can be rigorously evaluated” is missing an “evaluate”.

  2. Said Achmiz says:

    > maximally lazy

    Hmm. I cannot but take exception with what can only be called abuse of language. Surely, anyone who bothers even to read this blog post is unworthy of the label “maximally lazy”?!

    Here, then, is my…

    Really Maximally Lazy Guide To Giving To Charity In 2019:

    Don’t.

    • Aapje says:

      Your comment is too long to read. 1 line summary plz.

    • Scott Alexander says:

      Surely anyone who reads four whole letters cannot be called maximally lazy, so the real best advice is do.

      (and then you’ve got to figure out how to do it, hence the blog post)

    • Dacyn says:

      I think it’s only supposed to be maximally lazy relative to being a guide to giving charity in 2019, which arguably yours isn’t. So sure, people who read the post are not maximally lazy (in an absolute sense), but it was never claimed that they were.

    • This type of clever but dismissive comment adds nothing substantive to the discussion, and I hope fewer such comments get made in the future. I’m not saying this is a horrible comment, or that it should be banned – but I don’t see any value in it, and it doesn’t belong here – “If you make a comment here, it had better be either true and necessary, true and kind, or kind and necessary,” ( https://slatestarcodex.com/comments/ ). To support norms, I wanted to note this publicly.

      Is this true? Perhaps it’s true that lazy people won’t read this.
      Is this necessary? If so, they also won’t read the comment. At best, it’s a clever side-comment.
    Is this kind? Not particularly – it’s dismissive of the post in what I think is an unhelpful way.

      • Eric T says:

        I don’t know, maybe we should add a fourth prong – Is it funny? This comment thread was fun to read and everyone seemed to be having a good time.

  3. psa says:

    There’s also Foundational Research Institute (FRI).

    • Cliff says:

      They are discussed in the AI charity roundup, although the author says that “they adopt what they refer to as ‘suffering-focused’ ethics, which I think is a quite misguided view” without further elaboration. I donate to them.

  4. sty_silver says:

    Props for not shaming laziness! I strongly relate to the strange-seeming position that is willing to sacrifice quite a lot to donate money but has zero motivation to actually research charities. It’s extremely valuable to have organizations that do this for us and are so trustworthy that we can just take their word for it – and I definitely believe GiveWell is one of those (with the important disclaimer that they only search over a particular class of interventions).

  5. Lanrian says:

    One extremely lazy way to donate, that you forgot to mention, is the donation lottery. With really high probability, you don’t have to do anything at all, but you don’t lose any expected value from it (insofar as you’re risk neutral with respect to donations). And if you do win, well, maybe that will give you enough of a kick to at least figure out how to set the EA fund sliders.

  6. Orpheus says:

    And not a single charity dedicated to the arts. Nice. Tell me, in the rationalist vision of the future, will people only read Harry Potter fanfics and academic articles about AI alignment?

    • Nicholas says:

      They’ll presumably get around to supporting the arts after a few of the world’s real problems are solved… or there’s some kind of art crisis that threatens humanity.

      • Orpheus says:

        And when is that going to be? Like, about ~2025ish right? I am sure we will solve EVERYTHING by then.

    • Aapje says:

      @Orpheus

      -1 for assuming that people that think that enough charity (and tax) money is going to the arts, want zero spending on the arts
      -1 for implying that this suggestion to a subculture is a diktat for society at large
      -1 for claiming that a world without subsidized art will only have fanfics and academic articles about AI
      -1 for asking a question that is really an accusation

    • thisheavenlyconjugation says:

      How many dead children are the arts worth?

      • Orpheus says:

        How many dead children is AI alignment worth?

        • thisheavenlyconjugation says:

          Depends if it’s real or not. But the arts are definitely not real, so…

          • Eric T says:

            I think that’s silly. Art makes people happy, happiness is good under almost any moral system we can throw out. There’s no need to pretend that Art isn’t real to make the point that it might not be more important than people dying of Malaria.

          • Michael_druggan says:

            Please stop, you’re making us look bad.

            Art is real and has real utility, but it just so happens that it is currently well funded to the point that the marginal dollar invested in it brings as much utility as the marginal dollar invested elsewhere.

          • Andaro says:

            “Art makes people happy, happiness is good under almost any moral system we can throw out.”

            I don’t understand this argument. Presumably if art makes people happy, people will buy art with their own money. Or at least tickets to museums etc. If you’re concerned with poor people not being able to afford the art that would make them happy, you can give money to poor people and let them decide if they want art.

            I understand people donating to art because that’s what they want to do, intrinsically. I don’t understand the utilitarian argument for it. Especially if you consider opportunity cost (again, shouldn’t you utilitarians be working on tiling the universe with optimized happiness-generating systems?)

            I think there’s too much tax-funded art because it’s high status, so politicians want to spend other people’s money on it, even though that money could be more useful to us directly. The only really good argument is things like copyright not working properly, so some art might have positive externalities. But I don’t think this justifies the tax-funded spending we have today.

          • I think that’s silly. Art makes people happy, happiness is good under almost any moral system we can throw out.

            The fact that something is good is not sufficient to make it a suitable object of charity.

            Eating pistachios is good. But when I eat pistachios, all of that good goes to me, so I have a sufficient incentive to pay people to produce pistachios for me to eat. No reason for other people to subsidize my consumption.

        • AlexOfUrals says:

          Obviously “however many children are going to die from unaligned AI” – 1. Which is, to the first approximation, all the children we’re going to have at the point we have AGI.

    • Orpheus says:

      -1 for assuming that people that think that enough charity (and tax) money is going to the arts, want zero spending on the arts

      I didn’t.

      -1 for implying that this suggestion to a subculture is a diktat for society at large

      I didn’t.

      -1 for claiming that a world without subsidized art will only have fanfics and academic articles about AI

      It’s called comedic exaggeration.

      -1 for asking a question that is really an accusation

      Fair enough.

      • sty_silver says:

        -1 for assuming that people that think that enough charity (and tax) money is going to the arts, want zero spending on the arts

        I didn’t.

        That makes it even worse. It means you knew you were making a pointless inflammatory comment and chose to make it anyway.

        • Orpheus says:

          I didn’t assume the contrary, either. All I know is that Scott and other EA-types promote these strange AI institutes and encourage people to donate millions to what is basically speculative fiction at this point, all while ignoring the greatest achievements of our civilization and how they can be added to.

          • Cliff says:

            encourage people to donate millions to what is basically speculative fiction at this point

            Please re-think the mindset that allows you to dismiss so much hard work by so many very smart and passionate people as speculative fiction, without any deep understanding of it

      • Aapje says:

        @Orpheus

        After this clarification, I reported your initial comment for trolling.

      • Anonymous McPseudonym says:

        @Orpheus

        This is clear trolling and has no place here. It is neither necessary nor kind, and you just clarified you’re aware it isn’t true.

    • eric23 says:

      Which arts are actually worth supporting and not currently supported?

    • Nancy Lebovitz says:

      Art charities are pretty much for established art. If you wanted to help people doing something weird but possibly valuable, what would you do?

    • NoRandomWalk says:

      @Orpheus,
      am curious – are you supportive of art in the abstract, or particular art you personally find meaningful?
      For example, if you like a particular style of electronic dance music, but not the opera, would you support a program taking students to an Odesza concert only, or to a production of Madame Butterfly as well?

      I think most people like people around them, and very specific artistic pursuits. And it’s somewhat possible to get them to care about people not around them, and a lot harder for them to care about things they don’t recognize as art (it would be about as hard as to get someone who likes pleasure to donate money to help masochists experience pain).

      I don’t think public calls for charity to ‘art’ work well because of too varied differences in taste? The focus was on ‘good charities’. What I think is a good art charity depends more on my preferences than the bureaucratic efficiency of the organization.

      • Orpheus says:

        Both. Art in the abstract is important because many great works of art are only recognized long after they are created. Educating people on this point is also a good use of charity funds.

        • Aapje says:

          Lots of art is forgotten, though. What would you educate people on, anyway? That a banana taped to a wall is art?

          Why (or why not)?

        • Art in the abstract is important because many great works of art are only recognized long after they are created.

          Do you think that recognition of such works is more likely through charitable organizations or individual customers/patrons?

    • J Mann says:

      I would be interested in an EA approach to art donation, just to see what it would be like.

      IMHO, if you want to give to the arts, give to the arts. Scott probably doesn’t have much useful to tell you about which art charities are better than others.

    • caryatis says:

      I imagine most people here already spend money on the arts. I buy books, music, theatre tickets, museum admissions. But I wouldn’t consider it EA because…it’s not. It’s discretionary spending with a side effect of supporting art.

    • Some Troll's Serious Discussion Alt says:

      Seeing how much money is available to throw at taping a banana to a wall, it seems like The Arts are bloated with cash, not suffering a shortage.

      They need taxes not subsidies. Is there such a thing as anti-charity?

      • Evan Þ says:

        “The Arts” as a whole probably are, but there’re still pockets of specific arts that aren’t. For example, I know someone who regularly supports their local classical symphony orchestra. I’ve gone to hear it, and it’s very worthy of support even if my personal charity dollars are going to other causes.

      • Nancy Lebovitz says:

        The top end could be overpriced while there are still undercompensated artists.

    • AlexOfUrals says:

      I kind of can see where someone is coming from when saying something along the lines of “Those 50 guys in Africa must die in pain so we can house one homeless person in SF”. But I really can’t imagine how someone can genuinely argue “Those 100 guys in Africa must die in pain so we can have a new mural in SF”.
      (To be clear, I can totally imagine how someone might be thinking along these lines, but not saying it out loud)

      • Michael_druggan says:

        Really? I see them both as unimaginable.

        • AlexOfUrals says:

          Well, to me the position of putting much more weight on suffering right in front of your eyes, than on suffering far away, seems sooooomewhat defensible – not that I’m the one to defend or endorse it, and the higher the disproportion the less defensible it is, but still. The position of overtly weighting the suffering and death of people far away lower than another decoration in your already-well-decorated life, on the other hand, looks like a badly overdone evil overlord character from a teenage sci-fi movie, or something like this.

      • eric23 says:

        I find them similar. Housing a homeless person makes their life more pleasant. A mural makes thousands of people’s lives a bit more pleasant. What’s the difference?

    • What arts do you think require charitable donations to flourish?

      The art I like most is poetry, and I can’t think of any poets in recent centuries whose work depended on charity, although if you go back farther patronage played an important role. Painters can sell their paintings, musicians tickets or CD’s — I suppose buskers would be an example of an art form supported by a sort of charity.

      • J Mann says:

        I’m a fan of programs designed to foster interest in arts among children and teens – theater programs, visual arts, etc.

        A lot of art programs have a substantial donated component – I guess if you want other people to enjoy more art than they would at the market clearing price, subsidizing their art experience is a way to go about it.

        • Aapje says:

          These programs often seem to focus on a rather sick part of the art market, which is heavily producer-driven (where many artists make things that very few people enjoy) and/or driven by novelty & virtue signalling, which typically makes this art become anachronistic very quickly. The political element in particular makes for art that people feel themselves forced to enjoy, in the moment, but that people will abandon as soon as those politics lose their relevance.

          Tax-based subsidies just feed this sickness and makes things worse. It allows producers who make things that few people enjoy, to force people who don’t enjoy it to pay them. It allows an elite to offload a large part of the costs of their novelty-seeking and virtue signalling on others (typically less wealthy people, who more often have to pay full price for the art they enjoy).

          Of course, people are free to donate their own money, but most of this seems to be the elite paying for stuff that the elite enjoys. I see a lot of wailing and gnashing of teeth in my country how commoners/people of color are not participating in the elitist art. In the latter case, many seem to have concluded that one cannot “foster interest” unless people of color make stuff that people of color enjoy, yet somehow commoners merely need to be ‘educated,’ even as they are largely excluded from this art scene. It’s rather typical of these times that class consciousness is lacking, even as class oppression is strong.

      • Nancy Lebovitz says:

        As I understand it, poets in the US make their livings from teaching rather than from their poetry. I don’t have an opinion about whether charity would lead to better poetry.

    • blacktrance says:

      Given the low median quality of professional art, this would be an improvement over the status quo.

    • Said Achmiz says:

      You know, I was just discussing this comment thread with some folks on the internets, and it occurred to me in the course of that conversation that there is a potential kind of “art charity” that I could definitely see myself donating to:

      Some sort of… political lobbying organization, or some such… that would campaign to excise required “art appreciation” and “music appreciation” programs from public school curricula.

      I think that would do a lot to make people enjoy art more! Free schoolkids from the teeth-gnashing suffering of officially-imposed, committee-dictated, government-mandated “appreciation” of art and music… why, they could get me to donate just by appealing to my sense of spite against the “appreciation” classes I was forced to take, even aside from the aforementioned salutary benefits.

      “Artists Against Art” was suggested as a name. We’ve got some folks here who have experience in political organizing, haven’t we? Let’s get this thing going. I pledge half of my next year’s charitable giving to such a thing, were someone to set it up!

    • Scott Alexander says:

      I donate to five artists I like monthly on Patreon, but I don’t consider it charity, I consider it supporting something I like. I encourage others to do the same, but I don’t claim to be able to tell you what art to like.

      • I think that raises an interesting distinction.

        It’s charity if you donate to someone to whom you feel no obligation.

        It isn’t charity if you have no legal obligation, but feel a moral obligation. That applies to both of my Patreon donations, going to people who provide material I value and feel I ought to pay for.

        Just as, to take a stronger case, it isn’t charity if you borrowed money from someone and repay it, even though he has no way of proving the debt and collecting.

    • phi says:

      It seems like even if there were no art charities, some people would still do art while working a day job to support themselves. Given this, wouldn’t fighting malaria be a better use of charity money?

      • Aapje says:

        Plenty of art is financially viable, although that often means that the artist is not free to do what they prefer to do, but has to obey the market to get a decent salary (like most people). The pop art movement recognized this, by pointing out that a lot of advertising is art (like the famous Campbell’s soup can).

        IMO, one of capitalism’s big benefits is that it makes people compromise to produce things that other people enjoy much more, than their own loss in enjoyment due to the compromise. I don’t see why art is different. I don’t see a major market failure in art that charity can help fix (the opposite seems more the case), nor a major wrong that is remedied with more charity.

    • Michael_druggan says:

      The arts not being an EA cause doesn’t mean that effective altruists think they have zero value; it means the marginal utility of a dollar invested in them is lower than for other causes. It might be that the efficient allocation would be to invest slightly less than we currently do in art and slightly more in global health. Since I can only direct a tiny portion of the money being donated with my own donations, in order to move closer to this efficient equilibrium I should donate 0 to art.

    • holomanga says:

      Au contraire! EA Funds (recommended in this post) made a grant to Miranda Dixon-Luinenburg to write original fiction!

      • Aapje says:

        How is this effective altruism, rather than an example of cronyism, where you subsidize a friend because you like them?

        My quick research suggests that Miranda is an ICU nurse, who seems good at organizing things, which she did/does for EA and/or related organizations on a volunteer basis. Her writing seems to consist of fantasy fan fiction and two minimally read books (one has three ratings on Goodreads, the other, none).

        The expected outcome of that subsidy seems likely to be a minimally read book on X-risks, where that same subsidy could be given to an established author to educate themselves and write about the topic, which would almost certainly be more effective in reaching people.

    • Decision-making is about marginal benefits. No-one is discussing taking all funding away from other areas, and it’s not as though there is no charitable funding for non-fanfic literature, not to mention music and art. But that’s almost irrelevant given that most art is made within a commercial world. Most of what is done, even within the world of high art, isn’t supported by charity – they charge money for goods, like books, and services, like performances.

      Or are you saying arts are currently underfunded *relative to the suggested charities*?

  7. Bellum Gallicum says:

    Charitable giving where you don’t personally know the recipients is akin to littering in my opinion.

    If you have too much money give it to the lowest paid person you work with or invest it in a business that you understand. Otherwise you’re creating problems that you don’t have to deal with just like litter.

    • thisheavenlyconjugation says:

      No offense but that sounds really stupid (hey, you said you didn’t want to be given charity!)

    • eric23 says:

      Creating “problems” like not dying of malaria? If only we all could have that problem.

    • sty_silver says:

      Charitable giving where you don’t personally know the recipients is like littering iff you are an ethical egoist. In that case, even though your money has great positive effect on the world, it doesn’t have any effect on you personally, and if you value everyone else at 0, you’ve gained nothing. Conversely, if you do value other people, it’s extremely valuable, and almost certainly more valuable than giving money to someone you know personally.

      • Garrett says:

        I don’t think this is an element of ethical egoism, either. That viewpoint holds that charitable giving isn’t a good in and of itself (i.e., you shouldn’t get brownie points for doing it), but doesn’t view it as inherently bad. It treats charitable giving more like you’d treat your entertainment budget. If you feel warm and fuzzy when you give money to charity, you should do so because it makes you warm and fuzzy, not because it is a “good” thing to do.

    • NoRandomWalk says:

      I am very interested in your perspective, having never thought about it.
      Could you please elaborate?
      What kind of problems do you think 3rd world charities do (or might?) create?
      Or is it a more general objection, that because we don’t observe the secondary impacts we should be so uncertain that we should disregard the apparent greater first order benefit? Which is more at the level of personal intuition, and difficult to discuss rather than just state.

      • Bellum Gallicum says:

        People who haven’t run dynamic systems think dumping random energy into them is helpful.
        If you’ve spent as much time trying to uplift people and run positive output systems as I have, you understand money dumped from above is never the solution.

        What do you do for a living? Could you imagine people adding random inputs or individuals to your system and then expecting positive results and praise?

        And for the peanut gallery, thanks for calling me a stupid egotistical person who wants people to die from malaria, very classy.

        “The population of sub-Sahara Africa has grown from 186 million to 856 million people from 1950-2010. … By 2050, Nigeria is projected to outpace the population of the United States by about 30 million people. These staggering numbers are the results of decades of change in childbearing and mortality patterns.” (Oct 29, 2015)
        https://blogs.worldbank.org/africacan/7-facts-about-population-in-sub-saharan-africa

        • Michael_druggan says:

          But the whole point of EA is that we don’t just dump money randomly. We look at specific ways of doing it, study their effects, and see which ones give the desired outputs.

        • RandomName says:

          Dumb it down for me, what are you implying by posting statistics about population growth in Africa? Because my immediate (apparently uncharitable and unclassy) interpretation is you’re saying “Actually, more people DO need to die of malaria in Africa, because there’s too many Africans”.

          • Aapje says:

            I can’t speak for Bellum, but a more charitable interpretation is that focusing on saving lives just causes the Malthusian trap to be reached sooner, causing a quicker collapse.

            Note that the link that Bellum gave doesn’t just show population statistics, but argues that the combination of better healthcare with high fertility is creating a temporary opportunity where there are relatively few older people, but lots of people of working age, allowing for growth that will be harder to achieve once lots of people live to old age, putting a burden on society.

            I personally think that key to prosperity in Africa is to reduce fertility and increase investment in the kids they do have. High fertility produces GDP growth in absolute numbers, but harms GDP per capita.

          • Bellum Gallicum says:

            You’re suggesting that unlimited population growth in developing nations leading to constant civil wars, famine, new epidemics and refugee crises are the outcome charitable giving was trying to achieve?

            https://www.scientificamerican.com/article/world-should-prepare-for-11-billion-or-more-people/

            I don’t think they’re actually evil, just naive and divorced from consequences. But I’ve never lived in Liberia; maybe it’s a paradise NGOs have created. Have you tried living there and starting a business to employ locals? That would be virtuous in my opinion.

            You’re suggesting that unlimited population growth in developing nations leading to constant civil wars, famine, new epidemics and refugee crises are the outcome charitable giving was trying to achieve?

            Has famine increased? My understanding is that calorie consumption per capita in poor countries has trended pretty consistently up for the past seventy years or so, and that pretty nearly all modern famines are political, happening when one group of people is doing its best to keep food from another.

            The biggest African civil war ended about fifty years ago, and the last of the Hutu/Tutsi conflicts about twenty-five years ago.

            And, globally, the rate of extreme poverty has dropped about five-fold over the past fifty years.

            Why would you connect refugee crises to population rather than to the conflicts that produce them? There are lots of examples of similar conflicts in the past.

          • Bellum Gallicum says:

            I’m suggesting spending 4 Trillion dollars to achieve this

            https://www.dw.com/en/wars-killed-5-million-african-children-over-20-years-says-study/a-45299472

            was not as productive as people imagine, and the money would be better spent in your own community. Do you have a job? Does anyone there struggle? Why not help them. Or invest in a business that employs people where you live?

          • @Bellum:

            Your link might be evidence against decolonization—post Leopold, I don’t think there was any killing on that scale in the African colonies.

            But your claim, if I understand it, is that the killing was due to population increase. Where is the evidence for that? More generally, how do you explain the fact that the actual history of what happened, subsequent to widespread claims in the sixties along your lines, was the precise opposite of the predictions, with conditions in the third world trending up instead of down?

            Are you going with Hegel’s claim that, if the facts contradict the theory, so much the worse for the facts?

        • Anatid says:

          If you’ve spent as much time trying to uplift people and run positive output systems as I have you understand money dumped from above is never the solution.

          Can you share some examples of what goes wrong, to give some intuition for this?

          I am inclined to send some money to GiveDirectly on the grounds that giving poor people money seems like it ought to let them improve their lives. I definitely believe that if someone gave me a bunch of money it would improve my life. What sorts of things do you expect to go wrong here?

          When you say

          Could you imagine people adding random inputs or individuals to your system and then expecting positive results

          I have some sympathy, but one reason GiveDirectly sounds pretty good to me is that it’s just giving people money and letting them figure out how to spend it, since presumably they have the best sense of what they actually need. Does that mitigate the problems with giving charity to people you don’t personally know, or do you still think it’s going to turn out badly?

        • eric23 says:

          What do you do for a living? Could you imagine people adding random inputs or individuals to your system and then expecting positive results and praise?

          Yes. That sounds like when I get a new customer, I get paid more, they get the product they have paid me for, and everyone wins.

          • Bellum Gallicum says:

            Sales are not a dynamic system. I meant running computer networks, electrical grids, farms, or manufacturing lines, where you’re actually creating tangible goods that we rely on to eat, live indoors, and clothe ourselves.

            In the world of creating feelings, for example – sales, acting, poetry, academia – you can act any way you want and it works, because it’s all feelings and emotions. But build an engine like that, and you get smoke, fire, then an explosion.

            Networks and engines don’t have feelings; they have parameters, and they don’t care about apologies.

  8. caryatis says:

    I read the comic and still have no idea what MIRI is doing.

    • Bugmaster says:

      AFAICT, they are talking about strategies for recruiting more people who, at some point, might consider various approaches toward eventually developing a strategy for advancing the theory of AI alignment. Because, as everyone knows, the Singularity is imminent and will kill us all any day now.

      • FeepingCreature says:

        (note: this is not a belief that MIRI actually holds)

        (note: miri also do ongoing foundational AI safety research, not just outreach)

        • Bugmaster says:

          Have they published any of this foundational AI safety research? Has this research been peer-reviewed (ideally, by machine learning specialists)? Is any of this research applicable to AI as it exists today (which is already being abused in some cases)?

          • thevoiceofthevoid says:

            Your first two objections seem valid to me; your last less so. Their goal is not to prevent misuse of current AI, but to develop frameworks for safely building a general intelligence in the future, and they’re perfectly clear about the long-term focus of their research.

          • Bugmaster says:

            @thevoiceofthevoid:
            I acknowledge that your objection is fair; but, on the other hand, I feel even more apprehensive about donating to an AI safety organization whose goals do not include AI safety in the near future. That just sounds odd to me. We already have machine learning today; it is already quite obviously unsafe (in the hands of malicious or simply careless people); and it is almost unimaginably simpler than an AGI would be. Why can’t it at the very least serve as a test bed for all of this far-reaching AI safety research ?

          • thevoiceofthevoid says:

            @Bugmaster
            The short answer is that the kind of work necessary to make modern machine learning algorithms safer is likely orthogonal to the kind of work necessary to make a true agentic General Intelligence safe, and the fact that both of them are referred to as “AI” is a bit of an unfortunate namespace collision. It’s like trying to make a bicycle safe versus trying to make a car safe. They’re both “wheeled vehicle safety” and there may even be some small overlap in useful strategies (say, “don’t use cheap materials that break easily for the core structure”), but it’s clear that they’re fundamentally two different problems, and we shouldn’t criticise car safety engineers for failing to make bikes safer.

            Of course, you might still think that safety for today’s ML systems is more important than abstract AGI alignment research; MIRI has clearly made the opposite assessment.

          • MugaSofer says:

            MIRI has certainly published some research papers. I don’t know enough about the field to judge their worth or anything.

            Checking their website and filtering for “journal article”, it says they used to publish a fair amount of peer-reviewed stuff but seem to have stopped submitting to journals in 2014? The discussion of AI risk Scott linked also says that they announced this year they’re going to be putting most of their research under an NDA. Which … seems pretty dubious, although to be fair I assume OpenAI and their ilk do the same. [EDIT: sty_silver downthread says some MIRI researchers are publishing AI papers independently at the same time as working there, make of that what you will.]

          • nimim.k.m. says:

            @thevoiceofthevoid

            The short answer is that the kind of work necessary to make modern machine learning algorithms safer is likely orthogonal to the kind of work necessary to make a true agentic General Intelligence safe

            Is there a long answer that anyone outside the social network of MIRI has evaluated?

      • meltedcheesefondue says:

        To repeat a comment:

        One thing they are doing, is part funding me ^_^

        You can find some of my work here:

        https://www.lesswrong.com/users/stuart_armstrong

        https://arxiv.org/search/cs?searchtype=author&query=Armstrong%2C+S

        And, on a personal note, in my interactions with the MIRI researchers, they’ve been doing good and valuable work.

    • sty_silver says:

      The standard way of modeling agents is to consider them an indivisible entity that is separate from the environment. Obviously this is not actually the case; you are in fact part of the environment. The embedded agency research line is Miri studying what happens if you model the agent as part of the environment. It turns out this leads to all sorts of new problems.

      This is not everything Miri does, I think, it’s just one line of research. Although it seems to be incredibly fundamental, so it’s possible that the idea here is that it actually subsumes everything else Miri does, too, even the stuff they did before they started on embedded agency (which I believe is only a couple of years ago).

      It also has an interesting parallel to their work on decision theory, in that Causal Decision theory has the same flaw as classical agent models, namely assuming your own decision and decision making process is an indivisible entity separate from the rest of the world. It’s a bit of a recurring theme.

      • caryatis says:

        Thank you. And the decision theory stuff connects to saving us from evil AI?

        • Evan Þ says:

          Yes – if you program an AI with an incorrect decision theory, it could make evil or at least suboptimal decisions even if you mean for it to be good.

        • sty_silver says:

          I think the main problem with decision theory is that we want one that is stable under self-improvement.

          The most relatable thought experiment here is the one where you’re out in the desert alone, in danger of starving, and someone drives by with a car. For some reason, they’re not interested in saving you, but if compensated enough, they will do so anyway. Say they do it for 10000$. Now you don’t have that 10000$ with you, but you promise that you will give it to them once they’ve saved you. However, there is no enforcement mechanism, so the other person just has to take your word.

          If you are a causal decision theorist (and also completely selfish, in this analogy), you would not pay your savior, because, by the time you’re back in civilization, you no longer have any reason to pay 10000$. If the other person knows that you think this way (because they are a good judge of character / can read your source code) they know this in advance and will not save you. This is obviously a terrible outcome.

          So the key idea is then, if you are a causal decision theorist but also very smart, you might foresee that you are going to encounter such a situation in the future. In that case, you would be interested in modifying your decision theory now so that you become a functional / updateless decision theorist and will perform better in such problems. More concretely, you might do this if you’re an AI with the ability to rewrite your code and you think there is even just a small chance that you will encounter any problem of this class in the future. (Being an AI might make these problems more common because other agents might literally be able to read your code.)

          Making theorem-level proofs about the behavior of an AI that will self-modify is extremely difficult, and we might fail to do it in either case. But if the AI doesn’t even keep its decision theory, it’s probably hopeless. So as a necessary but nowhere near sufficient security guarantee, we would want it to have a decision theory that is optimal so that there is no motive to ever change it. Functional decision theory does appear to be optimal (except in problems that simply punish you for having a certain decision theory, which can be trivially used to show that no theory is optimal), and that’s why there’s a motive to study / formalize and eventually get AI people to implement it.

          As far as I understand this stuff, anyway.
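[A minimal sketch of the desert thought experiment above, as a toy payoff model. The perfect-predictor assumption and the utility numbers are illustrative choices, not anything from the comment.]

```python
# Toy payoff model of the desert thought experiment above. Assumptions (illustrative
# only): the driver perfectly predicts whether you would pay once safe, and utilities
# are -1_000_000 for dying, -10_000 for paying, 0 otherwise.

DIE, PAY = -1_000_000, -10_000

def outcome(would_pay_once_safe: bool) -> int:
    """The driver rescues you only if they predict you would pay afterwards."""
    rescued = would_pay_once_safe  # perfect predictor, by assumption
    if not rescued:
        return DIE
    return PAY  # rescued, and (as predicted) you pay up

# A CDT-style agent reasons that, once safe, paying has no causal benefit, so it
# would not pay -- and the driver predicts that.
cdt_utility = outcome(would_pay_once_safe=False)

# An FDT/UDT-style agent evaluates the whole policy and commits to paying.
fdt_utility = outcome(would_pay_once_safe=True)

print("CDT agent:", cdt_utility)  # -1000000  (left in the desert)
print("FDT agent:", fdt_utility)  # -10000    (rescued, pays the 10000$)
```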

          • Aapje says:

            Humanity already has built-in solutions to this issue, including: shame, guilt & reputation.

            We expect people to make (substantial) sacrifices to save others and if they don’t, we shame or even punish them. Similarly, we want the hero to be rewarded (in particular by the person he saved, in a way that matches the ability of that person). Instead of merely helping others to avoid this punishment, or compensating the hero to avoid shaming, people internalize this morality, based on what is normal in their (sub)culture.

            Based on reputation, including reputation stereotypes, we also predict the extent to which people will uphold a quid-pro-quo. There is some room to exploit reputation stereotypes, but also opportunity to gain personal reputation by being holier than thou. So the incentive is not just to exploit stereotypes, assuming that personal reputation is strong enough (this is actually an issue with tools like Tinder, where reputation is weak, making exploitation much more beneficial).

            So then a logical solution, if possible, is to make the AI have something like these same mechanisms, where it feels shame for breaking social norms, is punished mildly or severely for breaking them, gains personal reputation (and special treatment) for being a good bot, etc.

            Note that humans also have a selfish element to prevent them being exploited (too much). This may also be necessary for the AI, although to a lesser extent than for humans (since presumably, we are OK with using AI as slaves).

            Building a fully internal morality into advanced AI is almost certainly going to fail. Perhaps not so much because of the impossibility of such a thing (although this may definitely be true), but primarily because it will clash with human morality.

          • Cliff says:

            The reality is that human-level ethics is not even remotely good enough. What would be the result if we took a random person and gave them super intelligence? Quite possibly very bad things. People are constrained by their limited power. Humans who have a lot of power don’t always behave well.

          • sty_silver says:

            I agree that humans have built-in solutions to these things. I think the approach of trying to install shame into an AI is utterly disastrous and a complete non-starter, for several reasons. One is that it would be extremely difficult, but the most important is probably that we’re trying to have AI be predictable, and giving it something like shame would be highly counterproductive to that end.

            Functional Decision Theory seems comparatively simple and easy to formalize.

        • Haru says:

          Embedded agency is, in a way, about making the AI aware of itself (or, at least, about correctly reasoning about the behaviour of an AI that is aware of itself).

          The current frameworks for formally describing agents are: game theory if you’re describing several agents and the world, and sequential decision processes (reinforcement learning / planning) if you are describing one agent and the world. In these models, the agents themselves exist independently of other agents and the world they’re in. There cannot possibly be actions that the agents can take, that can change the agents’ programming or algorithm. This is thus useless if you want to reason about self-modification, accidental or intentional, for good or nefarious purposes.

          For example, suppose you have some AI in a computer, that wants to take over the world. The very smart, but current-framework-described AI, might reason as such: “I want to start computing some nanotechnology designs to e-mail to some poor human schmuck. I need more computer memory. Huh, there’s this `evil_AI_core’ process running, it’s using a lot of memory. Since nothing I can do can change my algorithm, I might as well overwrite all the memory it is using with the nanotech design program.” And at this point, the AI stops running, because it overwrote its own memory with another program. (Thus the AI was not so smart.)

          Other ways the self-modification scenarios can go are the ways @sty_silver described.

          I’m unsure about whether researching embedded agency now is a good thing or not. On the one hand, reasoning about the AI’s behaviour if it may self-modify is very much necessary to be confident it is safe. On the other hand, it is also required to make world-taking-over AIs in the first place!

    • holomanga says:

      MIRI’s current approach is something called deconfusion. The principle is that we don’t know how proper reasoning works (in the sense that Bayes’ theorem tells us how learning based on evidence works, and the MDP formalism tells us how a special case of acting in an environment works), and if we don’t know that, then it’s very hard to think sensible thoughts about AI alignment. MIRI is trying to figure out how this works, to mathematical levels of precision.

      The main angle on this is embedded agency, because a lot of the problems being faced with understanding how reasoning works are related to the fact that the AI is a deterministic computer program running on a physical computer that can’t do the nice things current ideas of ideal reasoning involve like simulating all possible worlds to calculate the conditional probabilities.

      • Aapje says:

        Recognizing causal mechanisms is merely one part of decision making. A probably even more important part is to choose good goals.

        I don’t want an AI that is extremely efficient at killing all the Jews.

    • meltedcheesefondue says:

      One thing they are doing, is part funding me ^_^

      You can find some of my work here:

      https://www.lesswrong.com/users/stuart_armstrong

      https://arxiv.org/search/cs?searchtype=author&query=Armstrong%2C+S

      And, on a personal note, in my interactions with the MIRI researchers, they’ve been doing good and valuable work.

  9. TaikoNerd says:

    Thank you for highlighting the Center for Election Science! CES is currently running a campaign to get St. Louis, MO, to also switch to approval voting. Their preliminary polling suggests that people are very open to reform. (“As Fargo and St. Louis go…”)

    Maybe this is the sort of crowd where everyone is already familiar with the subject, but… there was a good CGP Grey video about the problems with first-past-the-post voting that alternative voting systems are trying to solve.

    • NoRandomWalk says:

      Will watch! Thank you. As a non-representative single datapoint, I am aware of a claim that voting systems want to have certain properties, that no voting system can have all of these, and that first-past-the-post may not be the subjective best for reasons I don’t know.

      • Evan Þ says:

        Arrow’s Impossibility Theorem?

        • NoRandomWalk says:

          That’s the one!

          • Michael_druggan says:

            It doesn’t apply to approval or range voting. It is specifically for voting systems where you rank the candidates.

          • As noted by Michael, it doesn’t apply to other voting systems, but other similar proofs do – see Holmstrom’s theorem, for example.

            But given that all systems are imperfect, it’s still rather hard to argue that first past the post is the best we can do.

          • But given that all systems are imperfect

            Putting it that way suggests that there is a definition of perfection that they fail, and I don’t know one. Different voting systems produce different outcomes, but what would it mean for a voting system to be perfect?

        • drocta says:

          I think another theorem might be more important in this context, and that is Gibbard’s 1978 theorem. It shows that any “straightforward” game form (straightforward meaning that, regardless of the preferences of the players over different outcomes, a player’s/participant’s/voter’s optimal choice doesn’t depend on the other players’ choices) is a probability mixture over forms where either one player chooses the outcome with no other player influencing it, or there are only 2 possible outcomes, and (essentially) which of the two outcomes happens is a matter of a vote between the two (though the different participants may have differing non-negative vote weights).

          This theorem still applies to approval and range voting.

          If your voting system is not (equivalent to) a probability mix of things of these forms, then when voting, there is always a reason to take into account what you believe the other voters are going to vote for.

          (Caveat: Just like there are some results (or so I’ve heard) about Arrow’s theorem maybe not being especially relevant in the limit as the number of voters goes to infinity, assuming certain assumptions about voter preferences and behavior, there might be something similar here? I’m not sure though.)

      • TaikoNerd says:

        Yes — Arrow’s Impossibility Theorem says that no single voting system can always give the optimal result. (But that’s not to say that some systems aren’t much better than others.)

        • Michael_druggan says:

          And it doesn’t apply to approval or range voting. It only applies to systems using ranked ballots

        • Yes — Arrow’s Impossibility Theorem says that no single voting system can always give the optimal result.

          I don’t think “optimal result” is the right way of putting it, since that depends on some definition of optimal. What the theorem says is that no ranked preference voting system can have all of a series of what seem like desirable features:

          Non-dictatorship

          Independence of irrelevant alternatives

          Pareto efficiency

          Unrestricted domain.

          For details see the wiki article.
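[For intuition about why ranked aggregation runs into these constraints, here is a tiny illustrative sketch, not from the comment, of a Condorcet cycle: three ranked ballots whose pairwise majorities are not transitive, so there is no coherent “majority ranking” to report.]

```python
# Three voters whose rankings form a Condorcet cycle, so pairwise majority
# preference is not even transitive.
from itertools import permutations

ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def prefers(ballot, x, y):
    # A voter prefers x to y if x appears earlier on their ballot.
    return ballot.index(x) < ballot.index(y)

for x, y in permutations("ABC", 2):
    wins = sum(prefers(b, x, y) for b in ballots)
    if wins > len(ballots) / 2:
        print(f"majority prefers {x} over {y}")
# Output: A over B, B over C, C over A -- a cycle, so there is no well-defined
# "what the majority ranks first".
```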

    • Cliff says:

      Fairvote is an organization that works to promote ranked-choice voting and has been fairly successful in getting it adopted (e.g., Maine adopted it for federal and state elections).

      • Michael_druggan says:

        Ranked choice here means IRV, which is unfortunately a terrible system, barely better than first past the post. Range and approval are vastly mathematically superior.

        • Evan Þ says:

          Why is approval voting mathematically superior? It, perhaps naively, seems much worse to me.

          For example, consider an alt-1860 election with three candidates:

          * Lincoln who opposes expanding slavery
          * Bell who refuses to say anything that isn’t a direct quote from the Constitution.
          * Breckenridge who loves slavery and wants it to be legal everywhere.

          With any system of ranked voting, I’m free to rank Lincoln at the top and Breckenridge at the bottom. But with approval voting, people who think like me would have to make Bell indistinguishable from either Lincoln or Breckenridge. If we all choose “approve” on Bell to keep Breckenridge out, we could end up with President Bell even if a majority of voters would prefer Lincoln.

          I think situations like this crop up frequently in real-world elections, so it seems to me this is a fatal problem with approval voting.
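[A toy tally of the scenario described above; the 55/45 split and the hedging behavior are invented purely for illustration.]

```python
# A toy tally of the alt-1860 scenario: 55 voters rank Lincoln first but also
# approve Bell to block Breckenridge; 45 voters rank Breckenridge first but also
# approve Bell to block Lincoln.
from collections import Counter

ballots = [{"Lincoln", "Bell"}] * 55 + [{"Breckenridge", "Bell"}] * 45
approvals = Counter(name for ballot in ballots for name in ballot)

print(approvals.most_common())
# [('Bell', 100), ('Lincoln', 55), ('Breckenridge', 45)]
# Bell wins the approval count, even though a 55% majority puts Lincoln first and
# any majoritarian ranked method would elect him.
```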

          • Michael_druggan says:

            https://rangevoting.org/

            This site goes into a lot of detail.

            In your example it’s not necessarily a bad thing that Lincoln didn’t get elected. More people approving Bell means he represents the desire of the nation as a whole better. Using Lincoln as an example is bad for your intuition because it makes you biased.

          • Scott Alexander says:

            I sympathize with Evan’s point here, though; being able to communicate that you prefer Lincoln to Bell seems better than not being able to do so.

            I was more convinced by https://www.electionscience.org/library/approval-voting-versus-irv/, although I don’t follow some of the more technical bits.

            I still have a higher-level concern that although compromise is good in many cases, approval voting seems to make it hard for anyone except a compromise candidate to win. It’s possible if my mind hadn’t been fried by years of FPTP voting I would realize that’s what democracy should be, but it still seems like it has some potential downsides.

          • Cliff says:

            More people approving Bell means he represents the desire of the nation as a whole better.

            ….does it?

          • Cliff says:

            I was more convinced by https://www.electionscience.org/library/approval-voting-versus-irv/

            I think I'm sold by this. My remaining reservation is about the viability of approval voting. I think it's more "absurd" on the surface than RCV is. RCV, although mechanically complex, is fairly straightforward to explain and a logical add-on to the current system, and we have seen more success with RCV, although I don't know the relative resources put into pushing for each.

          • Aapje says:

            @Cliff

            More people approving Bell means he represents the desire of the nation as a whole better.

            ….does it?

            This completely depends on the scenario. Imagine three candidates:
            – Bob, who wants to kill all leopards
            – Mary, who wants to kill half the leopards
            – Scott, who wants to kill no leopards

            One person prefers Scott, but really doesn't want all the leopards to die, so they approve of Mary as well. Five people prefer Bob, but really don't want the kill-no-leopards outcome, so they approve Mary as well. Note that the compromise position that values each opinion equally is for 1/6th of the leopards to be allowed to live.

            However, Mary wins, causing half of the leopards to live, rather than 1/6th. Not very representative. Although, if the voters for Bob recognize that they are the majority, they can decide not to approve of Mary, causing Bob to win. This results in 0 leopards living, completely ignoring the desires of the one dissident voter. There is a vague crossover point where strategic voting is helpful, producing uncertainty in the optimal voting strategy, which, IMO, is itself an undesirable feature.
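
            A rough sketch of that arithmetic, with each candidate's platform expressed as the fraction of leopards allowed to live:

            from collections import Counter

            # Candidate -> fraction of leopards they would let live.
            platforms = {"Bob": 0.0, "Mary": 0.5, "Scott": 1.0}

            # One voter ideally wants all the leopards to live, five want none to live.
            ideal_points = [1.0] + [0.0] * 5
            print(sum(ideal_points) / len(ideal_points))  # 0.1666... = the 1/6 compromise

            def approval_winner(ballots):
                tally = Counter(c for ballot in ballots for c in ballot)
                return tally.most_common(1)[0][0]

            # Sincere hedging: everyone also approves the compromise candidate Mary.
            hedged = [{"Scott", "Mary"}] + [{"Bob", "Mary"}] * 5
            print(platforms[approval_winner(hedged)])     # Mary wins -> 0.5 of the leopards live

            # Bob's voters notice they are a majority and withhold approval from Mary.
            strategic = [{"Scott", "Mary"}] + [{"Bob"}] * 5
            print(platforms[approval_winner(strategic)])  # Bob wins -> 0.0 live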

            Of course, one may argue that you are all missing the forest for the trees. The real problem may be winner-take-all voting, which inherently condenses a large variety of viewpoints down to a single one. If there are two main groups, plus smaller groups that agree with one of them on some issues and the other on other issues, those smaller groups are arguably not well represented by a system that forces them to back one of the main groups permanently, rather than opportunistically, decision by decision.

            Furthermore, arguably an important aspect of politics is to have your concerns heard and thereby make the majority consider you. IMO, an important part of populist ressentiment is a feeling of not being heard or considered.

          • The real problem may be

            The real problem is that a group is not a person and doesn’t have preferences.

            If you are a utilitarian you can ask what voting system results in maximum utility, but that depends on the particular circumstances, since a dictatorship by a perfectly wise utilitarian has to produce outcomes at least as good as any other alternative, assuming that preferences are over outcomes not the system itself. And if the preferences are over systems, and we don’t know a priori what they are, then any system at all could be optimal.

          • Aapje says:

            People have different preferences that can conflict, so no single person can represent everyone. Of course, you can have a theoretical completely average dictator, but only if you assume that power doesn’t corrupt.

          • Of course, you can have a theoretical completely average dictator

            What does that mean?

            My point isn’t that there are practical difficulties to having a voting system that represents the will of the people, it is that there is no such thing as the will of the people.

            I offered the closest I could think of to a believable definition of such a thing — maximizing total utility — and showed a reason to think that there could be no voting system that was the best way of doing that under all circumstances.

            Before people argue about which voting system is best they first need a definition of “best,” and I don’t think anyone else here has offered one.

          • Aapje says:

            A dictator who, for any topic, favors what the average person favors.

        • TaikoNerd says:

          Yes — it’s pretty much universally agreed in voting-nerd circles that first-past-the-post is mediocre, but there’s a lively (and mostly friendly) debate about which system, specifically, should replace it.

          Personally, I chose to split the difference, so I donate to both FairVote (instant-runoff voting) and CES (approval voting) 😉

  10. On the charter city idea, it's worth noting that a very old example would be Jewish communities in Christian and Muslim societies. A very common arrangement was for the ruler to allow the community to enforce Jewish law on its own members.

    The one charity I regularly donate to is the Institute for Justice, a libertarian public interest law firm which litigates against eminent domain abuse, licensing restrictions to prevent competition, civil forfeiture abuse, and the like. Like one or two of the charities Scott mentions, whether that is effective altruism depends on your beliefs about what the world should be like.

    • Dacyn says:

      whether that is effective altruism depends on your beliefs about what the world should be like

      Whether it is effective seems to depend not only on your beliefs about what the world should be like, but also on how effective IfJ is at doing what it sets out to do. Is there any data on that?

      • I think IJ has won two Supreme Court cases over the past year, and they have another in process. Given how hard it is to get the Court to accept a case, that’s pretty good evidence of effectiveness.

        On the other hand, they lost Kelo.

        • Michael_druggan says:

          That's a very small amount of evidence. We're trying to be quantitative about this, right? We need to put a utility value on the outcome of those court cases and compare it to the utility of a similar amount of money donated to fighting malaria or parasitic worms. Changing politics is hard, so I can easily imagine a world where I totally agree with their politics and yet calculate that they're not worth it. I haven't done the analysis, so I don't actually know. I'm just saying what would need to be done before they could possibly be claimed as an EA cause.

        • @Michael:

          I was responding to the “but also how effective IfJ is at doing what it sets out to do” part of the previous comment. How effective doing what it sets out to do is in improving the world is a harder question.

    • The Big Red Scary says:

      “A very common arrangement was for the ruler to allow the community to enforce Jewish law on its own members.”

      Presumably you have in mind, among other things, the Ottoman millet system, which allowed for Christians, Jews, and Muslims to be judged by their own laws. For a more modern example, for some years the Province of Ontario allowed Christian and Jewish religious authorities to arbitrate in family disputes. Eventually, it was decided to end this practice rather than to let Muslims practice it as well:

      https://www.theglobeandmail.com/news/national/mcguinty-government-rules-out-use-of-sharia-law/article18247682/

      • The Millet system would be one example.

        My favorite case is the situation in Spain where, as best I can tell, repeatedly informing to the authorities against a fellow Jew was a capital offense under Jewish communal law and the sentence was presumably enforced by the authorities. That seems bizarre, but it’s at least my reading of my sources.

        It looks as though most diaspora Jews were under Jewish law until sometime in the 18th century.

  11. LGS says:

    I can’t understand why anyone who claims to support EA principles would donate to MIRI. Just because they’re your friends doesn’t mean giving them money is the effective thing to do.

    As a matter of policy, MIRI refuses to publish the research they claim to do. This means there is no way to evaluate them. Claiming to do world-saving research while refusing to publish it is a red flag for crackpottery, by the way.

    • sty_silver says:

      The no-publishing policy is recent. They’ve published research for many years. From what I’ve heard, they debated for months whether or not to publish their Logical Induction paper and eventually decided to do so, but now finally decided to stop, because a) they don’t want to speed up capability research and b) publishing is not actually part of an efficient workflow.

      I have absolutely zero confidence in my ability to judge whether this is a smart decision and I’m extremely skeptical of anyone who claims they do.

      I do think the papers they did publish are plenty impressive, and I’ve only read a few.

      There are also some researchers (or at least one, anyway) at Miri who still publish. The no-publish thing might mostly cover their embedded agency stuff.

      • LGS says:

        Looks like my other reply was eaten by the spam filter (or maybe my browser never sent it).

        I have absolutely zero confidence in my ability to judge whether this is a smart decision and I’m extremely skeptical of anyone who claims they do.

        Why should it be so hard to evaluate? Not publishing research means there’s (1) no error checking by the community, (2) no public accountability as to whether MIRI is actually doing research, (3) no outside feedback mechanisms, leading to groupthink. And what are the possible gains? I guess just that MIRI gets to pretend the stuff they’re doing is so significant that it’s dangerous?

        b) publishing is not actually part of an efficient workflow.

        Speaking as a mathematician: writing up your work is a pretty important part of an efficient workflow, as it forces you to explain your work to others (itself a clarifying process) and leads to feedback from others. The publishing step, i.e. submitting the work repeatedly to different conferences/journals who want the formatting just right, is indeed kind of a waste of time; but throwing a pdf on arxiv is easy and productive. Even if I was the only mathematician on Earth I’d write things up, if only for my future self so that I don’t forget my own insights.

        (A good analogy to writing stuff up might be refactoring your code; this is good practice even if nobody else reads your code.)

        • sty_silver says:

          Obviously, if you begin by making fun of the idea that their work could speed up AI development, then in that narrative, the reason no longer applies.

          As a mathematician, I also write up stuff; I'm writing a summary of something I'm working through right now even though no one forced me to. That does not mean it's an optimal workflow. In fact, I know I'm nowhere near an optimal workflow. Writing up detailed summaries is a hack to deal with failure modes that otherwise occur; it's not a global optimum.

          And (perhaps more importantly, though I think both are important) there is a gigantic difference between writing a summary in a way that will be readable to you and producing the Logical Induction paper.

          • Bugmaster says:

            At the very least, there’s good evidence to suggest that failing to publish their research alienates at least some of MIRI’s donors, such as LGS and myself. I think it might be a good idea for MIRI to run some kind of a study to estimate how much money they’re losing, vs. how much money they’d need to spend on publishing papers — in addition to all the other benefits that LGS mentioned.

            Of course, it could also be the case that they have all the money they need for now, in which case the point is moot.

          • LGS says:

            @sty_silver:

            Obviously, if you begin by making fun of the idea that their work could speed up AI development, then in that narrative, the reason no longer applies.

            Have you read their work? You’re a mathematician, go take a look at the Logical Induction paper (their best single output as far as I’m aware). It’s about a computable strategy for assigning probabilities to logical statements, but the strategy works by simulating all possible algorithmic adversaries, and the guarantees apply only in the limit (where the logical induction agent has simulated the possible adversaries for an arbitrarily long amount of time).

            OK, now go read some recent neural nets paper, and see if you can say with a straight face that there’s any chance at all the Logical Induction paper will assist the billion-dollar industry that is practical machine learning. I don’t think it’s possible to honestly worry about this.

            And (perhaps more importantly, though I think both are important) there is a gigantic difference between writing a summary in a way that will be readable to you and producing the Logical Induction paper.

            OK, that's fair. But there are still the other advantages (feedback and error-checking from the community). For example, here's some feedback for the Logical Induction authors: you don't even mention in your desiderata anything about computational efficiency, even though it's clear that all resource-bounded agents (which means all agents in our universe) will wish to use computationally efficient strategies for assigning probabilities to logical statements. Why not study whether such strategies can be computed in P, at least for beating adversaries that are also in P?

          • sty_silver says:

            Have you read their work? You’re a mathematician, go take a look at the Logical Induction paper (their best single output as far as I’m aware).

            Yes, I’ve read the logical induction paper in detail (although not the appendix where the hard proofs are). I found it extremely impressive.

            Does another paper exist that tackles the same problem? If not, then the case for worrying seems pretty straightforward. Just reading a random paper doesn't seem like it will be useful.

      • eric23 says:

        Publishing in a journal is a giant pain because of all the rounds of revision you need to do to satisfy the quirks of each of the reviewers. I can half-understand an organization not wanting to do that. But there’s no excuse for not publishing their conclusions as-is in a PDF on their own website.

      • Reasoner says:

        I do think the papers they did publish are plenty impressive, and I’ve only read a few.

        What did you find impressive? I read Categorizing Variants of Goodhart’s Law and it seemed like an exercise in abusing the term “Goodhart’s Law”. I read Risks from Learned Optimization in Advanced Machine Learning Systems and while I think the general sort of questions that paper explores are important, I thought their analysis had a lot of issues. The Occam’s razor paper seems uninteresting because I don’t think using IRL on its own is a good idea anyway. Based on the abstract, I don’t understand why the delegative reinforcement learning paper is supposed to be AI safety work rather than capabilities work. If you haven’t solved the alignment problem, a “trap” according to the agent’s values might be e.g. having a human switch it off. So having it avoid traps could be a bad thing if it’s not aligned. Smells like capabilities work.

    • Cliff says:

      Extensive discussion at the link

    • Scott Alexander says:

      I think there’s a balance between evaluate-ability and risk tolerance.

      If you want things that are completely evaluate-able, like I said, GiveWell has plenty of those. But that would preclude donating to charities like the Election Science one (since it’s hard to evaluate how much good would come out of better elections), or the Charter Cities one (since it’s very new and doesn’t have a track record), or a lot of other potentially good charities.

      So I think instead of taking a provability approach, you take a Bayesian approach, where you admit that the prior that any charity is good is pretty low, and evidence-based evaluation can raise that prior, but other things can too, and then you multiply that prior by the importance and tractability and neglectedness of the cause and you try to come up with some kind of answer.
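
      As a toy version of that multiplication, with every number below invented purely for illustration:

      # All inputs are made up purely to illustrate the shape of the calculation.
      def speculative_score(p_good, importance, tractability, neglectedness):
          # Crude expected-value proxy: chance the charity is actually good,
          # times how much the cause matters, how movable it is, and how underfunded it is.
          return p_good * importance * tractability * neglectedness

      print(speculative_score(p_good=0.9, importance=3, tractability=8, neglectedness=1))  # ~21.6: proven, crowded cause
      print(speculative_score(p_good=0.2, importance=9, tractability=3, neglectedness=9))  # ~48.6: speculative, neglected cause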

      Part of where I'm coming from here is that a lot of EAs now think that billionaire foundations are doing a good job of funding all the charities with extremely-well-proven efficacy, and so one thing individuals can do in a world where they have so much less money than the billionaires is fund places that their hard-to-communicate special knowledge suggests might be good even if those places can't prove it. I try to split my giving about 50-50 between very-rigorously-proven-to-be-good places and more speculative places that I think I'm in an unusually good position to speculate on (this is not actually best practice; plausibly it should be 100% speculative, but I do it anyway).

      I don’t give MIRI that much of my speculative budget, but I do give them a little. I tried to be honest about this (that this is based on personal information, and not on proven efficacy) in the post.

      • LGS says:

        The word “Bayesian” is not a get-out-of-justifying-decisions-free card. Yes, other evidence might lead you to believe MIRI is a good charity. But it’s your responsibility to spell out that evidence, and also to disclose any conflicts of interest or cognitive biases that may be affecting you.

        How many friends of yours are affiliated with MIRI? If it turns out that MIRI is truly crackpot, would that be personally upsetting to you – are you emotionally invested in their success?

        (Sorry if I’m accusing you of biases you don’t feel you have; for all I know I’m totally off base here. But I just can’t imagine how anyone unbiased can come to the conclusion that MIRI is an effective charity, so my Bayesian reasoning leads me to believe you probably have some strong biases.)

        • Scott Alexander says:

          “But it’s your responsibility to spell out that evidence, and also to disclose any conflicts of interest or cognitive biases that may be affecting you.”

          Do you think I need to do that more than the “they are secretive and don’t talk a lot about their work or give a lot for people to evaluate them on, so whether or not you donate will probably be based on whether they’ve won social trust with you (they have with me)” which I put in the post?

          MIRI aren’t, like, my closest friends in the world or anything. There are two people at MIRI who I probably talk to on the order of one short conversation per month, I enjoy Eliezer’s writing, and they played a big role in our community’s history. This is similar to some of the other charities on here – I used to date someone who worked at ACE, I used to date another person who works at CEA which manages EA Funds, Charter Cities Institute advertises with this blog, and a housemate of mine works for Vox whose climate stuff I linked. I’m trying to avoid letting that influence my decisions, without any claim that I’m perfect at that, and I feel like the Epistemic Status note successfully conveyed the degree to which this was a quick “here’s what I’m thinking” rather than an attempt to rigorously justify everything.

          I also tried to get around this problem by linking the results of formal evaluations by respected experts first, and only adding the charities I’m personally interested in at the end, not in bold, specifically saying they required more of your time and thought. If you’re specifically going past the generally-respected ones to look at the random ones I described as “making appeals around here”, I feel like you know what you’re getting into.

          • LGS says:

            Do you think I need to do that more than the “they are secretive and don’t talk a lot about their work or give a lot for people to evaluate them on, so whether or not you donate will probably be based on whether they’ve won social trust with you (they have with me)” which I put in the post?

            I suppose not, or at least, not from the perspective of being fair to your readers. From my personal perspective I’m still *very* curious what Bayesian evidence you claim to possess that is so hard to share and which overcomes all the crackpottish signs MIRI gives off. It feels like “Bayesian” and “hard to communicate” are covering for the fact that you have no principled reason to back MIRI other than that they are your friends and/or are central to LW/EA history/community.

            I could be wrong, but everyone keeps refusing to explain why I’m wrong, all while funneling scarce EA funds into a secretive organization which claims to be doing research without actually publishing research, and whose only claim to competence is a single work (“Logical Induction”) which, while decent, is still not as good as stuff Scott Aaronson publishes every year.

          • Haru says:

            @LGS:

            In fairness, they have more publications than that. Some of the highlights from the last couple of years are: S Armstrong and S Mindermann. “Occam’s Razor is Insufficient to Infer the Preferences of Irrational Agents.” NeurIPS 2018, and K Grace et al., “When Will AI Exceed Human Performance? Evidence from AI Experts.”, arXiv 2017.

            The quality of the papers seems in line with a pretty good, though not top, machine learning lab (professor + grad students). The quantity is pretty small in comparison, but they’re not that many people (I think) and not optimising to publish, so perhaps that’s OK. I’ve checked Scott Aaronson’s recent publication record and that’s a lot more impressive, but that’s perhaps not surprising, since Aaronson is a very good outlier.

            I think their greatest contribution so far is raising awareness among technical AI researchers about the issue of AI alignment. Though that was probably in great part mediated by Nick Bostrom’s “Superintelligence”, and Stuart Russell’s advocacy.

            In my assessment, good arguments to support MIRI mostly come from invoking the neglectedness of the AI safety problem; and within it the neglectedness of looking at decision theory and embedded agency in particular as a way to solve it. The first is still a good argument but slowly becoming less so, given the evidence that the machine learning community is rediscovering the value alignment problem and trying to solve it. Embedded agency is an unsolved problem that few people are working on, and one that we’ll need to solve to rigorously reason about the behaviour of (literally) self-aware artificial agents. So the second one still stands.

            To contribute resources to the AI safety problem, I am a little bit more optimistic/less pessimistic about the effectiveness of, for example, AI Impacts. They try to do forecasting, something that technical AI researchers are unlikely to start doing. Funnily enough, the way to donate to them is to donate to MIRI and mark the funds as "For AI Impacts".

            Disclaimer: I don't donate to MIRI or AI Impacts and would not confidently recommend donating to them; I need to look into it more. I am a computer science academic who believes AI safety is important. I donate 10% of my income to causes I deem effective, but not AI-related ones.

          • Scott Alexander says:

            LGS – I know a few people from MIRI, and they seem very smart and very honest, so when they say they’re doing important work I believe them. I know a few other people peripheral to MIRI in the same situation who say the same thing about them. Obviously this is only as believable as my own smartness and trustworthiness evaluation ability, but to me that’s more than nothing. I don’t have any other secret knowledge.

          • sty_silver says:

            In fairness, they have more publications than that. Some of the highlights from the last couple of years are: S Armstrong and S Mindermann. “Occam’s Razor is Insufficient to Infer the Preferences of Irrational Agents.” NeurIPS 2018, and K Grace et al., “When Will AI Exceed Human Performance? Evidence from AI Experts.”, arXiv 2017.

            These seem like very strange picks to me. They are Miri-adjacent, and I guess technically among their output, but none of the core staff has been working on them.

            I would point to the functional decision theory paper and the Logical Induction paper as a taste of the real thing.

          • Bugmaster says:

            @LGS:

            It feels like “Bayesian” and “hard to communicate” are covering for the fact that you have no principled reason to back MIRI other than that they are your friends and/or are central to LW/EA history/community.

            While this is probably true to some extent — Scott is only human, after all — I think it’s a bit uncharitable. AFAICT, the main driver of support for MIRI is the pervasive belief that the Singularity is coming very soon (relatively speaking), perhaps in 50 years at most, and that by default it will be Unfriendly and thus destroy humanity in one fell swoop. Given the severity of the threat, any expenditure of resources to avert it seems justified, and MIRI is pretty much the only charity who is explicitly dedicated to the task.

          • Haru says:

            > These seem like very strange picks to me. They are Miri-adjacent, and I guess technically among their output, but none of the core staff has been working on them.

            Noted, thanks. They’re listed as publications on their page though!

            > I would point to the functional decision theory paper and the Logical Induction paper as a taste of the real thing.

            Logical induction is pretty great. I didn’t mention it because LGS already noted it. Other reasons I didn’t mention it are that it’s from 2016, and I didn’t count that as “last couple of years”. This is a mistake on my part 🙁

            Functional decision theory, I'm not so impressed by, and I'm also not as confident that researching it is a good way to reduce AI risk. It's also been in development for a long while; E. Yudkowsky's "Timeless Decision Theory" working paper is from 2010. One expert reviewer from academic philosophy, working in decision theory, likes it but thinks it has to be argued for more clearly. Relevant quote:

            So far, I have explained why I’m not convinced by the case for FDT. I haven’t explained why I didn’t recommend the paper for publication. That I’m not convinced is not a reason. I’m rarely convinced by arguments I read in published papers.

            The standards for deserving publication in academic philosophy are relatively simple and self-explanatory. A paper should make a significant point, it should be clearly written, it should correctly position itself in the existing literature, and it should support its main claims by coherent arguments. The paper I read sadly fell short on all these points, except the first. (It does make a significant point.)

            It's good that they're thinking about this, but it's unclear whether, if you want to reduce existential risk from AI, you should support MIRI in particular.

        • sty_silver says:

          Are we just going to pretend that Miri has never published anything?

          • LGS says:

            I've skimmed/read all their publications and written up a summary. I've tried to link it, but my comment gets deleted when I do.

          • Eric T says:

            If I can try to steelman LGS for a sec here, I think there is a real concern in supporting an organization that doesn't produce some kind of metric for measuring the, errr… Effectiveness of our Altruism. I think the argument that they did good things in the past shouldn't be ignored, but as any investor will tell you, "Past performance is no guarantee of future results." I don't think there is a problem with saying that an organization that refuses to give you a current metric to evaluate them by is one you wouldn't like to donate to.

            Furthermore, I think LGS is, admittedly rather rudely, right in pointing out that a lot of us in the EA/LW/SSC/Whatever the hell we are community probably have a strong bias towards MIRI because they are our Tribe. We like to talk about and think about these issues, so a group that does the same is inherently more appealing to us, and I think confronting that will make us better thinkers.

            But to attack my own steelman attempt for a bit, it’s not like MIRI is some weird freaky cult in the middle of the desert spending money on sports cars and building weird monoliths. All evidence does support the idea that the people who work at MIRI are very concerned about this goal, and you can measure some things that they do, especially their outreach. I think it’s a bit disingenuous to just label MIRI as a cult because they’ve decided to withhold their research for a bit.

          • Scott Alexander says:

            LGS: I can’t find it in the spam filter, if you email it to me I’ll post it.

          • sty_silver says:

            If you've read their papers and think they're unimpressive, that's fine. That's a much better reason to be suspicious than the fact that they're not publishing anymore.

          • Bugmaster says:

            @Eric T:
            I agree that MIRI is not “some weird freaky cult in the middle of the desert … building weird monoliths” (although, admittedly, that would be kinda cool). However, IMO their acceptance of the imminent UFAI threat is based on premises so flimsy that it skirts the border of religious faith. I do believe that the people who work at and/or donate to MIRI are entirely sincere, but so are the Mormons who come to my door on Sunday to give me the good news…

          • sty_silver says:

            @Bugmaster:

            In their latest post discussing timelines that I've seen, Miri's response was basically "the expert consensus seems fine, except that we think one should assign a higher probability to very long timelines (> 100 years) than they do."

            Expert consensus refers to the surveys that have been run. The median answer from those is roughly “50% chance human level AGI happens in 35 years.”

          • LGS says:

            Instead of linking, I will try just copy/pasting my summary of MIRI’s research. Note that I did this around January 2019. The context was a debate about whether MIRI publishes technical work. Below, I’m primarily evaluating the work as a mathematician or theoretical computer scientist would; our papers are typically ~30 pages long and contain substantially difficult proofs.

            2018 & 2019 (only one listed paper!):

            Categorizing Variants of Goodhart’s Law.

            10 pages, no theorems, no code. Verdict: not a math or CS paper.

            2017:

            Incorrigibility in the CIRL Framework.

            9 pages, no theorems etc. Verdict: not a math or CS paper.

            Toward Negotiable Reinforcement Learning: Shifting Priorities in Pareto Optimal Sequential Decision-Making

            17 pages. Does have lemmas/theorems, but only one-line proofs. Verdict: likely not publishable as a math/CS paper.

            A Formal Approach to the Problem of Logical Non-Omniscience

            This appears to contain a subset of the Logical Induction paper. I’m not counting it as a paper to avoid double-counting (I counted the Logical Induction thing as part of 2016 publications).

            When Will AI Exceed Human Performance? Evidence from AI Experts

            Not a math or CS paper, but isn’t trying to be; it is a survey of AI experts.

            Forecasting Using Incomplete Models

            This one should count as a math paper (at least as far as 10-second glances go). Count so far: 1

            Cheating Death in Damascus

            18 pages, no theorems or proofs or experiments or anything. Not a math or CS paper.

            Functional Decision Theory: A New Theory of Instrumental Rationality

            34 pages. No theorems or proofs of any kind, no code or experiments. Verdict: not a math/CS paper.

            Total for 2017: 1

            2016:

            Formalizing Convergent Instrumental Goals

            9 pages, has theorems but only one-line proofs. Verdict: not publishable as a math/CS paper.

            Parametric Bounded Löb’s Theorem and Robust Cooperation of Bounded Agents

            By my normal criteria this passes for a math paper: it does have proofs that are not one-liners. It is 15 pages long. My guess, glancing at it, is that this result is not very significant and basically just reproves Löb's theorem using standard techniques in a slightly different setting, but I didn't actually read it, so I must count it. Count: 1

            Logical Induction

            Counts as a math paper. Count: 2

            Logical Induction (Abridged)

            This is a subset of the Logical Induction paper. Doesn’t count.

            Inductive Coherence

            7 pages, one-line proofs. Likely not publishable as a math/CS paper.

            Asymptotic Convergence in Online Learning with Unbounded Delays

            15 pages but the proofs have some meat to them. Counts as a math paper by my criteria (as usual, I did not evaluate significance). Count: 3

            Optimal Polynomial-Time Estimators: A Bayesian Notion of Approximation Algorithm

            80-page paper with some lengthy proofs. Count: 4

            A Formal Solution to the Grain of Truth Problem

            A 9-page paper with mostly one-line proofs. Verdict: likely not publishable as a math/CS paper.

            Safely Interruptible Agents

            Similar to above.

            Defining Human Values for Value Learners

            10 pages, no theorems/proofs/experiments/etc. Not a math/CS paper.

            Quantilizers: A Safer Alternative to Maximizers for Limited Optimization

            7 pages, one-line proofs. Verdict: not publishable as a math/CS paper.

            Alignment for Advanced Machine Learning Systems

            20 pages, no theorems or code. Not a math/CS paper.

            total for 2016: 4.

            2015:

            Proof-Producing Reflection for HOL: With an Application to Model Polymorphism

            15 pages, no theorems/proofs/experiments. Verdict: not publishable as a math/CS paper.

            Vingean Reflection: Reliable Reasoning for Self-Improving Agents

            10 pages, no theorems/proofs/experiments. Verdict: not a math/CS paper.

            Reflective Variants of Solomonoff Induction and AIXI

            6 pages, one-line proofs. Verdict: not publishable as a math/CS paper.

            Reflective Oracles: A Foundation for Classical Game Theory

            6 pages. It does have one half-page proof. Verdict: likely insufficient as a math/CS paper.

            Asymptotic Logical Uncertainty and the Benford Test

            5 pages. It does have some half-page proofs, but come on, 5 pages. Verdict: likely insufficient as a math/CS paper.

            The Asilomar Conference: A Case Study in Risk Mitigation

            64 pages, but no theorems/proofs/experiments/etc. Verdict: not a math/CS paper.

            Leó Szilárd and the Danger of Nuclear Weapons: A Case Study in Risk Mitigation

            Again, long document but no proofs or code or anything. Verdict: not a math/CS paper.

            An Introduction to Löb’s Theorem in MIRI Research

            This appears to be a summary/survey paper and not original research. Also, does not contain proofs or experiments.

            Aligning Superintelligence with Human Interests: An Annotated Bibliography

            Contains a list of references. Useful but not original research.

            Formalizing Two Problems of Realistic World-Models

            8 pages. Does not contain theorems/proofs/experiments. Verdict: not a math/CS paper.

            The Value Learning Problem

            6 pages. No theorems/proofs/experiments. Verdict: not a math/CS paper.

            Questions of Reasoning under Logical Uncertainty

            7 pages. No theorems/proofs/experiments. Verdict: not a math/CS paper.

            Toward Idealized Decision Theory

            13 pages. Again, does not appear to contain theorems/proofs/experiments. Verdict: not a math/CS paper.

            Concept Learning for Safe Autonomous AI

            4 pages. No proofs/experiments. Verdict: not a math/CS paper.

            Total for 2015: 0.

            Henceforth I will only list the papers that try to be a math/CS paper by containing some equations or some one-line proofs at the very least.

            2014:

            Robust Cooperation on the Prisoner’s Dilemma: Program Equilibrium via Provability Logic

            16 pages, one-line proofs. Verdict: probably insufficient as a math/CS paper.

            UDT with Known Search Order

            7 pages, short proofs. Verdict: likely insufficient as a math/CS paper.

            Non-Omniscience, Probabilistic Inference, and Metamathematics

            50 pages. All the proofs I saw looked trivial, but I might have missed something and Paul Christiano is a serious author. Verdict: undecided, leaning towards not publishable as a math/CS paper. Count: 0.5

            Procrastination in Probabilistic Logic

            2.5 pages, one-line proofs. Verdict: not sufficient as a math/CS paper.

            Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem

            5 pages, one-line proofs. Verdict: not sufficient as a math/CS paper.

            Botworld 1.1

            Has code, but not really experiments. I don’t think this is sufficient for publication but I’m not qualified to evaluate non-math papers. Verdict: undecided, leaning towards not publishable as a math/CS paper. Count: 1

            Corrigibility

            8 pages, easy-looking proofs. Verdict: likely not publishable as a math/CS paper.

            Total for 2014: 0.5+0.5.

            2013:

            Definability of “Truth” in Probabilistic Logic

            7 pages. Has some half-page proofs. Verdict: likely insufficient as a math/CS paper.

            Computable Probability Distributions Which Converge on Believing True Π1 Sentences Will Disbelieve True Π2 Sentences

            4 pages, one-line proofs. Verdict: insufficient as a math/CS paper.

            Recursively-Defined Logical Theories Are Well-Defined

            3 pages, one-line proofs. Verdict: insufficient as a math/CS paper.

            The Procrastination Paradox

            4.5 pages, easy-looking proofs. Verdict: likely insufficient as a math/CS paper.

            Tiling Agents for Self-Modifying AI, and the Löbian Obstacle

            40-page mathematical discussion, but doesn’t seem to prove any original theorems. Verdict: likely not publishable as a math/CS paper.

            Total for 2013: 0.

            2012 and earlier:

            Convergence of Expected Utility for Universal Artificial Intelligence

            (2009). 5 pages. One page-long proof. Verdict: likely insufficient as a math/CS paper.

            Total for 2012 and earlier: 0.

            Total across all years: 5 “counted”, 2 “half counted”, around 18 short papers with easy-looking proofs that are not individually publishable in math/CS but may still amount to non-zero technical contribution. It looks like the only published paper out of the ones that passed my 10-second glance test is the (conference version of the) Logical Induction paper. A bunch of those 18 short ones are also published (in the proceedings of some AGI-specific conferences).

            (The following is my conclusion after completing this exercise in January 2019):

            Now that I’m done, I think the most generous-to-MIRI way of counting would be to only count from 2016-2018, reasoning that they declared only in 2014 that they are switching focus from awareness to research, and that it took them until 2016 to hire/start the research process. This gives them up to 5 publication-worthy math papers in 3 years. If we say that their short papers can be merged to produce some more publication-worthy math, that would put them in the neighborhood of 2 papers a year.

            2 papers a year is maybe around what one decent researcher in theoretical computer science does; however, the papers in theoretical computer science have like 3 authors on average. So really you may need 3 full time researchers to get to MIRI’s 2016-2018 output, if we count generously and if all the math is actually good (I did not evaluate it).

            MIRI’s budget in those years was in the low millions, which is way more than the salary of 3 researchers, but I guess they also need managers and office space and random overhead I’m forgetting. So if you are sufficiently generous their output doesn’t look that terrible, but certainly lower than you could do by hiring people from a CS theory department. (I have my doubts that it still looks passable after a full review of the math in those papers, but whatever.)

          • sty_silver says:

            Okay, so this is what you meant by skimming their work. I absolutely don’t think this is a meaningful way of measuring the value of their output. It shows that their output is atypical for standard math papers. I think it tells you roughly zero about its value. Especially since we know they are capable of doing mathy stuff if they want to. And since their stated mission is deconfusion.

            Without trying to be contrarian, I would find it suspect if their format just happened to align with the standards in academia, which are obviously highly motivated by the desire to show off and nowhere near optimal. I'm sure I've read fewer papers than you, but more than half of those that I did read have made me angry at the authors for not even trying to optimize their results for readability. This has generally been more true the more important the paper is. One of the worst papers I've ever had to read was the one that introduced the now state-of-the-art in homomorphic encryption.

            I have yet to read any paper outside of Miri that I was anywhere near as impressed by as the Functional Decision Theory one, which by your metric doesn't even count.

          • MugaSofer says:

            >Without trying to be contrarian, I would find it suspect if their format just happened to align with the standards in academia, which are obviously highly motivated by the desire to show off and nowhere near optimal.

            Those standards have been developed over time, and have an enormously positive track record.

            Claiming that MIRI has developed an entirely new form of research that’s much more effective than normal research is a pretty striking claim! It seems like it should require evidence, not simply be given a high prior probability.

          • sty_silver says:

            Those standards have been developed over time, and have an enormously positive track record.

            I agree that academia as a whole is extremely successful. I don't think this allows us to draw any conclusions about the efficiency of the process. There is no separate field that does things differently which we can compare it to. In general, fields just tend to stumble into some Nash equilibrium that is often suboptimal.

            Think of the US government. It's been a functional democracy for many decades; very impressive. But that doesn't mean it's hard to improve. Just changing from the two-party system to something else would probably be an improvement. And there are lots of other examples of systems that are highly successful in an absolute sense but still inefficient. Academia is by no means the worst.

          • LGS says:

            @sty_silver, I disagree that MIRI’s output is more readable. In fact, for the most part it seemed like they chopped up their work into a million 5-10 page papers instead of giving me some 50-page manuscripts that are easier to navigate (with a table of contents and clear sections and so on). That’s why I had to skim all their papers so quickly: there’s so much junk/padding in their “publications” that it’s hard to find the meat.

            I do maintain that the Functional Decision Theory paper is not a math/CS paper. Whether it’s a good philosophy paper, I’m not qualified to evaluate. But people always say MIRI does “pure math” or “rigorous” work, and philosophy is not rigorous pure math. I’m evaluating their technical contributions only. A paper with no proofs, code, or experiments has no technical content. Seeing this, I didn’t even read it in detail. Glancing at it now, I’m going to guess – based on the fact that there are no formal definitions or theorem statements – that I will find FDT to not be well-defined if I actually read the paper as a mathematician.

            Especially since we know they are capable of doing mathy stuff if they want to.

            Some people at MIRI are capable of doing “mathy stuff,” yes. Others might not be (I won’t name names). In any case, it’s always harder to do the mathy stuff than to pontificate about philosophy, so I’m not impressed with claims that MIRI is merely *choosing* to not be rigorous and to wax poetic instead of providing proofs.

            And since their stated mission is deconfusion.

            I don’t think “deconfusion” is a research agenda that’s bound to be productive. Do you have some examples of people who did “deconfusion” (without any technical math output or experiments), and then after a few years declared themselves to not be confused anymore and were widely recognized as correct? In other words, is there any historical precedent for “deconfusion” being a thing, rather than just meaning that they’re stuck at the brainstorming phase indefinitely?

            If I were confused about something – say, how to correctly define some notion – I’d play around with different flawed definitions and show what they imply. The act of proving things about the idiosyncrasies of the various definitions would hopefully teach me more about them, guiding me to the correct version I’m looking for. In other words, successful deconfusion should still take the form of proving things. It should not devolve into meditation and brainstorming, at least not for years on an organization-wide scale.

      • eric23 says:

        Isn’t this just a case of Pascal’s mugging? Claim that your cause is asymptotically important, and a numerical calculation requires everyone to give to you?

  12. outis says:

    For maximum optimization: to whom should I donate to maximize the number of lives saved per dollar? Or, put another way: what is the minimum dollar amount whose expected impact is one human life saved?

    • Carl Milsted says:

      The death rate of humans is… 100%. You cannot save any human life unless you invent immortality.

      What you can do is extend human lives. And/or you could make some lives better.

      While this may seem like nit-picking, it explains why people freak out more about mass shootings, terrorism, etc. than about cancer or heart attacks, which kill far more people. The former kill people earlier, so more person-years are lost per death. A heart attack which kills someone at 80 who would otherwise die at 84 is rather different from a school shooting which kills 9-year-olds.

      The charities which fight parasites probably have more life years/buck than charities which fight some disease which hits later in life.

      But that’s the short term view. If you cure childhood diseases in a society which is used to high childhood mortality, then you cause a population explosion which leads to war/famine later on, unless you do something about expectations/folkways. Exporting feminism to a society which is starting to get better medicine may not save many lives near term, but it might prevent future misery.

      In other words “number of lives saved” is not necessarily an optimum metric. Reality is messier.

      • Anatid says:

        If you cure childhood diseases in a society which is used to high childhood mortality, then you cause a population explosion which leads to war/famine later on, unless you do something about expectations/folkways.

        I get very uncomfortable when people raise this as an objection to saving lives in poor countries. David Friedman likes to talk about how the population of Africa has gone way up in the last 50-100 years and yet famine is much *less* of a concern than it once was. So I think it is much more important to save what lives we can today, rather than worry about uncertain and unlikely negative effects from overpopulation in the future.

        Said another way, in 2019 I don’t think any country in the world would say, “horrible as it sounds, I wish more of us had died in childhood so that our country wouldn’t be so overcrowded.” Things are nowhere near bad enough that we should allow children to die to prevent overpopulation.

        Even if you think that increased population will cause large costs from possible famine, emissions, and so on, I think that any reasonable utilitarian calculation would find that the benefit of saving one life is currently much much higher than the cost of having one more person.

        [For example, for the specific case of emissions, I googled some numbers. US emissions are ~20 metric tons of CO2 per person per year. I found EPA estimates for the social cost of carbon of $10-100 per metric ton. So we could say a US citizen’s emissions cost $200-2000/year. Meanwhile government agencies seem to use values for human life in the range of $50k-200k/year.]
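
        Spelling out that bracketed arithmetic with the quoted figures (none of them independently verified):

        # Figures as quoted above, not independently verified.
        tons_co2_per_person_per_year = 20
        social_cost_per_ton = (10, 100)          # EPA range, $/metric ton
        value_of_life_year = (50_000, 200_000)   # agency range, $/year

        emissions_cost = tuple(tons_co2_per_person_per_year * c for c in social_cost_per_ton)
        print(emissions_cost)                              # (200, 2000) $/year
        print(value_of_life_year[0] / emissions_cost[1])   # 25.0: even the worst-case ratio favors saving the life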

      • outis says:

        Ok, fine. Can I at least get an upper bound? IIRC mosquito nets were a contender for the most EA of EAs. How many dollars prevent one death from malaria using mosquito nets?

        • MugaSofer says:

          GiveWell has a spreadsheet of this stuff. It gives the following results:

          AMF $1,690/life

          Malaria Consortium $1,844/life

          Helen Keller International (vitamin supplements to children) $2,303/life

          Note that GiveWell's calculations do not take miscarriages into account; it's my understanding that malaria causes a lot of miscarriages, so if you care about saving fetal lives the true cost per life is lower than these figures.
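
          Turning those quoted figures around into expected lives per donation, for an arbitrary example amount:

          # Cost per expected life saved, as quoted from GiveWell's spreadsheet above.
          cost_per_life = {"AMF": 1690, "Malaria Consortium": 1844, "Helen Keller International": 2303}

          donation = 10_000  # arbitrary example donation in dollars
          for charity, cost in cost_per_life.items():
              print(f"{charity}: ~{donation / cost:.1f} expected lives per ${donation:,}")
          # AMF ~5.9, Malaria Consortium ~5.4, Helen Keller International ~4.3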

      • If you cure childhood diseases in a society which is used to high childhood mortality, then you cause a population explosion which leads to war/famine later on, unless you do something about expectations/folkways.

        What evidence supports this claim in the modern world? We have, partly through improvements in medicine, produced a very large increase in population over the past few centuries, and during that period famine has become strikingly less common.

        I can’t think of any large famines in the past century that were not due to political causes, some people deliberately keeping food from getting to other people. I’m pretty sure there have not been any in the post-WWII world.

      • Edward Scizorhands says:

        it explains why people freak out more about mass shootings, terrorism, etc. vs. cancer or heart attacks which kill far more people.

        People freak out about intentional deaths much more than unintentional deaths because the intentional deaths are intentional.

        • Aapje says:

          Intentional deaths also tend to kill people at younger ages. Most health-related causes of deaths are strongly correlated with old age. You have to die of something.

          Also, I disagree that people don’t ‘freak out’ over cancer and heart attacks.

  13. felix says:

    Is giving to Donor Advised Funds a form of laziness you're trying to nudge people away from?
    If not, I'd suggest checking them out; I'm sure they're the maximally lazy option for some kinds of lazy people!

  14. Carl Milsted says:

    Here’s a good general rule from someone who has been on the other side of the transaction: limit the number of charities you support!

    It is far better to give a hundred dollars to one charity than twenty-five each to four. If you spread out your donations, the follow-up calls for more will eat up all the funds you donated! Charities often ask for small donations just to build a pool of people they can hit up for real donations.

    Charities which have some big/corporate donors can look very efficient because of this phenomenon.

    If the organization is small, you can be the big donor which raises said charity’s efficiency.

    • DinoNerd says:

      Is there any way I can evaluate which charities will refrain from punishing me for donating – and wasting my donation dollars – by spamming me with further donation requests, and furthermore passing my name on to every other charity that asks for it, who promptly do the same thing?

      I currently have two charities undergoing “extinction” – and I’m wondering how much time, and how much paper, they’ll waste before finally removing me from their lists. One lied to me, claiming I had been a “member” and telling me to “renew my membership”; they were also prone to over-the-top sob stories that repeated a little too often; I’m not sure whether I’ve ever donated to them. The other actually passed my usual screens, and was on my personal regular annual donation list – until I started counting the appeals.

      A third – that I still support – kept telling me to renew my annual membership months and months before it was due; I took to binning their appeals on sight, and wound up donating late in consequence. A single well-timed reminder, and a lot fewer reports on what my money is doing, would work a lot better for me, and save them money besides.

      Normally I check the “cost of fund raising” metric on whatever charity evaluation site I’m using, and ignore any where this number seems too high – but frankly I’m beginning to think the choice is between “outrageous/scam” and “merely stupid/bad”, with no good options.

      • Edward Scizorhands says:

        I would pay money to someone to start recording this.

        I know one charity in particular that I regret ever seeing. They pimped my address out to everyone, because I was dumb and paid by credit card.

        At one point I told off a DWB cold-caller directly. “You make me feel like a sucker for ever supporting you.” At least I don’t think they have sold me out, and they never called again, just respectful mailings a few times a year.

  15. EMP says:

    I don’t want to venture too far into culture war territory here, but upon investigation one of the organizations that received funds from an Effective Altruism fund was Encompass. I will quote their website:

    “Why does Encompass only support people of color?”

    “Because the racial composition of the professional farmed animal protection movement is largely white, people of the global majority sometimes need spaces without white folks to reflect, heal, and build. To be clear: for the health and longevity of our movement, a dedicated global majority space isn’t a nice to have—it’s a must-have.”

    “We will need to look within our movement honestly and be committed to the necessary culture change required to make racial inclusion a cornerstone principle. This will be a long-term project and will be uncomfortable at times, but it’s vital if we plan to grow and thrive.”

    On the surface, this seems like the exact opposite of effective altruism. $50,000 was spent on what basically boils down to funding some bizarre form of woke segregation. Is this really the most effective way of spending $50k?

    As a side note, I also disagree with the EA movement in principle. I am not a utilitarian, or at least not a strictly egalitarian one. I think that the suffering of certain people can be prioritized over the suffering of others. Though that is another story.

    • MugaSofer says:

      “Global majority space”, that’s an interesting new term.

      The holding of controversial racial views or even policies is not inherently incompatible with effectiveness.

      However, their “about” page seems to clarify that their entire goal is “to make the farmed animal protection movement more effective by fostering racial diversity and inclusivity.” That’s nice, but I would be very surprised if this has empirical backing as an intervention to help animals rather than a goal-in-itself, so (assuming it doesn’t) what the actual fuck?

    • thisheavenlyconjugation says:

      This is a movement-building investment; the idea is that $50k now leads to more than $50k in expected future donations.

    • MichaelStJules says:

      On the surface, this seems like the exact opposite of effective altruism. 50,000 dollars were spent on basically what boils down to funds for some bizarre form of woke segregation. Is this really the most effective way of spending 50k?

      I’d say in some sense it’s also the opposite of segregation, since it’s outreach to neglected and underrepresented demographics, and the goal is to improve inclusivity and diversity overall. If a group is underrepresented, then there may be in some sense de facto segregation already, possibly due to structural or cultural barriers, intentional or not. I don’t think this underrepresentation is an intended and desired consequence of decisions that have been made; I just think it has not been prioritized as something to address. That separate (segregated) spaces for underrepresented groups actually improve inclusivity and diversity overall in an unrepresentative movement is an empirical claim that can turn out to be true or false.

      Also, neglected demographics might have a lot of low-hanging fruit who would not have come across EA or animal advocacy as much otherwise, so it might be more cost-effective to do outreach to them to increase the overall size of the movement. Similarly, growing the movement internationally where it’s small and otherwise neglected might be more cost-effective than outreach generally. So this is outreach, which is generally valuable, but targeted at some neglected demographics, which could make it especially cost-effective among outreach interventions.

      Whether or not outreach, or Encompass specifically, is more cost-effective than other opportunities is another issue, too.

  16. Ambi says:

    You mentioned ImpactMatters as a version of GiveWell. I went to their site, looked at climate change charities, and found the Eden Reforestation Projects, which they claim can offset a year's worth of personal emissions for $7! This is so preposterous to me that I don't trust any of the conclusions on the site.

    I can see that what they are doing is akin to what GiveWell does, but I don't trust their rigor or methodology to produce an accurate picture of charity impacts.

    I probably shouldn’t be too critical of a post that tries to be maximally lazy though.

  17. AlexanderTheGrand says:

    VSW (Virtue-Signal Warning)

    Thanks a lot for this list. I'm a frequent reader and I trust your recommendations. After a total of an hour of research, I split my donation of $3,000, or 5% of my income, among four of these charities and one other charity. I've felt indecision in the past, which has maybe prevented me from donating (or maybe it was just apathy). But between reading this and reading "Nobody Is Perfect, Everything Is Commensurate", I was convinced I might as well, since I'm not in dire straits financially right now.

    I can faithfully say, there is a <5% chance that this year I would have donated any substantial amount of money if not for this post. So thanks a lot, and writing this post had AT LEAST $2,850 worth of charitable effect.

    It didn't really make me feel "good" in the warm/fuzzy way. But honestly, it was so easy to click those buttons that it didn't feel bad in the way I thought losing $3,000 would.