Axiology, Morality, Law

I.

Philosopher Amanda Askell questions the practice of moral offsetting.

Offsetting is where you compensate for a bad thing by doing a good thing, then consider yourself even. For example, an environmentalist takes a carbon-belching plane flight, then pays to clean up the same amount of carbon she released.

This can be pretty attractive. If you’re a committed environmentalist, but also really want to take a vacation to Europe, you could be pretty miserable not knowing whether your vacation is worth the cost to the planet. But if you can calculate that it would take about $70 to clean up more carbon than you release, that’s such a small addition to the overall cost of the trip that you can sigh with relief and take the flight guilt-free.

Or use offsets instead of becoming vegetarian. A typical person’s meat consumption averages 0.3 cows and 40 chickens per year. Animal Charity Evaluators believes that a donation to a top animal charity can spare this many animals for less than $5; others note this number is totally wrong and made up. But it’s hard to believe charities could be less cost-effective than just literally buying the animals, which would cap the price of offsetting a year’s meat consumption at around $500. Would I pay between $5 and $500 a year not to have to be a vegetarian? You bet.
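
If you want to sanity-check that upper bound, here’s the back-of-the-envelope version as a quick sketch; the per-animal prices are purely illustrative guesses on my part, not real market figures:

    # Rough upper bound on offsetting a year of meat-eating by just buying
    # the animals outright. Consumption figures are from above; the prices
    # are illustrative assumptions, not actual market data.
    annual_consumption = {"cows": 0.3, "chickens": 40}   # animals eaten per person per year
    assumed_price = {"cows": 1400.0, "chickens": 2.5}    # hypothetical dollars per animal

    buy_them_all = sum(annual_consumption[a] * assumed_price[a] for a in annual_consumption)
    charity_estimate = 5.0   # the (disputed) Animal Charity Evaluators figure

    print(f"Upper bound (just buy the animals): ~${buy_them_all:.0f}/year")   # ~$520 with these prices
    print(f"So the offset cost lands somewhere between ${charity_estimate:.0f} and ${buy_them_all:.0f} a year")

The exact prices don’t matter much; the point is just that whatever the true charity figure is, “buy the animals yourself” caps the offset cost at a few hundred dollars a year.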

Askell is uncomfortable with this concept for the same reasons I was when I first heard about it. Can we kill an enemy, then offset it with enough money to save somebody else’s life? Smash other people’s property, then give someone else enough money to buy different property? Can Bill Gates nuke entire cities for fun, then build better cities somewhere else?

She concludes:

There are a few different things that the harm-based ethicist could say in response to this, however. First, they could point out that as the immorality of the action increases, it becomes far less likely that performing this action and morally offsetting is the best option available, even out of those options that actualists would deem morally relevant. Second, it is very harmful to undermine social norms where people don’t behave immorally and compensate for it (imagine how terrible it would be to live in a world where this was acceptable). Third, it is – in expectation – bad to become the kind of person who offsets their moral harms. Such a person will usually have a much worse expected impact on the world than someone who strives to be as moral as they can be.

I think that these are compelling reasons to think that, in the actual world, we are – at best – morally permitted to offset trivial immoral actions, but that more serious immoral actions are almost always not the sorts of things we can morally offset. But I also think that the fact that these arguments all depend on contingent features of the world should be concerning to those who defend harm-based views in ethics.

I think Askell gets the right answer here – you can offset carbon emissions but not city-nuking. And I think her reasoning sort of touches on some of the important considerations. But I also think there’s a much more elegant theory that gives clear answers to these kinds of questions, and which relieves some of my previous doubts about the offsetting idea.

II.

Everything below is taken from vague concepts philosophers talk about all the time, but which I can’t find a single good online explanation of. I neither deserve credit for anything good about the ideas, nor can I avoid blame for any mistakes or confusions in the phrasing. That having been said: consider the distinction between axiology, morality, and law.

Axiology is the study of what’s good. If you want to get all reductive, think of it as comparing the values of world-states. A world-state where everybody is happy seems better than a world-state where everybody is sad. A world-state with lots of beautiful art is better than a world-state containing only featureless concrete cubes. Maybe some people think a world-state full of people living in harmony with nature is better than a world-state full of gleaming domed cities, and other people believe the opposite; when they debate the point, they’re debating axiology.

Morality is the study of what the right thing to do is. If someone says “don’t murder”, they’re making a moral commandment. If someone says “Pirating music is wrong”, they’re making a moral claim. Maybe some people believe you should pull the lever on the trolley problem, and other people believe you shouldn’t; when they debate the point, they’re debating morality.

(this definition elides a complicated distinction between individual conscience and social pressure; fixing that would be really hard and I’m going to keep eliding it)

Law is – oh, come on, you know this one. If someone says “Don’t go above the speed limit, there’s a cop car behind that corner”, that’s law. If someone says “my state doesn’t allow recreational marijuana, but it will next year”, that’s law too. Maybe some people believe that zoning restrictions should ban skyscrapers in historic areas, and other people believe they shouldn’t; when they debate the point, they’re debating law.

These three concepts are pretty similar; they’re all about some vague sense of what is or isn’t desirable. But most societies stop short of making them exactly the same. Only the purest act-utilitarianesque consequentialists say that axiology exactly equals morality, and I’m not sure there is anybody quite that pure. And only the harshest of Puritans try to legislate the state law to be exactly identical to the moral one. To bridge the whole distance – to directly connect axiology to law and make it illegal to do anything other than the most utility-maximizing action at any given time – is such a mind-bogglingly bad idea that I don’t think anyone’s even considered it in all of human history.

These concepts stay separate because they each make different compromises between goodness, implementation, and coordination.

One example: axiology can’t distinguish between murdering your annoying neighbor vs. not donating money to save a child dying of parasitic worms in Uganda. To axiology, they’re both just one life snuffed out of the world before its time. If you forced it to draw some distinction, it would probably decide that saving the child dying of parasitic worms was more important, since they have a longer potential future lifespan.

But morality absolutely draws this distinction: it says not-murdering is obligatory, but donating money to Uganda is supererogatory. Even utilitarians who deny this distinction in principle will use it in everyday life: if their friend was considering not donating money, they would be a little upset; if their friend was considering murder, they would be horrified. If they themselves forgot to donate money, they’d feel a little bad; if they committed murder in the heat of passion, they’d feel awful.

Another example: Donating 10% of your income to charity is a moral rule. Axiology says “The world would be better if you donated all of it”, Law says “You won’t get in trouble even if you don’t donate any of it”, but at the moral level we set a clear and practical rule that meshes with our motivational system and makes the donation happen.

Another example: “Don’t have sex with someone who isn’t mature enough to consent” is a good moral rule. But it doesn’t make a good legal rule; we don’t trust police officers and judges to fairly determine whether someone’s mature enough in each individual case. A society which enshrined this rule in law would be one where you were afraid to have sex with anyone at all – because no matter what your partner’s maturity level, some police officer might say your partner seemed immature to them and drag you away. On the other hand, elites could have sex with arbitrarily young people, expecting police and judges to take their side.

So the state replaces this moral rule with the legal rule “don’t have sex with anyone below age 18”. Everyone knows this rule doesn’t perfectly capture reality – there’s no significant difference between 17.99-year-olds and 18.01-year-olds. It’s a useful hack that waters down the moral rule in order to make it more implementable. Realistically it gets things wrong sometimes; sometimes it will incorrectly tell people not to have sex with perfectly mature 17.99-year-olds, and other times it will incorrectly excuse sex with immature 18.01-year-olds. But this beats the alternative, where police have the power to break up any relationship they don’t like, and where everyone has to argue with everybody else about whether their relationships are okay or not.

A final example: axiology tells us a world without alcohol would be better than our current world: ending alcoholism could avert millions of deaths, illnesses, crimes, and abusive relationships. Morality only tells us that we should be careful drinking and stop if we find ourselves becoming alcoholic or ruining our relationships. And the law protests that it tried banning alcohol once, but it turned out to be unenforceable and gave too many new opportunities to organized crime, so it’s going to stay out of this one except to say you shouldn’t drink and drive.

So fundamentally, what is the difference between axiology, morality, and law?

Axiology is just our beliefs about what is good. If you defy axiology, you make the world worse.

At least from a rule-utilitarianesque perspective, morality is an attempt to triage the infinite demands of axiology, in order to make them implementable by specific people living in specific communities. It makes assumptions like “people have limited ability to predict the outcome of their actions”, “people are only going to do a certain amount and then get tired”, and “people do better with bright-line rules than with vague gradients of goodness”. It also admits that it’s important that everyone living in a community is on at least kind of the same page morally, both in order to create social pressure to follow the rules, and in order to build the social trust that allows the community to keep functioning. If you defy morality, you still make the world worse. And you feel guilty. And you betray the social trust that lets your community function smoothly. And you get ostracized as a bad person.

Law is an attempt to formalize the complicated demands of morality, in order to make them implementable by a state with police officers and law courts. It makes assumptions like “people’s vague intuitive moral judgments can sometimes give different results on the same case”, “sometimes police officers and legislators are corrupt or wrong”, and “we need to balance the benefits of laws against the cost of enforcing them”. It also tries to avert civil disorder or civil war by assuring everybody that it’s in their best interests to appeal to a fair universal law code rather than try to solve their disagreements directly. If you defy law, you still get all the problems with defying axiology and morality. And you make your country less peaceful and stable. And you go to jail.

In a healthy situation, each of these systems reinforces and promotes the other. Morality helps you implement axiology from your limited human perspective, but also helps prevent you from feeling guilty for not being God and not being able to save everybody. The law helps enforce the most important moral and axiological rules but also leaves people free enough to use their own best judgment on how to pursue the others. And axiology and morality help resolve disputes about what the law should be, and then lend the support of the community, the church, and the individual conscience in keeping people law-abiding.

In these healthy situations, the universally-agreed priority is that law trumps morality, and morality trumps axiology. Partly because you can’t keep your obligations to your community from jail, and you can’t work to make the world a better place when you’re a universally-loathed social outcast. Partly because you can’t work to build strong communities and relationships in the middle of a civil war, and you can’t work to make the world a better place from within a low-trust defect-defect equilibrium. But also because in a just society, axiology wants you to be moral (because morality is just a more-effective implementation of axiology), and morality wants you to be law-abiding (because law is just a more-effective way of coordinating morality). So first you do your legal duty, then your moral duty, and then if you have energy left over, you try to make the world a better place.

(Katja Grace has some really good writing on this kind of stuff here)

In unhealthy situations, you can get all sorts of weird conflicts. Most “moral dilemmas” are philosophers trying to create perverse situations where axiology and morality give opposite answers. For example, the fat man version of the trolley problem sets axiology (“it’s obviously better to have a world where one person dies than a world where five people die”) against morality (“it’s a useful rule that people generally shouldn’t push other people to their deaths”). And when morality and state law disagree, you get various acts of civil disobedience, from people hiding Jews from the Nazis all the way down to Kentucky clerks refusing to perform gay marriages.

I don’t have any special insight into these. My intuition (most authoritative source! is never wrong!) says that we should be very careful reversing the usual law-trumps-morality-trumps-axiology order, since the whole point of having more than one system is that we expect the systems to disagree and we want to suppress those disagreements in order to solve important implementation and coordination problems. But I also can’t deny that for enough gain, I’d reverse the order in a heartbeat. If someone told me that by breaking a promise to my friend (morality) I could cure all cancer forever (axiology), then f@$k my friend, and f@$k whatever social trust or community cohesion would be lost by the transaction.

III.

With this framework, we can propose a clearer answer to the moral offsetting problem: you can offset axiology, but not morality.

Emitting carbon doesn’t violate any moral law at all (in the stricter sense of morality used above). It does make the world a worse place. But there’s no unspoken social agreement not to do it, it doesn’t violate any codes, nobody’s going to lose trust in you because of it, you’re not making the community any less cohesive. If you make the world a worse place, it’s perfectly fine to compensate by making the world a better place. So pay to clean up some carbon, or donate to help children in Uganda with parasitic worms, or whatever.

Eating meat doesn’t violate any moral laws either. Again, it makes the world a worse place. But there aren’t any bonds of trust between humans and animals, nobody’s expecting you not to eat meat, there aren’t any written or unwritten codes saying you shouldn’t. So eat the meat and offset it by making the world better in some other way.

(the strongest counterargument I can think of here is that you’re not betraying animals, but you might be betraying your fellow animals-rights-activists! That is, if they’re working to establish a social norm against meat-eating, the sort of thing where being spotted with a cheeseburger on your plate produces the same level of horror as being spotted holding a bloody knife above a dead body, then your meat-eating is interfering with their ability to establish that norm, and this is a problem that requires more than just offsetting the cost of the meat involved)

Murdering someone does violate a moral law. The problem with murder isn’t just that it creates a world in which one extra person is dead. If that’s all we cared about, murdering would be no worse than failing to donate money to cure tropical diseases, which also kills people.

(and the problem isn’t just that it has some knock-on effects in terms of making people afraid of crime, or decreasing the level of social trust by 23.5 social-trustons, or whatever. If that were all, you could do what 90% of you are probably already thinking – “Just as we’re offsetting the murder by donating enough money to hospitals to save one extra life, couldn’t we offset the social costs by donating enough money to community centers to create 23.5 extra social-trustons?” There’s probably something like that which would work, but along with everything else we’re crossing a Schelling fence, breaking rules, and weakening the whole moral edifice. The cost isn’t infinite, but it’s pretty hard to calculate. If we’re positing some ridiculous offset that obviously outweighs any possible cost – maybe go back to the example of curing all cancer forever – then whatever, go ahead. If it’s anything less than that, be careful. I like the metaphor of these three systems being on three separate tiers – rather than two Morality Points being worth one Axiology Point, or whatever – exactly because we don’t really know how to interconvert them)

This is more precise than Askell’s claim that we can offset “trivial immoral actions” but not “more serious” ones. For example, suppose I built an entire power plant that emitted one million tons of carbon per year. Sounds pretty serious! But if I offset that with environmental donations or projects that prevented 1.1 million tons of carbon somewhere else, I can’t imagine anyone having a problem with it.

On the other hand, consider spitting in a stranger’s face. In the grand scheme of things, this isn’t so serious – certainly not as serious as emitting a million tons of carbon. But I would feel uncomfortable offsetting this with a donation to my local Prevent Others From Spitting In Strangers’ Faces fund, even if the fund worked.

Askell gave a talk where she used the example of giving your sister a paper cut, and then offsetting that by devoting your entire life to helping the world and working for justice and saving literally thousands of people. Pretty much everyone agrees that’s okay. I guess I agree it’s okay. Heck, I guess I would agree that murdering someone in order to cure cancer forever would be okay. But now we’re just getting into the thing where you bulldoze through moral uncertainty by making the numbers so big that it’s impossible to be uncertain about them. Sure. You can do that. I’d be less happy about giving my sister a paper cut, and then offsetting by preventing one paper cut somewhere else. But that seems to be the best analogy to the “emit one ton of carbon, prevent one ton of carbon” offsetting we’ve been talking about elsewhere.

I realize all this is sort of hand-wavy – more of a “here’s one possible way we could look at these things” rather than “here’s something I have a lot of evidence is true”. But everyone – you, me, Amanda Askell, society – seems to want a system that tells us to offset carbon but not murder, and when we find such a system I think it’s worth taking it seriously.


298 Responses to Axiology, Morality, Law

  1. Sniffnoy says:

    Only the purest act utilitarians say that axiology exactly equals morality, and I’m not sure there are any act utilitarians quite that pure.

    No Scott, that’s consequentialism, not utilitarianism. Utilitarianism is a specific type of consequentialism. Please stop confusing the two.

    Edit:

    Morality is an attempt to triage the infinite demands of axiology, in order to make them implementable by specific people living in specific communities.

    As your parenthesized bit below notes, isn’t that really only rule-consequentialist morality? I mean, as a rule-consequentialist you might claim that this is basically what e.g. deontologists are actually doing under the hood, but they’d presumably deny it.

    Edit: Also, the following is pure nitpicking irrelevant to the actual point, which I’d normally avoid, but since in this case it’s a common misconception, I feel like I should point out that in most of the US the age of consent is lower than 18.

    Edit: Anyway now that I’m done nitpicking and have actually read the rest, this is a good post. 🙂

    • Scott Alexander says:

      The main problem with your first point is that I never hear people called “act consequentialists”, although the term would make sense.

      I’ve changed it to “act-utilitarianesque consequentialists”, so I hope you’re happy.

      • Sniffnoy says:

        Just realized I never replied to this, so, thank you for fixing this. 🙂 Glad to see the distinction being made…

    • vV_Vv says:

      I mean, as a rule-consequentialist you might claim that this is basically what e.g. deontologists are actually doing under the hood, but they’d presumably deny it.

      Are deontologists really rule-consequentialists in denial, or are rule-consequentialists really deontologists in denial?

      I find it difficult to tell the two positions apart, except for extreme cases (e.g. fundamentalists).

      • Peter says:

        Derek Parfit said the two were “climbing the same mountain from different sides”. Others disagree – thus maybe there are people in denial on both sides. I think some rule-consequentialists and some deontologists are on the same climb, especially the rule-consequentialists. There’s what Parfit might call “Objective List Deontologists” – i.e. those with a great big long list of things Thou Shalt and Thou Shalt Not do, who resist the idea that their list might just be commentary to some small-but-profound kernel of morality from which all the rest flows. They aren’t climbing that mountain.

        Personally, I’m happier climbing the rule-consequentialist slope. Some of the positions on the deontological slope feel quite pointless to me, or felt quite pointless until I could see them reflected on the consequentialist slope. Of course there’s a bunch of act utilitarians at the foot of the mountain saying how pointless everything on the entire mountain is… I think a part of me was there once.

      • AdamDKing says:

        The difference between deontologists and consequentialists is agent-neutrality. Deontologists believe that morality depends on the agent (so me offsetting my carbon might be different from you offsetting your carbon, because of different salient details). Whereas consequentialists are agent-neutral — all that matters is the carbon gets offset.

        There are of course all sorts of different flavors of each, some having more in common than others, but the technical distinction is agent neutrality.

      • sconn says:

        I’d say the rule consequentialists are more sincere, because they have a system (consequentialism) for deciding which rules they’ll follow, whereas at least many deontologists take the rules as given from the outset and have no way of adjusting those rules to make sure they have good consequences.

        Example: Catholics think birth control is intrinsically evil. A rule consequentialist could examine that rule, discover that, on the whole, that leads to lots and lots of misery, and discard it. The Catholic can’t.

        Signed, an ex-Catholic rule-consequentialist

        • Virtua Lyric says:

          The Catholic can’t.

          The Catholic doesn’t want to. They sincerely care more about avoiding intrinsic evil than about avoiding miserable outcomes. Non-consequentialists are not inherently less sincere.

        • The original Mr. X says:

          I’d say the rule consequentialists are more sincere, because they have a system (consequentialism) for deciding which rules they’ll follow, whereas at least many deontologists take the rules as given from the outset and have no way of adjusting those rules to make sure they have good consequences.

          I don’t see what that’s got to do with sincerity. Practicality, maybe, although given how difficult it often is to tell what the consequences of our actions will be, and how easy it is for wishful thinking to lead us astray, the superior practicality of rule-consequentialism isn’t exactly a given.

          Example: Catholics think birth control is intrinsically evil. A rule consequentialist could examine that rule, discover that, on the whole, that leads to lots and lots of misery, and discard it. The Catholic can’t.

          From where I’m standing, the widespread normalisation of birth control doesn’t seem to have led to a reduction in misery; quite the reverse, in fact.

    • JohnWittle says:

      Oh come on man, this can’t possibly be true.

      I mean, I haven’t seriously read any ethics philosophy aside from rationalist material, so maybe the terms ‘consequentialism’ and ‘utilitarianism’ are defined as you say in the real world, with utilitarianism being a subphilosophy of consequentialism. But this doesn’t make any damn sense to me.

      To me, just based on the words themselves, utilitarianism is the belief that one’s moral beliefs can be represented using a utility function; that all the different kinds of value are fungible against each other and the only question is the exchange rate. I would generally say that utilitarianism isn’t opposed to any particular moral content; anything can be represented in a utility function.

      But then, conceptually underneath utilitarianism is consequentialism, where your values are only allowed to be over consequences of actions and not actions themselves.

      So if we evaluate a world-state consisting of the scene of a murder, we can express both a consequentialist utilitarianism:

      Loss of life: -100 utilitons
      Erosion of norm against murder, leading to 0.21 murders: -21 utilitons
      Victim would have bullied coworker but can't: +1 utiliton
      Total: -120 utilitons

      Then a deontological utilitarianism:

      Act of murder: -50 utilitons
      Loss of life: -50 utilitons
      Erosion of norm against murder, leading to 0.21 murders: -21 utilitons
      Victim would have bullied coworker but can't: +1 utiliton
      Total: -120 utilitons

      The consequentialist scorns the idea that murder would be bad even if it had no bad consequences; to her, murder’s badness comes from its bad consequences. The deontologist disagrees; he says murder is individually bad in addition to its bad consequences.

      They might disagree on whether we should invent VR murder porn… but they both represent their values in a utility function. So how can they not both be utilitarians? Which would make consequentialism a subset of utilitarianism: the kind where your utility function must only be over consequences.

      To me, this seems like the sanest way of parsing those terms, but I acknowledge that I don’t know how the highfalutin philosophical community parses them. Where am I wrong?

      Is it actually true, that a deontologist might represent their values in a utility function, and believe in the fungibility of utility, yet not be considered a utilitarian?

      • Dedicating Ruckus says:

        What you refer to as “utilitarianism” is in fact what’s generally philosophically called “consequentialism”. The fact that “utilitarianism” is not equivalent to “moral philosophies that use a utility function” is confusing, but that’s the usage.

        It’s entirely consistent with consequentialism to have terms in your utility function like “have people committed murders”. “Becoming a person who has committed murder” is, in fact, a consequence of the act of murder, and so consequentialists are free to consider it.

        “Utilitarianism”, meanwhile, refers to some specific subsets of consequentialism with defined utility functions, either “maximize happiness” (Bentham’s original formulation) or “maximize preference satisfaction” (preference utilitarianism) or “maximize some fuzzy function we can’t rigorously characterize, but it’ll be good, we promise” (CEV). There are various other variants and subtleties as well.

      • Sniffnoy says:

        Sorry, didn’t see this comment till now. But, Dedicating Ruckus has the right of it. I’d like to expand on his comment with a more detailed explication of the differences.

        The problem here, and what I suspect is likely tripping you up, is that the term “utility function” has two different meanings. There’s utility function in the decision-theoretic sense — a function that describes an agent’s preferences — and then there’s utility functions in the utilitarian sense which describes, uh… it’s not super clear. Some sort of measure of a person’s well-being.

        I’m not sure I’d say that every consequentialist agent has a utility function (in the first sense); but every consequentialist agent whose preferences adhere to certain conditions — we might say every “rational” consequentialist agent — necessarily has a utility function (again, in the first sense). This is Savage’s theorem (or, if you’re willing to assume the notion of probability already, the VNM theorem).

        A utilitarian is not just a consequentialist who uses a utility function (in the first sense) — that, again, is just any rational consequentialist. A utilitarian is a particular type of consequentialist, one whose utility function (in the first sense) is derived by somehow aggregating everyone’s utility functions (in the second sense). (That Wikipedia article on the VNM theorem calls this second sense “E-utilities”, so that’s what I’ll call them.) Just what an E-utility function means and how it can be determined, as well as how these E-utility functions should be aggregated into a utility function (not E-utility function!), is unclear.

        Yes, this is terribly confusing terminology. But I hope I’ve made the distinction clear.
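
        If a toy example helps, here’s the distinction as a minimal sketch (entirely my own illustration: the names are made up, and the simple-sum aggregation below is just one of many possible aggregation rules):

            # Two senses of "utility function", sketched in Python. The E-utilities
            # are each person's well-being in an outcome; summing them is an
            # illustrative assumption, not the one true utilitarian aggregation rule.
            outcomes = {
                "status_quo":    {"alice": 5.0, "bob": 5.0},
                "alice_favored": {"alice": 9.0, "bob": 2.0},
            }

            # Any rational consequentialist has *some* utility function over outcomes...
            def idiosyncratic_utility(outcome):
                return {"status_quo": 1.0, "alice_favored": 0.0}[outcome]

            # ...but a utilitarian's utility function is built by aggregating
            # everyone's E-utilities (here, a plain sum).
            def utilitarian_utility(outcome):
                return sum(outcomes[outcome].values())

            print(max(outcomes, key=utilitarian_utility))    # alice_favored (total 11 vs 10)
            print(max(outcomes, key=idiosyncratic_utility))  # status_quo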

  2. dianelritter says:

    Seems to me that the people claiming that not eating meat leads to a better world are wrong, right down the line. If someone becomes a vegetarian, those 3 cows and 40 chickens wouldn’t be killed; in fact, they wouldn’t be born. All of them were born and raised to sell — to non-vegetarians. If the number of vegetarians increases, fewer cows and chickens will be raised for food. So, the result of being a vegetarian isn’t saving the lives of 3 cows and 40 chickens; it results in 3 cows and 40 chickens never having lived at all. A cow is usually slaughtered for meat, I believe, at 6 months, or maybe a year, so becoming a vegetarian eliminates 1 1/2 to 3 years of cow existence from earth. If cows, on the whole, prefer to exist, then you’ve done a bad thing by becoming a vegetarian. If vegetarians cared about the happiness of cows and chickens they would be pushing for humane treatment of animals raised for food, not for vegetarianism or ‘lab grown meat’.

    • Scott Alexander says:

      This has been looked into, and the general consensus is that animals suffer so terribly in factory farms that it would be better for them not to exist.

      • Leonard says:

        Perhaps so, but it still means that vegetarianism is suboptimal if you can find and eat any humanely-raised meat at all. So, while vegetarianism may be holier than being a factory-farm brute like me (questionable; next para), being a free-range carnivore would certainly be even holier. Indeed, if you could find util-positive meat, utilitarianism would suggest replacing as much of your food as possible with it.

        But what if you can’t find any util-positive meat? Even in this case, vegetarianism is problematic. As you mention in the OP, there is a risk that becoming vegetarian might help lock in a social norm against meat-eating, which could be catastrophic for animal utility in the long run. So one might want to avoid it even if one cannot find a single humanely-raised chicken.

        • EGI says:

          So according to this argument, I can hold slaves as long as they were specifically bred as slaves and have a life just barely worth living (according to whose standards?)? I can even kill these slaves if they don’t work as efficiently as I’d like? I just have to make very sure they are additional people who would not exist if I hadn’t bred them as my slaves?

          This line of arguments (and the repugnant conclusion) is why I distrust every brand of consequentialism, which does not distinguish between existing and potential beings.

          • Said Achmiz says:

            Please! Absolutely nothing about consequentialism requires eliding this distinction. My sort of consequentialism certainly holds existing and potential beings to be very different!

            Now, utilitarianism—there’s a different story. But that’s only a small subset of consequentialisms…

          • rlms says:

            Not all consequentialisms! I follow what I call greedy preference utilitarianism, largely for the reason that you give (caring about potential lives too much causes absurdity). The idea is that at any moment you should try to satisfy the preferences of all entities that exist at that point (weighted by the strength of the preferences and the complexity of the entities). This means that we don’t care about potential lives intrinsically, which avoids problems, but we still care about some potential lives in a way that matches common intuitions because existing people have preferences about potential lives.

          • sconn says:

            Humans find liberty to be crucial to their happiness. So far as we can tell, cows don’t.

        • sconn says:

          Yes, it does seem odd to me that there isn’t more attention given to humane meat. There is a whole community centered around pastured meat, some for environmental reasons, some for health reasons, and others for animal-welfare reasons. Even if you live in a city, it is likely you would be able to find a farmshare within a reasonable distance – if a few people club together and make a trip out to the country once per year, they could easily buy all the beef they might want and be assured with their own eyes that those cows are living The Good Life For Cows. Chicken is more expensive and eggs and milk are too perishable to do with quite so much ease, but you can do that there too. You just have to be dialed-in to the natural food community in your area. Eatwild.com is a good place to start.

          For more systemic change, you’d have to know the animal-welfare laws in your country or state and campaign to improve them. This could make a much bigger difference in the happiness of animals than individually going vegan ever could.

      • Joe says:

        The number of factors on which farmed animals differ from one another makes me suspicious of the EA claim that the answer in all cases resolves to “life not worth living; should be prevented”. For example, animals raised for meat have quite different lives than e.g. egg-laying chickens or dairy cows. Laws differ wildly for different kinds of animals. Laws differ between countries — for example I’ve never seen a response to this article which argues that farm standards in the UK are quite different from those in the US, such that analyses of factory farm animal welfare conducted using US data aren’t very relevant to Brits deciding whether to go vegan. Even within a country there are a variety of more-demanding assurance schemes that farms may optionally adhere to, which seem not uncommon; every upscale burger restaurant seems to boast of how high-welfare its beef is.

        When I’ve seen EA folk discussing footage showing poor conditions in factory farms, the context tends to be “how can we use this to convert more people to veganism?” with very little discussion of “how representative is this of the typical factory farm; how prevalent actually are these kinds of conditions?” After all, it’s not like we’re without competing footage — a quick search for “broiler farm” on YouTube produces lots of videos of chicken farms that aren’t obviously horrifying. And (as described in more detail in the article I linked) the data suggests that (at least in my country) most farms do seem to follow regulations correctly, and the conditions in the shocking footage you’ve seen would represent a breach of these regulations.

        So perhaps the answer is that if the US tightened up its welfare standards, its meat industry could become positive-utility just as the UK’s is.

      • Antistotle says:

        If your goal is to prevent the sorts of abuses prevalent in factory farms, and you’re not going to take over the state/country/world and make people vegetarian/vegan by fiat, then arguing that veganism is the solution to the problem of factory farms will not get you what you want. Assuming that what you want, what you really really want is animals to stop suffering.

        There are many other humane ways to raise meat animals, ways that if done properly contribute to biodiversity and don’t pollute any more than the sorts of intensive farming and shipping technologies needed to feed, say, Chicago in Jan, Feb and March.

      • Ketil says:

        Do we actually take the Repugnant Conclusion seriously – i.e. accept that it is better to have a maximal population at the brink of starvation? I thought that was a reductio ad absurdum, clearly it is better with fewer people with a higher standard of living, so maximum total utility is not the correct measure.

        In any case, this raises an interesting question – if we go vegan, we can have a much larger marginally worthwhile human population. How would that compare to a smaller marginally worthwhile human population plus a large marginally worthwhile population of farm animals? Perhaps we should eat farmed fish, to maximize the number of farmed animals…

        • Vamair says:

          My intuition gives different results for “create a huge human hive or a rich happy family” and “save a huge human hive or a rich happy family”: the second option on the first dilemma and the first option on the second. Either I don’t really understand what “living near zero” means at all, or my intuition is wrong in one of the cases (it’s stronger in the second case), or the situations are really different.
          Still, the Repugnant Conclusion is extremely artificial (and is therefore a bad intuition pump) as it doesn’t consider any resource constraints. Making a person’s life twice as good (which is quite easy when they live near the zero level) is much less resource-intensive than doubling the current population.

          • Ketil says:

            Does the whole Repugnant Conclusion depend on counting the well-being of non-existing people? To me, that doesn’t make much sense; people who aren’t born don’t suffer. It makes sense to try to improve things for people who actually exist, but it doesn’t make sense to create new people just for the sake of it.

            And these hypothetical scenarios always talk about killing the poor, without taking into account that killing itself (and most other interventions, like preventing people from reproducing) produces suffering. So I’ll buy the average utility argument where a single person with 10 utils is better than ten persons with 9 utils each – but if you have a population at 9 utils, anything you do to get rid of them will likely first lower their level, and probably also the level of the person who remains.

            The Slightly Repugnant Conclusion I see is that it can be defensible to let people live with very low utility (in extremis: slavery, colonialism, war) if their situation improves things enough for the remaining population. More moderately, we don’t want absolute income redistribution, since being able to make money (more money than the next person makes) motivates people to produce utility that benefits the rest of us. On the other hand, we likely want some redistribution, since marginal utils per dollar drop off with increasing incomes.

          • Joe says:

            @Vamair

            This is good. I agree that the aversion people have to ‘repugnant conclusion’ style outcomes is based mainly on a perceived distinction between saving lives and creating new people. Nobody seems to ever suggest that when contemplating the trolley problem, if it turns out the five people have below-average utility lives and the one person above-average, we should keep the trolley rolling and plough through those five people, sacrificing them for the greater good of average utility.

          • Joe says:

            @Ketil

            It makes sense to try to improve things for people who actually exist, but it doesn’t make sense to create new people just for the sake of it.

            The problem with that is that the distinction between “people who currently exist” and “new people” is basically arbitrary. For example, your consciousness shuts down every time you go to sleep. It’s hard to justify a way in which you currently exist while sleeping, unless you want to invoke factors like “which atoms are you made from” or “is your waking up tomorrow caused by action or inaction” which seem flimsy and irrelevant and are usually dismissed as such by utilitarian frameworks.

          • AdamDKing says:

            @Joe

            If a deranged consequentialist kills me in my sleep, I don’t have much to complain about, do I? I might be uncomfortable with the prospect of being murdered in my sleep during my waking hours, and those close to me would definitely prefer that I was not murdered in my sleep. But for myself, I’d no longer be around to miss out on things. Even if I was two days away from finishing my magnum opus that would cement my place in history, give me the satisfaction of achieving my goals, and make clear to my loved ones how much they mean to me, it’s not as if I’ll be conscious to say ‘man I wish that deranged consequentialist would have waited a few days.’

          • Doctor Mist says:

            It makes sense to try to improve things for people who actually exist, but it doesn’t make sense to create new people just for the sake of it.

            Why? Is there no legit reason for creating new people except for the improvement of our own state?

            Even were that the case, something like Parfit’s analysis can be made to fit. The comparison is, say, between (A) a population of N all at a level of 100 utilons, (B) a population of 2N, half at 100 and half at 95, and (C) a population of 2N all at 99.

            The normal argument is that B is preferable to A, because the N at 95 are pretty happy, and that C is preferable to B because the total happiness is higher; therefore C is preferable to A.

            But you can easily argue that B is preferable to A, even to the population of A. Perhaps the population of A wants children. Perhaps they want company. Perhaps they want shopkeepers. (There are less morally sound motivations: perhaps they want prostitutes or slaves, however well-treated. Assume the motivation is defensible.)

            From the standpoint of the population of A, B is clearly superior to A (their standing is the same and they have the company they desire) but C is probably inferior to A (their standing is reduced despite the company). Should that population consider B to be inferior, merely because once there the net population would consider B inferior to C? This suggests a selfishness that seems at odds with concluding that this is the morally correct analysis. I don’t say that it’s wrong, exactly, but it has a distinctly fishy smell.
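
            For what it’s worth, here’s the A/B/C comparison tallied up (same utilon numbers as above; N is set to 1 purely for illustration):

                # Totals and averages for the three populations above (N = 1 for illustration).
                N = 1
                populations = {
                    "A": [100] * N,              # N people at 100 utilons
                    "B": [100] * N + [95] * N,   # 2N people: half at 100, half at 95
                    "C": [99] * (2 * N),         # 2N people at 99
                }
                for name, pop in populations.items():
                    print(name, "total:", sum(pop), "average:", sum(pop) / len(pop))
                # Total utility ranks C > B > A, while average utility ranks A > C > B --
                # which is the tension between the "normal argument" and A's own standpoint.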

      • MostlyCredibleHulk says:

        I wonder how one could arrive at such a conclusion. I mean, besides the basic problem of humans deciding what is better for non-humans based on zero common experience, even among humans there is wide disagreement about whether it is better to live with suffering than to die. And given that laws against assisting suicide and euthanasia still exist in most places, the majority opinion seems to be that living with suffering is better – in fact, so much better that anybody choosing the alternative must be crazy. So this conclusion sounds a bit suspiciously self-serving, especially when it is promoted by the same people whose cause it conveniently serves.

    • Even for those who assign little or no value to preventing cow and chicken suffering, there are other important ways in which vegetarians benefit the world. For example, the threat of global warming could be reduced by not eating meat.

      • Dedicating Ruckus says:

        Vegetarianism is more efficient than non-vegetarianism, in that you can theoretically keep yourself fully nourished with fewer inputs under vegetarianism than while including meat. This is just trophic levels in action. However, this is true of literally every consumer decision we make, and it’s relatively rare that those who argue for a moral obligation to vegetarianism on this basis will also say we must live like mendicants in all other ways. Thus, I’m inclined to think this is not their real argument.

        • rlms says:

          This argument is more applicable to people for whom vegetarianism is not a great sacrifice. If a year of vegetarianism is as environmentally beneficial as, say, driving 1000 miles less, it is probably a very efficient way (in terms of impact/effort) of reducing your contribution to climate change.

  3. Jameson Quinn says:

    This is also a good lens to use to look at hypocrisy. When two groups are arguing about axiology, it’s common for one group to say to the other “but you’re a hypocrite, because you don’t fully apply that in your morality”; ditto for arguments about morality and accusations about law. In the other direction, it’s possible to believe that something would make a good law but not a good moral precept (and thus, not worth following until the law is passed), or a good moral precept without being true of axiology (less clear about this one… I think that there’s a general moral precept that the right way to promote moral precepts is to follow them).

    So, when accused of hypocrisy, it should be OK to say “you’re mixing up the levels there; in asking my [morality] to reflect my [axiology] perfectly, you’re actually being a hypocrite yourself, because you don’t do that.”

    Of course, the real world doesn’t fall into such clean boxes. Zealots actually do want their axiological principles to be applied morally, and people can be zealots to a greater or lesser degree; in practice, it can be hard to say exactly when they become hypocrites.

  4. robirahman says:

    Ooh! You just gave me an idea. What about legal offsets? Wherein you break a law but then rectify this by preventing further lawbreaking, or supporting other cases of law enforcement.

    Actually, now that I think about this further, I am certain I did not just invent this concept. Governments have been doing this for centuries. For example, in the classic formulation of the prisoner’s dilemma, one criminal is offered the chance to reduce his own sentence by revealing information that would incriminate his confederate. Or just plea bargains in general. Or cities that used to reward civic leaders with special privileges, such as the right to park illegally or in handicap spots.

    Now that I am aware that this concept exists, I can probably find many other examples.

    • Dedicating Ruckus says:

      This doesn’t seem like it really counts, because to the degree that these practices are codified in law or executed by those with the legal authority to do so, they are wholly legal in themselves, not a tradeoff to increase the total law-abiding-ness, or whatever. I would describe this kind of thing more as an edge case in the law meant to optimize axiology via legal methods.

      • I’m pretty sure that in 18th century English law, a criminal could get immunity by turning in some number of other criminals. His crime was still against the law but his contribution to enforcing the law substituted for punishment.

        • Nancy Lebovitz says:

          Some plea bargains work like that– a reduced (or no?) sentence in exchange for turning in other criminals.

        • MostlyCredibleHulk says:

          From what I heard, it is common to do the same in 21st century too. E.g., police catches the low-level drug dealer and then tells: “well, you’re screwed, you’re going away for a long time… unless, of course, you help us catch five of your peers. Or three of your bosses”. Very bizarre scenarios then can (and do) unfold if the said criminal actually only has two drug-dealing friends instead of the necessary five…

    • Tracy W says:

      Other examples of this in action would be conscription (take an innocent person and send them off to fight and perhaps die against their will), or pardons.

    • Nyctef says:

      I, for one, support any moral framework that lets me become Batman

    • vV_Vv says:

      Actually, now that I think about this further, I am certain I did not just invent this concept.

      It’s called vigilantism.

    • Salentino says:

      Not sure if this quite fits what you’re describing, but restorative justice is a thing. https://en.wikipedia.org/wiki/Restorative_justice

  5. yodelyak says:

    This nicely aligns with the joke I’ve always used to gesture at the way I’ve addressed this, which gets at the same idea without ever being explicit…

    The joke goes, “Offsets are great; I think we should use them more, and in more areas of life. I would never cheat on my partner; I’d feel so guilty. But if I meet someone who is regularly cheating on their partner, I might consider paying them not to on one occasion when they otherwise would. That way I could have one guilt-free night for myself.” Axiology versus morality sort of gets at this. But I think I still prefer the joke for making the point, because axiology and morality are much more entangled than it’s easy to explore (and always in different ways for different people).

    • TeMPOraL says:

      I don’t think the joke works, though – cheating is not cumulative, the way e.g. murder or theft is. Cheating twice might be worse than cheating just once, but I don’t think anyone would agree it’s two times worse, and cheating 20 times is definitely not 20 times worse.

      On the other hand, if you could find someone who’s about to cheat for the first time, and prevent them from doing so, also ensuring they remain forever faithful – now this could qualify as an offset for you cheating.

  6. Taymon A. Beal says:

    The distinction between axiology and morality never made sense to me before, because I always basically defined the correct morality as “do whatever axiology says”. Obviously I didn’t actually do this in real life but I chalked that up to my being a hypocrite.

    It’d be interesting if this got me to adopt a new moral system, but I’m not sure that’s actually going to happen. In particular, my intuitions still say that the answer to “should you do thing X which would make the world better but is also kind of psychologically unrealistic?” is “yes, duh”. And the standards I hold other people to, in practice, I don’t think of as being based on morality, but on something closer to (though obviously not identical to) law.

    • Tracy W says:

      Hayek has an interesting take on it: that our sense of morality is evolved, in both the biological and social senses of evolved, and thus can’t be converted to a set of explicit principles. Which is why when confronted with scenarios like “kill one healthy person to use their organs to save five lives” we say “that’s wrong, you need a better moral system.” Or, conversely, if a moral system says “never treat others as means, only as ends” and we have Scott’s “paper cut to cure cancer” scenario then we do the paper cut.

      • Peter says:

        a moral system says “never treat others as means, only as ends”

        Note that whatever moral system that is, it’s not Kantian. The second formulation of the Categorical Imperative says nothing about not treating others as means; instead, it says under which conditions it may be done.

        (Also, “treating others as ends” is a really terrible turn of phrase. AFAICT it’s a category error almost as egregious as “colourless green ideas sleep furiously”, at least Chomsky had a good reason for his nonsensical utterance.)

    • Do you think other people should be blamed or punished for suboptimal axiology? If not, I do not think axiology is your real morality.

      • Doctor Mist says:

        If not, I do not think axiology is your real morality.

        Or else Taymon’s morality is an uncommonly libertarian one, which exists only to tell Taymon what Taymon should do, and everybody else’s moralities are their own problems.

        I confess there is a certain appeal to this approach. But it does raise certain meta-questions: does Taymon think it’s moral to deprecate murderers, and if so on what grounds?

  7. liskantope says:

    This post puts forth a reasonable-sounding proposal, but I can’t get on board with it because, as somewhat of a consequentialist utilitarian myself, I’ve never been able to get on board with moral offsets in the first place.

    I mean yes, if someone chooses to do action A which results in a 10-utilon increase in addition to doing action B which results in a 10-utilon decrease, then we could say that their choice “comes out even” and is neither moral nor immoral. But if we define the “right choice” to be the best choice out of the obvious available ones, then it is never the right choice to do both actions A and B rather than doing action A and just not doing action B.

    That for me has always been the way I justify the fact that offsetting murder by donating enough money to save a life is always inexcusable: you could instead have refrained from murdering that person while still donating that same money. And yes, by the same logic, burning fossil fuels while at the same time donating to environmental causes is also inexcusable (although, of course, it is obviously less so if your share in damaging the environment results in less harm than is done by murdering one person). The only inherent difference between these comes from the effects that come directly from breaking the law (the decrease in social-trustons suggested in the post), which is obviously significant but in my opinion a bit superficial to the main discussion.

    There is also the separate matter of judging someone’s moral character, which I believe is based in part on the extent to which they go out of their way to violate societal norms rather than rote adherence to “default behavior” in their society, but that’s a complicated side debate. Still, I certainly think much worse of someone’s character if they’re a murderer as opposed to a gas-guzzler (even if they guzzle enough gas to do as much harm as murdering someone) for this reason, and that is probably the intuition that leads people to view the former as obviously in a different moral ballpark from the latter. Maybe that’s close to being equivalent to what is being argued in the post, but I do think it’s important to distinguish judgment of character from judgment of actions.

    • Jameson Quinn says:

      I thought you were going somewhere different with “moral character”. In my view, moral character is sometimes demonstrated by breaching commonly-held moral precepts, not by keeping them, if you feel they are wrong. I guess that could just be a meta-precept, but when you need to add epicycles like that, it makes me question the model.

      • liskantope says:

        Your idea of “moral character” seems completely compatible with the aspect of it that I suggested: in terms of character, you get extra credit if your right action goes against societal norms and extra demerits if your wrong action goes against societal norms.

    • Antistotle says:

      That for me has always been the way I justify the fact that offsetting murder by donating enough money to save a life is always inexcusable: you could instead have refrained from murdering that person while still donating that same money.

      Well, but I’m a professional assassin, and that’s how I GET the money.

      I get paid to kill questionable to evil people, and I spend part of my fee on saving orphans and puppies. No one pays decent money to kill innocent people you know. Well, except sometimes when a husband wants to kill his wife. I don’t take those contracts. When a wife wants to kill her husband, it’s Ok because Patriarchy or whatever.

      Note, this has been an example intended to be somewhat humorous. Really there isn’t enough contract killing out there *in the first world* to make a living killing people. It’s too hard to get away with, and most people just aren’t willing to pay enough. That, and I’m not enough of a sociopath to kill someone who’s not a direct threat.

    • Alsadius says:

      The flaw there is that acts never just have one consequence. You’re not burning oil just to increase carbon so you can later offset it, you’re burning it to do something of value – driving to work at an orphanage, flying to Europe for a better historical understanding, or even just making plastic toys for your kids to play with all use oil and produce carbon, which is bad, but the good effects offset them. If you can *also* cancel out the bad side effects, while retaining the good primary effects, that’s even better.

      • Ketil says:

        This.

        I tend to think of it the other way around: I first decide to spend some of my surplus to do some good: offset carbon or save lives, for instance. I will then choose to do this economically, getting as much good as possible per surplus sacrificed. So I turn to malaria nets or deworming, since EA tells me this saves more lives per dollar given (or if I’m cynical, I can give fewer dollars). Or I replace beef with chicken but keep my overseas holiday, since the former is less of a sacrifice to me than the latter.

        I could of course do all of the above, but it makes sense to start with the things where the marginal utility is most favorable.
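
        As a rough sketch of that “most favorable marginal utility first” rule, here is a hypothetical allocate helper in Python (all costs and caps below are made-up placeholders, not real charity figures):

        ```python
        # Greedy allocation of a fixed charitable surplus: fund the options with
        # the best good-per-dollar first, then move down the list. All numbers
        # are invented purely for illustration.

        def allocate(budget, options):
            """options: list of (name, dollars_per_unit_of_good, max_useful_spend)."""
            plan = []
            for name, cost_per_good, max_spend in sorted(options, key=lambda o: o[1]):
                spend = min(budget, max_spend)
                if spend <= 0:
                    break
                plan.append((name, spend, spend / cost_per_good))
                budget -= spend
            return plan

        options = [
            ("malaria nets / deworming", 5.0, 2000),   # cheapest per unit of good
            ("carbon offsets", 20.0, 1000),
            ("replace beef with chicken", 2.0, 100),   # cheap, but limited headroom
        ]
        for name, spent, good in allocate(1500, options):
            print(f"{name}: ${spent:.0f} -> ~{good:.0f} units of good")
        ```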

      • liskantope says:

        Well if we’re talking about good action A that requires harmful action B to execute, then that’s another scenario, and of course that’s a better solution if it’s available to you. But let’s stick with the least convenient possible world for now. It seems that the discussion about moral offsets treats the case where action B is freely chosen and its net effect is calculated to be -10 utilons (e.g. choosing to drive somewhere when you could almost just as easily have cycled), and good action A is chosen at the same time to offset B but is not contingent upon choosing B.

    • Jiro says:

      I mean yes, if someone chooses to do action A which results in a 10-utilon increase in addition to doing action B which results in a 10-utilon decrease, then we could say that their choice “comes out even” and is neither moral nor immoral. But if we define the “right choice” to be the best choice out of the obvious available ones, then it is never the right choice to do both actions A and B rather than doing action A and just not doing action B.

      If you’re going to use that reasoning, it would also apply to, for instance, not using 11% of your income to buy bednets to prevent malaria. Or 12%. Or 13%, etc. If your obligation ends at some point, it cannot also be true that the right choice is the “best choice” in the sense of the choice that maximizes utility.

      • liskantope says:

        Sure, I have no problem with where this reasoning leads me. My obligation ends at the point that I no longer have enough of my own money to otherwise contribute to society and to live a healthy life.

        • Jiro says:

          Then repeating something I said in a different context on Reddit: You are profoundly weird, and the analysis you are using gives no insight into the morality of people who are not similarly weird.

          Nobody except a few EAs thinks they have such an obligation, belief in such obligations is a trap for the scrupulous, and it utterly fails to capture what most people would think of as morality.

          • liskantope says:

            For what it’s worth, I don’t actually practice at all what I preached above. I was taking your word “obligation” as synonymous with “action that should be performed” and using it abstractly that way. But of course none of us does 100% of what we should do all of the time, and I’m comfortable accepting that I am no exception to this. That said, I realize I should be careful about framing things in this language (especially the word “obligation”) around those who are prone to being overly scrupulous.

            Anyway, I think this is distracting from my point about choosing A and B. Forget about obligations for a moment; I’m talking about making the right (= better) of two distinct choices.

    • you could instead have refrained from murdering that person while still donating that same money.

      What if the murder was what got you the money?

      • liskantope says:

        Well of course that would be different: it would mean that one choice was made which probably comes out above even as far as increasing/decreasing utilons goes. But that’s not the scenario people seem to be talking about when discussing moral offsets.

        • Amused Muser says:

          The entire notion of moral offsets and the discussion surrounding them rests on the idea of the actor being incapable of choosing {don’t do direct immoral thing, but still do offsetting action}. Often the reason for the inability to pursue this option is handwaved, or put down to a general psychological inability to choose that option.

          DavidFriedman’s setup (the directly immoral thing being causally required in order to perform the offsetting action) preserves the usual assumptions surrounding moral offsets, where you have to choose between no directly immoral act and no moral offset vs. the directly immoral act plus the moral offset. That it does so in a more concrete and plausible way makes it a *more useful* way of framing the situation, but actually does not at all change the situation itself.

          • liskantope says:

            Maybe that’s the way moral offsets are usually discussed — I haven’t read any of the literature. And that framework works okay for the argument made in this post, I guess. But I’m confused because the actual concrete examples I keep seeing don’t seem to fit such an assumption at all. I mean, someone who eats meat while donating to animal rights causes is perfectly capable of becoming vegetarian and still donating to those causes, right? I don’t see anything physically, financially, or even psychologically stopping them (in fact, there are obviously plenty of vegetarians who donate to such causes). And it seems to me that even if I can legitimately diagnose myself with a psychological inability to both refrain from eating meat and donate at the same time, then I could just as easily legitimately diagnose myself with a psychological inability just to refrain from eating meat and the whole exercise about evaluating free decisions becomes pointless.

            But it’s the end of my day and I’m tired and probably just talking past what I’m trying to reply to.

          • Amused Muser says:

            Oh, I basically agree. Even though most discussion of moral offsetting (that I’ve seen) is careful to explain or assume a context where you can’t do both “good” things, the conclusions from that narrow subset of cases seem to be applied willy-nilly to situations where that condition isn’t met — and where the “moral” thing to do is obviously to not do the bad thing and then do the “offset” anyway.

  8. actinide meta says:

    I’m kind of shocked that you believe in political authority. Is this an examined view? If you haven’t read Michael Huemer on the subject, I highly recommend (the first half of) The Problem of Political Authority.

    IMO there is no reason to think that law ever trumps morality, let alone that it almost always does. The right relationship of law and morality is that something *should* be illegal only if it is morally permissible to enforce the obligation not to do it. Whether something *is* illegal is often of practical importance, but not moral importance, except to the extent that it is indicative of a coordination equilibrium.

    • Dedicating Ruckus says:

      You can argue for a long time about the subtleties of the moral right to first use of force. But you can also just kind of avoid the issue, and say “obeying the currently constituted government is a Schelling point that aids greatly in coordination across the whole of society, and when this coordination breaks down the failure modes can be extremely bad (viz. civil war, genocide), so obeying is good in and of itself over and above whatever moral valence the acts commanded have”.

      IMO this kind of argument is basically sufficient to establish that obeying the existing government is a positive duty outside of extreme situations. Of course, you can have all kinds of interesting arguments about what constitutes an extreme situation, including some plausible ones that the current US qualifies. But once you’re making those arguments at all, you’ve broken the Schelling point and you’re back to a war of all against all in terms of who has the moral right to break the law in pursuit of their personal objectives. (You can extend this into the argument concerning who or what entity should rule; regardless of abstract moral rights, supporting the entity that does actually rule is a Schelling point that’s good at avoiding civil wars.)

      Typically, at this point, anarchists will claim that there could be a better equilibrium without any entity that we’d now recognize as a government (and that avoids claiming authority as a moral right). Examining the actual plausibility of this claim is a matter of complex historical/sociological analysis that easily gets derailed into fights over details (and/or throwing around “you just need to read my 800-page tome about how it could work in principle!”, which is pretty useless in a debate context), but you can bypass it entirely just by noting that there’s not a plausible path from the current world to Anarchotopia that doesn’t run through civil wars, and therefore any agitation for Anarchotopia is pretty bad in most reasonable axiological estimations.

      • actinide meta says:

        There is no reason to think that failing to obey government authority will, in general, lead to the collapse of civilization. Almost everyone breaks the speed limit and yet we are still here. If this seems like a plausible concern in a particular case, then, great, you have a reason (other than political authority) to obey in that case.

        • Dedicating Ruckus says:

          It’s correct that in most cases, any one particular act of disobedience will not lead to a catastrophic social breakdown (particularly in cases where disobedience doesn’t really map to an IPD defection, e.g. the widespread social norm that the real speed limit is 5-10MPH over the posted one, in the US). But any one act of disobedience is still entropic; it erodes the strength of the Schelling point “obey constituted authority”, which tends to increase the likelihood of more law violations in the future, which further erode the norm, repeat until machetes. This is a pretty standard tragedy of the commons/distributed prisoners’ dilemma phenomenon, and we generally don’t have any trouble condemning defection in those games as an immoral act in other realms.

          (There also exists a converse obligation on the part of the existing government to lessen this dynamic on their own end, a la the standard advice for military commanders “don’t give orders that you know won’t be obeyed”. This is one plausible ground for an argument that the current situation in the US is indeed extreme enough to undermine this kind of logic; following a “three felonies a day” train of reasoning, actually strictly abiding by all directives of the government is an untenable burden, so no one does it, and the value of that Schelling point is largely negated. In practice, we seem to resolve this by maintaining a loose collective idea of which governmental directives are actually serious enough to care about, and which aren’t, and basically ignoring all of the latter. But this unspoken equilibrium is of course much weaker than actually being able to use the letter of the law for your coordination point, and I strongly condemn the current US government for diluting its authority in this way.)

          This kind of Schelling point can be seen in similar terms as the rules in rule utilitarianism. Of course, you can just say “don’t break the law if it seems likely to lead to a breakdown in social order, but don’t feel constrained otherwise”. But people in general are not trustworthy to predict what acts will have an unacceptable cost in social cohesion, particularly when they have ulterior motives. Making everyone follow the same law is a way of making sure the equilibrium is actually preserved, at the expense of some personal freedom on the part of those who can actually make the utility calculation correctly. In a non-pathological situation (i.e. not the one we have now), obeying the law is not an unreasonable burden on anyone, and so the trade-off is worth it.

          • actinide meta says:

            I feel like this “non-pathological situation” is at best on the same order of plausibility as Anarchotopia. But stipulating that such a situation, where the laws are almost perfectly just, but not exactly, could exist, I’m still not convinced that only the Schelling point of perfect obedience to written law would stand between civilization and savagery. After all, if the written law is mostly just, the behavior of people who act morally but ignore the law should be hardly different from those who are explicitly law abiding. But I suppose I could be, as a matter of fact, wrong.

            Can you think of a historical example where, plausibly, widespread disobedience of an unjust law or command of government led to catastrophic social breakdown? This doesn’t really fit in with my model of how societies fail.

          • vV_Vv says:

            After all, if the written law is mostly just, the behavior of people who act morally but ignore the law should be hardly different from those who are explicitly law abiding.

            But how do you deal with those who ignore morality? Social shaming only works to an extent: high status people can get away with lots of stuff because their status protects them, and very low status people are immune to social shaming because they have no social capital to lose.

            Can you think of a historical example where, plausibly, widespread disobedience of an unjust law or command of government led to catastrophic social breakdown?

            Lack of regulation can cause a “tragedy of the commons,” where everybody involved knows that what they are doing causes collective harm, but there is an individual incentive to do it, and they can’t prevent others from doing it, and usually there is a disincentive not to do it if many are doing it (e.g. you get driven out of business), so everybody does it.

            Scott gave various examples in his non-libertarian FAQ.

          • After all, if the written law is mostly just, the behavior of people who act morally but ignore the law should be hardly different from those who are explicitly law abiding.

            No, because as Scott notes, the law contains lots of arbitrary cut-off points like ages of consent and speed limits. You are better off with them than without them, but ideal moral reasoners in isolation would not be able to converge on them. A moral person who acted on morality alone, rather than on the letter of the law, would therefore often break the letter of the law.

            Can you think of a historical example where, plausibly, widespread disobedience of an unjust law or command of government led to catastrophic social breakdown?

            Things can get distinctly suboptimal without reaching catastrophe — for instance, countries where people ignore road safety, or don’t pay their taxes.

            And dramatic shifts to a different order can certainly happen — think of the civil rights movement, and the downfall of apartheid. You don’t see a lot of moral vacua or power vacua, because there is usually someone waiting in the wings with a blueprint or a New Order.

            But how do you deal with those who ignore morality?

            The law.

        • There is no reason to think that failing to obey government authority will, in general, lead to the collapse of civilization.

          There was a time when I thought it would. That presented a problem. I could see no good moral argument for an obligation to obey laws but it seemed as though if there were no such obligation things would fall apart. So I decided to obey laws until I had adequately resolved the issue in my mind.

          Eventually I noticed that I seemed to be the only person acting that way and others thought my doing so distinctly odd.

          I concluded that stability depends on some mix of laws being commandments that there are good moral reasons to obey, and enforcement being sufficient that the amount of disobedience to laws people don’t feel morally obligated to obey doesn’t bring down the society. That seems to describe the society I live in, and it hasn’t collapsed yet.

          And I now feel free to offer a glass of wine to a friend who happens to be below the local drinking age.

          • nimim.k.m. says:

            Have you considered a case where the laws that nobody feels obligated to obey are things like “taking bribes is illegal”, and thus the majority of everyday transactions with agents of government feature brown envelopes stuffed with cash?

            Such countries exist, and they are not collapsed wastelands, but they nevertheless appear to be less well-functioning than societies where everybody abides by the norm of not taking (or offering) bribes.

      • but you can bypass it entirely just by noting that there’s not a plausible path from the current world to Anarchotopia that doesn’t run through civil wars

        Why do you think that?

        Current governments spend about four times as large a fraction of national income as did England and the U.S. in the 19th century, and that change occurred with no civil wars. Is it obvious that increasing the size of government from 10% to 40% is enormously harder than decreasing it from 40% to zero?

        Do you also think that all plausible paths to minarchy run through civil war? It’s almost as big a change.

        • vV_Vv says:

          Is it obvious that increasing the size of government from 10% to 40% is enormously harder than decreasing it from 40% to zero?

          It seems very plausible that decreasing the size of the government becomes exponentially harder as you approach 0%.

          Likewise, increasing the size of the government also becomes exponentially harder as you approach 100%: even the Soviet Union had some amount of free trade, both legal and illegal, after all.

          Do you also think that all plausible paths to minarchy run through civil war? It’s almost as big a change.

          It seems to me there is a big difference between a state that only punishes things like murder, theft, etc., and no state at all.

          Obviously, a functional minarchist society can suppress interpersonal violence before it escalates to full civil war, while an anarchic society can’t.

          As a real world example, consider a quasi-anarchic, but not minarchist, society (Libya), where a pet monkey pulls the hijab of a woman and in the next four days people fight with tanks and artillery, leaving 16 dead and 50 wounded. Yes, seriously.

          I can’t see any obvious path from any current society based on the rule of law to an anarchist utopia that plausibly avoids that kind of failure mode.

        • Dedicating Ruckus says:

          It seems to me pretty obvious that increasing the size of government is easier than decreasing it (at least in current Western contexts). The expansion of government is an entropic process; all systemic tendencies currently active favor expansion, while none favor contraction.

          Even given a counter-entropic shrinkage back to 10%, I think there’s a significant difference between that and going all the way to 0%, as vV says above.

          (I also pretty much think that going from now to minarchy requires civil war, but that’s because we’re currently in a situation with extreme civil war risk, not necessarily because minarchy is as inherently unlikely as anarchy.)

          The basic reason why Anarchotopia requires civil war is because the state power is currently held by someone (whether individual or entity), and it’s highly profitable for them. Going to anarchy requires taking that away from the current holder, and the current holder has a lot of guns. Meanwhile, you could have a minarchic government that’s still very profitable for the holder of state power, but doesn’t involve all the stupid bullshit that is most of the current government by weight.

        • nimim.k.m. says:

          Current governments spend about four times as large a fraction of national income as did England and the U.S. in the 19th century, and that change occurred with no civil wars.

          I’m willing to agree that civil wars are maybe not a necessary condition for that kind of pathway. However, I’ve assumed that the global world wars (the center of which was the ideological battle over various strands of socialism and counter-reactions to it) played a significant part in the increasing amount of government participation in society, so I feel it’s a bit of a cheat to treat that historical case as an example that such changes can happen peacefully, without cataclysmic events of death and misery.

        • thad says:

          How do you peacefully get a nuclear government to stop existing?

          That’s not the only reason I think it’s very difficult to get to a proper anarchy, but I find it the most compelling.

      • but you can bypass it entirely just by noting that there’s not a plausible path from the current world to Anarchotopia that doesn’t run through civil wars

        You may be interested in hearing about seasteading. 🙂 (Though it is a bit of a one-trick pony, in that it would only work until people run out of reasonable space and resources to build new seasteads and have them still be flexibly rearrangeable. It is most definitely a not-running-through-civil-wars option, though, regardless of whether one considers it a good option.)

        • vV_Vv says:

          You may be interested in hearing about seasteading.

          Which is a fantasy inspired by Ayn Rand’s novels. In the real world, anything valuable at sea needs to be guarded by government military vessels to prevent it from being raided by pirates.

    • Scott Alexander says:

      Have you read http://www.daviddfriedman.com/Academic/Property/Property.html ? If not, read it and see whether it resolves your objection.

      • actinide meta says:

        I have read it. It is explicitly a positive account, so I’m having to guess what you think it has to say normatively. I certainly think there are many laws, including ones that establish Schelling points for property rights, that one has reason to obey. I mentioned coordination as morally relevant. I don’t think this creates a presumption that one must obey arbitrary laws when it is neither otherwise morally required nor prudent.

        To take a concrete example that doesn’t involve Nazis, do you believe that the founding of Uber and Lyft was wrong because they broke legally enforced taxi monopolies in many places? Is it likely to lead to the collapse of civil society, since having left the Schelling point where everyone obeys the law as written we might as well all become robbers?

        Maybe I just misunderstand your statement that obeying the law should have nearly lexicographic priority over everything else?

        • gph says:

          I think the point about law having lexicographic priority is meant in the context of a hypothetical society which is perfectly just and filled with individuals all following a shared morality to the best of their ability.

          I think what you’re trying to get at would be examples of a breakdown between what you feel is moral and what the law says. I think all of us disagree with one law or another and even disobey at times. But we can’t all start disobeying all the time with zero authority or consequences. That would lead to anarchy/civil war/genocide.

        • Jack Lecter says:

          having left the Schelling point where everyone obeys the law as written

          If that point was still intact, I would endorse this.

          I like Uber, but I imagine I’d like the Rule of Law more if I ever found any.

          I get that it’s possible to imagine sets of laws so awful that anarchy is preferable, and slightly less awful sets where customary disobedience is preferable, but I think people underestimate the benefits of having a formally defined Schelling point. In particular, the feedback mechanism which provides incentives to remove poorly-written laws from the books, rather than leaving them to be used against whoever happens to be unpopular at the moment.

          • actinide meta says:

            You are arguing, in favor of a duty of obedience, that it would as a second order effect lead to support for less unjust law, because people would be less inclined to hypocritically support a law while disobeying it. Well, perhaps. But if we are arguing for moral rules on consequential grounds, why not just ask people not to support laws which they do not obey? Or better yet not to support unjust laws even if the costs do not fall on them personally? These rules are less onerous and more effective in producing just law.

            Also, can I chalk you up as another interlocutor who defends a duty of obedience in a hypothetical situation which has never existed in the history of the world? You could be right, but I don’t think that justifies what Scott said originally.

          • Jack Lecter says:

            @actinide meta

            It depends on the hypothetical situation in question: if you mean a duty to obey the law (qua the law), I don’t feel that in our current situation.

            I’ll admit my thoughts aren’t as clear as I’d like, but as of now I’m arguing less for obedience than enforcement: I don’t blame people for breaking the law (qua ‘breaking the law’), but I do blame those sworn to uphold it for not doing so.

            why not just ask people not to support laws which they do not obey? Or better yet not to support unjust laws even if the costs do not fall on them personally? These rules are less onerous and more effective in producing just law.

            There are really two answers to this, the practical answer and the biographical one.

            The practical answer is that it seems like people are terrible at this. I’ve tried to explain why, but what I have so far is:
            1. If it doesn’t affect them and theirs, and it’s only being used against unpopular people, they don’t tend to care if the law could theoretically be used against ‘innocent’ people.
            2. There’s a strong presumption that authority is only used against bad people, even in the face of evidence to the contrary (this may or may not have to do with the Just World Fallacy).
            3. There are coordination problems involved which make it impossible to unilaterally reform the system without mass public support.
            4. This is all fairly meta-level, and most people tend to operate on the object level. In practice, this often means making decisions on tribal grounds, which results in a lot of people opposing things largely because the other tribe supports them. As an object-level example, consider Blue reactions to things that look like ‘deregulation’ and Red reactions to things that look ‘soft on crime’.

            The biographical answer is that your suggestions are both intelligent and well-thought-out, and I spent much of my childhood, my whole adolescence, and my early adulthood nagging everyone around me to do exactly as you suggest. Obviously it didn’t ‘work’ in the sense of causing societal change, and the reactions of the people I spoke to eventually convinced me that it would be much more doable to put pressure on enforcers of the law rather than the general citizenry, given their relative numbers. In retrospect, the people I spoke to were obviously such a small sample of the population as to constitute fairly weak evidence, so if other people disagree with my conclusions I’m quite willing to conclude I was premature.

        • Mary says:

          To take a concrete example that doesn’t involve Nazis, do you believe that the founding of Uber and Lyft was wrong because they broke legally enforced taxi monopolies in many places? Is it likely to lead to the collapse of civil society, since having left the Schelling point where everyone obeys the law as written we might as well all become robbers?

          They did obey the law exactly as written. The Schelling point would have to be obey the spirit as well as the letter. Which is long gone, if it ever existed.

          • Jack Lecter says:

            I tend to be skeptical of claims involving ‘the spirit’ of the law.

            In my experience, ‘the spirit’ of the law is often totally obvious to multiple groups of people, all apparently sincere, and all in disagreement. This is kind of a hobby of mine, actually. I think legislators have an incentive to make laws broad enough to catch all the people they want to catch, and relatively little incentive to narrow the laws to prevent catching people they don’t.

            On an unrelated note, let me say how much I enjoy your comments on here. I’m not great at names, and there are a lot of regulars here, but I’ve made a point of remembering a couple (the profile picture helps). I’m honestly not sure if I tend to agree with you: I have a tendency to overinterpret and a corresponding reluctance to make inferential leaps, and your comments are often in the form of questions, which are necessarily ambiguous. But they tend to be questions I think need to be asked.

      • Jack Lecter says:

        the widespread social norm that the real speed limit is 5-10MPH over the posted one, in the US

        I hate this. On my worse days, I can almost convince myself that it would be worth a revolution just to get rid of that stupid fecking thing.

        Which maybe says something about the fragility of political authority, given how rare that opinion is.

        • Ketil says:

          Well, you’re also supposed to pay 15% over the stated price in restaurants. Just consider speeding to be tipping for highways.

        • Winter Shaker says:

          I think this is a non-crazy norm. Speed limits are sensible, but having absolutely no leeway would, at the extreme, have people concentrating so hard on their speedometer that they take their eyes off the road a dangerous amount of the time. Whereas glancing back at it often enough to make sure you don’t drift more than 5mph above the limit is probably safer.

          This is distinct from restaurant tipping where there is a norm that you as a customer personally get to influence particular staff’s rate of pay, a norm that doesn’t apply in almost any other line of business and that we would probably be better off without – service staff would simply get paid what their contract says the business has to pay them, and diners could simply pay what the bill says. Indeed there are countries where there is no tipping culture; it’s just expected that restaurants will pay their staff a fair wage. But there are no countries that I know of that don’t have speed limits (and if there are, I doubt they have much in the way of paved roads in the first place).

          • Dedicating Ruckus says:

            I think things would be strictly better if the posted speed limits were bumped up by the 5-10MPH people tend to drive over them, and then the new limits were actually enforced. People will generally speaking drive as quickly as they feel safe driving anyway, and they’re usually close enough to correct about this; if you actually put the limit at people’s instinctive safety margin, then you can still penalize those who are being blatantly reckless, while not also fostering widespread norms of “laws don’t necessarily actually apply”.

            (Meanwhile, tipping in restaurants at least creates an immediate incentive for the restaurant staff to act pleasantly, on a shorter timescale than the restaurant owner deciding that someone is driving customers away and firing them, which has too much friction except for blatant cases. I tend to think this leads to better results than otherwise, though I admittedly have never spent much time in countries where tipping is not the convention to do a comparison. (And any such comparison would probably be pretty confounded by normal differences between national politeness norms anyway…))

          • Jack Lecter says:

            Ideally, I was thinking of having the limit be the maximum safe speed, so people would tend to be under it…

            In retrospect, I formed this opinion when I was eight years old, so I may have miscalculated human nature, or failed to account for the difficulties of coordinating multiple agents… I’m not as sure now.

            I think any given system is unlikely to be the best one possible, just because the search space is so large, but there’s a heckuva difference between saying ‘I bet there’s a better way to do this’ and saying ‘You’re all idiots, my system is obviously superior!’

            So I could stand to stop and show a little humility for a while before trying again. Thanks

    • Wrong Species says:

      Let’s say we had a world without government. A group of homeowners sign a contract to hire one company to enforce certain rules. Let’s assume away the origin of the state versus property (we can come back to it later). What is the difference between this organization and a state?

      • baconbacon says:

        Besides the fact that it is voluntarily agreed to by 100% of the population, with explicit contractual limits?

        • Wrong Species says:

          Right. So let’s say that these people have kids. These kids grow up in this place and continue to live there as adults. Now when did they consent to the rules?

          • IrishDude says:

            How do the kids continue to live there as adults? Do they buy their parents or neighbors home and sign a contract agreeing to the HOA rules? If so, it’s at the point of signing the HOA contract that they consent to the rules.

          • Wrong Species says:

            No contract. They just continue to live with their parents as they did when they were kids.

          • IrishDude says:

            If they live with their parents, it seems they’re bound by the rules the parents set on their household. And the parents are bound by the HOA agreements they signed.

            Similarly, suppose my HOA has rules against loud music after 10pm that I agreed to. If a friend from out of town comes to my house, it’s still a violation of HOA rules if he blasts his stereo late at night rather than me, even if he didn’t sign the HOA contract. As property owner, I’d assume responsibility for violations of the HOA rules that happen due to my guests, and therefore would ensure my guests didn’t violate those rules. Any guests in my house, whether they were born there or came from out of town, would need to consent to my rules before I’d allow them to reside at my property. When my kids are adults, they’ll have to agree to my rules if they want to stay.

          • Wrong Species says:

            But with your friend, we can say he implicitly agreed to the rules when he came to your house. But the children obviously didn’t consent to the rules when they were born. So when exactly did they consent?

          • random832 says:

            @IrishDude okay, let’s not mince words. What happens when the parents die and the children inherit the house?

          • IrishDude says:

            @Wrong Species
            At what age do you think kids are capable of consent? Whatever that age is, is when they consent to the rules of the household.

            @random832
            For the kids to take ownership over the house and get the deed signed over in their name, they need to sign the HOA agreement as a precondition.

          • Wrong Species says:

            So wouldn’t that imply that I consent to the rules of the state at the age of majority then?

          • IrishDude says:

            @Wrong Species
            I’d say you consent* to the state in the same way that you consent* to your neighbor when they come to your house with a gun, say they own your house now and you have to follow their rules, and you decide to stay in your house and follow your neighbor’s rules. Or the way you consent* to a carjacker’s demands when you passively hand over your keys to avoid getting hurt. You’ve consented* in that you agreed to follow rules imposed on you under the threat of unjust coercive force.

            Two questions for you.
            1) Per baconbacon: Temporarily putting aside the issue of kids, do you think the original inhabitants of an HOA are being ruled by a state, if 100% of the population voluntarily agrees to the rules enforced by a company they pick?
            2) How do you define a state?

          • Wrong Species says:

            Why is it that the kid consents to the rules when he reaches the age of majority even though he never made an explicit agreement but you don’t consent when it comes to government?

          • IrishDude says:

            @Wrong Species
            No response to my questions?

            Why is it that the kid consents to the rules when he reaches the age of majority even though he never made an explicit agreement but you don’t consent when it comes to government?

            He either consents to the rules, or just coercion can be used to remove him from the rightful owner’s property. The age of majority kid may in some way be consenting under duress (the threat of being kicked out of the house), similar to a boyfriend consenting under duress if his girlfriend threatens to leave him unless he changes some behavior and he agrees to change. The girlfriend owns her body and has the right to withhold her body from the boyfriend if he doesn’t agree to her rules. Same for a homeowner with their age of majority kid.

            I think it may be reasonable to say that adults consent to a state’s rule, but that consent is under unjust coercion from an illegitimate authority, like a man consenting to give his keys to a carjacker, and it seems to me a different kind of consent than a homeowner making rules his adult kids must consent to to reside there. To highlight this different type of consent, consent under the threat of unjust coercion, I thought an asterisk was appropriate.

            Also, consent of the governed isn’t the only thing that matters when evaluating the morality of a situation and differences between states and private associations. The rights and limits of property owners to make and enforce rules, and how one can legitimately come to own property, matter too. The Problem of Political Authority by Huemer has the subtitle “An Examination of the Right to Coerce and the Duty to Obey”. If you want to evaluate differences between HOAs and states, a dive into when someone should have the right to coerce is helpful.

            Also also, for non-rights-based differences between HOAs and states, see Friedman’s essay here. The incentives for private and state owners are different, scale and ease of exit matter, and these differences should lead you to expect private organizations to provide better rules than states.

          • Wrong Species says:

            We could go all day about different facets of libertarianism and political philosophy but I’m specifically asking this question because consent is so vital to the question of political authority. It’s the difference between “tax is theft” and “tax is a payment”.

            I think it may be reasonable to say that adults consent to a state’s rule, but that consent is under unjust coercion from an illegitimate authority, like a man consenting to give his keys to a carjacker, and it seems to me a different kind of consent than a homeowner making rules his adult kids must consent to to reside there.

            You are begging the question. Why is the state’s authority illegitimate when the parents’ authority in this case isn’t? Why is it wrong for the state to demand that you follow its rules but not when the parent does? Why is the state more analogous to the carjacker than the parent?

          • IrishDude says:

            We could go all day about different facets of libertarianism and political philosophy

            I don’t think we’re talking about any facets other than your original question: “What is the difference between this organization and a state?”

            Answering my questions above would help us find where exactly we differ in perspectives. But fair enough, you don’t have to answer any questions you don’t want to. It does make it harder to reach a meeting of the minds though.

            I’m specifically asking this question because consent is so vital to the question of political authority. It’s the difference between “tax is theft” and “tax is a payment”.

            Consent is vital, but not the only element of political authority. Political authority also hinges on who has the right to coerce and what limits there are to that coercion. Whether someone consents to my demands under the threat of force is an aspect of political authority, but so is whether I have the right to threaten force.

            Why is the state’s authority illegitimate when the parents’ authority in this case isn’t? Why is it wrong for the state to demand that you follow its rules but not when the parent does? Why is the state more analogous to the carjacker than the parent?

            I don’t think the state has legitimate property rights. Its ownership claims originate from conquest and fiat, which I don’t think are legitimate ways to become a rightful owner of property. I think parents that purchase a home have gained ownership in a way that ought to be respected. I respect homesteading and voluntary transfer as just ways to gain property rights; I don’t respect conquest and fiat.

            I don’t think the state has legitimate authority in the same way I don’t think my neighbor has legitimate authority if they come over with a gun and claim authority over my property. He doesn’t become the rightful owner with legitimate authority, even if I consent* to his rule. Not even if a majority of my neighbors vote to approve this hostile takeover.

          • sconn says:

            What makes purchase a valid way to get property, but not conquest? I understand purchase *seems* more peaceful, but any property ownership at all comes with the understanding that if you wish to continue to own it, you have to defend it from all comers. And there is no “original owner” — if you buy a house on a lot, at some point that lot was stolen by someone who had it before, back in the mists of prehistory at least, and likely a lot more recently than that. And if it weren’t for a government protecting your title to your property today, another country, corporation, or tribe would quickly annex it.

          • random832 says:

            For the kids to take ownership over the house and get the deed signed over in their name, they need to sign the HOA agreement as a precondition.

            And supposing they refuse, and carry on living in the house anyway?

            He either consents to the rules, or just coercion can be used to remove him from the rightful owner’s property.

            Who is the rightful owner, after the parents have died and before the children go through this deed transfer procedure you just made up?

          • Wrong Species says:

            @Irishdude

            At this point, you’re saying that the main difference lies in how the property was originally obtained. But that’s a far cry from saying that the main difference between the state and property is that the state has “rights” that the other doesn’t. The HOA is “coercing” the kid in the same way that the state is “coercing” you to follow its rules. A state may have a more unjust origin than a property owner, but in a world without an explicit state, property becomes indistinguishable from states.

          • IrishDude says:

            @Wrong Species

            At this point, you’re saying that the main difference lies in how the property was originally obtained. But that’s a far cry from saying that the main difference between the state and property is that the state has “rights” that the other doesn’t.

            I think I have the right to make rules on my property. I don’t think my neighbor has the right to make rules on my property if he comes over with a gun and claims ownership. If you think a group of people can gain the right to make rules on my property through their conquest, then you believe this group of people has rights that others would not.

            I think I have the right to use my money as I see fit. I don’t think my neighbor has the right to take my money without my permission to use as he sees fit. If you think a group of people can gain the right to take my money without my permission to use as they see fit, then you believe this group of people has rights that others would not.

            How you think people should justly gain property is a component of what rights you think people have.

          • IrishDude says:

            @sconn

            What makes purchase a valid way to get property, but not conquest? I understand purchase *seems* more peaceful, but any property ownership at all comes with the understanding that if you wish to continue to own it, you have to defend it from all comers.

            It depends on your view of morality. To me, obtaining property through voluntary transfer is moral, obtaining property through coercive force is not. If you think a thief has the right to whatever they steal as long as they’re successful, then you have a different set of morals than I do.

            And there is no “original owner” — if you buy a house on a lot, at some point that lot was stolen by someone who had it before, back in the mists of prehistory at least, and likely a lot more recently than that.

            If someone has a better claim to current property than the current owners, I think arbitration should be used to figure out how to resolve that dispute. In the absence of that, I think people that bought their property have the best claim to it. I think government’s claim to dominion over all property within its borders is the least defensible.

            And if it weren’t for a government protecting your title to your property today, another country, corporation, or tribe would quickly annex it.

            Maybe, maybe not. I’d like the choice to pick someone else to protect my property if I think they could do a better job for a cheaper price.

          • IrishDude says:

            @random832

            And supposing they refuse, and carry on living in the house anyway?

            Maybe the HOA would issue fines, and if those weren’t paid then they’d put a lien on the house. Eventually that could result in the HOA foreclosing on the home.

            Who is the rightful owner, after the parents have died and before the children go through this deed transfer procedure you just made up?

            I’m guessing this is how deed transfer works now, if a house is passed from parent to child and the house is part of an HOA, but if not I’m certainly open to being corrected. I’m not an expert in inheritance law, but I’d guess the executor of the estate is in temporary control of the property until it is transferred.

          • I don’t think my neighbor has the right to make rules on my property if he comes over with a gun and claims ownership.

            So you think the US, Canada, Australia, etc. should be handed back to their original inhabitants?

          • IrishDude says:

            @TheAncientGeekAKA1Z

            So you think the US, Canada, Australia, etc. should be handed back to their original inhabitants?

            I’ll repeat my earlier comment: If someone has a better claim to current property than the current owners, I think arbitration should be used to figure out how to resolve that dispute. In the absence of that, I think people that bought their property have the best claim to it. I think government’s claim to dominion over all property within its borders is the least defensible.

  9. RohanV says:

    I think there is some sense of “cumulative-ness” where offsets work. For example, emitting carbon is not in and of itself bad, because otherwise you would be doing something wrong every time you exhale.

    Similarly, eating meat cannot be bad, otherwise all carnivores are innately evil. Or if you were stranded on an island, hunting to survive isn’t wrong.

    But cumulatively, we can say that emitting so much carbon that it changes the climate, or eating so much meat that we need a cruel infrastructure to support it, is wrong. But in this cumulative case, the offset works, because we are interested in reducing the total.

    In contrast, offsets don’t work in the case where a single act is wrong.

    • Jiro says:

      I don’t see how you could claim that under most types of vegetarianism, eating meat is wrong, without also claiming that a single act of eating meat is wrong as well. Eating meat is supposed to be wrong because it violates the rights of the individual animal, not just because of the collective effect of lots of meat-eaters on lots of animals.

      • Antistotle says:

        There are many arguments given for why eating meat is bad. Whoever one is arguing with tends to shift around until they find one you are unwilling or unable to argue with.

    • Scott Alexander says:

      “Similarly, eating meat cannot be bad, otherwise all carnivores are innately evil.”

      Killing and eating humans can’t be wrong, otherwise think of how evil lions would be!

      • Dedicating Ruckus says:

        Killing and eating humans is a perfectly valid fulfillment of a lion’s telos, and so it is not wrong for a lion.

        Killing and eating humans is a really bad violation of a human’s telos, and so it is indeed wrong for a human.

        By this argument, killing and eating other animals is a part of humans’ telos with impeccable pedigree, and so it isn’t wrong.

      • Antistotle says:

        To simplify what Mr. Ruckus says[1], lions who hunt and kill humans are hunted with a LOT more aggression than lions who only hunt non-humans. Which means that it might not be morally wrong for a lion, but it’s a really bad idea.

        Humans who kill and eat other humans are pretty much treated the same way.

    • because otherwise you would be doing something wrong every time you exhale.

      I used to make that argument, but I don’t think it’s right. The carbon dioxide you release is from metabolizing carbon that got fixed in the process of making the food you are burning. If you didn’t exhale you wouldn’t eat and that carbon would still be in the atmosphere.

      • sconn says:

        Wouldn’t it still be fixed in the plant that you ate?

        Actually, wait. If you ate meat, then you stopped an *animal* from exhaling instead of you, so eating meat is exhalation-neutral. And therefore … moral?

        • random832 says:

          And if you don’t eat the plant it rots (absent some other intervention to cause the carbon to actually stay fixed).

      • Dedicating Ruckus says:

        On a consequentialist framework, that means that the correct action is to starve yourself to death while also making sure that all the food you would otherwise have eaten gets fixed long-term somehow (e.g. dropping it in the deep ocean).

      • Jaskologist says:

        Isn’t that still arguing that it’s ok because the evil is offset? It’s just that in this case, the offset came first.

        • Jaskologist says:

          Come to think of it, this points me to another important difference which might give a hint as to when offsets are acceptable.

          The ideal level of CO2 in the atmosphere, whatever it is, is not 0, so it isn’t intrinsically wrong to produce it. On the other hand, the ideal number of murders is 0. This is also the ideal number of thefts, papercuts, etc.

          (In the real world, we trade all of those things off against other concerns, but that is a different matter.)

          If there were some proper minimum number of murders we wanted in society, just as there is a proper minimum amount of CO2 in the atmosphere, then perhaps murder offsets would make more sense.

        • Mary says:

          the offset came first

          And isn’t that a can of worms?

          If you save a bunch of lives, have you built up a positive balance that you are allowed to draw on to kill an equal number of people?

          If offsets are right, order wouldn’t matter.

  10. Joe says:

    I agree this kind of layered model is helpful, especially with resolving some of our conflicting intuitions over the disturbing answers utilitarianism sometimes gives to thought experiments, e.g. the “fat man trolley problem variant” you mention.

    In particular, I wonder if folks would be more comfortable embracing these kinds of conclusions if they could label them “axiological, but definitely not moral”, instead of feeling the need to amend utilitarianism with all sorts of qualifiers to make it produce what they feel are the right answers.

    • AC Harper says:

      Scott makes a proposal for an axial/moral/legal dimension of viewing our obligations – but the dimension admits no individual/collective distinction, other than that implied by law. So although the model is helpful it is still firmly on the ‘ought’ side of the is/ought conundrum.

      Career criminals in practice (they may or may not be reflective) put their personal axiology ahead of the collective axiology (and morals and law). Similarly, career excusers (we know who we are) might put their own comfort ahead of axial/moral/legal concerns – and these cases are important because people’s fear of the consequences of their individual ‘bad’ behaviour is more compelling than their desire for the collective ‘good’.

      The axial/moral/legal model may or may not be a useful model, but it is incomplete, and I don’t believe people use something like this in practice.

  11. robotpliers says:

    The difference between morality and what you call axiology is that morality deals in the personal sphere of direct actions that affect people/things, while axiology is more abstract and indirect. So people are more willing to hand-wave and “offset” the latter. It’s human nature to feel less concern for the more abstract, removed, and less immediate. But what is currently under axiology can be converted into morality given time and circumstances. Future generations will probably not be OK with just offsetting carbon emissions, and absolute personal/community reductions may become a moral imperative.

  12. Matthias says:

    Probably more of a subcomponent of what distinguishes morality from axiology than an alternative theory, but: when people can engage in directed utility reduction (Bill Gates can order a hit on anyone for the price of one assassin plus one malaria offset), this gives them a lot of bargaining power in a way that being able to engage in diffuse utility reduction (Bill Gates invests in a factory that pumps CO2 into the air, leading to one hundred excess deaths) can’t.

    (Something like this might explain some of the apparent paradoxes involving things becoming Your Problem, folk-morally speaking, by interacting with them; IDK.)

  13. Mark Dominus says:

    Daniel Dennett has a discussion that is complementary to the one you have in part II, about how one of the principal functions of moral rules is to cut short the never-ending demands of axiology. You can debate the trolley problem for far longer than is productive, but at some point in the heat of the situation you must actually make a decision, and when making decisions it’s useful to have conversation-stoppers like “but that would be murder”.

    (Chapter 17 of Darwin’s Dangerous Idea, especially section 3.)

    It might be interesting to compare your analysis with his.

  14. jasonbayz says:

    So the state replaces this moral rule with the legal rule “don’t have sex with anyone below age 18”. Everyone knows this rule doesn’t perfectly capture reality – there’s no significant difference between 17.99-year-olds and 18.01-year-olds. It’s a useful hack that waters down the moral rule in order to make it more implementable. Realistically it gets things wrong sometimes; sometimes it will incorrectly tell people not to have sex with perfectly mature 17.99-year-olds, and other times it will incorrectly excuse sex with immature 18.01-year-olds. But this beats the alternative, where police have the power to break up any relationship they don’t like, and where everyone has to argue with everybody else about whether their relationships are okay or not.

    I would see this as a good argument against that moral rule. There are many cases where you could find a twelve year old and twenty year old where the twelve year old would blow away the twenty year old on virtually any test of maturity you could find: be it academic intelligence, social intelligence, or which person you’d want to babysit your kid. Yet, few would argue that it is, then, morally less wrong to have sex with the twelve year old than the twenty year old. Indeed, if one doesn’t have any hangups about premarital sex in general, few would argue that it is wrong at all to have sex with someone over 18 who is “immature” unless the immaturity was so great as to become outright retardation.

    Attempts at universally consistent moral systems are always floundering on things like this, which is why I have little time for them. Having sex with children is wrong because I don’t like it. All those objections, about maturity and consent and parental authority and the like, they matter, but they are not applicable in all cases, and don’t need to be. Because I don’t like it in all cases, you see?

    • Scott Alexander says:

      I think maybe then we’re placing the boundary between morality and axiology in a different place.

      You might say something like “Axiologically, a world where you have sex with the very-mature twelve year old is better than a world where you have sex with the immature twenty year old. But I find it personally useful/virtuous to maintain a heuristic distinction that applies to me and that I’ll tell my conscience to enforce.”

      It seems to me that other people would draw the distinction differently, and say that in the absence of any law they would be okay dating a hypothetical completely-mature seventeen year old. In that case, the relevant distinction is morality/law.

      • carvenvisage says:

        Is 12 a typo in the first case? Or you mean with a 13 year old or something? I think the normal view is that’s not even close to the borderline where the maturity heuristic comes up, but right in the middle of the wrong wrong wrong axiological territory.

        And they kind of said as much here

        Having sex with children is wrong because I don’t like it. All those objections, about maturity and consent and parental authority and the like, they matter, but they are not applicable in all cases, and don’t need to be. Because I don’t like it in all cases, you see?

  15. Douglas Knight says:

    It’s not that relevant to the rest of the essay, but …

    But it’s hard to believe charities could be less cost-effective than just literally buying the animals; this would fix a year’s meat consumption offset price at around $500.

    That seems very confused. When you buy meat, you do buy the animals, more or less. What does that have to do with anything? If you buy animals to keep them out of the mouths of other meat-eaters, the market will adjust and produce more meat in future years. Or are you proposing to keep a zoo to compensate for the animals that you killed? But now you are in population ethics: do a good life and a bad life cancel out? That sounds like an absurd coincidence.

    (Incidentally, I doubt I would find it hard to believe that these charities are less cost-effective than your proposal, if I could make sense of it. Probably even if it were nonsense.)

    • Weasel says:

      I think the other thing is, in calculating an offset for meat consumption, we need to calculate not only the cost of buying the animals themselves but also of keeping them. A cow lives 20 years, a chicken 10. You need to give them a place to sleep, veterinary care, etc. So you’ll need to pay for a farm with a constant population of 6 cows and 400 chickens, and for someone to be taking good care of them. (This may be 400 chickens and 400 roosters depending on how/if egg production was counted, and never mind sheep, pigs, etc). I am not a farmer, but I feel like keeping 6 cows and 400 chickens is going to cost more than $500 a year even assuming you don’t give them medical treatment (in this “offset” situation I think it would be “right” to give them medical care if an average family would give equivalent care to their pet dog – so minor surgeries but maybe only palliative care for cancer rather than extensive chemo).

      If you’re trying to say that if a cow can be purchased for, say, $300, then it must mean that keeping a cow for its entire life costs less than $300 or the farmer makes no profit, I think that’s fallacious, as the farmer selling the cow is probably keeping it in the factory-like conditions that make vegetarianism so desirable, and the farmer sells it at age 2 rather than age 20, which is how old you’d be keeping it.

      So, suffice it to say, I think the $500 per year upper bound on the cost of a vegetarian offset is way off.

      (I quickly googled the cost of boarding a horse, since that’s a popular service and a horse probably has similar requirements to a cow, and that’s $400-$500 a month; so I think the upper bound is more on the order of $6,000 per year, likely even higher than that!)
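
      For concreteness, here is a back-of-envelope version of that arithmetic in Python; every figure is either one quoted above or an explicit guess, and it only covers the cows, so treat it as an order-of-magnitude sketch rather than a real quote:

        # Rough annual cost of keeping the offset herd. All numbers are
        # assumptions taken from the discussion above, not real prices.
        cows = 6                           # ~0.3 cows eaten per year over a ~20-year lifespan
        boarding_per_month = (400, 500)    # quoted horse-boarding range, used as a per-cow proxy

        per_cow_year = tuple(12 * m for m in boarding_per_month)
        herd_per_year = tuple(cows * c for c in per_cow_year)

        print(per_cow_year)    # (4800, 6000): the "~$6,000 per year" figure above, per cow
        print(herd_per_year)   # (28800, 36000): six cows alone, before any chicken costs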

      • sconn says:

        Yeah, I got hung up on that too. A pastured cow needs an acre at least (depending on where). Cost of the use of an acre of (fertile, zoned-agricultural) land for 10-20 years, even if you provide nothing else, is a whole lot more than $500.

    • Ketil says:

      I didn’t interpret that as suggesting a charity would actually buy animals to keep them out of the slaughterhouse. Rather, if a farmer can sell a cow for $500, he would probably be willing to accept less than that to refrain from raising the cow in the first place (obviously raising a cow has costs, so the profit from the sale is way less than the sale price). You would only have to pay him the difference between the profit from a cow and the profit from the next best thing.

      • Douglas Knight says:

        There are always more farmers, as I said.

      • Mary says:

        Exactly what keeps him from pocketing the $500 and keeping on raising cows as before? Because no one can count cows that aren’t there, he can claim to be not raising as many as he likes.

        • Jiro says:

          You can count cows who are not there, at least well enough that it’s not possible to make up any number whatsoever for it.

          • Mary says:

            Why not? If I say, I’m not raising six cows right now because of those payments, how could you refute me?

  16. danridge says:

    I think when it comes to deciding the cases in which offsetting is appropriate, I notice something immediately which seems to make it easier to decide between them; perhaps this is only because I’m unfamiliar with the moral philosophies which are relevant. This factor is the ability to turn the harm into a commodity. On the one hand, carbon emitted is simply carbon emitted, and the idea is that the less of it the better; one cannot distinguish between this or that molecule, and I think in general local and temporal effects (more carbon here or there, emit now vs later) are pretty easy to convert. On the opposite end, I think most people consider humans to be impossible to convert; if you save one here and kill another there, then imagine reversing that outcome, it’s easy to distinguish between the cases. Again, not being familiar with utility calculus may be the only reason I can’t intuitively grasp a way to convert one person to another, or into carbon or utils, but as it stands I consider the distinction somewhat fundamental; if we don’t value more complex systems, such as ourselves, because they are complex, I don’t think there’s even a motivation to construct morality. For that reason, the animal rights/meat one falls closer to the fence, because, being less complex, animal lives feel a bit closer to commodities. I’d even differentiate between cows (mammals, fairly close to us) and chickens (vicious idiot raptors, probably at least capable of pain in some capacity).

    This way of thinking naturally treats information and complexity as being worth preserving, or worth being delicate with at least. One of the more intuitive moral ideas to me is that the past is worth preserving, largely because I’ve seen what a wealth of information small physical details can disclose about the remotest events under clever scrutiny, ie. not just the words in a book, but the physical reality of the book itself (to be clear, this discloses information about its manufacture and use, which makes up part of the data against which to test historical hypotheses). So, I am disgusted when I hear of someone defacing or obliterating some well preserved artifact from the past. What this comes back to is the doing of something which can’t be undone, the culture lost will certainly be “offset” in short order, but it can’t actually be replaced, and some information is lost.

    This philosophy mostly just encodes an intuitive understanding; to go back to an above example, while it’s arguable that cows are indeed more complex than chickens, the basis for this just substitutes evolutionary proximity to humans for an actual measure of intelligence. Cephalopods also seem fairly intelligent in a more alien way, and if we wanted this mode of thinking to decide real cases and make prescriptions (say, which animals should we or should we not eat), we might need to find some way of finding their equivalent place in the mammalian scale of intelligence. All that being said, while this may not even qualify as a coherent philosophy, I think it is a MUCH easier way to understand moral intuition around offsetting.

    One last thought: what is perhaps more worth preserving than any individual animal life is the ecosystem that they help compose, because this is identifiable as a complex system. This is NOT a system which excludes humans, because the animals we’re considering are domesticated, but a factory farm is still not the ‘natural’ place or ecosystem for a cow. This makes it inefficient. The conclusion of this line of thinking would probably have policy decisions made on aesthetic grounds, and that may seem unwise or irrelevant; however, I mentioned before that without certain preferences towards complex states I see no impetus to create a moral system of any kind. Thus, I believe morality really is generated by and beholden to aesthetics. And what this means is that while the suffering of conscious individuals is probably a good proxy for our real objective in setting morality, the actual goal which morality should strive for is the health of the system which those individuals compose. We care what ‘kind of’ world we are creating, what it ‘looks like’.

    • Sam Reuben says:

      Your philosophy there is quite coherent, and only really needs some better examination of complexity and difficulty to reproduce in order to be a highly powerful tool for explaining moral intuitions about precious things (from people to priceless works of art) as well as predicting and perhaps commanding them. I think it’s some excellent work so far, and I’d like to see how far it can be pushed.

      The ecosystem argument is extremely interesting, by the way, and has far more reach than just factory farming. It supports nature preservation, of course, but also all sorts of traditional human ways of life. I believe it deserves some real attention. The idea that aesthetics guide morality is not unique, incidentally, but I’d simply note that aesthetic questions are questions about what is or is not good. Morality is a slightly abstracted form of goodness, which I believe is one of the points Scott was trying to bring up with his axiology.

    • sconn says:

      Hm. I find locating morality in “complex systems” results in favoring the state, cult, tribe, etc. over the individual. For me that’s reason enough to abandon it — a state or group may be complex and beautiful, but if that comes at the price of individual happiness, I would rather get rid of it.

      On the other hand, some people are more like ants than like cats, and perhaps they would happily sacrifice their happiness to be parts of something larger than themselves. I sure as heck wouldn’t.

      • danridge says:

        Your argument brings to mind two somewhat contradictory positions. The first is that there may be a more selfish way of pursuing ideal institutions, if you consider that they ought to suit the needs of whatever individuals make them up. That means that if your environment, your local government, your community, is not entirely satisfactory to you, you probably have moral grounds to demand change, which may be comforting. And if no man is an island (and you are, of course, reading this via the internet on a computer, so you have clearly not gone full Thoreau), you must contend with your environment. If you don’t have opinions on how to structure it and what is best for it, maybe you can just live your life in it unmolested, or maybe someone else will have opinions and change it to your detriment. Anyway, this doesn’t really incorporate individualism, it more challenges its viability. The libertarian argument would assume that the ideal environment is defined as that which suits each individual as best as possible, and the best way to achieve this is to allow them to fight for their own interest and reach an equilibrium, but there are many well known ways in which highly suboptimal equilibria can be reached in this method. This may imply that even self-interest compels a person to morally consider themselves part of some larger system, and to have some opinion of its ideal form. These are generally problems with orienting towards the individual.

        The other position deals with problems of orienting towards complexity. Basically, it’s possible that becoming a component in a more complex entity may be bad for individuals in a way that is fundamental, meaning that no system containing moral entities is actually beautiful. After all, systems become complex mostly because they self-replicate, allowing them to stabilize their information, which implies at least a subtle coercion; and individuals “systematize” themselves, taking on roles using intelligence which are not natural to them, and their participation in the machine chafes away their humanity. So, collectives may provide certain benefits, but mostly they create reliance and a fear of leaving. I have wondered whether all art and culture is merely papering over the horror of an existence which simply cannot make sense to us nor be fulfilling; even more likely, it is a direct expression of this, a subconscious realization that we are not living in the way that is best, and we have been stripped of the ability to pursue or maybe even conceive of it. So, if you don’t believe that the beauty distilled from suffering in art is somehow greater or nobler than that encountered in a simple fulfilling life, the only conclusion is that systems comprising moral entities are by definition immoral. But I think this conclusion will probably also condemn intelligence above a certain level, probably that at which one can question one’s existence.

        • sconn says:

          I believe in groups and systems — provided they serve the happiness of the people in them. To my mind that’s what they’re for. Humans like to be part of things bigger than them, and sometimes they will even sacrifice their preference about how the group is run for the privilege of being a part of it. I’m not really arguing against that.

          But I think the group *only* has value insofar as it serves the happiness (and survival) of the people in it. If it doesn’t do this, it’s failing as a group. And yeah, that’s where you get into cultiness, where a group mainly serves goals like “stop members from ever wanting to leave” and “find more members.” These serve the group, but not the members of the group. I think these groups should change or die out.

          I actually wrote a blog post about this some time ago; it may or may not interest you: http://agiftuniverse.blogspot.com/2016/04/the-unit-of-mattering.html

          Hm. I think I would rather have humanity be happy all the time than have great art about being sad. It seems kind of unethical to desire the latter; I mean, if I found out that all the suffering in my life was engineered by others because they knew I had it in me to create great art, I’d be ticked. Imagine if you could time travel, and you accidentally prevented Tennyson’s dear friend Arthur Hallam from dying. Uh-oh, you just prevented the poem “In Memoriam” from ever getting written! That’s a big loss, but would it be legitimate to then *kill* Arthur Hallam to fix your mistake, so Tennyson could write his poem? I don’t think so.

          However, I don’t think I’d go so far as to say we’d be better off dumber if that made us happier. Honestly I don’t think it would; cows suffer a lot, as we always discuss on here. Just knowing things and finding things out is enjoyable. But perhaps there’s a bit of your worldview creeping in there, and I’m not 100% consistent. Or perhaps it’s simply that *my* existence, in its actualized form, is important to me, and replacing me with a dumber but happier version of me wouldn’t really be me. That’s like wireheading. It would remove my ability to achieve the things I want to achieve, and that would lessen me. (Personally I rate liberty on a par with happiness and life, in the triad of things I find morally significant.)

  17. InferentialDistance says:

    Good post. I think one of the things making a difference between carbon offsets and murder offsets is that the air is not a moral agent. When you dump a bunch of carbon, but offset it by buying a bunch of carbon sinks, you have a net neutral impact on moral agents. When you kill a person, and then save someone else, you have still killed the person, and saved someone else. You can never make right what you did to the person it was done to. And even in an axiological sense this is bad, because you can choose who you kill and who you save, which means you can make world-worsening trades that provide personal benefit. For example, killing off your competition so you can drive up your prices, and using a fraction of the increased profits to save starving African kids; sure, the net number of living people is the same, but all your customers have to pay more. You get enough of that and you’re straight into cyberpunk corporate dystopias. And that’s before counting the destruction of social trust you mentioned.

    • Jiro says:

      You can never make right what you did to the person it was done to.

      That reasoning would not allow you to eat animals if you believe that the animal specifically is harmed. And pretty much everyone who thinks that it’s wrong to eat animals thinks that.

      Pretty much the same reply goes to uau below:

      But if you murder someone, then even after offsetting the result will be worse at least for the murdered party.

      since the individual animal would be worse off.

      Also Cerastes below:

      Murder is bad even if you save someone else first because the two humans are not identical,

      and again, the individual animal would be worse off even if another one is not.

      Sam Reuben:

      Because people’s lives are not fungible goods.

      Ditto.

      • John Nerst says:

        While most people have some sympathy for animals, they typically don’t consider them fully-fledged moral agents. Most, but not all, would probably accept animal lives as fungible in a way human lives aren’t.

        The incommensurability of moral agents is one of my main objections to the whole “offset” idea. If you harm “the world” in an abstract sense you can make it up to it by compensating, sure, but that’s because the “recipient” is the same in both cases. You can’t, as someone said in a different comment, cheat on your spouse and make it up to them by preventing some other instance of cheating by someone else. There is a reason we say “make it up to them” and not just “make it up”.

        This is one of the things that bother me about some forms of utilitarianism, and specifically (WARNING: kinda-spoilers for Unsong in this paragraph) about the justification for the Unsong universe existing (unless I misunderstood it). Utility in the Unsong multiverse seemed to be aggregated on a universe level, i.e. if a universe had positive utility in total it would be allowed to exist. But why aggregated on a universe level? I don’t think that move is morally justified. Why not some other level? In my mind the individual level is the most defensible one: universes would not be created if they were to contain any single life not worth living.

        The pattern shows up in other places as well. My main moral problem with affirmative action and some other social justice ideas is that harm and restitution are aggregated on a supra-individual level, “setting things right” between other people than the ones who actually did harm/had harm done to them.

  18. uau says:

    There’s also a game-theoretic difference between the examples. If you offset the carbon you produce, nobody really ends up worse overall. But if you murder someone, then even after offsetting the result will be worse at least for the murdered party.

    If it were considered perfectly OK to murder someone as long as you have the resources to offset the death of one person, you could go to person A and tell him “I’ll murder you unless you pay me $1000”. He considers paying better than death, so you lose nothing. Repeat with person B and so on. Enabling this blackmail would cause obvious harm even if nobody actually died. Thus you’d at the very least need to consider what restrictions you’d have in place to prevent similar issues.

    I can think of cases where causing deaths could perhaps be reasonably offset. For example, suppose that you like driving and speeding while drunk, and sometimes run over a kid. If your city has lots of traffic accidents that kill children daily, I think improving the traffic environment to reduce deaths could offset the deaths you personally cause. This would be the case especially if pretty much every person could say that they expect themselves and their families to be safer overall due to your actions. If you only improved the situation in the poorest minority neighborhoods (assuming there’s low-hanging fruit for safety work there, allowing saving lives at the cheapest price) but killed people in more well-off neighborhoods, then the people in those neighborhoods would have more reason to morally condemn you.

    So, I’d suggest an alternative rule to distinguish your “good” examples from “bad” ones: whether any particular person ends up worse overall when you consider both the “bad” act and the offsetting “good” one. In my above drunk-driving example some do end up worse – but the victims’ expected chances were overall better, even if they did end up unlucky in the end.
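
    A toy way to state that criterion in code (all probabilities below are invented purely for illustration): compare each person’s expected risk with and without the combined “harm plus offset”, and call the offset acceptable only if nobody’s expected risk goes up.

      # Toy per-person test for the criterion above. Numbers are made up for illustration.
      baseline_risk = 0.0010        # a child's yearly chance of dying in traffic, absent you
      added_by_driver = 0.0002      # extra risk your drunk driving adds to each child
      removed_by_offset = 0.0005    # risk removed per child by the safety work you fund

      def acceptable(people):
          """The offset passes only if no individual's expected risk increases."""
          return all(p["after"] <= p["before"] for p in people)

      everyone = [{"before": baseline_risk,
                   "after": baseline_risk + added_by_driver - removed_by_offset}
                  for _ in range(1000)]       # pretend city of 1,000 children

      print(acceptable(everyone))             # True: everyone is safer in expectation

    If the safety work only reaches some neighborhoods, the untouched ones end up with “after” greater than “before”, the same test returns False, and that matches the condemnation described above.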

  19. Jiro says:

    Eating meat doesn’t violate any moral laws either. Again, it makes the world a worse place. But there aren’t any bonds of trust between humans and animals, nobody’s expecting you not to eat meat, there aren’t any written or unwritten codes saying you shouldn’t.

    There are codes saying that you shouldn’t. These codes just aren’t shared by all people. But they’re shared by a significant number of people.

    And in the general case, there’s a gradation from “codes shared by everyone” to “codes shared by a lot of people” to “codes shared by a significant…” all the way down to codes not shared by many people at all.

  20. Jiro says:

    Also, I’m not convinced that denying offsets is compatible with utilitarianism and EA. If you buy everything that Scott is saying here, then how can you say that someone who fails to donate is wrong?

  21. Cerastes says:

    I suggest a simpler framework for the offsets which I think offers more explanatory power: Are the individuals/consequences fungible?

    Murder is bad even if you save someone else first because the two humans are not identical, and it makes a difference which is alive and which isn’t (e.g. they have different loved ones, different effects on the world, different personalities, different jobs, different preferred charities, etc.). Killing one teleporter-produced duplicate within seconds of teleportation to save another may be acceptable, because they’re both effectively the same person at that point. Whether cows or fish or oysters are fungible intra- or inter-specifically is going to be contentious, though I suspect most people would see oysters as fungible, and thus killing 6 oysters to save 6 is neutral. CO2 offsets are perfectly fine because the atmosphere doesn’t care *which* CO2 molecules are in the atmosphere, just that there’s a certain number/concentration of them.

    IMHO, you can even “black box” the actual action in this view and just ask “would anyone/anything change if I swapped A for B?” For CO2, the answer is no; for people, the answer is yes; so CO2 offsets are fine but murder offsets aren’t. Time adds a wrinkle – even if you can reproduce a city down to the level of individual atoms, you don’t get to nuke it unless you do so fast enough that nobody notices; otherwise the lag causes harm.

  22. Matthias says:

    The distinction between morality and law (as defined here) seems a bit more diffuse than the distinction between either and axiology, in that each of the former refers to coordination by social sanction, just at differing levels of severity/formality. Possibly we could distinguish more finely between these, where (the governing logic of) “Twitter campaigns to get someone fired” are somewhere between (the governing logic of) “agreeing that a thing is immoral” and (the governing logic of) “summons to court,” though of course the more levels you distinguish between the fuzzier the difference between any two will be.

    (I’m not quite sure how to phrase the following thought, but: something something interesting to do with the fact that in the Western European Middle Ages cultural consensus about morality was more like a unique Schelling point set by committees of academics/bureaucrats and law was more built on a folk-theoretic patchwork with little central coordination (or at least all of that relative to today.) So some aspects of how to apply or think about the morality/law difference may be quite culturally/historically specific, rather than just flowing from the nature of the difficulties of implementing an axiology among agents like us.)

    • Douglas Knight says:

      I think Scott intends morality to be rules for yourself, completely unenforced. But it is certainly true that there are different degrees of formality of external enforcement. I think formality is a more relevant distinction than severity.

      I would like you to expand on the parenthetical. I think Scott is saying that morality and law are contingent, which seems quite compatible with decentralization.

      • Douglas Knight says:

        Actually Scott writes

        (this definition elides a complicated distinction between individual conscience and social pressure; fixing that would be really hard and I’m going to keep eliding it)

        so I’m completely wrong about his intentions. But I also doubt Matthias would have written his comment in light of that, so I wonder if Scott added it in response.

  23. TK-421 says:

    To my mind, one aspect of the sorts of harms that offsets make a certain intuitive sense for, such as emitting carbon, is that they’re fungible. When you book your flight on Pollution Airways, you’re not harming any single specific person; it’s just increasing the diffuse cloud of risk that everyone on the planet shares. So if you emit $70 worth of carbon, and then pay $70 plus transaction costs for scrubbing carbon, everything comes out even in the end (assuming your payment is used effectively, anyway).

    The same is not true if you murder Alice but save Bob’s life. For one thing, their lives are not commensurable in the same way; it is not clear that “kill Alice” + “save Bob” cancels to zero in the same way that “add 70 carbon” + “remove 70 carbon” would. Saving Bob’s life is a good thing, but it’s a different good thing. (For that matter, it seems to me that offsets probably need to be specific to the harm in question; you can’t emit carbon and then balance it by donating malaria nets, for example.)

    There’s also a much tighter causal coupling between “you shoot Alice” → “Alice dies” than between “you emit a bunch of carbon” → “lung disease rates rise slightly” → “Alice’s lung disorder gets slightly worse” → “Alice dies”. The second one can happen unintentionally and unnoticeably; it’s an externality of what you actually wanted to do, which was to go on vacation. In a practical sense, it’s harder for humans to track that generalized sort of harm, so allowing people to pay offsets to counteract it is better than nothing; that way you can mitigate the downstream harm while still getting to do the primary thing you wanted to do. Killing someone does not share that property… you just have to not do that in the first place.

  24. Sam Reuben says:

    This seems like an outstandingly elaborate answer to a very simple question.

    Q. Why is it that, generally speaking, you’re not allowed to offset killing one person by paying to save another?

    A. Because people’s lives are not fungible goods.

    Seriously, most of the stupid problems that utilitarianism gets itself into can be answered by some variant of this. One person’s life can’t be replaced by another person’s life, but carbon dioxide molecules certainly can be replaced by other carbon dioxide molecules. Adding and then removing ten dollars from someone’s bank account balance doesn’t change anything, apart from the transaction records. Rebuilding a city doesn’t result in the same city that was destroyed. This is all fairly simple and obvious, if you don’t try and predicate an entire moral system on trying to make everything mathematically replaceable from the outset.

    Axiology has absolutely nothing to do with this, by the way, unless you predicate it by saying “utilitarian axiology.” I think that it’s good to not treat people’s lives as if they can be replaced without loss by other lives. That’s completely in agreement with banning the moral offset of murder. Only utilitarians have this problem. (Hardline deontologists have a different problem, it’s worth noting, and I’m not quite in favor of that, but that doesn’t make this any less of a silly issue to have.)

    Isn’t the whole rationalist community supposed to be all about not mistaking the map for the territory? If so, then why does everyone mistake the moral map of utilitarianism for the territory of seriously difficult choices?

    Edit: a lot of people are making this same point. I’m glad to see that there’s a lot of folks coming to the immediate fungibility conclusion, and would like to push more people to examine how this affects utilitarianism itself. Namely: if a utile is a measure of happiness, is happiness fungible? Can we say that the happiness of a cruel person in living a cruel life is totally replaceable by the happiness of a kind person in living a kind life? The answer to this is typically to make the utile a measure of pleasure. In this case, is pleasure the one thing we seek in life? I don’t believe that’s true for most people, and the people who do seek pleasure above all else tend to lead wretched and unenviable lives. With this put together, are we comfortable in saying that the basic premise of utilitarianism even works?

    • Scott Alexander says:

      This falls apart in the animal case. Animals aren’t exactly fungible either – I mean, all cows look the same to me, but I assume Cow #1604931 considers itself very non-fungible with all the other cows, and if you’re a farmer or a pet owner they all have different personalities. But it seems like moral offsetting is okay here.

      Or: suppose I set myself a rule that I would donate $1000 to save African children each year. One year I’m running low on money, so I decide to skip my commitment and pay $2000 the next year. This seems legit (if I trust myself to comply), even though it would probably go to different African children and children aren’t fungible.

      On the other hand, I wouldn’t feel comfortable vandalizing a mailbox and then paying for some kind of vandalism prevention scheme for other mailboxes, even though all mailboxes are basically fungible.

      And in cases where morality (or law) says the offset is fine, I would feel a lot less bad about it. There’s a community pool nearby only open to members; membership costs either some number of hours of community service, or some fixed cost that I think pays for workers to do that amount. If only the community service was allowed, I wouldn’t feel comfortable sneaking into the pool and paying the workers under the table, but since society’s explicitly said that’s okay, I do feel comfortable with it.

      • Sam Reuben says:

        Cerastes above says:

        Whether cows or fish or oysters are fungible intra- or inter-specifically is going to be contentious, though I suspect most people would see oysters are fungible, and thus killing 6 oysters to save 6 is neutral.

        which I love as an idea. I’m more than prepared to open up the discussion of how fungible things are, which would place a cow at somewhere on the spectrum between oyster and human. There’s some really interesting meat there, which has significant relevance to mathematics (as an incidental point), and would help to handle a lot of our moral reservations or lack thereof when it comes to killing certain things.

        Regarding the money: the question of who the money goes towards is massively abstracted from the donation. The person who the money would save could end up doing good things, or bad things. We don’t know. Their lives are so far abstracted from any ability to individually examine them that they become fungible – not because they are, but because we sure can’t tell them apart. This is the same reason that utilitarian life calculus is reasonable at massive scales (e.g. military, national policy) but falls apart when you take it to small scales.

        Regarding the vandalism: are you sure that the discomfort of that situation isn’t because of unaccounted externalities? Let’s say that, instead of paying for the vandalism prevention scheme, you just replaced the mailbox, so quickly and perfectly that the owner would never notice. Reservations about the act don’t disappear, but they sure do dwindle remarkably – from “that’s totally not okay” to a mere “that’s kind of poor conduct.” The remaining discomfort is because of the generally accepted deontological rule of don’t-mess-with-other-people’s-stuff, which can be shown by having you just vandalize your own mailbox (or an abandoned mailbox deep in the woods that you know for a fact that the owner doesn’t care about). If this is done, there’s no problem with vandalizing the mailbox whatsoever. Thus, when all factors external to the fungibility of the mailbox are removed, we find that indeed, mailboxes are fungible and that we don’t care about what happens to them so long as they’re replaced. (Another good thought experiment: we care a lot if a human or minimally-fungible animal gets hit by a car, but don’t care if the same happens to a mailbox, so long as it’s replaced.)

        Regarding the pool case: if a societal norm is broken, that’s an externality just like in the mailbox case. Norms seem to be remarkably difficult to “pay off” in any form, and so resist the utilitarian calculus entirely. (No amount of money you pay can make that norm be not-broken again.) I don’t think this proves anything about fungibility at all.

        I recognize that a lot of these cases fall under the antagonism you were describing between morality, axiology, and law (which I’ve glossed as societal norms) in the main essay. I would simply add that the reason why they undermine any given utilitarian position so strongly is because societal norms and harm done to humans are highly fungibility-resistant.

        (The harm-done-to-humans point is why the Omelas story resonates so strongly with so many people.)

      • kaminiwa says:

        I think from a human perspective, cows are pretty fungible – I buy a hamburger, not a slice of Cow #1604931. I would assume there are numerous large-scale livestock deals all the time, and no one would notice if I swapped a few cows between deals.

        Cash donations definitely seem fungible. I’m not donating to save African Child #1604931, I’m donating towards “eradicate Malaria”. The same amount of progress gets made.

        Plus, in both examples, *I* don’t really have any access to the non-fungible information to begin with: I don’t know which cow I’m eating, or which child I’m saving.

      • Antistotle says:

        > but I assume Cow #1604931 considers itself very non-fungible with all the other cows,

        Do you have any reasonably hard evidence of this?

        > and if you’re a farmer or a pet owner they all have different personalities.

        Not past a certain size of herd. If you’ve got 8 or 10 cows you might anthropomorphize the behaviors of one or three.

        If you’ve got 80 or 100, especially if they’re range cattle (https://cdn.mirrranchgroup.com/media/Brush-Creek-Ranch.jpg) they are almost 100 percent fungible.

      • Jiro says:

        I think the right answer is to recognize that non-rationalists distinguish between action and inaction, as well as considering some categories more important than others. Required-inactions are generally unlimited (you must refrain from ever vandalizing any mailboxes anywhere in the world, unless doing so in service of a more important category) and don’t funge, while required-actions are not (you must donate X to charity, you must help other people if it doesn’t impose on your life too much) and may be fungible within the category (you can pick which charity).

        If you find yourself putting too many epicycles on utilitarianism to get it to model any morality that people would agree on, maybe your model should be something else.

        It still doesn’t handle the animals case, but I think that vegetarianism is almost always a case of signalling and people don’t alieve that killing animals is wrong. In the rare cases where they do, they reach conclusions that would be anathema to most vegetarians precisely because they are being rational and most vegetarians aren’t.

      • Ketil says:

        Animals aren’t exactly fungible either – I mean, all cows look the same to me

        Eh…I think you just said that cows are fungible to you. Whether or not they are fungible to others isn’t all that relevant when you are the one making decisions on what to eat – and as somebody else pointed out, whether you consider farm animals fungible or not is probably a major factor in being vegetarian or not.

      • dtldarek says:

        A partial reason for disagreement could be about what is or what we expect to be fungible.

        Let me consider the mailbox vandalism example. While most mailboxes are usually fungible, you cannot be sure that your neighbor’s is. For example, it can have some sentimental value for the neighbor and even replacing it perfectly won’t do – perhaps you laugh at sentimentality yourself, but it is not your call to make.

        To exaggerate the example even more, imagine that your neighbor is actually an alien that can access more “dimensions” or whatever that you do not see. Even if you replace the mailbox atom-by-atom, it might differ within these other “dimensions” that you are unable to sense. Even if you can sense the world perfectly, perhaps the brief absence of that particular mailbox caused some local system to be out of its equilibrium/desired state (e.g., there was a camera targeted on the mailbox that triggers the doomsday device).

        In the real world, that mailbox could have been special in that it had some customized sensor that you were not aware of. But then, if you were to break it (say your car drifted on ice), you could go to your neighbor and apologize/explain (i.e., offset, at least partially, the broken trust), and ask what you can do to fix the situation (i.e., what your neighbor thinks is enough to offset the broken mailbox).

        Of course, there are a lot of nuances even with this simple example (e.g., what if your neighbor asks for something ridiculous), but a lot of arguments that we think are about ideas (say, whether the axiology/morality/law model is good or not) happen to be about expectations, that is, about what we think is common/reasonable/etc. For example, if it is common to have unique mailboxes, then these certainly will not be fungible; on the other hand, if we live in a world where random lightning destroys mailboxes a few times a day, then vandalizing your neighbor’s mailbox and then replacing it perfectly does not sound as bad (although there’s still some cost there related to trust, social contracts, and so on).

    • dtldarek says:

      Relevant: XKCD 556 (don’t forget to check the alt-text, i.e., click on the picture).

  25. eh says:

    Every time I read philosophical arguments about ethics, there seems to be a very complicated process of first rationalising intuition into an internally consistent model, then changing behaviour and intuition based on that model, and repeating the process every time a roadblock is hit.

    This seems to be a good way of achieving precise beliefs, but is it a good way of achieving accuracy? A meta-ethical model which relies on a community untangling inconsistencies doesn’t seem to be using any heuristic for good other than intuition, which is what everyone was already doing, and it seems inevitable that one person would discover different inconsistencies to another and thus make different changes to their behaviour and intuition, eventually ending up with a different morality entirely. Does aiming for consistency really do anything other than finding the next step of a random walk through ethics?

  26. ProntoTheArcherist says:

    Askell puts it like this:

    GOOD: I don’t work late and make it to the game tonight, fulfilling my promise.
    OFFSET: I work late and miss the game, but take you out to dinner.
    BAD: I work late and miss the game, and don’t take you out to dinner.

    For morality (as opposed to axiology) I’m having trouble seeing the OFFSET as NEUTRAL, since by including another moral GOOD as a component of the OFFSET, a position better than the original GOOD is revealed:

    GOOD: I promise to make the game, and I take you out to dinner.
    NEUTRAL: I promised to make it to the game, and I keep my promise.
    OFFSET: I work late and miss the game, and take you out to dinner to make up for it.
    BAD: I work late and miss the game, and don’t take you out to dinner.

    Or in the animal examples:

    GOOD: I don’t eat meat and I give to effective animal charities
    NEUTRAL: I don’t eat meat
    OFFSET: I eat meat, but give to effective animal charities
    BAD: I eat meat and don’t give to effective animal charities

    I’m likely missing something, but reframing these examples this way makes it clear to me that the OFFSET position is not morally neutral, because it necessarily requires the creation of an even better moral GOOD (that may not have been contemplated until the need for it to offset an immoral act arose, but it was still possible all along). I don’t know if the new NEUTRAL is better than the OFFSET – I tend to doubt it, as a perfect offset is unlikely, as in the case of the yoghurt example – but it’s definitely worse than the new GOOD.

    Thanks for the read as always!

  27. Mary says:

    A world-state where everybody is happy seems better than a world-state where everybody is sad.

    Only where all other things are equal. If a million homicidal maniacs tried to massacre everyone else in the world, the world in which they succeed and are happy is worse than one in which they are all stopped at the cost of every other person in the world except a million people, all of whom are sad over the slaughter.

    So the claim

    axiology (“it’s obviously better to have a world where one person dies than a world where five people die”)

    leaves out the crucial point that the worlds are “where five people die” and “where one person dies, and another one becomes a murderer.” “Obviously better” is a bit strong a claim for that.

    • Antistotle says:

      I kept getting lost in details like this – for example, for every work of Joy Division or John Coltrane there’s a work of Kevin Federline, the Crash Test Dummies, or some boy band.

      But I don’t think that’s the point. I think it’s a Thought Experiment type thing–it inherently is too complex and fragile (at least as stated) for the real world, but is useful for thinking about.

      • Mary says:

        This is not a detail. It’s central to the whole assertion. If axiology can deal with only happiness and life and not with morality, it’s not the study of good at all.

        • sconn says:

          Isn’t it just the study of ontological good rather than moral good? A cookie is ontologically good (because it exists, is tasty, has calories, whatever) but it can’t be morally good. Only moral agents can be good.

          • Mary says:

            Nope, it’s just the study of the good.

            Anyway, the question at hand is whether, as your argument requires, it’s impossible for a moral good to be an ontological good.

    • Itai Bar-Natan says:

      This post is not intended to give an end-all be-all declaration of what axiology is; the purpose of these remarks is to give a brief illustrative description of what axiology says about a particular topic to contrast it with morality and law. Although they’re missing some qualifying statements (“all else being equal” in both of them) the meaning is clear.

      • Mary says:

        Not when the claim about the trolley problem is included.

      • Itai Bar-Natan says:

        You’re right. I misunderstood your comment. As I understand it now, you’re saying that it’s not even clear in the fat man version of the trolley problem that it would be axiologically better if you were to push the fat man. I retract my original response as it doesn’t respond to what you were actually saying.

        Still, I think the sentence this comes from, “the fat man version of the trolley problem sets axiology… against morality…”, gives a correct impression about the role the fat man version of the trolley problem plays in moral discussions; in particular, many people think that pushing the fat man leads to axiological good but is morally bad, and use this dilemma as an argument against consequentialism, exactly exhibiting the contrast Scott discusses.

  28. jebbyderinger says:

    I hadn’t heard of Axiology until this point. It was interesting to read about it along with morality & law which I’m much more familiar with. Reading this had me thinking a lot about sociopaths and how they often ignore morality & focus strictly on the law to guide their behavior.

  29. Jack Lecter says:

    But now we’re just getting into the thing where you bulldoze through moral uncertainty by making the numbers so big that it’s impossible to be uncertain about them. Sure. You can do that.

    I seem to recall Eliezer Yudkowsky rather memorably demonstrating that at least some people can’t do that. Or won’t do that.

    At any rate, it proved a spectacularly ineffective heuristic to quell conflict between different moral intuitions.

    I’m not sure how much that gets to what’s being debated here or not, but it seems… close enough emotionally that it’s hard to believe it’s completely irrelevant. (Something something sacred value taboo trade-offs?)

  30. Testlord says:

    I think this is a neat breakdown. The only thing that doesn’t quite connect for me is the final step where we go back to offsetting meat, or plane trips – with the understanding that they are axiological rather than moral concerns.

    If I read the post correctly, an axiological accounting system would not care what sort of offset you did. So you should be able to offset eating meat by reducing your carbon emissions, or donating to a deworming initiative.

    It’s true that it’s theoretically easier to be certain that you’ve completely offset your axiological carbon debt by capturing the exact same amount of carbon, but overall the obsession with reversing the exact action you took seems to come from the moral tier, not the axiological one. It’s the moral tier that views those two actions as connected in some way and that makes it seem to me like these must be moral concerns, not axiological ones.

  31. Anon. says:

    This is nothing but epicycles.

    I realize all this is sort of hand-wavy

    You don’t say! Time to face the facts and drop utilitarianism.

  32. kominek says:

    With this framework, we can propose a clearer answer to the moral offsetting problem: you can offset axiology, but not morality.

    you could probably address the issue within axiology, as well, by implementing some element of path dependence when evaluating world states. a world where the goodness dropped from X to X-Y for a period of time before resuming its value of X is inferior to a world where the goodness remained at a constant X through that time. overcoming that would take effort as a function of Y.
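
    a minimal sketch of the path-dependence idea in code (the penalty term is an arbitrary placeholder, not a worked-out proposal):

      # path-dependent evaluation of a trajectory of world-goodness values.
      # dips below the running peak are penalized, so two trajectories that
      # end at the same level can still differ in overall value.
      def path_dependent_value(goodness_over_time, penalty_weight=1.0):
          peak = goodness_over_time[0]
          total = 0.0
          for g in goodness_over_time:
              peak = max(peak, g)
              total += g - penalty_weight * (peak - g)
          return total

      steady = [10, 10, 10, 10, 10]   # goodness stays at X the whole time
      dipped = [10, 10, 7, 10, 10]    # goodness drops to X-Y, then recovers

      print(path_dependent_value(steady))  # 50.0
      print(path_dependent_value(dipped))  # 44.0, strictly worse despite recovering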

    also using vector clocks of goodness is coming to mind, but i haven’t fleshed that idea out beyond this series of words.

  33. Michael Cohen says:

    To bridge the whole distance – to directly connect axiology to law and make it illegal to do anything other than the most utility-maximizing action at any given time – is such a mind-bogglingly bad idea that I don’t think anyone’s even considered it in all of human history

    Indeed, it would be illegal for lawmakers to keep this law on the books.

    • Charles F says:

      I think he was making a joke by referencing this

      In all situations, the government of Raikoth will take the normatively correct action.

      • Michael Cohen says:

        Much as I’d like to take credit for that, I was just noting that such a society would not maximize the utility of its inhabitants, so the optimal action for lawmakers would be to repeal the law, making it illegal for them to do anything else.

  34. danculley says:

    I’m not sure any of this really addresses the carbon offset example, including the fungibility discussion. There is nothing morally wrong with emitting carbon, apart from its impact on the climate. That is determined by the total stock of carbon in the atmosphere. No one would care if I emitted carbon out the door of my space station. So axiologically, I don’t think you can say not offsetting is better. I don’t see any reason to have an axiological preference among any of the actions that lead to the same net carbon emissions. Obviously, less is better — but why privilege saving another ton of carbon over any of the other infinite good things one could do?

    I suppose one could ask why we care about humans rather than the total stock of human life. Interesting question, but not one anyone really disagrees on.

  35. Michael Cohen says:

    I strongly recommend (to Scott especially, but also to anyone) Normative Ethics by Shelly Kagan. It asks basically every question that has ever been asked where differing answers give different ethical theories, and where each camp has at least a few serious adherents. For almost all these questions, it outlines the costs of taking either side. It gives an incredibly clean structure for dividing and comparing ethical theories, but most excitingly to me, by being so careful with all the decision points in picking a theory, it points toward a number of possible theories (given different combinations of answers to these questions) that have received very little philosophical air time, despite being reasonable and interesting. I feel a bit like a Mormon here, but this book actually changed my life.

  36. Philosophisticat says:

    I find this to be kind of a conceptual mess – some things that strike me as confused/wrong:

    1) The way you talk about axiology, morality, and law is really sloppy. Axiology, as you define it, concerns which states of the world are good. Morality, as you define it, concerns which actions are right. Axiology cannot “conflict with”, “disagree with”, “trump”, or “give different answers than” moral claims because they are not even about the same things. Axiology cannot “demand” things because it does not concern demands at all. When you talk about axiology in these ways, you’re generally just equivocating between axiology as you defined it and Maximizing Consequentialism, which is a moral view.

    2) The discussion of “Law” is also confused. Your discussion of Law is inconsistent throughout between descriptive questions about what happens to be, or will be law, and normative questions about law. Of course, it’s thoroughly uninteresting and obvious that descriptive Law will come apart from any normative notion. But which normative claims are we talking about? Are they claims about which laws it would be good to have? Then the question just is a question of axiology. Are they claims about which laws we morally ought to implement? Then the question just is a question of morality. Are they claims about some third thing, perhaps “which laws are just”? It would be nice to know. And like above, depending on which way one goes, various claims about “conflict” “disagreement” etc. between Law and the other questions become nonsensical.

    3) It would be nice if your point did not rest on rule consequentialism being the correct view of morality, since it is false.

    You may have a good overall point but I’m having so much trouble coherently translating much of the post that it’s standing in the way of my appreciating it.

    • Said Achmiz says:

      I clicked on those links with some interest, even excitement, but was somewhat disappointed by the following:

      1. Some of them are about rule consequentialism, not rule utilitarianism specifically
      2. Some of them don’t distinguish between rule consequentialism and rule utilitarianism
      3. Some of them don’t do much, or anything, to untangle objections to rule utilitarianism from objections to utilitarianism in general, or from rule consequentialism in general

      (I agree that rule utilitarianism is false. But, like you, I am saddened by sloppy thinking in moral philosophy.)

      • Philosophisticat says:

        All objections to rule consequentialism are ipso facto objections to rule utilitarianism. And the papers are all dealing with objections to the rule-based views which are not objections to the act-based view (which is false for other reasons).

    • Philosophisticat says:

      In retrospect this came out a bit more negative than I like.

    • Michael Cohen says:

      I don’t have a big objection to your using the term “rule consequentialism” to mean “what set of rules would have the best outcome if everyone followed them?” This is probably common usage, honestly. And all of the papers you link to assume either this (or 90% adoption, or optimal adoption). But when you say that Scott’s point rests on rule consequentialism being correct, I only agree with you if you mean rule consequentialism in a broader sense, a sense which includes the following as a type of rule consequentialism: “what set of rules would have the best outcome if I followed them?”. It doesn’t seem like any of the articles you link to object to this form of rule consequentialism, and in fact, I’ve been very much on the fence between this version of rule consequentialism and act consequentialism, since reading this paper. It’s not about ethics at all, but it compares causal decision theory with another decision theory which would imply a rule consequentialist approach of the sort where you just look at the set of rules that would make the world the best if (just) you followed them. By the way, reading this shook me to my core, my mouth was wide open half the time, and I felt the way I imagine a committed atheist would feel beholding God.

      • Philosophisticat says:

        I wasn’t actually sure whether Scott’s post rests on rule consequentialism being correct – at various points it sounded like he was implicitly or explicitly assuming something like it, but maybe it was inessential to the overall point, which I couldn’t really follow because of conceptual perplexity. I was being more earnest than sarcastic when I said it would be nice if it didn’t.

        I think individualized indirect consequentialist views are going to be subject to problems very similar to those facing the universalized ones. For example, the structure of the objection raised in the first paper I cite depends only on the comparison worlds differing from ours in more than what is up to us, which holds in those cases as well. I hadn’t noticed that this was also the structure of the Timeless Decision Theory view, but if so, it’s nice to also have a decisive objection against that view, which I didn’t like anyway (I’ve never really gotten a grip on the motivations for the view beyond a desperate attachment to one-boxing in Newcomb’s problem). I think we should just accept CDT.

  37. blacktrance says:

    First, I think you’re being too restrictive about the conclusions that axiology can reach. It’s demanding if you think there’s only one or a few goods and they’re agent-neutral, but that needn’t be the case and most people think it isn’t. Most people admit categories like “my good”, “my family’s good”, “faraway strangers’ good”, and so on, and that they should be balanced, not treated as if one person’s good was as important as another’s. They’d consider it admirable if some stranger upended their own life to help the poor, but they usually wouldn’t consider it to be good to do so themselves.
    Other possible conclusions include that the good is one’s own pleasure, and it’d be strange to describe that as demanding.

    Second, axiology and law are both parts of morality, so it’s strange to talk of one of them conflicting with or overriding any of the others.
    Unless you’re a Kantian, you think that the good has at least some relevance to the right. But if you conclude that the right isn’t entirely about maximizing the good, axiology isn’t being trumped – its job is to tell you what the good is, not what to do about it. “Whatever the good is, it’s obligatory to destroy it” is a conceivable view. If you’re a utilitarian, you think that morality is about maximizing the good, whatever that is. (And it’s a view that often conflicts with our moral intuitions, so we should be wary of twisting our beliefs into reconciling the two. A utilitarian may consider social respectability to be instrumental to other goals and so avoid saying outrageous stuff all the time, but other – not as exculpatory – explanations for not telling the non-donor friend that they’re as bad as a murderer are that they don’t alieve utilitarianism or haven’t thought through its radical implications.) If straightforwardly trying to maximize utility doesn’t do so successfully, you’re still doing your best to produce the good; your approach just happens to be indirect. For the consequentialist, moral dilemmas involve conflicts between different approaches to maximizing the good, but the right thing to do is whatever does it best – so there’s no conflict between utility and anything else.
    As for the law (in a normative sense), it’s concerned with enacting justice, which is a part of morality. It’s not concerned with aspects of morality outside of that, which explains why almost everybody thinks murder should be illegal, but even most vegetarians think meat-eating should be legal. It’s also important not to confuse normative and positive law, and if the two diverge, when to violate the latter is a question for morality as a whole. In such cases (which are very common), morality trumps the law, but considerations of prudence weigh against violating it egregiously.

    The philosopher Stephen Finlay puts it well in his paper Too Much Morality:

    [A]s a rule, ‘best’ seems to imply ‘ought’: if the I-94 is the best route to drive to Chicago, then it’s the route you ought to take; if medicine X is the best child’s pain reliever, then it’s the pain reliever you ought to give your child for her pain; if the best move is pawn to Q5, then that’s the move you ought to make. It sounds very odd to say, ‘A is the best, but you ought to choose B’.

    The soundness of best-implies-ought seems obvious to me. But if the result is implausibly demanding, remember that “best” is also up for revision.

    • JASSCC says:

      One thing I have often wondered about moral philosophies: how much weight do they give to calculation difficulties? The chess example is great — you seldom know that a move is best, and you could often sink more time into trying to become more confident without knowing whether you will ever reach that confidence, much less certainty, about your best move.

      It is, I think, clear that determining what is best comes with a cost-benefit analysis. What I think is less clear is that the difficulty of determining the best is itself one of the costs! Moreover, in some situations it might be such a high cost as to be completely infeasible. Therefore the moral actor would seem to me to have a very good reason for not doing “the best” as determined ex post in many common situations.

      • blacktrance says:

        The fundamental importance of calculation difficulties is usually exaggerated. Just take them into account when making decisions. There’s no silver bullet, but neither are they a “checkmate, [adherents of whatever view]!”. They’re more relevant to applied ethics than to fundamental moral philosophy.

        • JASSCC says:

          “Just take them into account when making decisions”? How can I do that if I don’t have a solid understanding of how difficult it will be even to get good information? I’m not following.

          My concern is that hard moral decisions often entail a ton of uncertainty. How do you take such uncertainty into account?

    • thad says:

      I find the claim that, as a rule, best implies ought, to be obviously false. Now, there may be times where best does in fact imply ought, but taking it as a rule seems to get absurd results.

      I don’t know what ought is supposed to be entailed by the statement “OJ Simpson is the best running back to ever play for the Buffalo Bills”, but I don’t think the people I’ve seen making that statement took themselves to be implying any ought claim.

      I take the claim “I-94 is the best route to drive to Chicago” to be making judgements, but not necessarily moral ones. My first reading of that sentence is that I-94 is the easiest route to drive to get to Chicago. It could also mean that I-94 currently has the shortest drive time due to traffic conditions. It could be about how scenic the I-94 route is. None of those seem to me to imply any ought statement.

      • blacktrance says:

        Most people wouldn’t interpret “OJ Simpson is the best running back to ever play for the Buffalo Bills” as implying an ought claim, but that’s because the circumstances to which it’s relevant are unlikely to ever come up – you’re never going to be in a position where you’re forming a team and can include one of the Buffalo Bills’ running backs in their prime. Nevertheless, if you were in such a situation, it would be strange to choose some other Bills running back. I know this makes it sound very indirect, but that’s why it’s “best implies ought” and not “best is ought” – if it were the latter, it’d be more obvious.

        Presumably, “I-94 is the best route to drive to Chicago” is an all-things-considered judgment. If you’re using different criteria from the speaker, then it might not be the best route for you. But it’d be strange to say that it’s the best route by whatever criteria you’re using, but nevertheless you should take some other road.

        If you judge X to be all-things-considered the best choice by whatever criteria you’re using, and you’re choosing between it and a worse Y, it’d be strange for you to claim that you should choose Y.

        • thad says:

          Do you take “should” and “ought” to mean the same thing? That’s the impression I get, but I take them to have distinct meanings. To my reading,

          Presumably, “I-94 is the best route to drive to Chicago” is an all-things-considered judgment. If you’re using different criteria from the speaker, then it might not be the best route for you. But it’d be strange to say that it’s the best route by whatever criteria you’re using, but nevertheless you should take some other road.

          If you judge X to be all-things-considered the best choice by whatever criteria you’re using, and you’re choosing between it and a worse Y, it’d be strange for you to claim that you should choose Y.

          has very little to do with the conversation about whether or not best implies ought. “Ought” has a moral component that “should” does not. If I should take I-94 and I don’t, I may have acted strangely but I haven’t done anything wrong. If I ought to take I-94 and I don’t, then I’ve done something wrong. Likewise, lots of things are strange without being morally wrong. It would be strange for me to wear clown makeup to work.

          As for the Bills, you don’t actually say what the ought claim implied by the statement “OJ Simpson is the best Bills RB of all time” is. My claim is that it has no moral significance, and that while you can come up with some sort of goal-oriented statement (if your goal is to build the best team, then you should select OJ) it will nevertheless fail to capture the moral grounds on which one might choose to select another running back instead of OJ. It seems strange because OJ is a particular case that introduces a moral concern not present in most cases of selecting all-time great running backs.

          • blacktrance says:

            I take “should” and “ought” to be synonymous.
            But best-implies-ought is part of goal-oriented behavior in general, and isn’t limited to morality. If you believe that you should take the I-94 because it’s the best, and you don’t, you’d be contradicting yourself. You’d be wrong in (at least) the ordinary sense, regardless of any moral component, in the same way that you’d be wrong if you were trying to win at chess and knowingly made a suboptimal move. Regardless, while there are many acts that are wrong (morally or otherwise), there are few that display a contradiction as clearly as believing that something is the best (morally or not), but choosing otherwise. That’s the source of the strangeness.

            The problem with the OJ Simpson statement is that it’s ambiguous – it doesn’t distinguish between “OJ Simpson is the best Bills RB qua RB” and “OJ Simpson is the best Bills RB according to [the criteria I’d use to make a decision in which they’re relevant]”. The former statement implies that if your sole criterion were his performance, you should choose him.

          • thad says:

            Ah, ok, yes, that is the source of our disagreement.

            I grant that both “best” and “should” pick out an individual object or course of action, and where they are used in the same context they pick out the same object. (So, I could say that “I-95 is the best route” as a general statement and “we shouldn’t take I-95” as a statement about a particular trip without contradiction; perhaps traffic will be unusually bad at that particular time).

            But none of that has to do with morality or one’s obligations, and so “ought” doesn’t enter into it.

          • random832 says:

            I think you’re making a distinction between “should” and “ought” that doesn’t actually exist in the English language as normally used.

  38. registrationisdumb says:

    Overall a good essay, but aside from some minor nitpicks about whether there are laws against certain things or whether certain things make the world a better or worse place, there’s one major thing I don’t think either of you touch on.

    Murdering someone extinguishes a unique life; it is a specific bad act that is not reversible. Emitting 10 billion tons of carbon, at least in terms of global warming, is a very vague and general bad act that you can offset and no one will notice. There is definitely a spectrum that things fall on, and those that are more reversible are probably more good/moral to offset.

  39. Briefling says:

    You end up stating, and then dismissing, the idea that morality only “trumps” axiology because violating social norms has massive secondary effects. (And therefore you really can use offsets for moral violations, they just have to be really big.)

    Just my 2 cents — I think this is totally correct and right, and does not need to be dismissed or recoiled from at all.

  40. hnau says:

    (this definition elides a complicated distinction between individual conscience and social pressure; fixing that would be really hard and I’m going to keep eliding it)

    Somewhat provocative thesis: the distinction is neither complicated nor hard to fix. “Individual conscience” is a less-overt form of social pressure, applied mostly via conditioning and possibly even via natural selection (i.e. group selection). On the other end of the overtness spectrum, strong (but mostly unspoken) social mores and conventions shade into systems of law.

    What about axiology? Well, as it’s defined here, I’m skeptical as to whether it actually makes sense as a category. At the very least, the extreme reductionist view – that a snapshot of the world at a specific point in time can be analyzed as “good” or “bad” and that this is the objective function of morality – strikes me as clearly wrong. The things we think of as fundamentally “good” and “bad” are events or experiences, not states. “Axiology” looks to me like an artificial back-formation – a construct that consequentialist reasoning superimposes on morality out of necessity to justify itself.

  41. Amanda Askell says:

    Thanks for commenting on this piece – I appreciate it!

    I’m inclined to agree with a lot of this post. The way that Tyler and I put it in the paper version is the trilemma between (i) you can offset extreme actions like murder, or (ii) you can’t offset any immoral actions, or (iii) you have to find a non-arbitrary cutoff point for offsetting. I think that utilitarians are kind of forced into (i), but they’re at least helped by two things: one is the consideration I mention in the section you quote and the other is the fact that allowing extreme offsets would be very bad socially (though people often hate these “indirect consequentialist justification” arguments). But I do think offsetting is sort of a problem for the utilitarian.

    The view that I take you to be endorsing here is one that accepts (iii). If failing to save someone is not a violation of a moral law but killing them is, then you can offset the former but not the latter. I’m sympathetic to this – I think it’s the most plausible kind of cutoff view (these get discussed more in the paper but not as much in the post/talk). One worry for these views is just that they’ll still have something that feels like discontinuity: e.g. you can offset some bad act that results in disvalue n by creating value m, but you cannot offset a bad act that violates a moral law by creating value m’ even if it results in disvalue much less than n and even if m’ is much much greater than m. One way to get around this is to adopt something like the following view: if m’ is sufficiently large then the act that would usually be a violation of a moral law is actually permissible. But that’s weird in the case of offsetting. In the case of breaking a promise to cure cancer, you need to break your promise or cancer won’t be cured. But it’s a bit weird to say “well I wouldn’t have given a billion dollars to charity if I hadn’t been so guilty about breaking my promise, so I guess breaking my promise no longer violates a moral law in this case.” Another response is just to accept this kind of discontinuity at the axiological level. (Side note: you could defend breaking your promise in the cancer cure case either because you think that violations of moral law can be outweighed by sufficiently high axiological gains or because you have this sort of “axiologically-sensitive” theory of the moral law. So I don’t take this to be a counterexample to the ‘moral law first’ view.)

    One worry I have for this sort of view in practice is just where we draw the line between “moral law” and “not a moral law”. I worry that a lot of the cases where offsetting feels fine seem to strongly overlap with cases where we don’t have an identifiable victim. E.g. increasing demand for products that will in expectation bring into existence animals with terrible lives, or producing carbon that harms people in developing countries (to make it easier, suppose that they’re people who have already been born). These aren’t even omissions, they’re just acts where the agents harmed aren’t ones that we have particularly close relationships with. It’s easier to harm these people than it is to punch someone on the street or break a promise to a friend. But I think that should make us skeptical of the claim that these actions don’t violate a moral law. It’s not impossible that the moral laws happen to draw a sharp distinction between harming people we can see (which is awkward) and harming people at a distance (which is less awkward), but it would be suspiciously convenient. So I wouldn’t be surprised if there were actually a bunch of actions that one simply can’t offset because they violate a moral law, and things like eating meat might be among them.

    On the order of priority: my view is basically that morality probably trumps axiology regardless of which moral theory you have (for utilitarians as well, it’s just that they have a more axiologically-dependent moral view). With less confidence I’d probably say that you’re required to follow the law when this is what morality demands (i.e. a “morality trumps the law” view). But morality often does demand following the law, even if you personally disagree with the law. For one thing, the law is *usually* less demanding than morality because it’s a set of rules that we’ve created to keep society functioning (a) without imposing too many constraints on people and (b) with general consensus. Sometimes you have a moral duty to violate the law, but you’d want to be very careful about checking that you are in a position to know that this is right, both for the reasons you give and because the law is generated by a collection of voters, which is usually more reliable than the judgment of a single voter.

  42. JASSCC says:

    If there are charities that can save, say, 10,000,000 expected lives per year (a little less than 1/5 of the number of people who die each year, so that would be to me an unbelievably large number of preventable deaths prevented), and there are, say, 500,000,000 people on Earth who could easily afford to save those lives, isn’t each of those half billion people on average responsible for .02 lives not saved if they skip donating that year? Or should it be on a sliding scale of wealth? (If so, the *median* responsibility even in this wealthy-by-global-standards group would be far lower than that .02, given the way wealth is distributed.)

    And should we expect anything from the next-wealthiest 3,000,000,000 who can do something more than nothing, making the fraction even lower? By contrast, a competent and efficient homicidal person who relents on murder at the last minute is with very high likelihood solely responsible for the intended victim not being dead. It seems that this should be pertinent in a comparison between murder and not donating to save a life.

    To pluck numbers from thin air, suppose we find that a median American has about 1/20 of the average wealth of this wealthy group. Every year that this person doesn’t donate, her or his responsibility is for about .001 lives not saved. Would a friend possibly react with 1/1000 the horror that he or she would display on finding out that the person in question had murdered someone that year? I think possibly that’s not too far off — not donating might be perceived by some as approximately 1 millimurder evil.

    But wait, there’s another factor to consider: a murderer surely understands and feels confident about what murder will do — it will make someone dead. But does our median American understand and feel confident about what the charitable donation will do? And if not, does this lack of certainty not then change the moral significance of the decision, perhaps making it less significant than murder in yet another way?

    And finally, due to the declining marginal utility of wealth, the median American actually has far fewer utiles to give per dollar than her or his wealthier compatriots. So if we weight this by utiles sacrificed, his or her evil would be diminished still more, probably to somewhere above 1 (or even 10-100) micromurders, but well under 1 millimurder.
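
    To make the arithmetic above concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it is one of the made-up figures from this comment (the 10,000,000 savable lives, the 500,000,000 potential donors, the 1/20 wealth ratio), not a real estimate:

    ```python
    # Back-of-the-envelope version of the comment's arithmetic.
    # All inputs are invented illustrative numbers, not real estimates.

    lives_savable_per_year = 10_000_000   # hypothetical preventable deaths charities could avert
    potential_donors = 500_000_000        # hypothetical pool of people who could easily afford to help

    # Average share of "lives not saved" per non-donor per year
    avg_responsibility = lives_savable_per_year / potential_donors   # 0.02 lives

    # Scale by relative wealth: a median American assumed to hold 1/20 of the group's average wealth
    median_responsibility = avg_responsibility * (1 / 20)            # 0.001 lives

    print(avg_responsibility, median_responsibility)
    # 0.02 0.001  -> roughly 1 "millimurder" per year for the median non-donor
    ```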

  43. Peter Gerdes says:

    Interesting, but

    I feel you vacillate a bit in whether you are using “morality” to describe the set of useful norms and rules such that following them tends to make one a more moral (in the latter sense) person, or whether you mean some objective extra set of facts about which actions are acceptable and unacceptable in various situations, i.e., moral realism as defined by philosophy. Note that it’s not at all clear the rules in a rule-utilitarian system actually correspond with either of these, as arguably it is only relative to the right rules that (if you believe in such concepts) concepts like permissible and impermissible apply at all.

    I would argue that the pure act-utilitarian doesn’t actually believe in moral facts about permissibility or blame. Rather their moral ontology is reduced to a pure preference function on possible states of affairs. I suppose you could supplement this with belief in concepts like permissible or required, but why introduce those at all? Far better to jettison notions like blame and duty from our moral ontology altogether, with the extra bonus of helping with the guilty-utilitarian problem.

  44. Lirio says:

    This is tangential, but it always bugged me how, when discussing the Trolley Problem, nobody ever seems interested in what happens if your intervention fails. If you pull on the switch on the tracks, and the handle breaks off, then five people die. This is too bad, but you did what you could. If you push the fat man off the bridge and he either fails to land on the tracks, or else proves to not have enough mass to stop the trolley, then six people die, one of whom would have lived if you had done nothing. What’s more, a track switch suddenly malfunctioning at a critical moment seems far less likely than the body of a heavy-set gentleman proving no match for a high-speed trolley. This means the two scenarios are not just different on the question of direct vs indirect harm, but also on the question of potential maximum harm.

    No analysis i have seen thus far seems to acknowledge this, even though i believe it has an effect on how people respond to it. If someone does not believe pushing the fat man over is an effective intervention, they will refuse regardless of how they feel about direct versus indirect harms.

    • JASSCC says:

      I have sometimes wondered why one is not obligated to first urge the fat man to jump. And if he refuses to do it, maybe it’s because he knows of some way in which he is uniquely empowered to save still more lives, which would have been prevented by just shoving him. He’s a moral actor, too!

      This is an important moral impulse, sometimes stated as central: to treat people not as objects, but as independent moral actors. The framing of the question seems to me to try to artificially foreclose that.

      Ditto with uncertainty, as you mention. Don’t we want and need people to stop and think about life-and-death decisions? Well, the problem says there’s no time for that, so you must decide instantly! But in any real-world scenario entailing imminent death of strangers, actual human beings would be frantically considering what the hell *could* be done, not instantly resolving one or two simple, binary options wherein some lives are spared and some are sacrificed with certainty. This to me feels almost like a trap.

    • dansimonicouldbewrong says:

      I don’t think this is tangential at all. I believe it’s a fundamental flaw of consequentialism in general that it presumes to disentangle means from ends, and in so doing eliminates a fundamental aspect of morality. The problem–as I explain in a comment below–is that it fails to take into account the imperfection of human knowledge that characterizes the real world in which moral decisions must be made. In real life, people can’t know for certain the consequences of their actions, and therefore must focus on the morality of the actions themselves. Abstract perfect-knowledge scenarios of the sort presented in “trolley problems” thus give us vanishingly little insight into real-world morality.

      • davidweber2 says:

        Lirio is making a purely consequentialist argument that, in expectation, pushing the fat man is a terrible idea. In most cases, the consequentialist thing to do is to maximize expected utility, so if you expect the odds of a fat person stopping a train to be low enough that the expected lives saved don’t outweigh the life you would take, then you shouldn’t push him.

        In general, imperfect knowledge in consequentialism is taken into account in the same way that any rational agent pursues their agenda; it’s just that the agenda is moral rather than monetary. Investment is an uncertain business – does this mean that the rational thing to do is to completely disregard what investments will make you the most money in expectation, and instead only invest in some traditional manner?
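
        To put a toy number on that, the sketch below compares expected deaths from pushing versus doing nothing. The probability that a body stops the trolley and the five-person body count are invented for illustration, not taken from the post:

        ```python
        # Toy expected-value comparison for the "push the fat man" variant.
        # p_stop and the casualty counts are invented illustrative numbers.

        def expected_deaths_if_pushed(p_stop: float, people_on_track: int = 5) -> float:
            """In this toy model the pushed man dies either way; the people on
            the track die only if the trolley is not stopped."""
            return 1 + (1 - p_stop) * people_on_track

        deaths_if_nothing = 5                        # everyone on the track dies

        print(expected_deaths_if_pushed(0.9))        # 1.5 -> pushing wins in expectation
        print(expected_deaths_if_pushed(0.1))        # 5.5 -> pushing loses; below p_stop = 0.2 it never pays
        print(deaths_if_nothing)
        ```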

        • dansimonicouldbewrong says:

          But in the case of investment strategy, the answers, given perfect information, are almost always trivial–all the interesting work deals with the far more difficult task of estimating and managing risk (i.e., uncertainty). Why, then, do moral philosophers seem to spend such an inordinate amount of time playing with obscure and unrealistic perfect-information scenarios, when the real challenge of morality, as with investing, is managing uncertainty?

          • JASSCC says:

            I agree completely, at least with regard to trolley scenarios. I’d also like to direct your attention to my comment above concerning *asking* the fat man to jump.

            If the scenario is such that there is no time to do even that, then it’s a situation of such split second decision-making that the uncertainty would surely dominate a large fraction of well-functioning people’s thinking.

          • davidweber2 says:

            It’s a fair point that moral philosophy should consider uncertainty. But if you’re a consequentialist, one can, for the most part, leave that to the question of rational behavior. The reason why philosophers focus on cases with perfect information is that they get at fundamental values, rather than implementation questions. It’s worth simplifying to the question of “does this person’s life matter more or less than these other people’s lives” without including probabilistic complications.

            What is valuable in investment is trivial, but this isn’t true for all selfish values. In some respects, the field of aesthetics is entirely about this question of selfish values in a similar manner (though I must confess much more ignorance of that subject than of ethics).

          • dansimonicouldbewrong says:

            @davidweber2 It’s not hard to come up with trolley problem-like abstract hypotheticals designed to get at the “fundamental values” of investors. Consider, for example, the goal of maximizing long-term gain, and imagine a hypothetical investment vehicle that immediately collapses to near-worthlessness, stays in that state for time x, then instantaneously rockets to y times its original value. For what values of x and y would an investor interested in maximizing long-term gains be willing to invest all his or her assets in such a vehicle? Half his or her assets? Do the investor’s circumstances affect the answer?

            For all I know, somebody somewhere has investigated such questions, but it’s pretty obvious that they fade into unrealistic irrelevance once the ever-present shadow of uncertainty is re-introduced into the scenario. I argue that morality is similar: the trolley problem question of whether it’s okay to murder one person to save five with certainty–or the related Stalinist/jihadist question of whether it’s okay to murder millions of innocents in order to achieve eternal paradise for everyone else–may make for an entertaining abstract hypothetical, but it’s utterly irrelevant to morality in the real world, where such outcomes can never be foretold with anything approaching certainty.
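
            (For what it’s worth, that hypothetical is easy to formalize once uncertainty is stripped out, which is rather the point. A minimal sketch follows, with an assumed 7% benchmark standing in for an ordinary long-run return; both the benchmark and the (x, y) pairs are illustrative assumptions.)

            ```python
            # Minimal sketch of the "collapses, then jumps to y times its value after x years" vehicle.
            # The 7% benchmark is an assumed stand-in for an ordinary long-run return.

            def annualized_return(x_years: float, y_multiple: float) -> float:
                """Compound annual growth rate of an asset worth y times its starting value after x years."""
                return y_multiple ** (1 / x_years) - 1

            benchmark = 0.07

            for x, y in [(10, 2), (10, 4), (30, 4)]:
                r = annualized_return(x, y)
                verdict = "beats" if r > benchmark else "trails"
                print(f"x = {x} years, y = {y}: {r:.1%} per year ({verdict} the benchmark)")
            ```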

          • Lirio says:

            While i agree that managing uncertainty is a key and necessary component of morality, there is nothing about consequentialism that makes it unable to handle uncertainty. Additionally, i submit that any consequentialist who attempts to disentangle means from ends is doing consequentialism wrong. The entire point of morality is to determine which among the available means will get us closer to our terminal values. A moral analysis that ignores means is at best of limited use.

            What’s more, i would argue that for all the uncertainty they have to handle, investors are fundamentally consequentialists. When they judge what investment strategies to pursue, they judge them on consequentialist grounds. Investors don’t particularly care about which approaches are intrinsically right or intrinsically wrong, they care about which ones will make them the most money for the least risk. That is a consequentialist analysis, in that it judges actions by their risks and expected returns.

  45. Peter Gerdes says:

    Also, I’m not at all convinced you’ve shown that trivial moral harms are the ones that are appropriate to use offsetting with respect to. It seems to me that the property you are really tracking here is something more like whether the bad behavior is something people are almost surely going to do anyway.

    That’s what makes the difference with carbon offsetting or meat offsetting. If we lived somewhere where revenge killings were still part of our culture, they too might fall into this category if we knew we couldn’t coordinate a norm against them.

    • carvenvisage says:

      In such a society requiring moral offsets would be a good way to ensure people didn’t frivolously non-revenge-kill people. If you kill someone in revenge, that’s a big deal, and you should do something to show you’re not a wanton murderer.

  46. FullMeta_Rationalist says:

    morality = -ln(bad/good)
    because low entropy is fragile.

    Thought experiment: Is it morally permissible to pump 1 ton of carbon dioxide into Alice’s lungs if I can prevent this scenario from occurring to Bob? What are the aggregate ramifications of implementing a norm where this is morally permissible? How is the supply of preventable victims generated?

  47. dansimonicouldbewrong says:

    What this entire discussion leaves out is the issue of imperfect knowledge. The real reason why law trumps morality, which trumps axiology, is that this sequence represents an ascending scale of required understanding of consequences. Laws are normally designed so as to allow people to follow them with only very basic knowledge of the consequences of their actions: as long as you follow the law, even if someone is harmed as a result of something you did (or failed to do), the act has to be proven to be intentional or reckless–that is, to have really obviously risked harming someone–before you’re legally at fault. Morality, on the other hand, requires at least a thorough understanding of the local consequences of your actions: you may be morally at fault, for example, for an act (or failure to act) that merely carries a greater risk of resulting in direct harm than some other course of action. And axiology further requires a global understanding of the consequences of your actions: an action can be axiologically bad if the global net sum of its consequences is negative, however it might look locally.

    The real problem, then, with the axiological point of view–of which offsets are an example–is that it is simply unrealistic to expect anyone, ever, to have anything resembling a full understanding of the global consequences of one’s actions. Morality–trying to avoid direct local harm and effect direct local good–is hard enough to determine, and can easily lead to mistakes due to incomplete understanding of local circumstances. But believing that any human-generated global net tally of the consequences of a set of possible actions–say, a local act of dubious morality, coupled with a donation to a worthy-seeming cause–can be counted on as reliable is mere hubristic fantasy.

  48. davidweber2 says:

    The problem with comparing carbon offset with meat offset is that in the case of carbon offset, we’re talking about literally making net less carbon in the atmosphere, whereas in the case of meat, the actual effect of the donation is much less clear. Does the donation result in research into meat alternatives? Lobbying to change production laws? Advertising campaigns? In the last case, you are literally the target of the advertisements you’re funding, so you’ve mostly just thrown money down the drain.

  49. Psy-Kosh says:

    While there’re some good ideas here, there is something that rubs me the wrong way. You’re redefining morality in such a way that, as I understand it, prevents one from expressing a viewpoint about morality that surrounding society disagrees with.

    “I believe supporting killing animals for sustenance, when you have alternatives, is immoral.”

    “Aha, but since most people don’t seem to effectively think so and it’s not betraying society/etc, BY DEFINITION it’s not immoral.”

    While I think some of the categorizations/ways of slicing things you have may have some use, actually restricting the word “morality” in that way strikes me as a bit too “word-game”y, even though I know that’s not your intent.

  50. Yaleocon says:

    What you call “axiology” is what any philosopher calls “morality”. What you call “morality” is what any philosopher calls “heuristics”. (Law is law.) I know you’ve studied philosophy; I know you know that. Why the epicycles? Why the extra unnecessary terminology?

    You have grown increasingly uncomfortable with the demands of consequentialism (avoiding 80,000 Hours, etc.), and this is a culmination of that process. The difference between secretly killing someone and letting someone die is, in consequentialism, null. In non-consequentialism, you’re allowed to elevate the heuristics which distinguish those two to the level of morality via “categorical imperatives” or “sacred values”. But you can’t have it both ways. Be consequentialist or don’t. This post is waffling about that choice.

    • thad says:

      I’m extremely skeptical of any claim that all philosophers use the same term for anything. Now, when I searched the SEP for “morality” the first link was to an article called “The Definition of Morality”. The first two senses it gives, the first distinction it draws between various senses of the term “morality”, both reference codes of conduct. You will note that Scott’s definition of “axiology”, the one you claim refers to morality, is not about conduct. Oddly enough, his definition of “morality”, while it does not use the term “code of conduct”, does talk about actions to be taken, which seems like it might be the same thing. I’ll leave searching the SEP for “axiology” as an exercise for the reader, but the short version is going to line up pretty well with Scott’s short version.

      Even if all philosophers called the same thing “morality”, it would not be that thing which Scott calls “axiology.” And the thing which he calls “axiology” is pretty recognizable as a distinct thing, and in fact as the distinct thing that the SEP calls axiology in the first result if you search for that term.

  51. Furslid says:

    What if the difference isn’t offsets, but who can legitimize offsets?

    Axiology is very personal. All that is necessary for axiological offsets is that the person accepts that their offset is sufficient. So if you think that paying for carbon reduction offsets your plane trip, it does.

    Morality is social. If your society thinks that your offset is sufficient, then it is. Oftentimes, society defers to the victim of whatever wrong you did. If your friends think that buying them dinner offsets breaking a social commitment, then it does.

    Law concerns one’s relation to the state. If the governmental power thinks that your offset is sufficient, then it is. You get a pardon, or don’t get prosecuted, or get a light sentence. If you get a pardon for theft because you saved someone’s life, then it’s a sufficient offset.

    Offsets should be acceptable in all cases. The difference is in who accepts them. This implies that moral and legal wrongs cannot be offset in secret. Society or the state must be aware so it can accept or reject the proposed offset. This can also lead to interesting dynamics. What if someone believes that society should accept their offset, but society rejects it? What if the state accepts an offset, but the person believes their actions were not offset?

    • Doctor Mist says:

      Morality is social. If your society thinks that your offset is sufficient, then it is.

      But there must be more to it than that. Our society clearly thinks that the empty offset is sufficient for eating a hamburger, but that doesn’t eliminate Scott’s sense of guilt when he eats a hamburger. I see three possibilities. One is that this aspect of Scott’s morality is not purely social. The second is that it comes not from Scott’s morality but from his axiology — though if axiology is capable of inducing guilt, the value of drawing the distinction starts to seem questionable. The third is that the phrase “your society” is hiding a very complex calculus: Scott’s society is a fluid amalgam of the USA, the medical profession, the EA community, the rationalist community, his household, and who knows what else; in which case your claim loses a lot of its apparent usefulness. Am I missing a fourth?

  52. WashedOut says:

    I would say there are entire domains of potential social responsibility that people don’t care at all about, and others that people are passionate about. How is someone meant to act if they cannot offset within the same domain?

    Suppose I don’t give a toss about animal welfare and I want to continue eating the same amount of animal products indefinitely with no scope for adjustment. However, I’m really concerned about the educational standards of poor children. Is my (alleged) X units of damage in the animal-welfare domain offset-able by Y units of benefit in the child-welfare domain?

    I believe most people’s ethical considerations are thus partitioned.

  53. Curcnha says:

    Self defence and defence of others would seem to be cases where the action of harming, even killing, another person is partially or fully morally offset by the saving of oneself or one’s charge. Likewise, the act of killing enemy combatants could be seen as offset by the ongoing perceived moral virtue of defending one’s country, family, or other valued thing. If we want to go less extreme, then there is stealing to feed one’s family, or white lies.

    While I would generally conclude that the moral difference here is one of attribution of intent, it could also be seen as the damaging action being made neutral or even positive through being offset by its direct connection to the positive action, the greater good, or whatever approach one chooses to utilise.

    Perhaps we apply offsets in all three levels but the more identifiable and/or personable a victim is, the more the harm needs to be directly justified?

    • publiusvarinius says:

      Self defence and defence of others would seem to be cases where the action of harming, even killing, another person is partially or fully morally offset by the saving of oneself or one’s charge.

      I don’t think that’s a fruitful way of looking at it.

      Scenario 1: Your pistol is fully loaded. The Nazis are breaking down your door. Their aim is to kill the Jews you’re hiding in the basement. The building is surrounded, and the probability that you manage to save anyone is negligible. Is shooting at the Nazis immoral?

      Scenario 2: You accidentally disrespected a local deity. An angry mob of N people is now gathering to lynch you. The GAU-8 on the roof could eliminate them all. For which values of N does your continued existence offset their death?

    • Dedicating Ruckus says:

      Yeah, self-defense isn’t best explained by an offsetting framework. Closer to correct here is the deontological account, where murder is defined as killing an innocent, and then self-defense isn’t murder because the people being (potentially) killed aren’t innocents.

      This is one of those cases where utilitarianism sort of works in most normal cases, but pushed beyond an edge it starts giving you wildly immoral results.

  54. szopeno says:

    Whenever I read posts like this, I start to wonder where exactly the difference lies between utilitarianism/consequentialism and virtue-based ethics or deontology.

  55. Steve Sailer says:

    Can we kill an enemy, then offset it with enough money to save somebody else’s life?

    That’s pretty common in a lot of cultures. Evelyn Waugh claimed to have observed a Death Row negotiation in Ethiopia between a murderer and the family of the man he killed over how much compensation to pay the bereaved in order not to be executed. Waugh was impressed with the murderer’s insistence that he wasn’t going to ruin his heirs’ inheritance, so they might as well just shoot him now.

  56. acrimonymous says:

    Scott,
    I have a request for a feature to add to this blog.

    I’d like to read SSC posts on my smartphone on the train, but I don’t have Internet access (iPhone 4s with no cell contract). Theoretically, I could open a website in my apartment and carry the phone onto the train, but in reality what happens is that, when I get onto the train and try to scroll through a website, I can’t. And sometimes the phone tries to re-load a website and I lose it altogether.

    PDFs don’t have the non-scrolling issue and usually not the re-loading issue. So if SSC had a “Download as PDF” feature like Wikipedia articles, I could read it on the train.

    Sorry to bring up something that would probably only benefit me…

    • Itai Bar-Natan says:

      One thing you can try to do is save the page as an HTML file and view that on your smartphone. That should prevent a reload from destroying the information. However, I don’t know which smartphone browsers allow you to view locally saved HTML files, or whether this helps with the scrolling problem.
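
      (A minimal sketch of the save-a-copy idea, assuming you can run Python on a computer and then transfer the file to the phone; the URL is a placeholder, and images or stylesheets the page references would still need a connection unless saved separately.)

      ```python
      # Minimal sketch: fetch a post and save it as a local HTML file for offline reading.
      # The URL below is a placeholder, not a real post address.

      import urllib.request

      url = "https://example.com/"  # placeholder for the post's URL
      with urllib.request.urlopen(url) as response:
          html = response.read()

      with open("offline_post.html", "wb") as f:
          f.write(html)
      ```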

    • Douglas Knight says:

      Open the page in Safari. Click the share icon, the square box with the arrow pointing upwards. This will show a row of colorful icons (and beneath that a row of monochrome icons) such as mail, message, notes. Scroll through the row. To the right is an orange icon called “Save PDF to iBooks.” Also, there are apps like Instapaper and Pocket that save webpages without converting them to PDF.

  57. Jack V says:

    …huh.

    OK, I found that three-way division INCREDIBLY helpful. It seemed to clarify a whole lot of things that were confused in my head, like “most moral dilemmas”.

    On the other hand, I think the carbon offset example is exhibiting something different. Like, if you do something seemingly wrong, but don’t break any legal or moral codes and prevent all the possible harm that could be done by it, you haven’t really done anything wrong. If morality is based on the consequences of your actions, and if you offset more carbon than you emit, then… the consequences are GOOD. Whether or not we can offset axiology (I think I agree with your argument but I’m not sure), I can’t see any morality on which, if carbon offsetting works[1], there’s any ethical problem there. Carbon IS fungible whether or not morality is.

    [1] That is, assuming you offset in some way that actually works. There are legitimate questions whether carbon offsetting as an industry works or not on practical or moral levels, so it might not ACTUALLY be ok, but I don’t think that affects the way we consider the moral questions.

    • Jack V says:

      I have a similar but opposite problem to the vegetarian example. I agree with some of it, that in the short term, increasing animal welfare is a reasonable offset to eating meat. And that people who need meat to be healthy NEED to eat meat and that outweighs most concerns about the animals.

      But to me, “don’t kill or harm an animal” IS a moral law, just like not killing or harming humans (less so for me, although some people might say more so). So for me offsetting animal harm potentially has problems:

      1. Helping one animal doesn’t offset harming another animal (although improving farmed animal conditions in general does contribute to making farming animals ok).

      2. We need to pay attention to the end game we’re driving towards. Is that “mostly humane farming and there’s meat but it’s a bit more expensive but we subsidize it for people who can’t afford it”? That’s mostly ok. Or is it “farming is incredibly cruel but we make up for it by reducing wild animal suffering somehow?” I’m not convinced this is a moral set-up.

    • Virtua Lyric says:

      I’m also concerned with whether offsets actually work on a practical level. What if some acts damage the environment in irreversible ways, while others can be reversed or “cleaned up”? If you don’t pay attention to which ones you’re doing, you may be doing a lot of irreversible damage and offsetting it by fixing other kinds of damage. Eventually you’ll end up with so much irreversible damage that the environmental disaster you were trying to avoid just happens anyway and none of your offsets could possibly prevent it.

  58. P. George Stewart says:

    I like the distinction between axiology and morality, but as a side-note, I’ve always been dubious about linking morality and law. I know lots of people do it, but I suspect that may be part of what distinguishes true liberalism (including libertarianism) from the Left and the Right.

    Law is “suum cuique tribuere”, to each his own, punishment must fit the crime, etc. It’s really a mechanism for producing social order, including orderly transfer of property, and its only constantly contiguous connection with morality is that like any other human endeavour, it must proceed morally.

    Now some people don’t think that, and think that law is a mechanism for producing a certain kind of moral order – the Right’s all about indoctrination into “traditional values,” and the Left’s all about indoctrination into the more-or-less Rousseauian idea.

    Liberalism, true liberalism, classical liberalism and libertarianism, say, “No, law’s not anybody’s particular tool to do good with as they see it, it is (or ought to be only) the mechanism of keeping sovereign individuals interacting peacefully.”

    I suppose one could say that last bit has some connection with morality – peaceful interaction is a desideratum. But that’s pretty much a moral lowest common denominator for most human beings of whatever political persuasion (just some are willing to go through an egg-breaking period to get the omelet).

    • I don’t mean to harp on you, but I’m going to write something that will come across as very disdainful and disparaging of what you wrote. I’m actually glad you wrote this post because it is a perfect example of one of Karl Marx’s greatest pet peeves (and one of mine, if you can’t tell).

      It’s when classical liberals try to universalize their particular, historically-specific, class-contingent preferences as universal human preferences. The Catholic Church, the Ancien Regime, and the feudal aristocracy were all equally bad about falsely universalizing their own particular preferences as universally good for society, and Marx was hard on them too. But Marx held classical liberals to a higher standard—possibly because Marx was very much a student of classical liberalism. He felt like the classical liberals, the heirs of the Enlightenment, ought to have known better. But here they were, falling into the same convenient, self-serving, sloppy thinking.

      “All you on the Right want to advance your particular preferences, and the same with all of you on the Left, but we classical liberals only care about advancing individual sovereignty (which is obviously some objectively, universally good thing, amirite?)”

      Marx would probably respond, “How convenient that you defend a version of individual sovereignty that happens to amount to freedom to buy and sell and be secure in one’s property! At least, in your mercy, you are willing to defend this freedom for the rich and propertyless alike. What charity! /end sarcasm”

      That said, Marx was a huge supporter of the classical liberal project for what it was worth, i.e. insofar as it moved humanity past the obsoleted servile modes of production. He was, for example, a huge supporter of the Union cause during the American Civil War. (And likewise, Marx saw the servile modes of production—slavery, feudalism, and “Oriental Despotism”—as improvements over primitive communism, or as Marx called it in Victorian-Era fashion, “savagery.”)

      For Marx, conceptions of freedom always needed to be evaluated in a historically-specific way according to what was materially possible at a given time.

      The first triumph of freedom was humankind’s attainment of a limited amount of freedom from the vagaries of nature. Agriculture. Irrigation. Calendar systems to predict and master the changing of the seasons. Clothing and strong shelters to shield us from the elements. Mastering nature. Marx was all about that. Even slaves and serfs, as unfree as they seem to us now, shared in this more limited conception of freedom. In that sense, agriculture and the advent of class society was in their interest, though the lion’s share of the benefits of class society went to their superiors. At least the advent of class society made them truly “human,” which meant above all to have a soul, to be made in God’s image—to be a thinking being to whom God granted dominion over all of the dumb, unthinking plants and animals, so that humankind might master them and no longer be at the mercy of them. “Oh, those poor hunter-gatherer savages at the mercy of nature and ignorance! It’s too bad that they will never taste sweet nectar of TRUE FREEDOM like you Christian serfs have!” So might say a priest or a feudal lord. And not without justification! We may laugh at this now, but that is simply because it is now materially possible for us to pursue a more expansive conception of freedom which puts the older conception of freedom to shame. So might the communists of the future look back on people in capitalist society with pity for our horribly limited and, to their minds, unambitious conception of freedom—if Marx ends up being correct!

      The developing forces of production under early capitalism made it practically possible for humankind in general to pursue an even more expansive conception of freedom as “individual sovereignty over self and property.” The “bourgeois” conception of freedom as individual sovereignty over one’s person and property was a real improvement, and probably the most expansive conception of freedom that was materially possible in the era of early capitalism. But Marx suspected that an even more expansive and substantive conception of freedom would become materially possible for all at a certain point—freedom from want, freedom from the circumstances of one’s birth and inheritance, freedom to order one’s activities as one saw fit.

      “Well, you might as well desire freedom from gravity! Want, need, desire, and the need for incentives to get people to do work that they would otherwise not like to do—these things are just as eternal as gravity!”

      Marx would argue that we might yet still obtain freedom from gravity—not by changing the laws of nature, but by mastering them so fully that to oppose gravity will require no more effort than it takes today to oppose the Polio virus.

      Likewise, we might indeed obtain freedom from want and work someday, as implausible as that still might seem. But it will require such a level of technology and AI automation that any communist calling for revolution at this moment today is deluded. As long as compulsory labor is needed and the production of our wants is not fully automated, then involuntary labor will quickly re-emerge in some form, whether as wage-labor, gulag-labor, or some other perverted form that falsely drapes itself in communist red.

      • P. George Stewart says:

        All this seems to me to be a massive conflation of freedom and power.

        We don’t have “freedom from gravity” because we don’t have the power or ability to nullify its effects.

        It’s basically a philosophical confusion between two uses of “freedom” that have a family resemblance, but no essential connection. “Constraint” imposed by other people is a different thing entirely from “constraint” imposed by nature. The former is to do with politics/law, the latter to do with science and technology.

        Politics is not a species of technology, and it’s a monstrous mistake to confuddle the two. No doubt technology will free us from all sorts of natural limitations in due course, but you can’t force people to think (and thereby free us from natural constraints and limits) by political or legal means.

        This conflation of freedom and power, of politics and technology, is one of the main reasons (the other reasons have more to do with economics) why we’ve never had “real socialism,” and why all attempts to implement “real socialism” have turned to shit – either you get the scum rising in a brutal, Kafkaesque abattoir (early to mid 20th century); or Marxoid idealism mutates into the Gramscian/Alinskyite Machiavellianism of contemporary “social justice” and “identity politics;” or (for more delicate sensibilities), you end up with the mute, existential angst of the standard Postmodern faith-leap to the Left.

        The current globalist/technocratic refresh (which you seem to be cheering on) bids fair to end up with a Borg-like situation in which all of us are meat puppets teleoperated by AI. But no doubt that’ll end up not having been “real socialism” either 😉

        GIGO.

        • What you idealize as freedom from human constraints (individual sovereignty over property) is itself just another type of social constraint on what other people are allowed to do. It has its uses, but at the end of the day it is just a way of organizing the social division of labor and incentivizing others’ contributions to it. It is a practical tool, or rule, for organizing human behavior in certain contexts of scarcity that may someday become obsolete, not a timeless and eternally correct ideal.

          • P. George Stewart says:

            The liberal rule doesn’t idealize “freedom from human constraints” as some sort of separate, treasured category. When I’m making the distinction between human constraint and natural constraint, and saying politics and law are about the former, I’m not saying something like, “All human actions should be unconstrained, woohoo!”

            Obviously some actions are being constrained – actions that harm or constrain others in the (naturally) free and harmless exercise of their powers.

            That’s why the rule is a procedural way of apportioning control-of-stuff (as opposed to a centrally-directed, planned apportionment of control-of-stuff), why the initiation of violence, force, constraint, etc., is an important part of the concept, and why the liberal rule is a response to the initiation of violence, force, constraint, etc.

            Fundamentally, the situation is this: human beings, all human beings regardless of race, creed, etc., have natural capacities and powers, which often involve designs on stuff and the manipulation of stuff. The rule is that we (the rest of us) should allow them to exercise those capacities and powers so long as they don’t harm others. There’s obviously lots of room for debate on what “harm” consists in, but one important type of harm is certainly the initiation of constraints, violence, etc., on others.

            Now, the fact that this rule arose in a certain historical/economic context doesn’t speak at all to its universality or lack thereof (any more than the fact that it arose mainly among white people means it only applies to white people). It stands on its own merits, or lack of merits, as a utility-maximizing, human-flourishing-maximizing rule, applicable (or not) to all human beings at all times in all possible contexts. (When I say “or not” here, again, the fact that “it arose contextually” is not an argument against it.)

  59. andrewducker says:

    I think that the “Can you pay for offsets on CO2?” question is qualitatively different to “Can you pay for offsets on murder?”.

    “Emitting some carbon dioxide” isn’t a bad thing. “Increasing the carbon dioxide levels in the atmosphere” is a bad thing. Offsetting means that the net change to the carbon dioxide in the atmosphere is zero (or negative). And, therefore, your overall impact isn’t “some bad and some good” it’s “no change”.

    Whereas if you kill someone and save someone else’s life then your overall result is “some bad and some good”, and the two don’t cancel each other out.

    Now, you might think that the mathematics of war are that you sacrifice some people to save others. But the morality there is “We need to do a thing which is a moral good. What will it cost?” and not “I fancy murdering some people, what would I have to do to make it ok?” – the answer being “Nothing”. It’s never “ok” to have ended up with dead people because of choices you make; it’s an awful thing which you might sometimes decide is better than the alternative.

  60. tmk says:

    > Emitting carbon doesn’t violate any moral law at all
    > Eating meat doesn’t violate any moral laws either.
    > they’re working to establish a social norm against meat-eating

    I think you gloss over this too quickly. Some people are clearly trying to establish social norms against carbon emission and meat-eating. Which does make sense: social norms should be continually updated to optimize axiology. For carbon emissions this seems difficult. A person cannot have zero emissions, so there can never be a hard and fast rule like “don’t murder”. Softer moral rules like “don’t emit too much carbon” are difficult. So you can make an argument that trying to establish morals against carbon emissions will not work, but you should make that argument explicitly.

    A social rule against meat eating feels more productive, but of course we are far away from that today.

  61. Machine Interface says:

    Rationalist discussions of morality are such a weird thing. Not that the arguments and points made are themselves bad, but rather that morality is a fundamentally irrational concept, one that cannot be grounded nor indeed ever shown to represent anything other than the particular preferences of an individual or group, which have no more merit or truth to them than the preferences of any other individual or group.

    And yet here are rationalists, just *assuming* that true objective morality exists somewhere, hardcoded into the fabric of the universe, and that if we think inductively *really hard* about it, we will eventually find it! This is even weirder when the rationalists in question are also atheists.

    And I know how derisive that sounds, but I’m actually genuinely curious – I don’t understand how the vast majority of rationalists can embrace any moral view other than moral antirealism.

    • szopeno says:

      Actually this is the same conclusion as mine after a long discussion with one Calvinist (I am an atheist). My answer is that because our morality is hardwired, there is no way we can avoid seeing morality as objective. I like chocolate, especially when it’s formed into nice, aesthetically pleasing bars. I know there is no objective reason to prefer them over chocolate that has just been put into formless, hairy balls, but this does not prevent me from still preferring one form of chocolate over another (or chocolate over a separately given equivalent of its nutritional content).

    • The non-realist positions include relativism and constructivism, in addition to outright nihilism. Some of them can look and work like objectivism.

    • Matt C says:

      Basically, what szopeno said. Moral feelings feel true and they feel important, even to rationalists. You’re not going to turn off your feelings about morality. If you’re a cerebral systematizing type of person, you’re going to be pulled very hard toward systematizing those feelings.

      I suspect a lot of rationalists would acknowledge that consequentialism isn’t actually an independently and objectively real truth, but rather “we’ve got to roll with something, what else are you gonna use”. Me, I don’t feel like this is true on a personal level, but I would agree with it on a policymaker level.

  62. Telofy says:

    I would much prefer you didn’t link to Harrison’s piece without heavy disclaimers. I think it’s the best example of an Arthur Chu–type attack on niceness, community, and civilization that I’ve encountered over the past two years in my effective altruism and animal rights circles.

    Animal Charity Evaluators has published a detailed response to Harrison’s claims because they require detail: some claims are interesting but in no way the attack on ACE that he makes them out to be (e.g., ACE is not against animal rights in any way and even recommends an organization with basically that term in its name); others are the result of one of the 150+ pages on the ACE website going way out of date and ACE failing to mark it accordingly; others are attacks that – put more nicely – would’ve been appropriate criticisms of ACE in 2014; others hold ACE to standards they might live up to if they had three-digit millions of funding and an order of magnitude more staff; others are conspiracy theories I can’t take seriously no matter how I try; etc.

    To the ACE outsider, all these problems – which have long been pointed out to Harrison – are brilliantly hidden behind scientific lingo and statistical analyses. An abuse of exactly the language that conveys high status and reliability in our circles.

    I have pointed out to many people that ACE has heard the same criticisms of its estimates – but put more nicely – in 2014 and has reacted by adding disclaimers over disclaimers to its point estimates. Things along the lines of “these estimates cannot be taken literally,” “these estimates contain factors with error bars spanning three orders of magnitude,” etc. This was a purely communicative change already. ACE was long aware that it cannot rely on these models and has made them only one of seven evaluation criteria.

    The criticism continued. It seemed disproportionate to me at this point since many people in the EA movement put out estimates like that – including GiveWell – and everyone seemed to understand what caution was required in handling them. But ACE reacted again by pledging to reimplement their estimates, this time with Guesstimate, so that they could give confidence intervals rather than point estimates to better convey their extreme uncertainty. They spent a year working on this update, then published it, even hiding the estimates completely from the summaries and only linking to them from a “Documents” tab. And only then did Harrison start attacking them for what were effectively their 2014 “sins.”

    When I explain this to people, they sometimes steelman Harrison in ways that I can understand. They say that by publishing numbers one assumes the responsibility for making sure that other people don’t misuse these numbers and for contacting them when one notices such misuse. I don’t share this notion, but it seems sufficiently moral in nature that I can’t strongly argue against it. (Personally I think your “Vegetarianism for Meat-Eaters” estimates were sufficiently cautious not to constitute misuse, but ACE seems to be even more cautious than me and, I think, disagrees.)

    Even setting aside its factual wrongness about ACE’s actual positions, Harrison’s piece is the prime example of why sensitive people like me withdraw from online discussions, grow afraid of assuming leading or public-facing positions at NGOs, and leave forums like the EA Facebook group more and more to the similarly aggressive and unreasonable hordes (or so I’ve heard, since I don’t go there anymore). It’s the type of attack on niceness, community, and civilization that I don’t think you should give airtime to.

  63. Peter says:

    A lot of this is there in Mill, who literally wrote the book on Utilitarianism (http://www.gutenberg.org/files/11224/11224-h/11224-h.htm). Chapter V of Utilitarianism, “ON THE CONNEXION BETWEEN JUSTICE AND UTILITY.”, has a fair amount of layering going on. There’s a paragraph about Duty: “It is a part of the notion of Duty in every one of its forms, that a person may rightfully be compelled to fulfil it. Duty is a thing which may be exacted from a person, as one exacts a debt.”, and following that paragraph, “This, therefore, being the characteristic difference which marks off, not justice, but morality in general, from the remaining provinces of Expediency and Worthiness; the character is still to be sought which distinguishes justice from other branches of morality.”

    So, as far as I can tell, for Mill, utilitarianism is a general theory of goodness, and goodness comes in a variety of forms. Presumably Expediency is all of the self-serving utilitarian good – which I think corresponds to what Scott calls “axiology” – that people don’t need to be encouraged to do, like feeding yourself. Worthiness is presumably things like feeding the poor. What Mill calls morality Scott calls morality. For Mill, justice is a subset of morality, and is about punishing people who do harm; I think this corresponds to the actionable harms that come under the heading of “immorality”. Still, justice is bigger than law, and includes punishment by the opinions of society, or by your own conscience.

    There’s also Kant (one of the two main traditional rallying points for people who don’t like utilitarianism), who has Perfect Duties and Imperfect Duties. Perfect Duties are things like not murdering, what Scott and Mill call morality. Imperfect Duties are things like charity, and developing your talents: not doing anything about these duties is wrong, the more virtuous a person is the more they will pursue them (subject to never going against a Perfect Duty, of course), pursuit of them for the sake of duty accrues moral worth.

    So, yeah, different layers/areas/something of goodness/axiology/morality/something with different sorts of rules seem to be recognised as very much a thing, although as to how many layers there are and what goes in what layer and what names we should give to those layers seems to be something we can argue about until an Outside Context Problem comes along to make it all irrelevant.

    Of course, although Mill and Kant were talking in general terms more than a century ago, I don’t think they anticipated the modern “offsetting” thing, which is why we need Scott. Or maybe they did, I mean my amateur Mill scholarship isn’t particularly thorough, and I’m even shakier on Kant.

  64. Andrew Hunter says:

    Can Bill Gates nuke entire cities for fun, then build better cities somewhere else?

    Not only can he, but in the case of San Francisco, he is morally obligated to do so.

  65. Maznak says:

    Ok. Would you have one randomly selected person painlessly killed to relieve a billion people from a mild headache that would last some 15 more minutes? (I have seen this dilemma somewhere on Steven Landsburg’s web site, thebigquestions.com)

    • Said Achmiz says:

      This dilemma is usually given much more starkly in rationalist circles.

      But we can make it even more stark—and far more interesting. So here are some questions that I would ask, were I ever to encounter a utilitarian willing to answer them:

      1. Would you, personally, submit to 50 years of torture, to save 3^^^3 people from a dust speck?
      2. Would you condemn your best friend to 50 years of torture, to save 3^^^3 people from a dust speck?
      3. Would you condemn your mother (grandmother, kid brother, etc.) to 50 years of torture, to save 3^^^3 people from a dust speck?

      Bonus questions for utilitarians who think that animals have moral value (m is how much more valuable you think 1 human is than 1 chicken):

      4. Would you condemn your best friend to 50 years of torture, to save m * 3^^^3 chickens from a dust speck[1]?
      5. Would you condemn your mother (grandmother, kid brother, etc.) to 50 years of torture, to save m * 3^^^3 chickens from a dust speck?

      [1] Or equivalent discomfort.

      P.S.: Substitute anything you like for “dust speck” in the questions above, if the original formulation seems absurd to you. Take it all the way up to “death”, if need be.

      • Jiro says:

        You can eliminate the torture and dust specks. “Would you kill 20 humans, to save 20 * m + 1 chickens?”

      • Maznak says:

        Right. But there was another, even more interesting point in Landsburg’s dilemma (applicable to all the other similar dilemmas).
        People routinely take a one-in-a-billion risk of death to avoid mild discomfort or to get some mild pleasure. Like driving to buy something they don’t really need at that moment.
        So by killing this one randomly selected person, you are only giving them what they in fact want.

        • Dedicating Ruckus says:

          “Kill one person, with probability 0.000000001” is really not the same as “kill one selected person out of a billion people, with probability 1”. Nor are either the same as “impose a 0.000000001 risk of death on a billion people”.

          Much as utilitarians may wish to throw away all real-world information just so they can get something mathematically commensurable, morality doesn’t actually work like that.
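
          To put numbers on the distinction being drawn here – this is an added illustration, not part of any comment; only the one-in-a-billion figure comes from the thread – a minimal Python sketch:

```python
# Added sketch: the three formulations differ in expected deaths and/or in how
# those deaths are distributed, even though the per-person risk looks similar.

P = 1e-9            # one-in-a-billion risk, as in the thread
N = 1_000_000_000   # population of one billion

# (a) "Kill one person, with probability 0.000000001"
expected_a = P * 1                    # ~1e-9 expected deaths

# (b) "Kill one selected person out of a billion people, with probability 1"
expected_b = 1.0                      # exactly one death, guaranteed

# (c) "Impose a 0.000000001 risk of death on a billion people"
expected_c = P * N                    # 1 expected death, but the actual count
p_no_deaths_c = (1 - P) ** N          # is Binomial(N, P); ~37% chance of zero

print(f"(a) expected deaths: {expected_a:.1e}")
print(f"(b) expected deaths: {expected_b:.1f} (with certainty)")
print(f"(c) expected deaths: {expected_c:.1f}, P(no deaths) = {p_no_deaths_c:.2f}")
```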

    • carvenvisage says:

      It might depend on how many driving accidents, deaths, and other disasters the 15-minute headaches would cause.

      Also, if you’re presented with this choice, find the person responsible and crucify them. The thing about devil’s bargains is that a devil can happily tell you to fuck off and do both if he’s willing to threaten either.

  66. theory says:

    This model also has the useful effect of demonstrating why the Copenhagen Interpretation of Ethics poses such a problem for many people. Briefly laid out, the Copenhagen Interpretation of Ethics says that people who interact with a problem while benefiting off of it are criticized for doing so, even if they help alleviate or solve the problem. For example, a startup paid homeless people to walk around acting as mobile wi-fi hotspots during SXSW; PETA paid families’ water bills if they agreed to go vegan; New York City tested a program for the homeless while using a control group of homeless people who were rejected from the program.

    These are all examples of intuitively objectionable actions (at their core, benefiting in some way off the less fortunate) that nevertheless make the world an axiologically better place.

  67. Wrong Species says:

    Have utilitarians talked about the diminishing marginal utility of each additional human life? I feel like it should be obvious that the first billion are far more important than the next billion but utilitarians generally talk about these large numbers of people as if they were just as important.

    • Dedicating Ruckus says:

      Utilitarianism generally speaking sums utilities strictly linearly (I think?). I consider this to be a mistake, and it’s one of the main ingredients in stupid results of utilitarianism, like Pascal’s Mugging or the torture-vs-dust-specks problem.

      • Witness says:

        Indeed. I don’t think this is limited to discussions about morality, either. Lots of people assume linearity (and constant/static valuations of various parameters, etc.) when they’re asked to do calculations, sometimes because that’s the extent of their own mathematical comfort level and sometimes because that’s the expected comfort level of their audience.

        People’s intuitions often get them closer to “correct” than trying to force them to do the math without an adequate math background.
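
        As a toy illustration of the linearity point Wrong Species and Dedicating Ruckus are making – an added sketch; the logarithmic form is an arbitrary stand-in for “diminishing marginal utility”, not a claim about the right aggregation:

```python
# Added sketch: how linear vs. diminishing-marginal-value aggregation treats
# "one more life" at different population sizes. log1p is an arbitrary choice
# of concave function, purely for illustration.
import math

def linear_value(n_lives: int) -> float:
    return float(n_lives)           # every additional life adds the same amount

def diminishing_value(n_lives: int) -> float:
    return math.log1p(n_lives)      # each additional life adds a little less

for n in (10, 1_000_000, 1_000_000_000):
    lin_gain = linear_value(n + 1) - linear_value(n)
    dim_gain = diminishing_value(n + 1) - diminishing_value(n)
    print(f"population {n:>13,}: linear gain = {lin_gain:.2e}, "
          f"diminishing gain = {dim_gain:.2e}")
```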

    • Joe says:

      Not sure I see the justification for this, since utility is something experienced by individuals, not groups.

      There are lots of reasons why the number of people might be instrumentally important, though. For example, saving a life in a world of only a few hundred people might have a significant impact on how large the population can grow in the future, whereas in a world of billions of people it probably wouldn’t. And you might feel more insignificant knowing you live in a world of billions – but this is a consequence of the information you’re fed, not a direct implication of the population size, since you’d feel the same even if you’d been misled and there really were only a few hundred people after all.

    • DocKaon says:

      The utilitarianism espoused in rationalist circles seems to be the most naïve form with no accounting for diminishing marginal returns, time discounting, uncertainty, or the difficulties of comparing utility between people. Rationalists then are prone to applying this very simple model to extreme cases. That they get bizarrely non-intuitive results is not taken to be a flaw in this procedure, but instead drives many of their beliefs.

    • Yosarian2 says:

      There are different types of utilitarianism. As Scott mentioned a while back, some view the *sum* of utility important, others view the *average* utility as most important. I think the second view is probably better, so long as you hedge it a little to avoid the obvious pitfalls (like “no killing unhappy people to raise the average”).

      The idea of diminishing marginal utility seems like a really neat way to look at it, though, and it’s one I haven’t heard before; it makes sense, both intuitively and economically.

      • bbartlog says:

        The problem is that if you’re looking for a Grand System to guide your decisions, then ‘hedging a little to avoid the obvious pitfalls’ is already a clear indication that you’ve made some kind of mistake. The mere existence of so-called ‘obvious pitfalls’ means that you’re falling back on some other methods that you clearly regard as more reliable, at which point you should be asking yourself why you don’t just use that guidance all the time instead. I mean, in this particular case you can make some good average-utility-maximizing arguments that exclude most situations that call for murder, but this still leaves you, I think, in a position where you haven’t adequately justified and examined the reasons that led you to review and elaborate on this particular class of cases.
        Certainly it looks to me more like a case of someone who likes the *idea* of a grand unified theory of moral decisions, who however in practice is more about rationalizing whatever hodgepodge of heuristics society saddled them with.

        • Yosarian2 says:

          The problem is that if you’re looking for a Grand System to guide your decisions, then ‘hedging a little to avoid the obvious pitfalls’ is already a clear indication that you’ve made some kind of mistake. The mere existence of so-called ‘obvious pitfalls’ means that you’re falling back on some other methods that you clearly regard as more reliable, at which point you should be asking yourself why you don’t just use that guidance all the time instead.

          Sure. That’s true of every moral theory though. No matter what the moral theory is, if you work it through and come to a repugnant conclusion, the vast majority of people will go back and edit the moral theory rather than accept the repugnant conclusion.

          Certainly it looks to me more like a case of someone who likes the *idea* of a grand unified theory of moral decisions, who however in practice is more about rationalizing whatever hodgepodge of heuristics society saddled them with

          Not quite that simple. More accurately, I accept that:

          -You should use rational thought to try to align your personal ethics and morality towards “the greater good”, which means you have to have a system to understand what that means in as precise a way as you can. People who aren’t reflective in this way often go moderately wrong in various ways that make their lives and the lives of those around them less good than they should be, and this does a lot of harm because it’s such a common mistake to just, as you say, “accept whatever hodgepodge heuristics society saddled them with”.

          -But if you ever rationally work through and come to a conclusion that is just totally in opposition to your inherent sense of morality and empathy, stop, reset, and reconsider; probably go back to the drawing board. Something has probably gone terribly wrong somewhere, even if you can’t figure out where. People who rationally come up with reasons why they should do things that are obviously horrible for the greater good occasionally make the world a much worse place, very quickly, and if anything this is a much more dangerous trap than the first one.

          The two kind of conflict, sure, but that’s ok; if anything the occasional conflict between the two is a good safety measure to stop you from going too far off the rails.

  68. Bugmaster says:

    Correct me if I’m wrong, but wasn’t Utilitarianism/Consequentialism designed specifically to simplify moral decisions? Instead of following a labyrinthine decision tree of deontological rules and amendments, now all you need to do is just calculate the expected utility, and go with whichever decision maximizes that value.

    If we now need to build epicycles upon epicycles on top of Utilitarianism just to make it usable by humans, maybe it’s time to admit that either Utilitarianism is hopelessly flawed, or that humans are, or perhaps both.

  69. Yosarian2 says:

    Really interesting idea. I do get the distinction you’re making here, and I think it’s valid in terms of how most people view the world.

    I think the one part that doesn’t feel right to me is your order. I don’t think that Axiology leads to morality and morality leads to law; I think it’s more like Axiology leads to morality and Axiology leads to law, at least ideally. That is, we try to create moral rules that will make the world a better place (basically rule utilitarianism; “Most people in history who thought they were going to make the world a better place by killing a bunch of people were wrong, so if you think killing people is going to make the world a better place, you’re probably wrong”), and separately, we create laws designed to try and make the world a better place.

    I don’t think law and morality have all that much to do with each other. Sometimes they reinforce each other and that’s great, but often not. I don’t feel that it’s immoral to drive 70 mph in a 65 mph zone, most people don’t as far as I can tell, but I do understand the reason for the law in terms of reducing total deaths from car accidents.

    When you do allow an interplay between morality and law, you can sometimes get nasty feedback loops and circular arguments which can cause a lot of harm. “Marijuana should be illegal”. “Why?” “Because it’s immoral.” “Why is it immoral?” “Because it’s illegal!” At one point I’m sure that at least the people who thought drugs should be both immoral and illegal thought they were making the world a better place, same with the people who were in favor of prohibition, but if you’re not careful one can break free of the other and you end up with free-floating “moral rules” or “laws” or both that no longer have anything to do with making the world better and hang on long past the point where they are doing more harm than good.

    On a side note, I do see a lot of evidence that people do see the world in this way. For example, I think the people who are opposed to carbon offsets and think that carbon taxes or cap and trade aren’t enough and all that seem to be the people who think carbon pollution is an actual moral issue. They tend to say things like “But you’re letting people POLLUTE, and all they have to do is pay some money!” Meanwhile, I can’t track the quote down now, but I heard a major proponent of finding an economic solution to climate change say, “Climate change is too important to be just a moral issue”, which I found to be a very interesting way to put it.

  70. suntzuanime says:

    How is this “contra” Askell? Your position seems at most 30 degrees deviated from hers.

  71. deciusbrutus says:

    Now suppose you could kill N people, for a cumulative X^N chance of curing all cancer for everyone. But you only get one chance to do so.

    for X=1, you kill one person.
    for X=.999999999, presumably you kill at least one person.

    How many people do you kill for X=.5? .1? .01? .001? .0001? [1/the number of cancer deaths expected in the next year]?[…next 10 years]?[…next 100 years]?[…ever (accounting for medical advances other than from malevolent medical science genie)]?
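
    A minimal sketch of the naive expected-value arithmetic behind that question, under one reading of the setup (an assumption added here, not stated in the comment): each killing independently has chance X of producing the cure, so N killings give a cumulative 1 - (1 - X)^N chance, and the cure is worth some placeholder number of lives L.

```python
# Added sketch. Assumptions: each killing independently has probability X of
# yielding the cure, so N killings give a 1 - (1 - X)**N cumulative chance;
# curing all cancer forever is taken to be worth L lives. A naive utilitarian
# keeps killing while the marginal expected lives saved exceed the life taken.

def naive_kill_count(X: float, L: float, cap: int = 10**7) -> int:
    n = 0
    while n < cap:
        # extra cure probability contributed by the (n+1)-th killing, in lives
        marginal_gain = X * (1 - X) ** n * L
        if marginal_gain <= 1.0:
            break
        n += 1
    return n

L = 10_000_000  # placeholder: lives saved by a permanent cure for all cancer
for X in (0.5, 0.1, 0.01, 0.001, 0.0001):
    print(f"X = {X}: a naive utilitarian stops after {naive_kill_count(X, L):,} killings")
```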

  72. Mary says:

    Oddly enough I was just reading Young Sentinels by Marion G. Harmon*, in which a young character, acquiring superpowers, accidentally kills someone. (Really accidentally — no negligence involved.)

    It is suggested to him that he can help make up for this by saving lives with his powers. To be sure, this is in part to help him calm down.

    But, of course, murder, manslaughter, negligence, and pure accident can all produce the same body count. What is the weight of intent in offset?

    *Third book in the series, you may want to read the first ones first.

    • uau says:

      Even without considering intent as such, if the victim of an accident is a “random bystander” then you could likely save someone from the same demographic the victim was selected from. This could offset the death in the sense that anyone would overall have at least as good a chance of surviving the year as they would’ve had without you existing. This would be equally true whether it was an honest accident or you selected a random victim to kill on purpose, but would not be the case for typical murder (“people I really want dead and could have ended up murdering first if not for chance” is likely not a large enough demographic).

      If the victim is not random enough then this does not work. If you for example kill one of just a couple of friends who hang around while you experiment on something, then unless your friends lead really risky lives you likely won’t have an opportunity to save one.

  73. Baeraad says:

    Interesting.

    I can’t help but feel that that’s an extremely cold way of looking at morality – “you can make up for any amount of objective harm, but going against peer pressure is unforgivable.” But on the other hand, I can’t deny that it does pretty much sum up how sane people (here defined as “people who don’t claim that refusing to donate money to save one life in Africa is morally equivalent to stabbing your neighbour”) tend to intuitively feel. Including me.

    So I suppose we can at least agree that this makes for an excellent model of how human intuitive morality works, irrespective of whether the way it works makes logical sense or not?

  74. Christopher Hazell says:

    In my opinion both Alexander and Askell are making this insanely overcomplicated. I don’t know enough about utilitarianism to know if that’s the reason, but I noticed something in both of their essays, and in skimming the comments, I’ve only seen one other person address it.

    Basically, in both essays the nature of the examples changes as the essay goes on, but the change is not remarked on at all. Here’s the change:

    The early examples are things which are good, but which have bad side effects (e.g. tourism in a foreign country). Their later examples are things which are bad in and of themselves (e.g. murdering a stranger). I know that’s not the most rigorous way to put it, so let me sort of explain through example:

    One of Askell’s examples of moral offsetting is when you have to work late and miss a friend’s game, so you offer to buy them dinner to make up for it. So imagine the following scenario: Your whole team at work is about to finish a big project (let’s stack the deck and say you work at a non-profit, and completing this project will make the world a better place). In order to support the project, you have to work late. But you promised your best friend that you’d come to her game tonight! You’re about to call her, when all of a sudden she phones you: “Hey, I know you were going to come to my game tonight, but the stadium collapsed. Don’t worry, nobody was hurt, but obviously we won’t be playing tonight. Actually we’re having an emergency meeting about how to deal with this, and it’s going to last a while, so I don’t think we can hang out tonight, but let’s totally get drinks and talk about it sometime later in the week!”

    So, here’s the intense moral dilemma: Is it okay to work late in those circumstances?

    I think pretty much anybody would answer, “Uh, yeah, of course it is? Duh?”

    You can build a parallel example with vacationing in Europe. Suppose somebody invents a miracle solar powered plane that is exactly as quick and as safe as our normal planes. Obviously it emits no carbon. Is it morally acceptable for an environmentalist to ride this pollution free airplane?

    Okay, this is working well, so let’s go on to a later example, spitting on a stranger in the street. So if… um, like, say I want to, um… If, like, somebody has a being spit on by strangers fetish, and you, um…

    Okay, now it’s hard to even know where to start. To me, this indicates that spitting on a stranger, or murdering them, for that matter, is a different kind of thing than vacationing in Europe.

    It’s easy to come up with a hypothetical in which we get the benefits of a vacation without the drawbacks. It’s much harder to do that when it comes to spitting on a stranger, because, like, what even are the benefits of spitting on a stranger?

    I think the reason Alexander and Askell are having so much trouble figuring out how to be morally opposed to murder offsets is that they’ve fundamentally misunderstood what the offset ought to be. Here’s what strikes me as a much simpler and much more powerful approach to offsets that squares with our moral intuitions:

    A moral offset is not an effort to bribe your way out of bad behavior; a moral offset is an attempt to mitigate or eliminate the negative consequences of otherwise worthwhile behavior.

    If we combine this with the observation that several people here have made that human beings are not fungible, then everything snaps into place.

    A murder offset is not morally acceptable, because the offset does not mitigate the harm. For example, one of the harms of murder is the intense grief suffered by the victim’s loved ones. Donate to charity, save somebody else’s life, raise your children well, none of that will comfort the relatives of the stranger you murdered just to see what it would feel like to kill a man. The “offset” is not morally permissible because it has done literally nothing to mitigate the harm of the murder. All of the moral harms of the murder are still there, whether you “offset” or not.

    On the other hand, that carbon offset you bought, which reduces the net amount of carbon in the atmosphere even after your plane trip, that has mitigated, or arguably entirely eliminated, the negative consequences of the action, and left you with the beneficial ones.

    I feel that this is a much more elegant way to think through this issue.

    • Said Achmiz says:

      I basically agree with you, but want to note that yes, this is indeed a utilitarianism thing. Utilitarianism rejects the notion of the separateness of persons—which is one of the big reasons why I reject utilitarianism. To quote an old essay of mine on the subject:

      … if I torture Bob to save a million people, well, maybe in some aggregate sense that’s great. But it sure is pretty bad for Bob. The fact that a million people are saved as a result doesn’t make the outcome any better for Bob. No amount of aggregation makes that fact about the world any less bad. And so in an important sense, this world is worse than a world where Bob doesn’t get tortured. Is it, in a different sense, better than the world where Bob is fine, but that million people die? Yeah; for those million people, it sure is. But these measures of “how good the world is” cannot be reconciled by merely comparing the number 1 to the number 1,000,000. It is not obvious that the latter world is simply better, period, from some objective, impersonal viewpoint. (It’s not even obvious that such a viewpoint may be coherently postulated.)

  75. Matt M says:

    I’m pretty late and the comments are probably already dead, but I did want to address this specifically.

    Emitting carbon doesn’t violate any moral law at all (in the stricter sense of morality used above). It does make the world a worse place. But there’s no unspoken social agreement not to do it, it doesn’t violate any codes, nobody’s going to lose trust in you because of it, you’re not making the community any less cohesive.

    I don’t think this is quite right. I think that there are different moral codes. There are basically universal society moral codes, which include “obey the law” PLUS other fairly basic, non-controversial things such as “Don’t lie to people, don’t be a jerk for no reason, etc.” These are essentially mandatory and non-negotiable. You can’t opt-out of them without becoming a social pariah. But there are also more specific, voluntary, and restrictive moral codes which people can opt-in to.

    I think one of the reasons non-environmentalists object to the idea of say, carbon offsets for air travel, is that, in their view, environmentalists often do, in fact, speak of environmentalism in strictly moral terms. It’s certainly true that if you were in a public park in San Francisco, and you threw an aluminum can into a trash can instead of a recycle bin, you’d get plenty of dirty looks and maybe someone would even confront you about it. This is environmentalism turned moralism. And in the public policy debate, climate change related topics are clearly often referred to in moral phraseology. The same is true for animal rights. PETA uses phrases such as “fur is murder” and “the chicken holocaust” to clearly imply that, in their belief, these are moral issues and should be treated as such. And of course, this isn’t just a blue-tribe issue. The right-wing version of this is conservative/religious moralizing. A Senator who crusades against the evils of homosexuality who is caught having an encounter in a highway rest-stop will be raked over the coals, mostly by people who do not see homosexuality as inherently immoral. Because the nature of his public positions is such that he has self-identified as someone who should be judged by a more strict set of voluntary morals, ones that prohibit homosexuality. Just as someone who retweets PETA propaganda is essentially saying “I subscribe to a stricter moral code and think everyone else should too.”

    If you want to use the bludgeon of morality to advance the goals of your preferred policies, you should be prepared to be judged by that version of morality yourself. If you believe that emitting carbon is immoral, then carbon offsets are not any more acceptable than lying or stealing or anything else. And if you lecture others on the immorality of carbon emissions, the general public will assume that you do, in fact, believe that, and will judge you based on that standard, rather than the universal, society-wide standard.

    Edit: And any sort of admission that one feels uncomfortable about face-spitting offsets but is fine with carbon offsets is, essentially, an admission that carbon is NOT a moral issue. Which is a totally acceptable belief. So long as you don’t lecture others on the immorality of carbon.

    • People tend to use deontology in some circumstances, and consequentialism in others. That’s by no means restricted to environmentalism.

      I think one of the reasons non-environmentalists object to the idea of say, carbon offsets for air travel, is that, in their view, environmentalists often do, in fact, speak of environmentalism in strictly moral terms.

      Arguments from morality always work, because everyone is a hypocrite, and never work, because everyone is a hypocrite.

  76. hopaulius says:

    “Emitting carbon doesn’t violate any moral law at all (in the stricter sense of morality used above). It does make the world a worse place.” I find this axial (?) clarity astonishing. In the first place, there is a lot left out of the intended meaning of “emitting carbon.” I emit carbon every time I exhale. Does this make the world a worse place? Should I stop going to the gym, because I breathe more there? Second, I don’t see how it can be a true statement. Is humanity worse off now that we have used fossil fuels to increase transportation and development, lifting billions of people out of poverty and prolonging life spans? Certainly “emitting carbon” hasn’t made humanity worse off. Ah, but we’re talking about “the world.” Is “the world” a worse place because humanity is in a better place? Doesn’t this train of logic lead toward voluntary, or if necessary, involuntary human extinction? Is axiology a species-wide suicide (or murder) pact?

  77. Netizen65793 says:

    Didn’t you just reinvent the deontological–consequentialist debate and call one axiology and the other morality? It is trivial that moral offsets only work in a consequentialist framework. (Unless we use it as some kind of repentance/redemption. But I’m not sure how that would work outside a religious context.) So I’m not sure what your theory is exactly.

  78. carvenvisage says:

    Is this correct?

    axiology = what should be

    morality = what you should do

    law = what you should be compelled to do

  79. Angra Mainyu says:

    Scott,

    I think this matter needs some clarification.
    For example, law never trumps morality in the moral sense of “trump”, but this seems trivial: if your choice is between behaving immorally or behaving illegally, you should behave illegally of course, because that “should” is a moral “should”. If it’s not a moral “should” but a “should” that depends on something other than not behaving immorally, then what you should do depends on your goals.
    So, I would say that morality always and trivially morally trumps law, so you (morally) should – tautologically – not behave immorally even if your only alternative is to behave illegally.
    I guess you meant something else by saying that the “universally-agreed priority is that law trumps morality”, but I think that needs clarifying, because it’s not easy to figure it out from context.

    In any case, I have some objections – if I misunderstood something, please clarify.

    So first you do your legal duty, then your moral duty, and then if you have energy left over, you try to make the world a better place.

    That would be (tautologically) immoral behavior, if your legal duty is in conflict with your moral duty.
    Of course, that morally impermissible but legally obligatory behavior might be what you should do in order to achieve some goal that you have – of course, never what you morally should do – but then again, if we’re talking about means and ends, depending on the goal, it may well be that some people should breach their legal duties, and their moral duty, and make the world a worse place in order to achieve some goal. I’m pretty sure Kim Jong-un has some goals like that, but in more ordinary cases, muggers have goals like that all the time.

    As for offsetting immorality, I agree one can’t ever offset immorality, but that also seems pretty straightforward, because in this context, “offset” seems to be a moral concept. More precisely, it seems to me that if you do X and offset Y, that means that by doing X, you can engage in Y in a morally permissible manner, and you would not have been able to do so (i.e., Y would have been impermissible) had you not done X (or something else to compensate for the bad results of X). Of course, if you do something immoral (even if just a bit immoral), then you cannot possibly offset it, because…it’s already immoral!

    Am I missing something here?

    I don’t have any special insight into these. My intuition (most authoritative source! is never wrong!) says that we should be very careful reversing the usual law-trumps-morality-trumps-axiology order, since the whole point of having more than one system is that we expect the systems to disagree and we want to suppress those disagreements in order to solve important implementation and coordination problems.

    I don’t agree that that’s the point of having more than one system. Morality is innate in humans (like, say, color vision, or just the sense of sight), and there is no specific point of having it (i.e., it’s not that one chooses to have it as a means to an end; it’s what we are).
    The same very probably goes for evaluations of good and bad things (I think this might be what you call “axiology”).
    On the other hand, laws are passed for many, many different purposes. But it’s not the case people generally make a choice to have a law for one reason or another. They’re born in a place with legal rules. It’s usually (but not always) immoral to break those rules (of course, it’s always immoral to break moral rules, but that’s tautological).

    But I also can’t deny that for enough gain, I’d reverse the order in a heartbeat. If someone told me that by breaking a promise to my friend (morality) I could cure all cancer forever (axiology), then f@$k my friend, and f@$k whatever social trust or community cohesion would be lost by the transaction.

    But if by doing so, you can cure cancer forever, then it’s not immoral for you to break the promise as a means of curing all cancer forever (probably; more below).

    Let me argue in a different way: let’s say that you break the promise in question, and by doing so, you cure cancer forever. Do you think that your friend would act in a morally permissible manner if she were to in any way punish you for your actions? (the punishment might consist in insulting you, or telling you that your behavior was immoral, or not talking to you, etc.).
    Do you think you would deserve punishment for your actions?

    I’m assuming, of course, that the promise is something of no big consequence. If, say, by breaking the promise you somehow send your friend to 10^1000 years of torture with the matrix overlords and you know it, of course you (morally) should not do it…all other things equal, as usual.

    That said, that’s not offsetting as far as I can tell. It’s a case in which you break your promise in order to cure cancer forever. A case of offsetting as described in the examples does not seem to be one of means to ends. For example, the person who emits CO2 does so in order to make money, not in order to do whatever good she does to compensate for the inconvenience she brings to others (e.g., paying carbon tax).

    Murdering someone does violate a moral law. The problem with murder isn’t just that it creates a world in which one extra person is dead. If that’s all we cared about, murdering would be no worse than failing to donate money to cure tropical diseases, which also kills people.

    I don’t think that not donating money kills people. Not donating money that would prevent deaths is not the same as killing as far as I can tell, in the usual sense of “kill” (or in a moral sense). But that aside, murder is also not the same as, say, killing in battle, or even executing somebody under certain conditions.
    I do agree that murder violates a moral law – though that might be part of the concept of “murder”, it seems to me. Maybe “murder” means something along the lines of “the immoral premeditated killing of another person”. Or maybe it has more than one common meaning.

    Anyway, if the moral claim is not implicit in the concept of murder, then I’m not sure whether murder is always immoral. I would need more info about the concept.

    The cost isn’t infinite, but it’s pretty hard to calculate. If we’re positing some ridiculous offset that obviously outweighs any possible cost – maybe go back to the example of curing all cancer forever – then whatever, go ahead.

    But going ahead would be immoral, so you shouldn’t (in the moral sense, of course!). For example, if you cured all cancer forever, you would still have a moral obligation not to murder; you still shouldn’t (okay, that might be tautological, let’s say that you would still have a moral obligation not to kill people for fun, which is not tautological unless some form of analytic reduction of moral terms to non-moral ones holds).

    For example, suppose I built an entire power plant that emitted one million tons of carbon per year. Sounds pretty serious! But if I offset that with environmental donations or projects that prevented 1.1 million tons of carbon somewhere else, I can’t imagine anyone having a problem with it.

    I’m pretty sure plenty of people would have a problem with it. In fact, plenty of people will have a problem even if you build a very safe nuclear power plant that emits no carbon and on top of that you make the donations – though I guess maybe you mean that people shouldn’t have a problem with it?

    Anyway, generally, I think that that depends on factors such as your motivations and the info available to you. If you do all that in order to make money, it seems morally permissible to me, so I tend to agree (though you should expect plenty of disagreement on that front too). But what I’d like to highlight is that when you say “I can’t imagine anyone having a problem with it” you seem to implicitly hold that it’s not immoral for you to behave in that manner, and that seems to support my earlier assessment that in this context, “offset” seems to be a moral concept (and more precisely, what I described above).

    On the other hand, consider spitting in a stranger’s face. In the grand scheme of things, this isn’t so serious – certainly not as serious as emitting a million tons of carbon. But I would feel uncomfortable offsetting this with a donation to my local Prevent Others From Spitting In Strangers’ Face fund, even if the fund worked.

    Indeed. I agree that spitting in her face would be immoral…unless you have good reasons to, but the fund wouldn’t make it not immoral.

    Askell gave a talk where she used the example of giving your sister a paper cut, and then offsetting that by devoting your entire life to helping the world and working for justice and saving literally thousands of people. Pretty much everyone agrees that’s okay. I guess I agree it’s okay.

    I don’t. Whether it’s okay depends on factors such as your motivation for giving her a paper cut and the info available to you when you cut her. Whatever you do afterwards is not going to retroactively change the moral character of your action. And if you were already planning to devote your entire life as a means of offsetting an unjustified attack on your sister that you were planning, your behavior is also immoral.

    Heck, I guess I would agree that murdering someone in order to cure cancer forever would be okay.

    But assuming you’re correct (I do not agree, even granting that immorality is not built into the concept of murder unless specific features of the person make it acceptable, but that’s a side issue), you wouldn’t be offsetting an immoral action, since in that case, murdering someone would be okay, which means “morally permissible” in this context.