Minutes From The Michigan Rationalist Meetup

0:58 – Discussion of unschooling. Two of the attendees were unschooled, and their opinions pretty much paralleled where the discussion went here – kids who are smart enough will learn stuff whether or not anyone is sending them to school; kids who aren’t may or may not, but they seem able to pick up what they need when they need it and no one ends up unable to read or anything like that (though some people take a very long time). The key phrase seems to be “steeper learning curve” – unschooled kids might wait until age 12 to pick up something schooled kids learn at 8, but when they want to they pick it up quickly and effectively. Some people never learn algebra, but then again if you asked me to do complicated things with polynomials I could only give you about 50-50 odds.

1:50 – Debate on cardinal vs. ordinal utility. Sure, ordinal utility is much more responsible and satisfies obscure technical criteria like “isn’t totally made up”. But on the other hand, not only would I prefer curing cancer and winning a Nobel Prize for it to getting one potato chip, but I would prefer it a lot. In fact, I would venture to say that in some important sense, the cancer cure + Nobel would be more than twice as good as the potato chip. Any theory of utility that doesn’t allow me to say this, which just says “Well, the cure + Nobel is better, but it’s meaningless to ask if it’s a lot better or only slightly better” doesn’t even come close to reflecting real human experience and is woefully inadequate.

1:59 – Mild concern over what if someone had a preference for having a round number of utils. Would such a preference be self-consistent? Would the universe stop?

2:41 – Discussion on how much we should respect the preferences of the dead. Aaron, channeling Robin Hanson, points out that exponential discounting means we consider preferences of people in the future much less important than those of people today – otherwise we would invest all our money to donate the interest to future folk. But this implies that people in the past had more important preferences. If a Neanderthal had preferences that everyone should make sacrifices to the fire god, should we sacrifice to the fire god more?

2:58 – Confusion over whether the wrong time preferences result in everyone eating grapefruit all the time. It was brought up that someone with infinite discounting ie total inability to identify with their future self would blow all their savings on drugs and prostitutes and grapefruit. We all figured grapefruit was some kind of metaphorical symbol of hedonism, and we discussed this for a while until finally I asked “Wait, why grapefruit?” and we realized that all of us had just kind of assumed the others were talking about it for some good reason. Turned out the first person had said “drugs, prostitutes and great food”, we had universally heard “grapefruit”, and just kind of taken it from there.

3:30 – Mild crisis. If we can’t decide on a topic, we put it to a vote, but we can’t determine what voting system to use, so we put it to a vote, but we can’t determine what voting system to use, and so on – is there any way to end the infinite regress? There was a serious proposal to use Archipelago-logic and break up into subgroups with their own voting system, but the problem was eventually solved by me declaring myself dictator and going from there. I feel like this is a metaphor for something.

147 Responses to Minutes From The Michigan Rationalist Meetup

  1. blacktrance says:

    A preference for a round number of utils is impossible because utils aren’t quantifiable, they’re only an expression of relative preferences, i.e. something being assigned 5 utils really means that it’s preferred to something that’s assigned 4 utils, but you could scale it to 5000 and 4000 utils just as well, and it would make no difference. To tie this into cardinal vs ordinal utility, while it may not make sense to say that something is “twice as good” as something else, differences in magnitude are meaningful in the sense that a large difference in quality means that more can fit in between two goods – e.g. “A cure for cancer is much better than a potato chip” should be interpreted to mean something like “There are many goods that are better than a potato chip and worse than a cure for cancer”.

  2. Josh says:

    Here’s my answer to the “people who prefer round utils / weird amounts of grapefruit” issues… stop treating preferences / utility as axiomatically good! If in real life someone was arguing for a policy because they prefer round utils, you wouldn’t ask “okay how much do you prefer it”, you’d ask, “why????”

    I don’t think highly of utilitarianism as a normative theory, because it takes preference / utility as a given, whereas in the real world I think preference / utility is highly malleable and much of human interaction is actually about getting people to change their preferences (for instance, becoming friends with someone).

    I’m mildly baffled that so many smart people seem to think some form of utilitarianism is the obvious starting point for normativity. I would love to have a conversation with a smart utilitarian about why. I wrote up what I believe is a reasonable starting point for normativity here: I guess you could call it a form of deontology or maybe agent-relative consequentialism.

  3. Paul Torek says:

    I found that article on the (in)effectiveness of lecturing.

    The meta-analysis, published online today in the Proceedings of the National Academy of Sciences, concluded that teaching approaches that turned students into active participants rather than passive listeners reduced failure rates and boosted scores on exams by almost one-half a standard deviation.

  4. Vaniver says:

    In fact, I would venture to say that in some important sense, the cancer cure + Nobel would be more than twice as good as the potato chip.

    Um, the canonical approach here is probabilities. If you have cardinal utility over three outcomes, A>B>C, you can find the place where you are indifferent between B and a p% chance of A and a (1-p)% chance of C. Then you can talk about what probability of curing cancer you would give up for a potato chip. (This number is probably so small that you’d need a scanning electron microscope to find it, so don’t be surprised if, with your eyes, it looks like 0.)
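
    As a minimal sketch of that calculation (the utility numbers below are invented for illustration, not taken from the discussion):

      def indifference_probability(u_a, u_b, u_c):
          """Probability p of getting A (vs. C) at which the gamble
          p*A + (1-p)*C has the same expected utility as the sure outcome B,
          assuming cardinal utilities with u_a > u_b > u_c."""
          return (u_b - u_c) / (u_a - u_c)

      # Hypothetical numbers: cure-cancer-plus-Nobel = 1e9, one chip = 1, nothing = 0.
      print(indifference_probability(1e9, 1.0, 0.0))  # 1e-09 -- "looks like 0"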

    Would such a preference be self-consistent?

    Don’t think of utility functions as functions; think of them as families of functions. This could be waved away as just a preference over which representative of the family is used to describe any particular decision, but should probably be addressed as a misunderstanding of what utility is and what it’s used for.

  5. Anthony says:

    Alternate theory on respect for the wishes of the dead:

    We respect the wishes of the dead because the living prefer it that way, to the extent that the living prefer it that way.

    We want our wishes to be carried out after our death, at least regarding disposition of our body and our property. Therefore we support a regime where the wishes of the already-dead are respected, to some extent. Since the future is unknowable, but is fairly predictable in the near term, we respect the wishes of the dead more for the period closer to the decedent’s death. We also interfere with those wishes to a similar extent that we interfere with the wishes of the living – it is my employer’s wish to give me a quantity of money for a year’s labor; but the government will take about 20% out of the transaction. We do the same regarding inheritances, though the quantities may vary. A perpetual trust may be made, but it requires a (living) trustee, who is granted increasing-over-time discretion to depart from the explicit instructions of the decedent.

    We respect the laws and meta-laws made by dead legislators, and so long as the meta-laws they’ve left allow us to make changes we believe necessary, we continue to follow the meta-laws regarding making changes to the laws and the meta-laws. In jurisdictions where the meta-laws interfere too much with the ability to change the laws (or meta-laws), the governing class will decide, sometimes suddenly or violently, to stop respecting the wishes of those dead legislators.

  6. Paul Torek says:

    Mathematicians, ahoy. Are there some scales in-between cardinal and ordinal scales that might allow statements like “curing cancer is a way better improvement than getting a potato chip”?

    • Vaniver says:

      Not usable ones. A cardinal scale has implied probabilistic meaning (if you give me the utilities for outcomes, I can tell you the utilities for any probabilistic combination of those outcomes, i.e. a gamble), which an ordinal scale does not. (If an ordinal scale has been expressed over all possible gambles, it’s either inconsistent with the laws of probability or the VNM axioms or is a cardinal utility function.) A cardinal scale maps onto the real numbers, and an ordinal scale maps onto a permutation of N objects.

      There are mathematical entities in between the two–the first is infinite and dense, and the second is finite and countable, and so you could try to use something infinite and countable, like the integers–but there’s no obvious meaning for them. Even if curing cancer is a 4,000 and a potato chip is a 1, that’s useless unless it implies that you would be indifferent between ten potato chips at 5 utilons and a 1/1000 chance of curing cancer and a 999/1000 chance of getting a potato chip.
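
      For what it’s worth, the arithmetic in that last example checks out (taking the stated utilities at face value): $\frac{1}{1000} \cdot 4000 + \frac{999}{1000} \cdot 1 = 4.999 \approx 5$, the stated utility of ten potato chips.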

  7. tommy says:

    2:58 just sounds like the best metaphor for religion I’ve ever heard.

  8. Drew Hardies says:

    I’d solve the ordinal/cardinal utility thing by splitting utility-generating sensations (e.g. “pleasure” / “satisfaction”) from our final conclusion about which we like more.

    People have preferences over outcomes. We could reserve the word “preference” for a very formal (binary) relationship where “I prefer X to Y if I would accept X in the place of Y”.

    This huge set of binary relations is hard to keep in our head or convey to people. So, often it’s convenient to abstract/stylize them into some utility function. That way, I can give you U(X), U(Y) and U(Z) and you can infer the relationships between them.

    It’d often be useful to stylize the preferences in a way that roughly corresponds to the absolute strength of the sensations, but either way, it’s an abstraction.

  9. Handle says:

    On Potato Chips: It’s an easier problem to solve if one of the items is a commodity that is marketable at constant prices. Surely there is some number of potato chips that would be worth so much money that it would exceed the utility of curing cancer and winning the Nobel prize. Let’s say that number is one billion potato chips.

    You can’t just say that one billion potato chips gives you one billion times the utility of one potato chip, but you can say something like “Winning the Nobel Prize for curing cancer is so much better than having a potato chip, that it is equivalent to the utility I would get from a billion potato chips, or the wealth I would derive from selling those chips.”

    That’s probably both close enough and as close as you can get.

  10. The idea of “grapefruit”, as a metaphor for what an agent with infinite discounting tries to maximize, sounds to me like the kind of thing that would be a Less Wrong meme.

  11. Vadim Kosoy says:

    Preferences of the dead:

    The exponential discount model is wrong. The correct model is here. In short, the correct discount falls much slower than exponentially. Yes, this breaks time invariance but “fundamental metaphysics” is not time invariant. The point of minimal Kolmogorov complexity (big bang?) defines a reference time.

    The weight of people in the past is similar to the weight of present people since the age of humanity is small wrt the age of the universe. However, even if it weren’t so the paradox wouldn’t arise. This is because:

    * In “decodings” (similar to Egan’s dust arrangements) of the universe starting in the past, the present is suppressed much more strongly than in decodings starting in the present.
    * The entire past decoding gets higher weight but it doesn’t mean that within it the past people’s preferences regarding the present get higher weight than other attributes of the present (unrelated to the preference-utilitarianism component of the utility function).

  12. Shmi Nux says:

    Most people intuitively discount utility in 4D spacetime with themselves in the center.

  13. moridinamael says:

    Perhaps I am fighting the hypothetical, but aren’t we investing in the far future, constantly, all the time?

    If I run a company, my goal is for my company to succeed not just today, but tomorrow, and as HJPEV would point out, by induction on the natural numbers, that means I want my company to succeed forever. Companies and individuals are continually reinvesting in themselves. Governments are continually reinvesting in their infrastructures. This is, in fact, the most efficient way to invest in the far future.

    There is no class of investments that I am aware of that pays off for people in the far future but not for people in the less-far future. As many people have pointed out elsewhere, you can’t legally set up so-called Methuselah trusts that grow faster than the economy, and they are a bad idea anyway – they would be parasitic and destructive and just plain wouldn’t work. They wouldn’t be paid.

    Yes, there is waste in the current system. Perhaps it is difficult to construe getting a haircut as an investment in our posthuman descendants. But I feel somewhat confident that the continual re-allocation of capital into hyper-optimized projects in a fast-paced global economy is a better way of growing global wealth for future generations than locking gold in a giant vault.

    I guess I kind of sound like a capitalist or something here.

    • Anthony says:

      There is no class of investments that I am aware of that pays off for people in the far future but not for people in the less-far future.

      Zero-coupon bonds. For some value of “far”.

      • Dib says:

        But they will experience capital appreciation in the meantime. Your choice to not realize these gains is equivalent to someone holding a conventional bond and re-investing the coupons.

  14. Douglas Knight says:

    If you need an initial voting system to choose a voting system, you end the infinite regress by choosing the Schelling point, Nomic.

  15. anon says:

    I expect a lot of voting systems would tend to converge onto other sorts of voting systems. It probably doesn’t matter which one you start with, provided you’re not intentionally choosing a terrible voting system, because eventually you’ll find your way to an optimal or near-optimal system. This isn’t machine-proof or anything; it requires human intuitions about which sorts of systems are good or bad, and maybe the problem you’re suggesting is supposed to disallow those? But it gets the job done for most practical purposes.

  16. anon says:

    Let’s say length is objective. Then units are possible, and unit conversion is possible. What has to be objective for cardinal utility to obtain? Sure, YOU can say it is “twice” as valuable. This only seems to me to be an abuse of arithmetic.

  17. Dan Tobias says:

    I actually am not supposed to eat grapefruit, as it conflicts with one of my medications… no great loss, since I don’t care for it anyway. Orange is a much better citrus for my taste.

  18. roystgnr says:

    exponential discounting

    is an observation, not a postulate. The observation that you can use some resources now to acquire even more resources later has been correct for centuries, sure, but it does start looking ridiculous when you extrapolate, and unless we find some way around Einstein/Bekenstein/Landauer/etc we’re probably doomed to cubic growth (at best) within a few millennia.

    means we consider preferences of people in the future much less important than those of people today – otherwise we would invest all our money to donate the interest to future folk

    The missing word here is *marginal*. And if we expect the future to be richer, then it’s not surprising that we expect satisfying their marginal preferences to be both more expensive and less valuable than charity in the present.

    the problem was eventually solved by me declaring myself dictator and going from there. I feel like this is a metaphor for something.

    At the very least it seems like there’s a good anti-neoreactionary (or anti-anti-neoreactionary) joke buried here.

    • anon says:

      That physicist is so arrogant. Makes me mad.

      • Anonymous says:

        And his arguments are premised on assuming that (a) no new physics is discovered and (b) we haven’t expanded beyond Earth. In four hundred years (i.e. longer than the time from Newton to now). While humanity continues developing at the current pace.

        • roystgnr says:

          Assuming we fill the solar system only gives us an extra millennium or two: sextillions of inconceivably rich people sounds like a lot, but the log base 1.02 is still only a couple thousand.

          Assuming we expand into the galaxy/universe gets us another factor of hundred billion or hundred billion squared, but again, take a log base 1.02 and that translates into a few more millennia. Worse: we can’t actually *get* very far even within our own galaxy within that few millennia, so our growth in accessible resources can’t outpace the (cubic-bounded) growth of our own light cone.
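
          Spelling out the arithmetic (taking “sextillions” loosely as $10^{21}$ and “a hundred billion” as $10^{11}$, round numbers assumed here for illustration): $\log_{1.02} 10^{21} = \frac{21 \ln 10}{\ln 1.02} \approx 2440$ years of 2% growth, and the extra factor of $10^{11}$ (or $10^{11}$ squared) adds only $\log_{1.02} 10^{11} \approx 1280$ (or about $2560$) years on top of that.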

          New physics seems likely and radically new physics could be nice, though. “Making new universes turns out to be vastly easier than space colonization” would probably be the most wonderful possible answer to the Fermi Paradox.

        • anon says:

          Also he assumes a constant rate of growth. But it’s easy to imagine a rate of growth that decreases year after year and avoids the problems he identifies. It still constitutes growth, it’s just slow growth.

          There was no effort on his part to steel man the economist’s arguments/perspective. That’s the core of why I disliked reading him, I think.

        • Anthony says:

          And when someone programs the bank’s AIs to start paying interest in paperclips…

  19. Ghatanathoah says:

    In regards to the “preferences of the dead” scenario, I believe that Derek Parfit resolved this with a variant of preference utilitarianism he called “Success Theory.” This theory only considers preferences that are about one’s own life and the kind of life you want to live to “count” when assessing your level of utility. I call these “eudaemonic preferences” because they are preferences about how to live a eudaemonic life.

    This makes intuitive sense. When someone sacrifices their life to save some other people we generally consider them to have made themselves “worse off” in order to make the other people “better off.” We don’t think they’re “better off dead” because their preference that the other people were saved has been fulfilled. We think they’re worse off, but that is acceptable because some other people are better off. This is because they have sacrificed their eudaemonic preferences to satisfy their moral ones.

    Success Theory neatly resolves the Neanderthal conundrum. His preference that you sacrifice to the Fire God is not a preference about his own eudaemonic life, it is a preference about the state of the world in general. So you need not respect it. However, if you got an opportunity to resurrect the Neanderthal his preference to come back to life would be a preference about his own life, so you should respect that one.

    Success Theory also resolves a few other problems with Utilitarianism. Take the “feedback loop” problem as an example: Suppose there are two altruists. One of them helps the other one out a little and increases her utility. This increases the helper’s utility because his utility increases when someone else’s does. This increases the helpee’s utility because her utility increases when someone else’s does. This increases the helper’s utility and so on and so on, resulting in infinite utility coming from one good deed. Success Theory resolves this by not counting any utility except the initial utility from the helpee being helped, because only the initial utility came from eudaemonic preferences.

    Success Theory also addresses how to think about inhuman creatures. Suppose a paperclip maximizer wants you to give it your resources to make paperclips. Should you give them to it? No, the clipper has no preferences about its own life, its desires are about the general state of the universe. Kill it with fire. Similarly, suppose a hedonic utilitarian wants to wirehead you. Should you take its desire to wirehead you into account? No, that is not a preference about the life it wants to live. Kill it with fire. You only need respect an inhuman creature’s preferences if it also has eudaemonic life goals.

    • Creutzer says:

      I’d like to see some argument that these “eudaimonic preferences” are, in fact, a well-delineated class. I don’t think I know what “being about one’s own life” means, really.

      • Ghatanathoah says:

        Being “about one’s own life” is certainly an unnatural category. The exact definition is probably rather complicated. But so what? I do not think “This cannot be easily stated in a short sentence” or “This has high Kolmogorov complexity” are valid arguments against an ethical theory.

        Some common parameters might be:
        1. Eudaemonic preferences are about things that have a close causal connection to you. They can interact with you and you can interact with them.
        2. Eudaemonic preferences contain you as a referent in some way. They cease being coherent if a description of you is removed from them. This is especially true of preferences over Cambridge Changes (For instance, a preference that I have a good reputation is Eudaemonic, a preference that someone else has a good reputation is not).

        Some more concrete boundaries:
        1. Desiring to have a certain experience is Eudaemonic.
        2. Desiring to change or not change is Eudaemonic.
        3. Desiring that people think of you in a certain way is Eudaemonic.
        4. Desiring to have relationships with other people is Eudaemonic.
        5. Desiring that someone else live their life the way you want them to is not Eudaemonic.
        6. Having preferences about areas of the universe you will never interact with is not Eudaemonic.
        7. Desiring that other people have positive or negative welfare is not Eudaemonic.

        Parfit’s discussion of Success Theory can be found here.

    • Ken Arromdee says:

      Why would the feedback loop result in infinite utility, given the existence of converging series?

      Also, isn’t any preference that’s not “about one’s life” trivially capable of being converted to a preference about one’s life? “I want to live in a world with a lot of paperclips” or even “I want to be in the state of having-made-lots-of-paperclips”
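
      On the first question, a minimal sketch (assuming, purely for illustration, that each successive round of reflected utility is damped by a constant factor $r < 1$): the total is the geometric series $u_0 + r u_0 + r^2 u_0 + \dots = \frac{u_0}{1-r}$, which is finite. Only an undamped loop ($r \ge 1$) actually yields the infinite utility being worried about.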

      • Ghatanathoah says:

        >Why would the feedback loop result in infinite utility, given the existence of converging series?

        Even if it doesn’t literally result in “infinite” utility it still seems like a problem for there to be a feedback loop at all.

        >Also, isn't any preference that's not "about one's life" trivially capable of being converted to a preference about one's life? "I want to live in a world with a lot of paperclips" or even "I want to be in the state of having-made-lots-of-paperclips"

        Parfit discusses this in his paper on the subject. He argues that your first example is what one would call a “Cambridge Change,” and is not really about one’s life because it does not change anything about one’s life. The second one “wanting to exist in a state of having made lots of paperclips” might actually count. However, one should note that such a creature would not be a true paperclip maximizer. It would be a “me making paperclips maximizer.” A real paperclip maximizer would only care about the total amount of paperclips in the universe, it wouldn’t care about who made them.

        Of course, we should probably precommit to not honoring the preferences of any paperclip maximizer that converts itself into a “me making paperclips maximizer,” in order to discourage it from doing such a thing in the first place.

        • anon says:

          It doesn’t seem like a problem for there to be any feedback loop at all. You assert that but I disagree. I think that such feedback loops are actually a normal characteristic of good relationships.

        • Ghatanathoah says:

          @anon

          I’m not talking about feedback loops in terms of a relationship where “if you’re happy I’m happy.”

          I’m talking about a scenario like: someone you don’t know, somewhere recovers from an illness. Since you want people to recover from illnesses, this technically raises your utility. Some altruist somewhere who has also never met you has their utility raised because your utility is higher. This raises your utility because this person you’ve never met’s utility is higher. And so on.

          When I say that someone’s altruistic desires shouldn’t count as part of their welfare under Success Theory, I am talking about the abstract-sense-of-duty type of desires. Like the times where you really don’t want to do something because it will make you unhappy, but you do it anyway because you know it’s the right thing to do. Warm fuzzy feelings and friendship are something different; I think they do still count as part of your welfare.

        • Ken Arromdee says:

          and is not really about one's life because it does not change anything about one's life.

          Try as I might, I can’t read that in a non-circular fashion.

        • Ghatanathoah says:

          @Ken Arromdee

          A Cambridge Change is a change that doesn’t change you in any way considered meaningful, but can still be called a “change” in a weasely sort of way. The classic example is that if you cut yourself shaving you have changed Confucius because he now lived on a planet where one more person than before cut themselves shaving.

          Success Theory excludes preferences about Cambridge changes. Preferences have to be about your life in some respect to count as part of your welfare.

          So in terms of preferences of the dead, how people think of you when you’re dead does count, because that’s about you. What gods people worship after you’re dead doesn’t because it’s not about you.

        • Ken Arromdee says:

          A Cambridge Change is a change that doesn't change you in any way considered meaningful, but can still be called a "change" in a weasely sort of way.

          “Can be called a change in a weasely sort of way but isn’t meaningful” is no more helpful than “does not change anything about one’s life” (which in turn is no more helpful than “is not really about one’s life”). All you’re doing is explaining one vague description with another equally vague description.

        • Ghatanathoah says:

          @Ken Arromdee
          >All you're doing is explaining one vague description with another equally vague description.

          I’ll try for something less vague. A Cambridge change is basically a change to something other than you and your life that can be made to sound like a change to you and your life by making that something else part of the description of you.

          For example, suppose I make a cake today. You could say that my making a cake changes Genghis Khan’s life because Genghis Khan now lives on the same planet as a person who made a cake today, when previously he didn’t. But this doesn’t change anything significant about Genghis Khan and what kind of life he had. It’s a Cambridge change. The only thing it changed was the description of something we can relate to Genghis Khan with carefully chosen language.

          Now, suppose I discover archaeological evidence that Genghis Khan was not as bad a person as we thought and publish a successful book based on it. This isn’t a Cambridge Change. Something about Genghis’ life has changed, namely how people regard it.

          The difference is that in the second instance the change is deeply about you, whereas in the first it is only about you by use of wordplay.

          Now can you kind of see the territory I am trying to describe, even if you (with plenty of justification I admit) think the map I am drawing is crude?

        • Paul Torek says:

          @ anon June 9 1:14

          I agree. Feedback loops (non-infinite) are one of the awesome things about good relationships.

        • Ken Arromdee says:

          A Cambridge change is basically a change to something other than you and your life that can be made to sound like a change to you and your life by making that something else part of the description of you.

          I don’t think that helps. “Sounds like a change to your life, but really is a change to something else” is just as vague as all the other versions. How do I determine that something really is a change to something else? I have no idea what that means.

          (Of course, I don’t literally have zero idea what it means, but I don’t have enough of an idea that I can use it to determine where problematic cases fit. Is it a eudaemonic preference to want something for my children? For my friends? For my grocer? For fellow members of my religion? For the country?)

        • Ghatanathoah says:

          @Ken Arromdee

          >How do I determine that something really is a change to something else? I have no idea what that means.

          The best way I can think of to describe it is to ask whether the description of the change becomes incoherent if you are removed from it.

          For instance, if you have a preference that your reputation improve, such a preference is utterly meaningless if you remove “you” from it. It only makes sense to ask what people think of you if you are a referent.

          By contrast, if you desire for no one to ever be tortured, that is still a coherent desire without reference to you. Asking “Are people being tortured?” is coherent without referring to you.

          >Is it a eudaemonic preference to want something for my children?

          Parfit addresses this in his original essay. The answer is generally no, with some exceptions.

          Parfit gives the example of a person who has been exiled from his country and will never see or hear about his children again. The exile cares about his children and wants them to do well in life. Parfit then posits that, without the exile’s knowledge, something bad that is beyond anyone’s control happens to the exile’s children. Parfit argues that in this case, the exile has not been made worse off. His desire for his children to do well in general is not a eudaemonic preference.

          However, Parfit posits a second scenario where we have an exile who both wants his children to do well in general, and who wants to have a positive relationship with his children (i.e. his children are better off because of their interaction with him). Now in this case, after this person has been exiled and will never interact with or hear about his children again, something bad happens to his children. But in this case it is the exile’s fault, before he was exiled he gave his children bad advice which led them to take the actions that made them worse off. In this case the exile has been made worse off. His preference that he have a positive relationship with his children was a eudaemonic one.

          So to condense those examples, having a general abstract desire that your children, grocer, etc do well is not a eudaemonic preference. But having a desire that their relationship with you will make them better off is a eudaemonic preference.

        • Ken Arromdee says:

          That definition I find acceptably non-vague, but it still isn’t enough because I can rephrase many non-eudaemonic preferences as eudaemonic ones. “I want to have a good relationship with my community” (or even with God). “I want people to remember me after I die, for being a good president” (Or I want my friends and family to remember me, for some more trivial thing). Furthermore, this definition distinguishes things which most people would not distinguish, so you’d have to justify why that’s a good distinction. (Few people would think that the difference between wanting your children to do well and wanting to have a positive relationship with your children is morally significant in the way you believe.)

        • Ghatanathoah says:

          @Ken Arromdee

          I promise you that I’m not trying to devise a new counterintuitive moral theory. I am trying to isolate what exactly we mean when someone is better off or worse off. What I am arguing is that our preferences can be divided into eudaemonic preferences and noneudaemonic ones (which are essentially the same as moral ones).

          When someone sacrifices their personal goals to help someone else, we tend to say they have harmed themselves to help another. However, a nitpicker can argue that from the standpoint of Von Neumann-Morgenstern utility they are actually better off because they wanted to do the right thing and sacrifice themselves. The framework I am arguing for resolves this conflict between folk conceptions of utility and VNM utility. VNM utility includes both our eudaemonic preferences and our moral goals. “Folk utility” only encompasses our eudaemonic preferences.

          >"I want people to remember me after I die, for being a good president" (Or I want my friends and family to remember me, for some more trivial thing).

          This is a flat out eudaemonic preference. I don’t see why you’d consider it non-eudaemonic. If some dead person who never got the recognition they wanted and deserved got it posthumously I think you can definitely say they are better off than they were before.

          >(Few people would think that the difference between wanting your children to do well and wanting to have a positive relationship with your children is morally significant in the way you believe.)

          I think that this is a fairly common distinction that pretty much everyone believes in. I probably am explaining it in unclear and abstract terms. I’ll try to be more clear and concrete:

          Sometimes when we interact with our children we do something that we wish we didn’t have to do, but we do anyway because it will benefit our children. Examples of this include changing diapers and making them dinner when we’re tired. Pretty much everyone agrees that this is an instance where you are harming yourself to benefit your children. This is “wanting your children to do well.”

          But there are other instances where when we do something that benefits our children it feels like we are benefiting too. Examples of this include playing with them and helping them work on a big project we both feel proud of when we’re done. This is “having a positive relationship with your children.”

          These differences are very morally significant when assessing someone’s welfare. For the first instance we conclude the parent is worse off and the child is better off. For the second instance we conclude both are better off.

  20. Richard Gadsden says:

    As for voting, the system usually adopted in meetings is to assign someone to reduce the decisions to a series of binary votes (a vote for which there is no dispute as to the correct system) and then take that series of votes by simple majority.

    This grants a lot of power to the person who reduces decisions to binary series – we generally call that person the “chair” of the meeting. This is the case for things like Robert’s Rules of Order, but also for the parliamentary rules of organisations like the House of Representatives or the Senate.

    If there is insufficient trust to agree on someone capable of performing that role then democracy may not be an appropriate decision-making system, and violence may be required.

  21. Jack says:

    Did you have a reason that the hypothetical person who preferred a round number of utils was tied to a particular base?

    And does anyone have a better way to write that question? Every time I read it the word that comes to mind is “herniated”.

  22. Anonymous says:

    Preferences are intransitive, so the cardinal/ordinal divide doesn’t matter. You can’t even map to a poset, much less add all the additional structure.

    • Daniel H says:

      I was fairly sure you could in fact map to a poset, but not a total ordering. All the examples I can find of intransitivity involve the situation changing at some point, which seems to be more because of hyperbolic discounting or “certainty bias” than an actual static intransitivity between choices.

      • Anonymous says:

        Posets require transitivity (along with reflexivity and antisymmetry). I’ve also not seen an experiment on humans that adequately holds all external conditions constant and carefully regularizes internal state (I imagine a suitable experiment is likely unethical); however, it has been done with rats. We must then argue for why human preferences are more rational than those of rats (rational here being identified with the standard utility axioms, though this is debatable), or we need to actually engage with the possibility of intransitive preferences. Most people shy away from even considering the latter because it’s so intuitively distasteful.

        • Paul Torek says:

          Few people (maybe none) around here argue that Von Neumann-Morgenstern utilities describe humans accurately. The question is whether it’s a compelling normative standard, that one has to meet on pain of irrationality. I doubt that it is.

        • Anonymous says:

          To me, it seems as though most formulations assume that individuals come to the table with utility functions in hand. Then, the normative question is whether each individual ought to subjugate his own utility to some combination which can be said to be society’s utility. If the former breaks due to intransitivity, it is unlikely that we can even get off the ground in making sense of the latter.

  23. Gilbert says:

    So re 1:50, basically the argument is that cardinal utility saves an intuition ordinal utility doesn’t save, because the preference should have a strength.
    Fine, but: You could actually strengthen that intuition and say you wouldn’t exchange curing cancer and winning a Nobel Prize for any number of potato chip-satisfactions. In other words, you have a lexical preference – which can be expressed ordinally but not cardinally. Cardinal utility makes that kind of preference flat out impossible, which is why cardinalists must rave about how “rational” people should prefer torture to some number of dust specks.
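
    A minimal sketch of such a lexical ordering (the tuple encoding is illustrative, not anything proposed above):

      # Outcomes encoded as (cancers_cured, potato_chips); no quantity of the
      # second coordinate ever outweighs the first.
      def lexically_better(x, y):
          """True if x is preferred to y; Python compares tuples lexicographically."""
          return x > y

      cure = (1, 0)             # cure cancer + Nobel, no chips
      chips = (0, 10 ** 12)     # a trillion potato chips, no cure
      print(lexically_better(cure, chips))  # True, for any number of chips

      # This is a perfectly good ordinal ranking, but it has no cardinal
      # (expected-utility) representation: any finite number assigned to the
      # cure would imply some chip-versus-probability trade-off that the
      # lexical ordering refuses to make (it violates the continuity axiom).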

    • Said Achmiz says:

      Quite so. I’ve gotten into some arguments on Lesswrong about just this question, which I usually refer to as “chickens vs. grandmother”: in my morality, how many chickens may be killed to save my grandmother’s life? Why, n chickens, where n can be any number you like, up to and including “all the chickens”. But it doesn’t make sense to say that I don’t value chickens at all; ceteris paribus, I prefer the chickens not to be killed (much less driven to extinction). A lexical ordering of preferences resolves this; a single cardinal utility scale does not.

      • Stuart Armstrong says:

        Would you be willing to give your grandmother a 1/10^100 chance of dying 1 second earlier than otherwise, to save all the chickens in the world?

        If you’re purely lexicographical, you’d kill all the chickens in that situation.

        • Said Achmiz says:

          Death to chickens

          However, your question encodes the assumption that preferences over outcomes can be unproblematically extended as preferences over lotteries. That’s certainly one of the assumptions of VNM rationality, but we’re already rejecting a part of VNM; why not other parts? At the very least, my preferences over lotteries certainly aren’t linear in probability. That my preferences aren’t obligated to be linear in something like time is almost too obvious to mention.

          Or, in other words: what if I said “yes” to your question? How, exactly, could you conclude from that that my preferences in my initially stated situation aren’t lexical?

        • Douglas Knight says:

          Said, what is the part of vNM you have already rejected?

          In the end, linearity over probability is the whole of vNM. The axioms are statements about ordinal preferences that imply it. The axiom that people reject is usually independence, which is an axiom of dynamic consistency. An example of people rejecting it is the Allais paradox; what do you think of that?

        • Stuart Armstrong says:

          Trying to answer Said:

          >Death to chickens

          In that case, your chicken preferences are almost certainly irrelevant, because almost every action you think of has an expected effect on your grandmother (generally tiny) and thus overwhelms any chicken considerations.

          >However, your question encodes the assumption that preferences over outcomes can be unproblematically extended as preferences over lotteries. That's certainly one of the assumptions of VNM rationality, but we're already rejecting a part of VNM; why not other parts?

          If you reject the Archimedean/continuity axiom (or substitute a weaker one), you can get lexicographical utility while keeping the other features of vNM utility. It’s just that lexicographical utility almost never notices any of the secondary considerations, apart from in unrealistic thought experiments.

        • Gilbert says:

          Part of the problem here is measuring utility in grandmas and chickens, which, among other problems, assumes that killing N grandmas with probability p is exactly equivalent to killing Np grandmas certainly. So comparing some risk to grandma to a number of chickens already presumes the dubious result by equalizing a risk to grandma to a fraction of grandma.

          Going beyond what Said Achmiz probably would agree with, the root error here is having preferences on outcomes rather than actions, i.e. consequentialism.

        • Douglas Knight says:

          assumes that killing N grandmas with probability p is exactly equivalent to killing Np grandmas certainly

          Bullshit.

        • Stuart Armstrong says:

          >assumes that killing N grandmas with probability p is exactly equivalent to killing Np grandmas certainly

          This assumption is not needed or used.

        • Ken Arromdee says:

          I’m a human being too. Taking many seconds of my life to kill a million chickens and give my grandmother a fraction of a second of her life is a bad bargain. Even taking a second to decide if it’s worth it isn’t worth it, so I make a policy of ignoring effects that are too small.

          Furthermore, since we’re talking about fractions of a second of life, killing a million chickens probably affects other people in some way (if one person has to reach one foot farther into a supermarket freezer because it isn’t stocked with as many chickens, they used a second of their life).

          If killing a million chickens would give my grandmother a fraction of a second of life, and it took me much less than that fraction to decide and to kill the million chickens, and killing the million chickens provably had no effect on other human beings, I would. But saying it that way doesn’t really emphasize how unusual those assumptions are.

        • Gilbert says:

          @Stuart Armstrong

          This assumption is not needed or used.

          Well, it’s a slight overstatement of the assumption actually used. Basically your question to Said Achmiz does assume that a very small risk to grandma converts to some very small fraction of grandma, if admittedly not necessarily the exact same fraction. Without such an assumption the conclusion “If you’re purely lexicographical, you’d kill all the chickens in that situation” simply doesn’t hold.

    • Scott Alexander says:

      But then I lose the intuition that getting the Nobel isn’t *infinitely* greater than the potato chip, just some very large and finite amount.

      Even if you’re happy to bite the bullet and say no, it’s infinitely greater, I’m sure you can think of other examples where it’s clear there’s a small, medium-sized, or large difference between the amounts.

      • Alexander Stanislaw says:

        I think an ordinal ranking still lets you have those kinds of differences.

        For example let:

        A: saving grandma
        B: Saving your pet parrot
        C: Saving a dog

        Suppose you want A to be infinitely better than B and C but you want B to be only two times better than C. Then construct a ranking with the properties:

        A > nB, nC for all natural numbers n
        2C > B > C

        I feel like this is cheating though, if you’re allowed to do this then an ordinal ranking allows you to pretty much fudge whatever result you want making it less than ideal as a basis for ethics.

        • Daniel H says:

          If utilities are ordinal instead of cardinal, those multiplications don’t make sense and are not a legal thing you can do, unless you mean “doing B twice” or “doing C n times”, which is not the same thing, for at least two reasons. The first reason is that I only have a specific number of pet parrots. If I only have one such parrot, that last equation where I save two of my pet parrots doesn’t make sense. The second reason is diminishing returns: I might be indifferent between saving a dog and eating 100 potato chips, but that doesn’t mean I’ll be indifferent between saving two dogs and eating 200 potato chips. In fact, at some point the potato chip curve dips into the negative, at which point I’d rather not eat another potato chip even without tradeoffs involving canine, avian, or grandmatriarchal life.
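
          As a toy illustration of the diminishing-returns point (the quadratic form and constants are invented, chosen only so the curve eventually dips negative):

            def chip_utility(n, a=1.0, b=0.004):
                """Concave utility for n chips; the marginal chip is worth
                a - 2*b*n, which turns negative past n = a/(2*b) = 125."""
                return a * n - b * n ** 2

            print(chip_utility(100))  # 60.0 -- suppose this equals one saved dog
            print(chip_utility(200))  # 40.0 -- less than 100 chips gave, nowhere near two dogs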

      • Gilbert says:

        OK, I hadn’t counted on your intuitions being that specific on both ends.

        So, different try:
        Unaided by science, I would intuit that the hottest day of the year is in an important sense more than twice as hot as the coldest one, when actually the difference is about 15%-ish on the absolute temperature scale. And that fake intuition would clearly be induced by that range being essentially my range of experience.

        So that example shows we have an intuitive failure mode of inferring from many things we care about in between to a big ordinal difference, which in turn should make your residual objection to Said Achmiz’s argument very suspicious.

        • Scott Alexander says:

          I don’t think your metaphor quite works because there’s a real, objective definition of “heat” as opposed to a subjective feeling of such.

          Suppose that it is one day discovered that the brain represents temperature in terms of electrical activity in a certain cell, and that the hottest day of the year produces twice as much activity as the coldest day of the year. Then we might be able to say very precisely that the first day “feels” exactly 2.103 times as hot as the second day or something. The Kelvin scale has nothing to do with this.

          As far as I know, there’s no objective counterpart to utility. What we’re trying to formalize is my feeling that the Nobel is more than twice as good as the potato. If it were discovered that on the Cosmic Hierarchy Of Value, the Nobel is exactly the same as the potato (maybe because we’re nihilists and both are zero) then that’s interesting but not a reflection of the subjective utility of those two choices to me (which is what we’re interested in). And if it were discovered there’s some brain cell representing the choices that fires 1.1x more for the Nobel than for the potato, I would probably say “That brain cell’s firing isn’t a perfect mapping to how much I prefer one choice over the other”, and not “Well, I guess I only preferred the Nobel a tiny bit more than the potato after all.”

        • Alexander Stanislaw says:

          This is one of the more interesting discussions I’ve read on here in a while, because it highlights a fundamental difference in your worldviews.

          Gilbert starts off with the assumption that morality is a fundamental feature of reality, something that humans have imperfect access to via our preferences and intuitions about right and wrong. Scott starts off with the idea that morality simply is about human preferences and intuitions about right and wrong. Scott’s conclusion (utilitarianism) therefore seems ridiculous to Gilbert, hence his repeated claims about utilitarianism being bunk.

        • Gilbert says:

          Great point Alexander Stanislaw, but I have a pedantic qualification:

          I don’t call utilitarianism bunk only because it seems ridiculous. There’s also my perennial objection that it’s the only proposed moral system for which it’s a known mathematical fact that it can never work.

          On the other hand, one meta level up, perhaps Scott’s reasoning is that of course formalizing inconsistent feelings and intuitions will end up inconsistent, but it works well enough most of the time, so what’s for dinner?

          On the third hand, perhaps that’s totally libelous kitchen table psychology. It sounds like that in some way, but I can’t formalize in what way exactly.

          On the fourth hand, that excuse would be totally out if someone wanted to create a Friendly AI and run it on utilitarianism, because that AI pretty much by definition wouldn’t be able to look at the limits of its defining formalism and ignore the stupid parts. (Not that I think anyone has the slightest hint of notion of a clue of an idea on how to build an AI in the foreseeable future, so it’s not something to be really worried about.)

  24. nydwracu says:

    2:58 – Confusion over whether the wrong time preferences result in everyone eating grapefruit all the time. It was brought up that someone with infinite discounting ie total inability to identify with their future self would blow all their savings on drugs and prostitutes and grapefruit. We all figured grapefruit was some kind of metaphorical symbol of hedonism, and we discussed this for a while until finally I asked “Wait, why grapefruit?” and we realized that all of us had just kind of assumed the others were talking about it for some good reason. Turned out the first person had said “drugs, prostitutes and great food”, we had universally heard “grapefruit”, and just kind of taken it from there.

    I feel like this is a metaphor for something.

  25. Also the exponential discounting argument seems to fall to basic universalisation – it’s only a good idea if a relatively small number of people do it, or people do it with a relatively small fraction of their wealth (and once you’re thinking of it like that it’s not obvious that you’re not getting better util interest rates by e.g. donating to charity now to improve environment and society than you would in the markets). If everyone just stuck their money in a high-interest savings account to donate it to future generations then the economy would stagnate, the interest rates would fall and the future generations wouldn’t actually end up very well off from it.

    • Douglas Knight says:

      You have equated “trying to produce wealth in the future” with “sticking money in a high interest savings account.” The latter doesn’t universalize, but that doesn’t mean the former doesn’t. It takes a lot of effort to create wealth. The whole point of investments is to transfer money to the people who are putting in that effort, for a small cut of the results. Maybe it is useful to rearrange the money, but the important point is that people put in the effort.

      When Ben Franklin tried to harness exponential growth for charity, he didn’t say: put this money in a bank and hope that it magically grows. Instead he gave specific investment instructions: loans for apprenticeships.

      • Nornagest says:

        Note that in a modern fractional reserve system, sticking money in the bank is equivalent to making it available for loans from the bank, the proceeds of which fund interest on your account: it’s effectively a form of indirect investment in whatever the bank’s willing to loan out money for. All else equal, I’d expect more savings to imply greater availability of funds on the bank’s part to imply looser lending terms to imply more lending, so it’s not deadweight from a wealth-creation perspective.

        Of course, that won’t do you any favors if the interest rate is below inflation, as it often is.

        • Douglas Knight says:

          Sure, at the current margin, putting money in the bank is a form of investment, but David seemed to agree with that and was objecting to an extreme change. At some point money is not the limiting factor.

          (Actually, I’m not convinced that putting money in current-day real banks is a form of investment, but that’s beside the point.)

  26. Stuart Armstrong says:

    >Mild concern over what if someone had a preference for having a round number of utils. Would such a preference be self-consistent? Would the universe stop?

    This is no longer a utility function, so breaks one of the vNM axioms – at first guess, Independence would be the one to break. You can be self-consistent without independence, but I don’t think this system would work in the real world, where there are probabilities of all sorts of universes (with non-round numbers of utils) all the time.

    Maybe you could get it to work this way: use a redefined utility where states of the world with round numbers of utils in the previous system get 1 util in the second; all other states get zero.
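
    A literal rendering of that construction (the test for “round” is a stand-in, since “round” was never pinned down):

      def is_round(u):
          """One possible reading of 'a round number of utils': a multiple of ten."""
          return u % 10 == 0

      def redefined_utility(state, old_utility):
          """1 util for states whose old utility is round, 0 for everything else."""
          return 1.0 if is_round(old_utility(state)) else 0.0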

    >Aaron, channeling Robin Hanson, points out that exponential discounting means

    The solution Nick Beckstead of the FHI has convinced me to adopt: do not discount anyone’s preferences, in theory. There are then some practical things that cause discounting in practice – lack of knowledge, growing economies, selfishness of current people. Even there, it may be that we choose to discount future lives just so that we can reach a decision, but that’s a practical justification, not a moral one.

    >Two of the attendees were unschooled

    People who attend rationalist meetings are so far from the norm that their experiences are probably anti-useful when talking about unschooled people in general (and this is generally true of most members of subgroups that attend rationalist meetings).

    • ozymandias says:

      People who attend rationalist meetings may be far from the norm of people in general but that doesn’t necessarily mean they’re far from the norm of unschoolers, who are also not exactly a randomly selected population.

      • Stuart Armstrong says:

        To have that, the selection process to attend rationalist meetings would have to add no biases to the unschooled community. That seems very unlikely; do you have any evidence it’s true?

  27. Oligopsony says:

    Human time preferences aren’t just goofy because they favor the present over the future, they’re goofy because they’re hyperbolic: the discounting isn’t constant over time, meaning that the conversion ratio between two different points in absolute time depends on the point in absolute time you’re at currently.

    The solution to this is probably either to acknowledge that humans are lots of agents over time or that getting from behavior to basic preferences is a bit more difficult than our usual intuitions about revealed preferences suggest (obviously the availability of information is at least part of the story, in terms of why it’s adaptive for us to hyperbolically discount if nothing else), and these responses may or may not imply each other, idk.
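
    A minimal sketch of that contrast (the discount parameters are made up; only the shapes matter):

      def exponential_discount(t, delta=0.9):
          """Weight on a payoff t periods away; the ratio between any two dates
          is the same no matter when 'now' is (time-consistent)."""
          return delta ** t

      def hyperbolic_discount(t, k=1.0):
          """Weight on a payoff t periods away; the ratio between two dates
          shifts as 'now' moves, which is where preference reversals come from."""
          return 1.0 / (1.0 + k * t)

      print(exponential_discount(11) / exponential_discount(10))  # ~0.9
      print(hyperbolic_discount(11) / hyperbolic_discount(10))    # ~0.917 -- waiting a day looks cheap
      print(exponential_discount(1) / exponential_discount(0))    # 0.9 (unchanged)
      print(hyperbolic_discount(1) / hyperbolic_discount(0))      # 0.5 -- same gap, viewed from 'now'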

  28. A voting system I like is Majority Judgement. Everyone assigns a grade (e.g. ‘good/neutral/bad’ but it can be on any ordinal scale you like). It then aggregates those grades by median + a tie breaker rule to elect the candidate that the majority assigned the highest grade to.

    I also quite like this as a resolution to the ordinal utility problem. You have a shared set of common labels which represent certain levels of value. Although just looking at the two of them ordinal utility only lets you say you like curing cancer better than potato chips, when adding in the set of common grades you can also say that you like potato chips as much as “eh, it’s pretty good” and curing cancer as much as “nearly the best thing ever”.

    The reason why this isn’t just reinventing cardinal utility is that the grades don’t form an interval scale – it makes sense to say that curing cancer is a lot better than potato chips, but it doesn’t really make any sense to say that e.g. it’s more than twice as good.
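
    A rough sketch of the aggregation step (the grade scale and ballots are made up, and the tie-break used here, the larger share of grades above the shared median, is a simplification of the actual Balinski-Laraki rule):

      import statistics

      GRADES = {"bad": 0, "neutral": 1, "good": 2}   # any ordinal scale works

      def majority_judgement(ballots):
          # ballots: {candidate: [grade, grade, ...]}
          def key(candidate):
              scores = sorted(GRADES[g] for g in ballots[candidate])
              median = statistics.median_low(scores)
              above = sum(s > median for s in scores) / len(scores)
              return (median, above)
          return max(ballots, key=key)

      print(majority_judgement({
          "A": ["good", "neutral", "bad", "good"],
          "B": ["neutral", "neutral", "neutral", "good"],
      }))   # both have median "neutral"; A wins the tie-break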

    • Gilbert says:

      But that’s basically a Sherry Bobbins solution. It looks fair for aggregating the scores but that’s because it splits the real problem into assigning the scores and aggregating them and then only looks at the second part.
      More precisely for voting:
      If you rank politicians from 1 to 10 and honestly report that everyone with a snowball chance is either a 3 or a 4, that’s literally equivalent to you only having one fifth of the voting power of someone ranking those a 1 and a 10. OTOH, if you rank everyone as 1 or 10 (as is always strategically optimal), the system decays into ordinary plurality voting.
      More precisely for honest utility aggregation
      – If you want to use this you need to be able to name actual best and worst things conceivable.
      – If you have only a few levels this will basically push any relevant choices to the same level, making the system useless.
      – If you have loads of levels, the system basically reduces to cardinal utility except for the upper and lower bounds.

      • I don’t think the optimal strategies for voting with this are what you think they are. In particular, because of the way it works it is relatively insensitive to intense departures from the median: If the median vote for something is 5 it probably doesn’t matter whether you score it a 7 or a 10, just that you voted it better than median (there are tie breakers where it matters, but they tend to be very hard to trigger with large populaces). Ditto whether you voted it a 1 or a 3. There *are* cases where it discourages honest reporting of values, but that’s the case with just about every voting system, and it’s generally at least moderately resilient to manipulation.

        For the large number of rankings it doesn’t reduce to cardinal utility for the reason I said in the original post: Although you still have to / can provide a very large number of gradations, those grades don’t have the same set of permissible operations on them. You still have the difficulty of providing precise grades for things of course, but that’s pretty much true of any fine grained theory of utility.

        (Also if your “true” theory of utility is purely ordinal you can always treat each grade as a range and then do ordering within the grades if you need finer grained information)

        • Gilbert says:

          Oops, sorry, I actually wanted to reply to your actual proposal, then forgot it mid-typing and retrieved my cached response to a more standard range voting proposal (i.e. average rather than median). So what I actually wanted to say is this:

          As a voting system:
          If I honestly rank the politicians with a snowball chance as somewhere between 2 and 4, while a majority ranks them somewhere between 5 and 7 then I get counted as below median for everyone, and my opinion that some are less bad than others doesn’t influence the result at all. Same if it was the other way around. In other words, voters not close to the median get disenfranchised entirely.

          It’s true that trying to fake closeness to the median is harder than faking extremism, because it requires prior information about where the median might end up. But that doesn’t sound like it’s worth adding a popularity contest among voters to the one we already have among candidates.

          for use as an honest-preference aggregation scheme:
          The point of the precise-grade-assignment problem is that the solution is part of the actual aggregation scheme. Depending on which grade-assignment solution you pick, the overall scheme will be very different. So even if you think ignoring the weird people is an acceptable solution to the aggregation problem you still haven’t solved it, just pushed it under the carpet of how to assign the scores. Plus, that part got even harder, because you no longer can appeal to probability arguments.

      • Daniel H says:

        As stated elsewhere, the grandparent comment isn’t standard range voting. However, I find your statements about that to be contrary to what I’d previously heard, and would like to discuss range and approval voting.

        As I understand it, there are rare edge cases where completely strategic range does not degenerate to approval, and a very large number of situations where completely strategic approval does not degenerate to honest plurality. And in fact it’s provable that neither degenerates to honest plurality, because that further degenerates to strategic plurality.

        As an example of approval not degenerating to plurality, let’s say I’m a registered voter in US 2012, and I like Gary Johnson over Barack Obama over Mitt Romney (and everybody else has suddenly decided to stop running in order to make the hypothetical simpler). Gary Johnson had no chance of winning in this election (in the real thing, he got very close to 1% of the popular vote). In this case, my best strategy is to approve Johnson and Obama, and not approve Romney (or do the equivalent with max/min score for range voting). I approve Johnson because the only way it can theoretically make a difference to the results is if somehow Johnson wins (which is a good thing from my point of view), and otherwise it will give his party publicity. I approve Obama and disapprove Romney because it’s horrible strategy to give them the same vote. This is not equivalent to plurality, honest or strategic. If we add Virgil Goode of the Constitution Party (who I like worse than Romney) to the ballot, then my strategy is to approve two and disapprove two. This doesn’t degenerate to plurality or anti-plurality.

        In any case, neither would be completely strategic: currently, at least 2% of American voters vote non-strategically (according to this table), and I’d expect this number to increase in most non-plurality voting methods.
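
        A toy tally to make the non-degeneration concrete (all ballot counts are invented):

          from collections import Counter

          # Each ballot is the set of candidates that voter approves of.
          ballots = (
                [{"Johnson", "Obama"}] * 4    # Johnson-first voters hedging with Obama
              + [{"Obama"}] * 45
              + [{"Romney"}] * 48
              + [{"Johnson"}] * 3
          )
          print(Counter(c for ballot in ballots for c in ballot))
          # Obama 49, Romney 48, Johnson 7: the hedged ballots help block Romney while still
          # registering Johnson support; honest plurality here splits 7-45-48 and Romney wins.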

        • Gilbert says:

          Well, I think the advantages are often overstated, but of course it has some advantages.

          The standard strategy advice for approval is pretty much to approve of the candidate you would have voted for under plurality and of everyone you like better.

          Yes, this allows you to vote for a viable candidate and still be counted as someone who actually supported an outsider and that’s a good thing.

          But it pretty much breaks down the moment it goes beyond symbolism. Because if Gary Johnson ever becomes viable competition for Obama (don’t count on it; I think for a German I’m very well informed about American politics, but I had to google him up), then your approval of Obama actually does endanger Johnson’s election (because it might lift Obama above him), while not approving of Obama still risks electing Romney (whom Obama might have beaten but for the Johnsonite spoilers).

          Let me put it like this: Every voting system has paradoxes and at some point it’s a question of which perverse features we’re least uncomfortable with. But approval, while somewhat better than plurality, is not at that frontier. I think it’s strictly worse than basically any rank-order system except Borda.

  29. suntzuanime says:

    The only sane way of handling utilitarianism is to respect neither the preferences of the dead nor the not-yet-living. This neatly answers the question of total vs. average utilitarianism by making them resolve to the same thing.

    • Daniel H says:

      But this would imply that it is morally neutral to create a being doomed to suffer, as long as once it was created you tried to stop that suffering if able. I don’t think I’d think it moral to have a child if the Devil had claimed the soul of such a child and planned to give it eternal damnation in Hell (assuming that the Devil is more competent at creating Hell than a lot of humans are at imagining it). Creating a being that can only suffer doesn’t sound any better than not alleviating the suffering of an already existing being.

      Also, I agree about the utility of the dead, to a point. And that point is the disagreement among the general population about whether dead people still exist and what if any connection they have to their earthly selves. If everybody agreed that there was no afterlife and that legal death is the same as information theoretic death, then I’d agree. Since there are people who believe in resurrection, reincarnation, and afterlives, I’m not sure I agree as a practical matter. If I believe that I can go to some sort of life-after-death iff my body is burnt in a specific ceremony, or iff I am buried with a certain scroll, or iff my head is stored at less than 153 Kelvins, then I want others to respect my desire for this to happen even if they don’t share my belief.

      • suntzuanime says:

        At some point you have to pick which Conclusion is least Repugnant. I choose the conclusion that lets children be born into a sinful world if their parents want them to be, rather than the one that wipes out most of the human race or grotesquely overpopulates the world.

        • g says:

          I don’t think Parfit’s Repugnant Conclusion is so very repugnant when looked at correctly.

          In particular, the utility level it puts everyone at is not “bad enough that people at that level are indifferent between living and dying” but “good/bad enough that, without any actual births/deaths/creations/… being involved, we are indifferent to a change in the number of people at that level”.

          For me, at least, I suspect that’s above the median utility level of people in the present world.

          (To be clear, though, this doesn’t mean that I am (worse than) indifferent to the survival of someone at median level. The fact of someone’s having died, or been killed, is itself a fact about the world. The fact of someone’s having survived is itself a fact about the world. I can have a strong preference for someone to remain alive, even if beforehand I’d have been indifferent between a world where they exist and a world where they never do.)

          In particular, if part of your reason for ignoring the utilities of not-presently-living people is to avoid Parfit’s RC then I don’t find that very compelling.

          Also: Parfit’s argument leads to gross overpopulation (of an objectionable sort, i.e., one where typical utilities are low or something like that) for certain sets of available options. For instance, suppose it turns out that according to some ethical theory — mine, perhaps — Parfit’s argument ends up showing that a world with 100 times more people, all at 10% lower utility, is “better”. OK, fair enough, so given the choice between just those worlds we should choose the latter. But when would that actually be our choice? It seems at least credible to me that in realistic situations where we have both those options we end up also having options that are “better” in the same sense and less Repugnant.

        • Ghatanathoah says:

          I think that it’s possible to have a system that eliminates all three of those conclusions.

          The one I developed I am calling “selective negative utilitarianism.” To state it in very oversimplified terms, it initially considers the addition of (well-off) new people a positive, like normal utilitarianism does. However, as you start down the path to the RC, it starts considering them a minus, like negative utilitarianism does.

          This avoids the Repugnant Conclusion, because it starts treating adding new people as negative before you get there. It avoids antinatalism because it initially treats adding people as positive and doesn’t consider it a negative until one starts down the path to the RC. And it avoids the “it’s okay to create people who will suffer in the future” conclusion because it does consider the preferences of future people, it just varies on whether their addition is negative or positive in the grand scheme of things.

          It doesn’t avoid Arrhenius’s Sadistic Conclusion, but that’s a conclusion everyone implicitly accepts anyway.

        • suntzuanime says:

          That seems like a huge fudge factor to me. You’d have to have some notion of the “ideal” population of beings independent from any effect on the utility of those beings.

        • Ghatanathoah says:

          @suntzuanime

          That’s true. You do need some sort of notion of what an ideal population size is. I see trying to fix an ideal size-to-average-utility ratio as considerably more palatable than accepting any horrific conclusions.

          Because of this I actually believe in something like G.E. Moore’s “Ideal Utilitarianism”, where in addition to maximizing personal value we maximize the quantity of certain ideals (e.g. beauty, curiosity, freedom, diversity, etc.). I differ from Moore, however, in that he seemed to think that these ideals existed independently of people (he seemed to think that a beautiful piece of art would remain beautiful if no one looked at it).

          I know that this is nonsense; beauty is in the eye of the beholder, so to create beauty we need to create beholders. In order to maximize ideals like curiosity, knowledge, beauty, friendship, etc., we need to create creatures that value these things and then maximize their utility.

          This is where my selective negative utilitarianism comes in again. The addition of a rational creature that does not value these things is a negative, while adding a creature that does value these things is a positive if it does not lead to the Repugnant Conclusion. So generally it’s okay for a happy normal human to be born, but if you think they might become a sociopath or a hedonic utilitarian you shouldn’t have them.

          So how do we determine the ideal population size? We take one ideal (having utility concentrated in a small population instead of diffused in a big one) and weigh it against other ideals, like diversity and friendship. Yes, it will be hard to pinpoint the exact amount. But it’s better than the repugnant alternatives.

      • g says:

        I think you can get a reasonable degree of respect for the wishes of the dead even while fundamentally only valuing the interests of the living. If, while still alive, Joseph wants to know that after he dies his body will be entombed in a stone vault on whose walls are inscribed the praises of Great Cthulhu, and Josephine wants to know that after she dies her head will be severed, perfused with antifreeze, and placed in a bottle of liquid nitrogen — why, then, Joseph and Josephine will both be much happier while alive if others can credibly commit themselves to doing those things after Joseph and Josephine die.

    • Oligopsony says:

      The same equality is also achieved by thinking of yourself as in a Tegmark universe where the only unknown variable is measure. This is goofy but also doesn’t say you shouldn’t care about people in the future.

      • suntzuanime says:

        Doesn’t that just shift the issue from the discrete realm to the continuous one? Any time someone says they’ve solved a problem with Tegmark universes and measure my assumption is that it’s probably just smoke and mirrors and the original problem remains intact underneath.

    • James says:

      Wait, what? That doesn’t seem to follow.

      Total utilitarianism would prefer (for example), a world with 10 people each with 10 utils over a world with one person who has 90 utils, and average utilitarianism prefers the opposite.

      In the first situation, an average-utilitarian would be willing to kill 9 of the people if it would give the tenth +80 utility. But a total-utilitarian wouldn’t.

      This is true even if they both followed your philosophy. (Wait. There is a way it could be consistent. Are you saying that if you plan on killing someone, you shouldn’t count them while doing your calculations of (current) utility? That seems ridiculously abhorrent. But if the calculation of utility at a certain time only counts the people alive at that time, the scenario above stands.)
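
      Spelling out the arithmetic in that example:

        before = [10] * 10   # ten people at 10 utils each
        after = [90]         # nine killed, the survivor boosted by 80

        print(sum(before), sum(after))                              # total: 100 vs 90 -> total utilitarian refuses
        print(sum(before) / len(before), sum(after) / len(after))   # average: 10 vs 90 -> average utilitarian accepts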

      • suntzuanime says:

        The people who are currently alive are the ones that count, even in future states where they are not alive. When you think of what future states are good, states where currently-living people who do not want to be murdered have been murdered are not good states.

        There are either 10 people who are currently alive, or there is 1. You can’t compare worlds where the number of people who are currently alive is different, because in all worlds the number of people who are currently alive is the same.

        • g says:

          This introduces a time inconsistency akin to hyperbolic discounting, doesn’t it? Because it makes your preference now about what happens in the future after someone is born or dies, different from your preference after they are born or die. Even when you know that the birth or death is going to happen.

          So, e.g., suppose your dearly beloved Uncle Joe is on the point of death. He asks you to make a large donation in his name to the Bigoted Fascist Party, whose values you despise. But your desire for Uncle Joe’s happiness is so strong that you willingly commit yourself in some suitably irreversible way to making the donation. Then Uncle Joe dies, you no longer care at all about his wishes, and now your past commitment to make the donation strikes you as a terrible, pointless mistake.

          Or: You love having children around and learn of a treatment that will enable you and your spouse (nearing the end of your time of mutual fertility) to have quintuplets rather than a single child. Unfortunately this treatment will predictably give all the quintuplets a medical condition that causes them constant agony from about age 18 until their premature deaths at age about 30. Well, that’s OK. You can kick them out of the house at age 18, and they can endure their agony on their own; you’ll get to enjoy their company throughout their childhood. — And then you do it, and the quintuplets are born, and instantly you realise that you made a terrible terrible decision.

          (Of course your future regret is knowable in advance, and needs to be taken into account. But that regret may well be outweighed for present-you by dear Uncle Joe’s delight at having his favourite political party helped, or by the anticipated delight of future-you at having five children to play with instead of one. And indeed future-you may derive more pleasure from those children’s existence than suffering at contemplating their awful future — but that doesn’t mean that future-you will think that present-you made a good decision.)

        • suntzuanime says:

          Yeah, it introduces a time inconsistency. I dunno that that makes it bad as a description of human ethics, though, because humans are demonstrably time-inconsistent.

  30. On ordinal vs cardinal utility:

    Virtually all confusions and controversies over every utility-related concept such as utilitarianism, utility monsters, ordinal and cardinal utility, measuring utility, and interpersonal utility comparisons, are rather easily dissolved by one single realization: utility doesn’t mean anything at all.

    The maximization paradigm in economics is bloody useful, which is why it has survived so much abuse and challenge. However, the awkward question soon surfaces: people are maximizers…of what?

    Some econophilosophers argued that people act to maximize pleasure. Others thought that people act to minimize pain. Of late the idea that people are or should be trying to maximize happiness seems popular. “Satisfaction” is a vague term that gets tossed around sometimes. But none of these answers pass muster. Can you really frame your own actions in terms of trying to maximize pleasure or happiness or trying to minimize pain? You will observe yourself and others trading off between these different and seemingly incomparable values.

    Trying to define a single consistent value or even a single function that explains the choices people make is nigh impossible. It might not even be the right question. Where is it written that people have a single consistent utility function?

    So do we throw out the maximization paradigm? Gods no, that would be most unscientific. But it’s so awkward saying, “People choose so as to maximize…something. Just trust me, they do, okay? Yes, I know it isn’t falsifiable. S-So what?”

    We need a word that sounds like it means something without in fact meaning anything at all. The word economists chose is…utility. People choose so as to maximize…utility. What’s utility? Never ask that question!

    Those of the aspiring rationalist bent will recognize this as a “fake explanation” in the ignoble line of magic, gods and phlogiston. Interestingly, this is a case where a fake explanation actually functioned to protect a valuable scientific paradigm, that of constrained maximization. It’s amazing how babies can surface in the bathtub just as one is about to tip it out the window….

    Of course it would be better if we could simply say, “People are maximizing I-don’t-know-what, but trust me it works, okay?” but people are not so rational, nor are the “thinkers” who set themselves against the validity of economic science interested in a fair fight. Nor is it the case that “utility” came to serve its function as part of some secret plan among economists. As a result, economists and philosophers sometimes have the most fascinating debates about nothing at all….

    When discussing utilitarianism, utility monsters, cardinal vs. ordinal utility, interpersonal comparisons of utility, etc., try to taboo “utility” and substitute instead “I-don’t-know-what.” Also bear in mind that “utility” substitutes for a wide range of values, as does “I-don’t-know-what.”

    Economics is the science of choice (and other things), and to grasp the value and worthlessness of utility, consider happiness vs. satisfaction, both of which are, though I am no expert, presumably measurable as real neurological responses to some chemical. Correct me if I’m wrong, but I see no reason why “happiness” and “satisfaction” shouldn’t be measurable as consistent quantities just like height or weight in principle, whereas economists are generally quite firm about utility being unmeasurable. Admittedly, they usually get the reason wrong, referring to “mental states” or some non-answer like that rather than the fact that utility is nothing, and nothing can’t be measured no matter how sensitive the instrument is. Nevertheless, consider then the question of comparing some unit of happiness to some unit of satisfaction. Is it not like comparing height to weight? In what sense can someone be taller than someone else is heavy? (as comparisons to a mean strikes me as the pedant’s response, but please, let’s have a polite conversation.)

    However, people do observably make choices between happiness, satisfaction, and a dozen other values. I find writing this quite satisfying, whereas I would probably be happier going to bed, and yet I would not sacrifice any amount of happiness for this amount of satisfaction. Somehow I am making a comparison between two incomparable quantities–I am solving the question of which gives me the most “utility,” but utility is not any one thing. We have only added another turtle to the growing pile. To settle-by-not-settling this odd question, economists frame all choices as tradeoffs between different utilities, which are as comparable to each other as any other made-up unit is.

    So to finally come back around to ordinal vs. cardinal utility, the question is only which tool seems most useful for solving some problem. Debating the properties of [nothing] is hardly likely to yield anything of use. Nevertheless, cardinal utility is generally considered less useful, and this is because it violates the very utility of utility, which is that it has no particular properties. Assigning utility any kind of quality that should be consistent and measurable defeats the point of the concept of utility in the first place. Similarly, apply this to the question of a preference for a round number of utils and see if the question still makes sense.

    tl;dr:

    If we thought that people maximized happiness, we would say they were happiness maximizers, not utility maximizers. Truth is, we don’t know what people maximize, but maximizing is useful so we say they maximize utility i.e. we-don’t-know-go-away-and-stop-asking-questions. When thinking about questions of utility, substitute “I-don’t-know-what” for “utility,” and remember that utility covers a wide range of values rather than being a substitute for a single one. Then listen to people debate utility monsters and sigh.

    Relevant quote:

    “I’ll tell you a tale about an English economist, Ely Devons. I was at a conference and he said, “Let’s consider what an economist would do if he wanted to study horses.” He said, “What would he do? He’d go to his study and think, ‘What would I do if I were a horse?’ And he’d come up with the conclusion that he’d maximize his utility.” That wouldn’t take us very far if we were interested in horses….”

    –Ronald Coase, Nobel laureate, in his remarks at the University of Missouri in 2002.

    • Ben A says:

      Generally, bingo for this comment. Utilitarianism lacks a theory of value.

    • suntzuanime says:

      I vehemently disagree. You are making the mistake of dismissing as nonsense what is imprecise, but even imprecise concepts may be useful for thinking about the world. People often act as though they try to achieve specific sorts of worldstates – “utility” is a reification of that for which they strive. You complain that it’s “not any one thing”, but if it were any one thing, we would talk about that thing instead of talking about utility. It is precisely because utility is different from just happiness that we need the term “utility”. We should not be surprised utility is more than one thing – there are arguments for why we should expect this at http://lesswrong.com/lw/l3/thou_art_godshatter/

      I myself am a sort of heretic, in that I don’t believe utility is a fundamentally accurate description of human beings. It’s a flawed model. But it’s a flawed model like Newtonian Mechanics is a flawed model. Treating people like they’re maximizing something usually does pretty well by you.

      • Daniel H says:

        I believe you are saying that Maximizing Something + Hyperbolic Discounting is not a valid model of people, just like Newtonian Mechanics is not a valid model of the world and the Carnot cycle is not a valid model of an engine. In each case, the model is still useful a lot of the time, though.

        How does that make you a heretic? I thought this was commonly accepted. Even economists know that Homo Sapiens and Homo Economicus are not the same species.

      • JohannesD says:

        I certainly hope nobody here gives a particularly high probability estimate for utility being a fundamentally accurate description of human beings.

        • Daniel H says:

          I apparently wasn’t clear in that comment. I know that cardinal utility is not a good model of human behavior or cognition; I meant that in some cases where all the actions and payoffs relate to immediate experience, expected utility is directly encoded. I didn’t mean that there was some neuron or brain structure that I could read with an advanced MRI and get “The utility of one potato chip is 3.72319 utils” (especially if I hadn’t mentioned potato chips to the person in the MRI); I meant that if you were to put a potato chip and an apple in front of me, and asked me to raise a hand indicating which one I wanted, you’d be able to read some sort of comparison from the parts of my brain in charge of the relevant muscle groups. Moreover, I think that information would include more than just a strict binary preference. I didn’t elaborate because it wasn’t the main point of the comment.

    • Mark says:

      Utilitarianism is about what we ought to do, not what we actually do. More specifically, utilitarians say that we ought to maximize utility, not that we do maximize utility. So I think you’re a bit confused. (That wasn’t meant to be condescending at all.)

      You could still try a similar criticism – that any utilitarian theory of what we ought to do is doomed, because utility isn’t any one thing that can be maximized. But I’m not sure how true or important this is. Maybe we can define utility in terms of preference satisfaction. Maybe it’s enough to just come up with a sufficiently good approximation of what we intuitively recognize as happiness and maximize that. Maybe we needn’t come up with a definitive moral verdict for cases where the tradeoffs seem sufficiently incomparable. I don’t know, but it’s complicated.

      • Take the sentence “utilitarians say that we ought to maximize utility,” and change it to “utilitarians say that we ought to maximize I-don’t-know-what.” It doesn’t seem like an answer to the question but only a restatement of it.

        I think there’s some confusion here–the utility of utility is not to substitute a single term of value for many but rather to represent and hide our ignorance as to what those values are (preference satisfaction is no more of an answer. What are the preferences?). However, it is also true that whenever people choose they somehow successfully compare apples to oranges according to some criteria of “value.” Utility negates the question of how incomparable qualities can be compared by representing them all as the same quality, but they are not the same qualities and the mystery remains.

        People differ as to what they will intuitively recognize as happiness. Anyway, people aren’t happiness maximizers. Writing this doesn’t make me happy, but if you stopped me, my utility would be lower than if you allowed me to proceed.

        • Desertopa says:

          I don’t think this amounts to a restatement of the problem, and for that matter, if it seems like one to you, I think that suggests that you probably have some fundamental dissatisfactions with most or all proposed alternatives to utilitarianism as well. Most moral theories do not suggest that we “maximize something,” but that we follow some particular set of rules or principles, whether or not the consequences seem favorable in any particular case. Within the narrower set of moral theories that are actually consequentialist, there are alternatives that don’t involve “maximizing something,” but they generally retain the “something” component, and lead to the question “if the something is so good, wouldn’t it be better to have more of it?”

          We don’t have a precise description of what utility is. But we don’t have a precise description of the complete set of divine commands, or categorical imperatives, or correct virtue ethics, or whatever else either. But we still understand enough about the general shape of these models to recognize certain suggestions that they make, and ways that they differ from each other.

        • “What should we maximize?”

          “Maximize I-don’t-know-what.”

          “…What is that?”

          “I don’t know. Maximize it.”

          “Gee, thanks!”

          How does that not strike you as a mere restatement of the question?

        • Vulture says:

          Because Deontology, to take an example, doesn’t involve maximizing anything. It doesn’t even involve maximizing deontic-rule-adherence – as I understand it, a deontologist with an injunction against killing would not consider it right to shoot someone who was about to force-feed them a pill that would turn them into a serial killer, even if doing so was unambiguously the action which would result in them killing the least number of times. Deontology does not, as a rule (heh), project into the future.

        • I think you’re suggesting that there are questions other than “What should we maximize?” Fine, but that’s beside the point. Mark thought I was confused as to whether utilitarianism is a prescriptive or descriptive theory, and my point is that it doesn’t matter because a non-answer fails to answer both questions equally.

        • blacktrance says:

          Utility isn’t “I-don’t-know-what”, it unpacks to “the expression of the rankings of world-states made by agents in [ideal mental state]”, where [ideal mental state] is a state of internal consistency, no akrasia, etc.

        • Phlogiston isn’t “I-don’t-know-what,” it unpacks to “the burning-substance which must exist as observed in the process of combustion.”

          But what advance predictions does it imply? Not advance predictions of constrained maximization, which has general properties that allow utility-maximization to be informative despite the meaninglessness of utility, but advance predictions implied by utility itself, apart from the admirable service it performs in defense of the reputation of economists.

        • Desertopa says:

          If you’ve reached a point where the problem looks like “maximize something,” I think you’ve pretty much already adopted the basic premises of utilitarianism.

          If the problem looks like “Follow the divine rules,” then you’ll fall into similar loops of “what are the divine rules?” “The things we’re supposed to follow.” Likewise with pretty much any other moral theory. The fact that we don’t have a precise formulation of the principle that rests at the foundation of utilitarianism isn’t something that actually *distinguishes* it from other moral theories. We still have approximate formulations of the principles underlying our moral theories, and enough insight into the general shape of the theories to pick out important places where they differ, in practical application as well as in theory.

      • Said Achmiz says:

        mylittleeconomy was not, repeat: not, talking about utilitarianism, nor about any other ethical theory.

        Utilitarianism, the ethical theory that says we ought to maximize utility across persons, is not the same thing as the economist’s (and game theorist’s) idea of utility, that-which-an-individual-can-be-construed-to-maximize (if and only if that individual’s preferences satisfy certain criteria).

        The economist’s notion of utility is a descriptive notion of how individuals (with preferences satisfying certain criteria) act with respect to their own preferences and goals. The ethical notion of utilitarianism is (as you say) a normative theory of how individuals should act with respect to other people’s preferences and goals.

        • Mark says:

          Whoops, looks like you’re right! I saw the word “utilitarianism” at the beginning and my brain thought he was incorrectly identifying it with the use of utility in economic models (a mistake I’ve seen people make before). My bad.

    • scaphandre says:

      Great comment!

      I agree that utility is an imprecisely defined concept, something used as a kludge term by economists and philosophers alike. Many do think of utility as a messy construct of happiness and satisfaction.

      I disagree that it then follows that “utility doesn’t mean anything at all”.

      Utility is a thing. But it is a thing as a concept. It’s not something that we can easily externally access or test, other than through behaviour. Perhaps better to say something like utility is a concept that doesn’t always act as people expect it to.

      Utility exists as utility can be gained from considering utility in a model 🙂

    • scaphandre says:

      To stress a point you alluded to – happiness is *not* simply the application of some chemical to the brain. Happiness comes from the state of the mind, which is dependent on millions of neurons and trillions of synapses – and so consequently harder to quantify. I see utility as a necessary abstraction above these things, as we can’t easily measure them.

      Utility is a whole heap more complex in that it can be a ‘score’ achieved from any possible scenario for how ‘good’ it is for the agent, but it is also somewhat more accessible as we can observe the actions of agents. And then sit on our armchairs…

      • Good–happiness yields a physical anticipation. A happy brain means something to you. What does a utilitized brain imply?

        • scaphandre says:

          Good point. A brain being happy *is* physically realised in the world – the agent feels happy. Personally, I had always taken ‘utility’ to be a different level of abstraction. It gives a score for the state of this agent in its current universe.

          I don’t really think of utility as being internal to agent’s brain as intimately as happiness is. It can’t be entirely within the agent’s head, as the capability to get what it wants is part of its utility, and not all of that is known to the agent.

          If you live in a world where your partner has won a lottery, but you don’t know, nothing changes inside you, but you nonetheless have more utilitons.

          I would not be at all surprised if a neuroimaging study (maybe using fMRI or EEG) were to report high accuracy at externally predicting human subjective moment-to-moment happiness. But they can’t get at utility so easily – because utility isn’t *inside* the brain. It’s an abstraction, seen to an omniscient observer, or subtly revealed by the actions of the agent.

          Having read this, I realise I’m not sure how much everyone here would agree. What do you think?

        • I think that it sounds like you’re trying to make sense of a concept that is really meaningless, which is the point of utility. Saying “people maximize utility” rather than “I don’t know what people maximize but I’m going to go with constrained maximization anyway, okay?” allows us to move past the problem we can’t solve to the problem we can.

          What do you think the defenders of vitalism sounded like?

          The idea that utility has anything to do with happiness is simply wrong, full stop. If we meant to say happiness when we say utility, we would just say happiness. Yes, it is a common fallacy even economists make. Nevertheless.

          I find it interesting that the Lesswrong crowd (or is this not that audience?) haven’t immediately recognized another fake explanation upon it being pointed out. It’s a surprise and a good lesson.

        • scaphandre says:

          I hear you, but I think you might be placing higher expectations on the rigorousness of utility than many of those who use the term place on it.

          If you make up a woolly metric of happiness-satisfaction-life-winningness that lets you build workable models of the world, which are independently useful and can make novel predictions, then you have something that is not entirely meaningless.

          We know utilitons are not actually out there like atoms in the world, just as your soul is not in your heart. I don’t think what we have now is a fake explanation – rather an acknowledged lack of a rigorous explanation as of yet. That does not mean that utility is automatically an incoherent or useless concept.

        • The term is meaningless, the model is not, the term allows us to use the model without people getting upset. That’s the utility of utility, as I’ve said.

          Of course it’s a fake explanation. “What do people maximize?” “Why, their utility, of course!” It’s an explanation, and a fake one, ergo….

          Phlogiston and elan vital can also be thought of as an acknowledged lack of an explanation, and indeed that’s what I originally said utility is. The problem is, people often don’t acknowledge this. See e.g. most of the comments in this thread….

    • Alexander Stanislaw says:

      This is way late, but if I understand your point: utilitarians think that humans act in such a way as to maximize utility, but utility is a fake explanation; it doesn’t explain why people act in the way that they do.

      I would say that utility is not an explanation at all but a definition. If people have self-consistent preferences of varying strengths (not a trivial premise!), then for each preference P1, P2, … define U(Pi) as a number or ordinal that satisfies the same properties as the preferences. So if an agent prefers A to B then U(A) > U(B). So rather than saying that humans act so as to maximize utility, we could say that whenever a human chooses an action we define that action as having the highest utility among its alternatives.
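
      A sketch of that definitional move, assuming a finite set of outcomes and preferences that really are complete and transitive (the example ordering is just a stand-in):

        from functools import cmp_to_key

        outcomes = ["potato chip", "lobster dinner", "cancer cure + Nobel"]

        def prefers(a, b):
            # Stand-in for the agent's pairwise preferences; any consistent ordering will do.
            strength = {"potato chip": 0, "lobster dinner": 1, "cancer cure + Nobel": 2}
            return strength[a] - strength[b]

        ranked = sorted(outcomes, key=cmp_to_key(prefers))
        U = {outcome: rank for rank, outcome in enumerate(ranked)}
        print(U)   # U(A) > U(B) exactly when A is preferred to B; the gaps between the numbers carry no meaning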

  31. Douglas Knight says:

    People throw around the names von Neumann and Morgenstern a lot, but I think most discussion misses the point, which is very simple. If you have consistent ordinal preferences (which is a big assumption, but not my problem) over lotteries, then you can measure your preferences in units of probability, which are real numbers. Thus you really have cardinal utilities.

    Probability is fungible, measured by cardinals, and you ought to value it linearly. That last part is a hard sell, but it is a form of dynamic consistency.

    Specifically, if you prefer A to B, and C is somewhere in the middle, then there should be a lottery with probability p of resulting in A and 1-p of resulting in B such that you are indifferent between that lottery and C. Then you say that u(B)=0, u(A)=1, and u(C)=p.
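
    A minimal sketch of that calibration (the indifference probability below is invented):

      def vnm_utility_of_C(p_indifferent, u_B=0.0, u_A=1.0):
          # u(C) is pinned to the probability p at which you are indifferent between
          # C for sure and a lottery giving A with probability p and B otherwise.
          return u_B + p_indifferent * (u_A - u_B)

      print(vnm_utility_of_C(0.7))   # 0.7: a cardinal number falls out of purely ordinal comparisons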

    (I think Sniffnoy’s bringing up Savage is overkill here.)

  32. Desertopa says:

    “Aaron, channeling Robin Hanson, points out that exponential discounting means we consider preferences of people in the future much less important than those of people today – otherwise we would invest all our money to donate the interest to future folk. But this implies that people in the past had more important preferences. If a Neanderthal had preferences that everyone should make sacrifices to the fire god, should we sacrifice to the fire god more?”

    I think that the conclusion that we would invest all our money to serve the interests of future people if not for exponential discounting assumes much more consistency of human reasoning than we actually have. I think the reason people overwhelmingly do not do this is that it doesn’t occur to them, and if it did occur to them they would write the conclusion not to do it at the bottom line before deciding why, not because they care less about future people, but because it’s inconvenient and seems weird because we have no cultural or evolutionary basis for behaving that way.

    Also, if we’re extrapolating the conclusion that humans apply discounting to the interests of future people from observation of their behavior, I think we’d equally have to conclude that humans apply discounting to the interests of past people. The discount rate operates bidirectionally from the present.

    But I don’t think that discount rates accurately describe human preferences over time in any but extremely narrow domains.

  33. James Miller says:

    Your potato chip comparison can be shown by ordinal utility when you include money: For what dollar amount X are you indifferent between [having one potato chip and getting X] and [curing cancer and winning a Nobel Prize]. Or it could be represented by using lotteries: For what probability p are you indifferent between getting with certainty [one potato chip] or getting with probability p [curing cancer and winning a Nobel Prize].

    • ADifferentAnonymous says:

      The lottery thing. In particular, I think following the VNM axioms gives you all the expressive power of a cardinal utility function; if you’re indifferent between 100% chance of a potato chip and a (100/X)% chance of curing cancer, then you’ve basically said curing cancer is X times the utility of a potato chip.
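
      The step from the indifference probability to a ratio, made explicit (the probability is of course invented):

        # If you are indifferent between a chip for sure and a p chance of curing cancer,
        # expected utility gives u(chip) = p * u(cure), so u(cure) / u(chip) = 1 / p.
        p = 1e-9
        print(1 / p)   # the implied "X times better" factor, here a billion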

      • Paul Torek says:

        But what about the value of risk? Real people often choose in a way that suggests that risk itself has a negative (or in some cases, positive) value. In which case you haven’t said that curing cancer is X times the utility of a potato chip.

    • Vaniver says:

      Or it could be represented by using lotteries: For what probability p are you indifferent between getting with certainty [one potato chip] or getting with probability p [curing cancer and winning a Nobel Prize].

      For this to be consistent, it needs to map to a cardinal utility system.

    • Scott Alexander says:

      I think you’re reinventing cardinals, not bypassing them.

      • James Miller says:

        I’m getting cardinal intuition with ordinal assumptions. It’s not cardinal utility because all I ever ask of consumers is to tell me if one basket of goods is >,<, or = to another in terms of utility. Cardinal utility is "bad" in part because it's unnecessary.

        • Scott Alexander says:

          But money works on a cardinal system. If you’re getting all your work out of pegging ordinals to money, it’s the cardinalness of money doing the work, not the ordinalness of utility.

        • James Miller says:

          This is a reply to Scott above.

          Yes, which is great because money is cardinal whereas it is difficult to establish by experiment that human happiness is cardinal.

  34. Said Achmiz says:

    Not that I’m necessarily a fan of ordinal utility, but is the problem you describe really a problem? And does cardinal utility really solve it?

    About the latter: ok, so you think A would be more than twice as good as B. But we seem to run into problems if we try to go further. Do you think A would be more than ten times as good as B? A hundred times? A million times? 3^^^3 times? Pinning down a number seems hard; even getting to the right order of magnitude seems hard. And then, if you do get a number, what does it mean? That you’d trade N potato chips for a cancer cure and a Nobel prize, but not N+1? Seems weird… That you’d get N times as much utility? But what the heck does that mean? Is “utility” here some measure of psychological satisfaction? Or an abstract construct? But then we have to operationalize that

    About the former: maybe saying “a cancer cure plus a Nobel prize is better than a potato chip” doesn’t adequately convey your sense of their relative importance. But what about: “a cancer cure plus a Nobel prize is better than just a cancer cure; which in turn is better than achieving peace in the Middle East; which in turn is better than becoming a world-famous movie star; which in turn is better than getting a basketful of cute kittens; which in turn is better than a three-course lobster dinner at my favorite restaurant; which in turn is better than… ” and then the series eventually gets to a potato chip (and maybe we can insert arbitrarily many intermediate steps between each of the ones I listed). Doesn’t that go pretty far in conveying the intuition about how much better the cancer cure and Nobel are than a chip, and without introducing any pesky numbers?

    • Scott Alexander says:

      Hmmmm. That’s the best solution I’ve seen so far. But to an alien (or a computer that’s being run on this system), isn’t it possible that the utilities of all these things are very, very similar?

      Like, 10^24 atoms of potato chips is better than 10^24-1 atoms of potato chips is better than…1 atom of potato chips, but in the end I don’t care that much about any of these very many options.

      And I could imagine a person whose only two pleasures (more or less) are eating chips and protecting his young child’s life, but that doesn’t mean these two things are necessarily similar in value.

      • Said Achmiz says:

        Without cardinal utility, what does it even mean to say that the utilities of all these things are very, very similar?

        We can say that you don’t care much between the various amounts of potato chips, because there are many things about which you (ordinally speaking) care more than about any amount of potato chips, and those many things are also ordered among themselves.

        • Daniel H says:

          I feel like this is probably a time to check the psychology or cognitive science literature instead of armchair philosophizing. I think that we (as a species, not as a collection of commenters on a blog or a collection of people who went to the blog host’s house) pretty much know how utility of immediately available things is encoded in the brain, and although this might not generalize to utility of hypothetical options that aren’t immediately available or happen in the future, I am not a cognitive scientist or a psychologist and don’t know.

          If you’re wondering how an ideal brain stores its utility, I think I’ve seen proofs that it’s based on the VNM utility theory. If you’re wondering about humans, the only way to know is to get some actual humans and find out, without counting yourself in the set of observed humans. People have already done this. If they haven’t answered the question satisfactorily, then you can start hypothesizing (preferably in a way that can lead to an experiment you can do, but that seems unlikely unless you have an EEG).

          TL;DR: This is a time to exercise the virtue of scholarship.

        • Said Achmiz says:

          I’m pretty sure there isn’t any such thing as “utility” that is in any meaningful sense “encoded in the brain”, except in the sense that we “have preferences” (more realistically, act in such a way that we can be construed to have “revealed” preferences, and also sometimes speak about our stated preferences, which may not match our “revealed” preferences), and those preferences (maybe! possibly!) satisfy the VNM axioms and thus allow the construction of a utility scale which predicts those preferences.

          In short, the only things “encoded in the brain” are the psychological tendencies and proclivities to behavior and so forth, and our beliefs and biases and such that lead to us stating certain preferences.

          We know all of this, so in a sense the question has been answered, as satisfactorily as it’s going to be answered. The discussion about utility no longer has anything to do with how utility is stored in the brain, since we know it isn’t, period. The discussion is about which sort of utility scale is most mathematically/theoretically isomorphic to the psychological structure of our preferences, or in other words, which sort of utility scale allows us to reason correctly about human beliefs, values, behaviors, etc. This is not a question to be answered by looking at the brain.

        • Daniel H says:

          In at least some cases, in at least some primates, expected utility or a variant thereof is encoded directly in the brain, as explained here starting in the Expected Utility section.

          It probably doesn’t generalize to comparing cancer and potato chips (or maybe even to comparing apples and oranges), but it is there to some extent. I should probably re-read that article to be more clear on the subject.

          In either case, there is something in the brain that makes Scott say that curing cancer and getting a Nobel would be more than twice as good as getting a potato chip, meaning that ordinal utility is not a perfect model of human cognition. I would be at least slightly surprised if that something hadn’t yet been studied.

    • TeaMug says:

      I betcha that utility, like a lot of human experience, is graded on a logarithmic scale. So, n vs n+1 potato chips only matters for very small n (like, 3 or less); I’d bet that if perceived value is x, actual value is more like 2^x.

      I’m not sure I buy my claim though… time for silly math:

      I still rate [Cancer Cure] as far better than [Potato Chip], but I personally rate [another moon, say the size of Ganymede, but with lots of nice organics, like you might get from Potato Chips] about equally.

      If a potato chip is ~1/4 gram, then
      1 [Ganymede] = 6*10^26 [Potato Chip] ~= 2^89 [Potato Chip]

      This implies that I value [Cancer Cure] at least 89 times as much as I value [Potato Chip]. That seems… low. Possible error sources include: brain can’t comprehend very large numbers too well; I’m actually just that selfish; making a moon out of potato chips is a bad idea when calculating utility unit-conversions.
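
      A quick check of the silly math (Ganymede’s mass taken as roughly 1.5*10^23 kg, which is my assumption):

        import math

        ganymede_kg = 1.48e23    # approximate mass of Ganymede
        chip_kg = 0.25e-3        # a quarter-gram chip
        n_chips = ganymede_kg / chip_kg
        print(f"{n_chips:.1e}", math.log2(n_chips))   # ~5.9e26 chips, log2 ~ 88.9

        # Under the "perceived value x -> actual value 2**x" guess, a 2**89 gap in actual value
        # is about 89 extra units of perceived value, i.e. "89 times a chip" only if a chip's
        # perceived value is about 1 unit.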

      • Anonymous says:

        It’s not necessarily logarithmic, but I think there are definitely nonlinear functions of numbers-of-each-of-different-goods involved before you get to reasonable cardinal utility (which is the Obvious Right Answer). Take complements and substitutes from basic microeconomics, picture them as source code, and extrapolate to code with more flow control structures…
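
        One way to cash out the source-code image (toy functional forms, not anyone’s actual preferences):

          def perfect_substitutes(coffee, tea):
              return coffee + tea                    # one can stand in for the other

          def perfect_complements(left_shoes, right_shoes):
              return min(left_shoes, right_shoes)    # extra of one alone adds nothing

          def with_more_flow_control(chips, cancer_cures):
              # thresholds, satiation, interactions... (all made up)
              return min(chips, 3) + 1_000_000 * cancer_cures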

        • nemryn says:

          Diminishing marginal returns on utility is a thing, isn’t it?

        • Anonymous says:

          Diminishing marginal returns on many other things is often shown in terms of utility. I don’t think utility itself has diminishing marginal returns (in what?), besides that if everything else has diminishing marginal returns in utility, things kind of act like more-naive-and-linear utility has diminishing marginal returns.

    • MugaSofer says:

      Don’t forget how much two potato chips are worth, two Nobel Prizes, two cancer cures, two cancer cures and one Nobel Prize …

  35. Sniffnoy says:

    Was this posted early?

    This is also where I point out that (as was discussed at the meetup) there are quite good reasons for cardinal utility — it comes from ordinal utility over actions with unknown consequences (alternatively, probabilistic combinations of consequences) satisfying reasonable conditions, such as…