<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Slate Star Codex &#187; morality</title>
	<atom:link href="http://slatestarcodex.com/tag/morality/feed/" rel="self" type="application/rss+xml" />
	<link>http://slatestarcodex.com</link>
	<description>In a mad world, all blogging is psychiatry blogging</description>
	<lastBuildDate>Fri, 24 Jul 2015 12:18:22 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.2.3</generator>
	<item>
		<title>Extremism In Thought Experiment Is No Vice</title>
		<link>http://slatestarcodex.com/2015/03/26/high-energy-ethics/</link>
		<comments>http://slatestarcodex.com/2015/03/26/high-energy-ethics/#comments</comments>
		<pubDate>Fri, 27 Mar 2015 02:55:28 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[morality]]></category>
		<category><![CDATA[philosophy]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=3590</guid>
		<description><![CDATA[[content warning: description of fictional rape and torture.] Phil Robertson is being criticized for a thought experiment in which an atheist&#8217;s family is raped and murdered. On a talk show, he accused atheists of believing that there was no such &#8230; <a href="http://slatestarcodex.com/2015/03/26/high-energy-ethics/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><font size="1"><i>[content warning: description of fictional rape and torture.]</i></font></p>
<p>Phil Robertson is being criticized for a thought experiment in which an atheist&#8217;s family is raped and murdered. On a talk show, he accused atheists of believing that there was no such thing as objective right or wrong, then continued:<br />
<blockquote>I’ll make a bet with you. Two guys break into an atheist’s home. He has a little atheist wife and two little atheist daughters. Two guys break into his home and tie him up in a chair and gag him.</p>
<p>Then they take his two daughters in front of him and rape both of them and then shoot them, and they take his wife and then decapitate her head off in front of him, and then they can look at him and say, ‘Isn’t it great that I don’t have to worry about being judged? Isn’t it great that there’s nothing wrong with this? There’s no right or wrong, now, is it dude?’</p>
<p>Then you take a sharp knife and take his manhood and hold it in front of him and say, ‘Wouldn’t it be something if [there] was something wrong with this? But you’re the one who says there is no God, there’s no right, there’s no wrong, so we’re just having fun. We’re sick in the head, have a nice day.’</p>
<p>If it happened to them, they probably would say, ‘Something about this just ain’t right’.</p></blockquote>
<p>The media has completely proportionally described this as Robertson <A HREF="http://unvis.it/www.rawstory.com/rs/2015/03/duck-dynasty-star-fantasizes-about-atheist-familys-brutal-rape-and-murder-to-make-point-about-gods-law/">&#8220;fantasizing about&#8221;</A> raping atheists, and there are the usual calls for him to apologize/get fired/be beheaded.</p>
<p>So let me use whatever credibility I have as a guy with a philosophy degree to confirm that Phil Robertson is doing moral philosophy exactly right.</p>
<p>There&#8217;s a tradition at least as old as Kant of investigating philosophical dilemmas by appealing to our intuitions about extreme cases. Kant, remember, proposed that it was always wrong to lie. A contemporary of his, Benjamin Constant, made the following objection: suppose a murderer is at the door and wants to know where your friend is so he can murder her. If you say nothing, the murderer will get angry and kill you; if you tell the truth he will find and kill your friend; if you lie, he will go on a wild goose chase and give you time to call the police. Lying doesn&#8217;t sound so immoral <i>now</i>, does it?</p>
<p>The brilliance of Constant&#8217;s thought experiment lies in its extreme nature. If a person says they think lying is always wrong, we have two competing hypotheses: they&#8217;re accurately describing their own thought processes, which will indeed always output that lying is wrong; or they&#8217;re misjudging their own thought processes and actually there are some situations in which they will judge lying to be ethical. In order to distinguish between the two, we need to come up with a story that presents the strongest possible case for lying, so that even the tiniest shred of sympathy for lying can be dragged up to the surface.</p>
<p>So Constant says &#8220;It&#8217;s a murderer trying to kill your best friend&#8221;. And even this is suboptimal. It should be a mad scientist trying to kill everyone on Earth. Or an ancient demon, whose victory would doom everyone on Earth, man, woman, and child, to an eternity of the most terrible torture. If some people&#8217;s hidden algorithm is &#8220;lie when the stakes are high enough&#8221;, then we can be sure that the stakes are high enough to tease it out into the light of day.</p>
<p>Compare Churchill:<br />
<blockquote>
<b>Churchill:</b> Madam, would you sleep with me for five million pounds?<br />
<b>Lady:</b> Well, for five million pounds&#8230;well&#8230;that&#8217;s a lot of money.<br />
<b>Churchill:</b> Would you sleep with me for five pounds?<br />
<b>Lady:</b> <i>(enraged)</i> What kind of a woman do you think I am‽<br />
<b>Churchill:</b> We&#8217;ve already established what kind of a woman you are, now we&#8217;re just haggling over the price.</p></blockquote>
<p>The woman thinks she has a principle, &#8220;Never sleep with a man for money&#8221;. In fact, deep down, she believes it&#8217;s okay to sleep with a man for enough money. If Churchill had merely stuck to the five pounds question, she would have continued to believe she held the &#8220;never&#8230;&#8221; principle. By coming up with an extreme case (5 million Churchill-era pounds is about £250 million today) he was able to reveal that her apparent principle was actually a contingent effect of her real principle plus the situation.</p>
<p>In fact, compare physics. Physicists are always doing things like cooling stuff down to a millionth of a degree above absolute zero, or making clocks so precise they&#8217;ll be less than a second off by the time the sun goes out, or accelerating things to 99.99% of the speed of light. And one of the main reasons they do this is to magnify small effects to the point where they can measure them. All movement is causing a <i>little</i> bit of time dilation, but if you want to detect it you need the world&#8217;s most accurate clock on the Space Shuttle when it&#8217;s traveling 17,500 miles per hour. In order to figure out how things really work, you need to turn things up to 11 so that the effect you want is impossible to miss. Everything in the universe has been exerting a gravitational effect on light all the time, but if you want to see it clearly you need to use <A HREF="http://en.wikipedia.org/wiki/Arthur_Eddington#Relativity">the Sun during a solar eclipse</A>, and if you <i>really</i> want to see it clearly your best bet is a black hole.</p>
<p>Great physicists and great philosophers share a certain perversity. The perversity is &#8220;Sure, this principle works in all remotely plausible real-world situations, but WHAT IF THERE&#8217;S A COMPLETELY RIDICULOUS SCENARIO WHERE IT DOESN&#8217;T HOLD??!?!&#8221; Newton&#8217;s theory of gravity explained everything from falling apples to the orbits of the planets impeccably for centuries, and then Einstein asked &#8220;Okay, but what if, when you get objects thousands of times larger than the Earth, there are tiny discrepancies in it? Then we&#8217;d have to throw the whole thing out,&#8221; and instead of running him out of town on a rail scientists celebrated his genius. Likewise, moral philosophers are as happy as anyone else not to lie in the real world. But they wonder whether our everyday moral rules might be revealed to be only simplifications of more fundamental principles, principles that can only be discovered by placing them in a cyclotron and accelerating them to 99.99% of the speed of light.</p>
<p>Sometimes this is even clearer than in the Kant example. Many people, if they think about it at all, believe that value aggregates linearly. That is, two murders are twice as much of a tragedy as one murder; a hundred people losing their homes is ten times as bad as ten people losing their homes.</p>
<p><A HREF="http://lesswrong.com/lw/kn/torture_vs_dust_specks/">Torture vs. Dust Specks</A> is beautiful in its simplicity; it just takes this assumption and creates the most extreme case imaginable. Take a tiny harm and aggregate it an unimaginably high number of times; then compare it against a big harm which is nowhere near the aggregated sum of the tiny ones. So which is worse, 3^^^3 (read: a number higher than you can imagine) people getting a single dust speck in their eye for a fraction of a second, or one person being tortured for fifty years?</p>
<p>Almost everybody thinks their principle is &#8220;things aggregate linearly&#8221;, but when you put it into relief like this, almost everybody&#8217;s intuition tells them the torture is worse. You can &#8220;bite the bullet&#8221; and admit that the dust specks are worse than the torture. Or you can throw out your previous principle saying that things aggregate linearly and try to find another principle about how to aggregate things (good luck).</p>
<p>Moral dilemmas are extreme and disgusting precisely because those are the only cases in which we can make our intuitions strong enough to be clearly detectable. If the question was just &#8220;Which is worse, a thousand people stubbing their toe or one person breaking their leg?&#8221; neither side would have been obviously worse than the other and our true intuition wouldn&#8217;t have come into sharp relief. So a good moral philosopher will <i>always</i> be talking about things like murder, torture, organ-stealing, Hitler, <A HREF="https://www.psychologytoday.com/blog/experiments-in-philosophy/200804/what-s-the-matter-little-brothersister-action">incest</a>, <A HREF="http://www.utilitarian.net/singer/by/199704--.htm">drowning children</A>, <A HREF="http://lesswrong.com/lw/tn/the_true_prisoners_dilemma/">the death of four billion humans</A>, et cetera.</p>
<p>Worse, a good moral philosopher should be constantly agreeing &#8211; or tempted to agree &#8211; to do horrible things in these cases. The whole point of these experiments is to collide two of your intuitions against each other and force you to violate at least one of them. In Kant&#8217;s example, either you&#8217;re lying, or you&#8217;re dooming your friend to die. In Judith Jarvis Thomson&#8217;s <A HREF="http://en.wikipedia.org/wiki/Trolley_problem#Transplant">Transplant Surgeon</A> scenario, you&#8217;re either killing somebody to harvest their organs, or letting a whole hospital full of people die.</p>
<p>I once had someone call the torture vs. dust specks question &#8220;contrived moral dilemma porn&#8221; and say it proved that moral philosophers were kind of crappy people for even considering it. That bothered me. To look at moral philosophers and conclude &#8220;THESE PEOPLE LOVE TO TALK ABOUT INCEST AND ORGAN HARVESTING, AND BRAG ABOUT ALL THE CASES WHEN THEY&#8217;D BE OKAY DOING THAT STUFF. THEY ARE GROSS EDGELORDS AND PROBABLY FANTASIZE ABOUT HAVING SEX WITH THEIR SISTER ON THE HOSPITAL BED OF A PATIENT DYING OF END-STAGE KIDNEY DISEASE,&#8221; is to utterly miss the point.</p>
<p>So let&#8217;s talk about Phil Robertson.</p>
<p>Phil Robertson believes atheists are moral nihilists, or moral relativists, or something. He&#8217;s not quite right &#8211; there are a lot of atheists who are very moral realist &#8211; Objectivists, as their name implies, believe morality, and everything else up to and including the best flavor of ice cream, is Objective &#8211; and even the atheists who aren&#8217;t <i>quite</i> moral realist usually hold some sort of compromise position where it&#8217;s meaningful to talk about right and wrong even if it&#8217;s not <i>cosmically</i> meaningful.</p>
<p>On the other hand &#8211; and I say this as the former secretary of a college atheist club who got to meet <i>all sorts</i> &#8211; there are a bunch of atheists who very much claim not to believe in morality. Less Wrong probably has fewer of them than the average atheist hangout, because we skew so heavily utilitarian, but <A HREF="http://lesswrong.com/lw/lhg/2014_survey_results/">our survey</A> records 4% error theorists and 9% non-cognitivists. When <A HREF="http://www.patheos.com/blogs/friendlyatheist/2015/03/25/when-phil-robertson-fantasized-about-the-rape-and-murder-of-an-atheist-family-what-part-did-he-leave-out/">Friendly Atheist says</A> he &#8220;doesn’t know a single atheist or agnostic who thinks that terrorizing, raping, torturing, mutilating, and killing people is remotely OK&#8221;, I can believe that he doesn&#8217;t know one who would say so in those exact words. But I&#8217;m not sure how, for example, the error theorists could consistently argue against that position.</p>
<p>And what Phil Robertson does is exactly what I would do if I were debating an error theorist. I&#8217;d take the most gratuitously horrible thing I could think of, describe it in the most graphic detail I could, and say &#8220;But don&#8217;t you think there&#8217;s something wrong with <i>this</i>?&#8221; If the error theorist says &#8220;no&#8221;, then I congratulate her for definitely being a real honest-to-goodness error theorist, and unless I can suddenly think up a way to bridge the is-ought dichotomy we&#8217;re finished. But if she says &#8220;Yes, it does seem like there should be something wrong there,&#8221; then we can start exploring what that means and whether error theory is the best framework in which to capture that intuition.</p>
<p>On the other hand, if I were debating Phil Robertson, I would ask him where <i>he</i> thinks morality comes from. And if he suggested some version of divine command theory, I could use an example of the graphic-horrifying-extreme-thought-experiment genre even older than Kant &#8211; namely, Abraham&#8217;s near-sacrifice of Isaac. If God commands you to kill your innocent child, is that the right thing to do? What if God commands you to rape and torture and mutilate your family? And it wouldn&#8217;t work if it were anything less extreme &#8211; if I just said &#8220;What if God told you to shoplift?&#8221; it would be <i>easy</i> to bite that bullet and he wouldn&#8217;t have to face the full implication of his views. But if I went with the extreme version? Maybe Robertson would find he&#8217;s not as big on divine command theory as he thought.</p>
<p>But this sort of discussion would only be possible if we could trust each other to take graphic thought experiments in the spirit in which they were conceived, and not as an opportunity to score cheap points.</p>
<p>[EDIT: This post was previously titled &#8220;High Energy Ethics&#8221;, but I changed it after realizing it was <A HREF="http://slatestarcodex.com/2015/03/26/high-energy-ethics/#comment-193097">unintentionally lifted from elsewhere</A>]</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2015/03/26/high-energy-ethics/feed/</wfw:commentRss>
		<slash:comments>590</slash:comments>
		</item>
		<item>
		<title>A Series Of Unprincipled Exceptions</title>
		<link>http://slatestarcodex.com/2015/03/04/a-series-of-unprincipled-exceptions/</link>
		<comments>http://slatestarcodex.com/2015/03/04/a-series-of-unprincipled-exceptions/#comments</comments>
		<pubDate>Wed, 04 Mar 2015 10:16:12 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[charity]]></category>
		<category><![CDATA[morality]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=3571</guid>
		<description><![CDATA[The more consistently one attempts to adhere to an ideology, the more one&#39;s sanity becomes a series of unprincipled exceptions. &#8212; graaaaaagh (@graaaaaagh) February 5, 2015 Meeting with a large group of effective altruists can be a philosophically disconcerting experience, and &#8230; <a href="http://slatestarcodex.com/2015/03/04/a-series-of-unprincipled-exceptions/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<blockquote class="twitter-tweet" lang="en"><p>The more consistently one attempts to adhere to an ideology, the more one&#39;s sanity becomes a series of unprincipled exceptions.</p>
<p>&mdash; graaaaaagh (@graaaaaagh) <a href="https://twitter.com/graaaaaagh/status/563306761751257089">February 5, 2015</a></p></blockquote>
<p><script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>Meeting with a large group of effective altruists can be a philosophically disconcerting experience, and my recent meetup with Stanford Effective Altruist Club was no exception.</p>
<p>Buck forced me to pay attention to an argument I&#8217;ve been carefully avoiding. Most people intuitively believe that animals have non-zero moral value; it&#8217;s worse to torture a dog than to not do that. Most people also believe their moral value is some function of the animal&#8217;s complexity and intelligence which leaves them less morally important than humans but not <i>infinitely</i> less morally important than humans. Most people then conclude that probably the welfare of animals is moderately important in the same way the welfare of various other demographic groups like elderly people or Norwegians is moderately important &#8211; one more thing to plug into the moral calculus.</p>
<p>In reality it&#8217;s pretty hard to come up with a way of valuing animals that makes this work. If it takes a thousand chickens to have the moral weight of one human, the importance of chicken suffering alone is probably within an order of magnitude of all human suffering. You would need to set your weights <i>remarkably</i> precisely for the values of global animal suffering and global human suffering to even be in the same ballpark. Barring that amazing coincidence, either you shouldn&#8217;t care about animals at all or they should totally swamp every other concern. Most of what would seem like otherwise reasonable premises suggest the &#8220;totally swamp every other concern&#8221; branch.</p>
<p>So if you&#8217;re actually an effective altruist, the sort of person who wants your do-gooding to do the most good per unit resource, you should be focusing entirely on animal-related charities and totally ignoring humans (except insofar as human actions affect animals; worrying about x-risk is probably still okay).</p>
<p>I acknowledged the argument was very convincing, but told Buck that I was basically going to safe-word out of that level of utilitarian reasoning, for the sake of my sanity.</p>
<p>Buck pointed out that this shouldn&#8217;t be too scary, given that many utilitarians have already had to go through a similar process. Peter Singer talks about widening circles of concern. First you move from total selfishness to an understanding that your friends and family are people just like you and need to be treated with respect and understanding. Then you go from just your friends and family to everyone in your community. Then you go from just your community to all humanity. Then you go from just humanity to all animals.</p>
<p>By the time most people figure out what they&#8217;re doing they already accept at least friends, family, and community. But going from &#8220;just my community&#8221; to &#8220;also foreigners&#8221; is a difficult step that&#8217;s kind of at the heart of the effective altruism movement. In the same way that allowing animals into the circle of concern totally pushes out the value of all humans, allowing starving Third World people into the circle of concern totally pushes out most First World charities like art museums and school music programs and holiday food drives. This is a scary discovery and most people shy away from it. Effective altruists are the people who are selected for not having shied away from it. So why shy away from doing the same with animals?</p>
<p>It&#8217;s a good question. After thinking about it for a while, I think my answer is that I never actually completed the process of widening my circles of concern and neither has anybody else, and because I&#8217;m thinking about this one in an abstract intellectual way I&#8217;m imagining actually completing it, which would be much scarier than the incomplete things I&#8217;ve done before.</p>
<p>Like, although I acknowledge my friends and family as important people whom I should try to help, in reality I don&#8217;t treat them as <i>quite</i> as important as myself. If my brother asked me for money, I&#8217;d lend it to him, but I wouldn&#8217;t give him exactly half my money no-strings-attached on the grounds that he is exactly as important to me as I am.</p>
<p>Likewise, although I acknowledge strangers as important people whom I should try to help, in reality I don&#8217;t treat them as <i>quite</i> as important as my friends. We all raised a lot of money to help Multi when she was in a bad situation, but there are thousands of other people in the exact same bad situation and we&#8217;re not putting nearly as much effort into them.</p>
<p>You can try to justify this in terms of &#8220;well, I know myself better than I know my brother, and I know Multi better than I know strangers, so I&#8217;m more <i>effective</i> at helping me and Multi, so I&#8217;m just rationally doing the things that would have the most impact&#8221;. But I think if I bothered to dream up some thought experiment where that wasn&#8217;t true, I would prefer to help me and Multi to my brother and random strangers even after that factor had been controlled away.</p>
<p>This doesn&#8217;t come as a surprise to me and I&#8217;m not sorry. But&#8230;well&#8230;I guess my worry about the animal charity thing wasn&#8217;t that I was inconsistent, so much as that I was being meta-inconsistent; that is, I didn&#8217;t even have a consistent set of rules for deciding whether I was going to want to be consistent or not.</p>
<p>And now I think I might have a consistent policy of allowing <i>some</i> of my resources into each new circle of concern while also holding back the rest of it for the sake of my sanity. Thus my <A HREF="http://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/">endorsement</A> of GiveWell&#8217;s principle that you should donate at least 10% of your income to charity, but then feel okay about not donating more if you don&#8217;t want to. I am allowed to balance resources devoted to sanity versus morality and decide how much of what I have I want to send into each new circle of concern &#8211; without denying that the circle exists.</p>
<p>I think that armed with this idea I am willing to accept Buck&#8217;s argument about animal welfare being more important than human welfare, insofar as this means I should donate <i>some</i> resources to animal welfare without necessarily having to give up caring about human welfare completely. I don&#8217;t think I can make a principled defense of doing this. But I think I can claim I&#8217;m being unprincipled in a meta-consistent and effectively sanity-protecting way.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2015/03/04/a-series-of-unprincipled-exceptions/feed/</wfw:commentRss>
		<slash:comments>642</slash:comments>
		</item>
		<item>
		<title>Everything Not Obligatory Is Forbidden</title>
		<link>http://slatestarcodex.com/2015/02/06/everything-not-obligatory-is-forbidden/</link>
		<comments>http://slatestarcodex.com/2015/02/06/everything-not-obligatory-is-forbidden/#comments</comments>
		<pubDate>Fri, 06 Feb 2015 20:07:15 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[fiction]]></category>
		<category><![CDATA[medicine]]></category>
		<category><![CDATA[morality]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=3547</guid>
		<description><![CDATA[[seen on the New York Times&#8217; editorial page, February 6 2065, written by one &#8220;Dr. Mora LeQuivalence&#8221;] It&#8217;s 2065. Not giving your kids super-enhancement designer baby gene therapy isn&#8217;t your &#8220;choice&#8221;. If you don&#8217;t super-enhance your kids, you are a &#8230; <a href="http://slatestarcodex.com/2015/02/06/everything-not-obligatory-is-forbidden/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><i>[seen on the New York Times&#8217; editorial page, February 6 2065, written by one &#8220;Dr. Mora LeQuivalence&#8221;]</i></p>
<p>It&#8217;s 2065. Not giving your kids super-enhancement designer baby gene therapy isn&#8217;t your &#8220;choice&#8221;. If you don&#8217;t super-enhance your kids, you are a bad parent. It&#8217;s that simple. </p>
<p>Harsh? Maybe. But consider the latest survey, which found that about five percent of parents fail to super-enhance their children by the time they enter kindergarten. These aren&#8217;t poor people who can&#8217;t afford super-enhancement designer baby gene therapy. These are mostly rich, highly educated individuals in places like California and Oregon who say they think it&#8217;s more &#8220;natural&#8221; to leave their children defenseless against various undesirable traits. &#8220;I just don&#8217;t think it&#8217;s right to inject retroviral vectors into my baby&#8217;s body to change her from the way God made her,&#8221; one Portland woman was quoted by the <i>Times</i> as saying earlier this week. Other parents referred to a 2048 study saying the retroviral injections, usually given in the first year of life, increase the risk of various childhood cancers &#8211; a study that has since been soundly discredited.</p>
<p>These parents will inevitably bring up notions of &#8220;personal freedom&#8221;. But even if we accept the dubious premise that parents have a right to sacrifice their children&#8217;s health, refusing super-enhancement designer baby gene therapy isn&#8217;t just a personal choice. It&#8217;s a public health issue that affects everybody in society.</p>
<p>In 2064 there were almost 200 murders nationwide, up from a low of fewer than 50 in 2060. Why is this killer, long believed to be almost eradicated, making a comeback? Criminologists are unanimous in laying the blame on unenhanced children, who lack the improved impulse-control and anger-management genes included in every modern super-enhancement designer baby gene therapy package.</p>
<p>There were over a dozen fatal car accidents on our nation&#8217;s roads last year. The problem is drivers who weren&#8217;t enhanced as children and who lack the super-reflexes the rest of us take for granted. This is compounded when they drink before getting on the road, since unenhanced people become impaired by alcohol and their already inferior reflexes deteriorate further. Since the promise of self-driving cars continues to be tied up in regulatory hassles, we can expect many more such needless deaths as long as irresponsible parents continue to consider science &#8220;optional&#8221;.</p>
<p>And finally, there was a recent outbreak of measles at Disneyland Europa &#8211; even though we thought this disease had been eradicated decades ago. Scientists traced the problem to unvaccinated tourists. They further found that all of these unvaccinated individuals were unenhanced. Lacking the cognitive optimization that would help them understand psychoneuroimmunology on an intuitive level, they were easy prey for discredited ideas like &#8220;vaccines cause autism&#8221;. </p>
<p>So no, super-enhancing your kids isn&#8217;t a &#8220;personal choice&#8221;. It&#8217;s your basic duty as a parent and a responsible human being. People in places like India and Neo-Songhai and Venus which suffer from crime and disease make great personal sacrifices to get their children to gene therapy clinics and give them the super-enhancement designer baby gene injection that ensures them a better life. And you start off in a privileged position in America, benefitting from the superenhancement of millions of your fellow citizens, and you think you can just say &#8220;No thanks&#8221;?</p>
<p>So I don&#8217;t want to hear another word from the &#8220;but my freedom!&#8221; crowd. Unenhanced kids shouldn&#8217;t be allowed in school. They shouldn&#8217;t be allowed to drive. They shouldn&#8217;t be allowed in public places where they can cause problems. And parents who refuse to enhance their children should be put in jail, the same as anyone else whose actions lead to death and suffering. Because not super-enhancing your kids isn&#8217;t a &#8220;choice&#8221;. It&#8217;s child abuse.</p>
<p><i>Mora LeQuivalence is an Assistant Professor of Bioethics at Facebook University. Her latest book, &#8220;A Flight Too Far&#8221;, argues that the recent Danish experiment with giving children wings is a disgusting offense against the natural order and should be banned worldwide and prosecuted in the International Criminal Court. It is available for 0.02Ƀ on Amazon.com</i></p>
<p><b>Related:</b> <A HREF="http://www.yudkowsky.net/singularity/simplified/">Transhumanism Is Simplified Humanism</A>, <A HREF="http://luminousalicorn.tumblr.com/tagged/au-social-justice-series">Alicorn&#8217;s Alternate Universe Social Justice Series</A></p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2015/02/06/everything-not-obligatory-is-forbidden/feed/</wfw:commentRss>
		<slash:comments>595</slash:comments>
		</item>
		<item>
		<title>Ethics Offsets</title>
		<link>http://slatestarcodex.com/2015/01/04/ethics-offsets/</link>
		<comments>http://slatestarcodex.com/2015/01/04/ethics-offsets/#comments</comments>
		<pubDate>Mon, 05 Jan 2015 03:21:56 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[morality]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=3506</guid>
		<description><![CDATA[I. Some people buy voluntary carbon offsets. Suppose they worry about global warming and would feel bad taking a long unnecessary plane trip that pollutes the atmosphere. So instead of not doing it, they take the plane trip, then pay &#8230; <a href="http://slatestarcodex.com/2015/01/04/ethics-offsets/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><b>I.</b></p>
<p>Some people buy voluntary carbon offsets. Suppose they worry about global warming and would feel bad taking a long unnecessary plane trip that pollutes the atmosphere. So instead of not doing it, they take the plane trip, then pay for some environmental organization to clean up an amount of carbon equal to or greater than the amount of carbon they emitted. They&#8217;re happy because they got their trip, future generations are happy because the atmosphere is cleaner, everyone wins.</p>
<p>We can generalize this to ethics offsets. Suppose you really want to visit an oppressive dictatorial country so you can see the <A HREF="http://www.dailymail.co.uk/news/article-2638213/Tourist-took-camera-inside-North-Korea-expected-really-really-sad-people-shocked-seemingly-ordinary-lives-citizens.html">beautiful tourist sights there</A>. But you worry that by going there and spending money, you&#8217;re propping up the dictatorship. So you take your trip, but you also donate some money to opposition groups and humanitarian groups opposing the dictatorship and helping its victims, at an amount such that you are confident that the oppressed people of the country would prefer you take both actions (visit + donate) than that you take neither action.</p>
<p>I know I didn&#8217;t come up with this concept, but I&#8217;m having trouble finding out who did, so no link for now.</p>
<p>A recent post, <A HREF="http://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/">Nobody Is Perfect, Everything Is Commensurable</A>, suggests that if you are averse to activism but still feel you have an obligation to improve the world, you can discharge that obligation by giving to charity. This is not quite an ethics offset &#8211; it&#8217;s not exchanging a transgression for a donation so much as saying that a donation is a better way of helping than the thing you were worried about transgressing against anyway &#8211; but it&#8217;s certainly pretty similar.</p>
<p>As far as I can tell, the simplest cases here are 100% legit. I can&#8217;t imagine anyone saying &#8220;You may not take that plane flight you want, even if you donate so much to the environment that in the end it cleans up twice as much carbon dioxide as you produced. You must sit around at home, feeling bored and lonely, and letting the atmosphere be  more polluted than if you had made your donation&#8221;.</p>
<p>But here are two cases I am less certain about.</p>
<p><b>II.</b></p>
<p>Suppose you feel some obligation to be a vegetarian &#8211; either because you believe animal suffering is bad, or you have enough moral uncertainty around the topic for the ethical calculus to come out against. Is it acceptable to continue eating animals, but also donate money to animal rights charities?</p>
<p>A simple example: you eat meat, but also donate money to a group lobbying for cage-free eggs. You are confident that if chickens could think and vote, the average chicken would prefer a world in which you did both these things to a world in which you did neither. This seems to me much like the cases above.</p>
<p>A harder example. You eat meat, but also donate money to a group that convinces people to become vegetarian. Jeff Kaufman and Brian Tomasik <A HREF="http://www.jefftk.com/p/pay-other-people-to-go-vegetarian-for-you">suggest</A> that about $10 to $50 is enough to make one person become vegetarian for one year by sponsoring what are apparently very convincing advertisements.</p>
<p>Eating meat is definitely worth $1000 per year for me. So if I donate $1000 to vegetarian advertising, then eat meat, I&#8217;m helping turn between twenty and a hundred people vegetarian for a year, and helping twenty to one hundred times as many animals as I would be by becoming vegetarian myself. Clearly this is an excellent deal for me and an excellent deal for animals. </p>
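The arithmetic above can be sketched explicitly, using the post's own figures (the $10 to $50 per vegetarian-year range is from the Kaufman and Tomasik link):

```python
# The offset arithmetic: a $1000 donation at the quoted $10-$50
# cost per vegetarian-year of advertising.
donation = 1000
optimistic_cost, pessimistic_cost = 10, 50  # dollars per vegetarian-year

best_case = donation // optimistic_cost    # people made vegetarian for a year
worst_case = donation // pessimistic_cost
print(worst_case, best_case)  # 20 100
```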
<p>But I still can&#8217;t help feeling like there&#8217;s something really wrong here. It&#8217;s not just the low price of convincing people &#8211; even if I was 100% guaranteed that the calculations were right, I&#8217;d still feel just as weird. Part of it is a sense of duping others &#8211; would they be as eager to become vegetarian if they knew the ads that convinced them were sponsored by meat-eaters?</p>
<p>Maybe! Suppose we go to all of the people convinced by the ads, tell them &#8220;I paid for that ad that convinced you, and I still eat meat. Now what?&#8221; They answer &#8220;Well, I double-checked the facts in the ad and they&#8217;re all true. That you eat meat doesn&#8217;t make anything in the advertisement one bit less convincing. So I&#8217;m going to stay vegetarian.&#8221; Now what? Am I off the hook?</p>
<p>A second objection: universalizability. If <i>everyone</i> decides to solve animal suffering by throwing money at advertisers, there is no one left to advertise to and nothing gets solved. You just end up with a world where 100% of ads on TVs, in newspapers, and online are about becoming vegetarian, and everyone watches them and says &#8220;Well, I&#8217;m doing my part! I&#8217;m paying for these ads!&#8221;</p>
<p>Counter-objection: At that point, no one will be able to say with a straight face that every $50 spent on ads converts one person to vegetarianism. If I follow the maxim &#8220;Either be vegetarian, or donate enough money to be 90% sure I am converting at least two other people to vegetarianism&#8221;, this maxim <i>does</i> universalize, since after animal suffering ads have saturated a certain percent of the population, no one can be 90% sure of convincing anyone else.</p>
<p>As far as I can tell, this is weird but ethical.</p>
<p><b>III.</b></p>
<p>The second troublesome case is a little more gruesome.</p>
<p>Current estimates suggest that $3340 worth of donations to global health causes saves, on average, one life.</p>
<p>Let us be excruciatingly cautious and include a two-order-of-magnitude margin of error. At $334,000, we are <i>super duper sure</i> we are saving at least one life.</p>
<p>So. Say I&#8217;m a millionaire with a spare $334,000, and there&#8217;s a guy I <i>really</i> don&#8217;t like&#8230;</p>
<p>Okay, fine. Get the irrelevant objections out of the way first and establish the <A HREF="http://lesswrong.com/lw/2k/the_least_convenient_possible_world/">least convenient possible world</A>. I&#8217;m a criminal mastermind, it&#8217;ll be the perfect crime, and there&#8217;s zero chance I&#8217;ll go to jail. I can make it look completely natural, like a heart attack or something, so I&#8217;m not going to terrorize the city or waste police time and resources. The guy&#8217;s not supporting a family and doesn&#8217;t have any friends who will be heartbroken at his death. There&#8217;s no political aspect to my grudge, so this isn&#8217;t going to silence the enemies of the rich or anything like that. I myself have a terminal disease, and so the damage that I inflict upon my own soul with the act &#8211; or however it is Leah always phrases it &#8211; will perish with me immediately afterwards. There is no God, or if there is one He respects ethics offsets when you get to the Pearly Gates.</p>
<p>Or you know what? Don&#8217;t get the irrelevant objections out of the way. We can offset those too. The police will waste a lot of time investigating the murder? Maybe I&#8217;m <i>very</i> rich and I can make a big anonymous donation to the local police force that will more than compensate them for their trouble and allow them to hire extra officers to take up the slack. The local citizens will be scared there&#8217;s a killer on the loose? They&#8217;ll forget all about it once they learn taxes have been cut to zero percent thanks to an anonymous donation to the city government from a local tycoon.</p>
<p>Even what seems to me the most desperate and problematic objection &#8211; that maybe the malarial Africans saved by global health charities have lives that are in some qualitative way just not as valuable as those of happy First World citizens contributing to the global economy &#8211; can be fixed. If I&#8217;ve got enough money, a few hundred thousand to a million ought to be able to save the life of a local person in no way distinguishable from my victim. Heck, since this is a hypothetical problem and I have infinite money, why not save <i>ten</i> local people?</p>
<p>The best I can do here is to say that I am crossing a <A HREF="http://lesswrong.com/lw/ase/schelling_fences_on_slippery_slopes/">Schelling fence</A> which might also be crossed by people who will be less scrupulous in making sure their offsets are in order. But perhaps I could offset that too. Also, we could assume I will never tell anybody. Also, anyone can just go murder someone right now without offsetting, so we&#8217;re not exactly talking about a big temptation for the unscrupulous.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2015/01/04/ethics-offsets/feed/</wfw:commentRss>
		<slash:comments>537</slash:comments>
		</item>
		<item>
		<title>Bottomless Pits Of Suffering</title>
		<link>http://slatestarcodex.com/2014/09/27/bottomless-pits-of-suffering/</link>
		<comments>http://slatestarcodex.com/2014/09/27/bottomless-pits-of-suffering/#comments</comments>
		<pubDate>Sat, 27 Sep 2014 23:44:30 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[morality]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=2946</guid>
		<description><![CDATA[I. A friend on Facebook recently posted the following dilemma, which of course I cannot find right now so I have to vaguely quote my recollection of it: Would you rather the medieval Church had spent all of its money &#8230; <a href="http://slatestarcodex.com/2014/09/27/bottomless-pits-of-suffering/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><b>I.</b></p>
<p>A friend on Facebook recently posted the following dilemma, which of course I cannot find right now, so I have to vaguely quote my recollection of it:</p>
<blockquote><p>Would you rather the medieval Church had spent all of its money helping the poor, rather than supporting the arts? So that maybe there were fewer poor people back in medieval times, but we wouldn&#8217;t have any cathedrals or triptychs or the Sistine Chapel?</p></blockquote>
<p>I was surprised to see so many people choosing the cathedrals. I mean, I guess this question is kind of unfair, in that it&#8217;s really hard to figure out what it means, moral value wise, for there to <i>have been</i> less suffering in the past. This is especially true if you choose to believe Robin Hanson &#8211; as always a decision that starts a mini-civil-war between the rational and intuitive parts of my brain &#8211; when he says we should <A HREF="http://www.overcomingbias.com/2010/03/ancestor-worship-is-efficient.html">give much more weight to the preferences of past individuals</A>.</p>
<p>I think maybe choosing the cathedrals is so appealing because they&#8217;re right there, you can touch them, but the starving peasants are  hidden all the way in the past where you can&#8217;t see them. So it feels like you&#8217;re being asked to sacrifice something you really like for something that you would otherwise not have to think about.</p>
<p>This is one of the biggest and scariest problems with utilitarianism. Utilitarianism is at least kind of easy when it&#8217;s asking you to trade off some things in your normal world for other things in your normal world. But when it asks you to make everything you consider your normal world unambiguously worse to help some other domain you would otherwise never have to think about, then it starts to become unintuitive and scary.</p>
<p>Imagine a happy town full of prosperous people. Every so often they make nice utilitarian decisions like having everyone chip in a few dollars to help someone who&#8217;s fallen sick, and they feel pretty good about themselves for this.</p>
<p>Then one day an explorer discovers a BOTTOMLESS PIT OF ENDLESS SUFFERING on the outskirts of the town. There are hundreds of people trapped inside in a state of abject misery. The pit gods agree to release some of their prisoners, but only for appropriately sumptuous sacrifices.</p>
<p>Suddenly the decision isn&#8217;t just &#8220;someone in town makes a small sacrifice to help other people in town&#8221;. Suddenly it&#8217;s about the entire town choking off its luxury and prosperity in order to rescue people they don&#8217;t even know, from this pit they didn&#8217;t even know was there a week ago. That seems kind of unfair.</p>
<p>So they tell the explorer to cover the lid of the pit with a big tarp that blends in with the surrounding grass, so they don&#8217;t have to see it, and then go on with their lives.</p>
<p><b>II.</b></p>
<p>The developing world is <i>sort of</i> a bottomless pit of suffering for any First Worlder who didn&#8217;t expect it to be there. But I think most people do expect it to be there, most people are happy to help (a little), and it doesn&#8217;t really confuse or alarm us too much when we are reminded they still exist and still need help.</p>
<p>But what about nursing homes? Most of the doctors I have talked to agree most nursing homes are terrible. I get a steady trickle of psychiatric patients who are perfectly happy to be in the psychiatric hospital but who <i>freak out</i> when I tell them that they seem all better now and it&#8217;s time to send them back to their nursing home, saying it&#8217;s terrible and they&#8217;re abused and neglected and they refuse to go. I very occasionally get elderly patients who have attempted suicide solely because they know doing so will get them out of their nursing home. I don&#8217;t have a strong feeling for <i>exactly</i> how bad nursing homes are, but everything I have seen is consistent with at least some of them being very bad.</p>
<p>Solving this would be really expensive &#8211; I am perpetually surprised at how quietly and effortlessly we seem to soak up nursing home costs that already can run into the tens of thousands of dollars a year. Solving this would also produce no visible gain, in that bedridden old people are very very bad at complaining in ways anyone else can notice, and if we don&#8217;t want to think about them we don&#8217;t have to. If we as a country decided to concentrate on decreasing abuse in nursing homes, we might have to take that money away from important causes in our everyday visible world, like welfare and infrastructure and education funding. We would have to take limited Public Attention And Outrage Resources from causes like human rights and gay marriage and what beverages the President is holding while he salutes people. I think everyone agrees it&#8217;s a lot easier not to think about it, and nobody can make us. </p>
<p>Prisons are an even uglier case. Not only is prison inherently pretty miserable, but there seems to be rampant abuse and violence going on, including at least 5% of prisoners being raped per year. Every couple of weeks there&#8217;s a new story about how, for example, <A HREF="http://time.com/6672/prison-phone-rates/">prisoners are gouged on phone bills</A> because someone can do it and nobody is stopping them, or how they&#8217;re kept in cells without air conditioning in 110 degree weather in Arizona because no one has any incentive to change that.</p>
<p>Now the reason this is so ugly is&#8230;well, a lot of this is due to prison overcrowding. And a lot of people have very reasonably suggested imprisoning fewer people &#8211; ending the drug war would be a good start, but the past thirty years have also seen a momentous lowering of the threshold for imprisoning people in general and a ballooning of America&#8217;s prison population. Which is awkward, because the last thirty years have also seen an unprecedented drop in violent crime.</p>
<p>It would be absolutely lovely if this were confirmed to be the result of some very clever policy like <A HREF="http://slatestarcodex.com/2014/02/18/proposed-biological-explanations-for-historical-trends-in-crime/">reducing lead exposure</A>, or even if <A HREF="http://en.wikipedia.org/wiki/Legalized_abortion_and_crime_effect">Levitt&#8217;s theory about abortion were proven true</A>. But the <A HREF="http://wiki.lesswrong.com/wiki/Least_convenient_possible_world">least convenient possible world</A> is that the recent drop in crime is mostly due to the recent rise in imprisonment and the recent lengthening of prison sentences &#8211; everybody with even the slightest bit of criminal tendency is already safely locked up [<b>EDIT</b>: <A HREF="http://slatestarcodex.com/2014/09/27/bottomless-pits-of-suffering/#comment-148824">strong argument against this</A>].</p>
<p>Think about what a moral nightmare that would be. Sure, you can do something about the bottomless pit of suffering where people are packed together into 110 degree cells and raped for ten or twenty years &#8211; but it&#8217;s going to raise crime back to the horrible 1990s levels we&#8217;re all pretty relieved to have escaped. Or you can just whistle, pretend not to notice, and continue to enjoy nice low-crime 21st century society.</p>
<p>And then there&#8217;s a broader worry.</p>
<p>Conservatives like to talk about how much better we all had it back in the 1950s with traditional this and traditional that, and how you can just <i>tell</i> from listening to stories from people of that time, or reading media from that time, that things were a lot calmer and more pleasant.</p>
<p>And the left likes to talk about how we are widening the circle of empathy, bringing in new groups, and finally starting to pay attention to the concerns of the downtrodden.</p>
<p>What if they&#8217;re both right? What if progress since the 1950s has been about opening one bottomless pit of suffering after another, trading off the well-being of the nice prosperous town for getting people out of the pits, and then moving on to another pit somewhere else?</p>
<p>I mean, this is kind of the <i>standard</i> view of history. Except that in the standard view, conservatives tack on &#8220;But really, the bottomless pit wasn&#8217;t so bad, and the sulfurous flames gave you a nice, warm feeling inside.&#8221; And leftists tack on &#8220;but in the end, everyone including the people in the nice town benefitted from the increased understanding and diversity this created, so really history was just this series of obvious win-win propositions that everyone was just too stupid to figure out, until now.&#8221;</p>
<p>Although there has been a lot of interesting argument against the conservative proposition that things in the nice town have gotten worse since the 50s &#8211; some of which I <A HREF="http://slatestarcodex.com/2014/07/21/no-skyscraper-stagnation/">have participated in</A> &#8211; it seems important to note that even if the proposition is 100% correct, progress might still have been morally correct.</p>
<p><b>III.</b></p>
<p>A lot of the paradoxes of utilitarianism, the things that make it scary and hard to work with, involve philosophers who compulsively seek out bottomless pits and shout at you until you pay attention to them.</p>
<p>Utility monsters are basically one-man bottomless pits.</p>
<p>Pascal&#8217;s Wager (or Pascal&#8217;s Mugging, if you prefer) splits the universe into a billion Everett branches, then points out that one of these Everett branches is a bottomless pit and asks the others to make sacrifices to help it.</p>
<p>A lot of the addition paradoxes treat a pool of &#8220;potential people&#8221; as a bottomless pit.</p>
<p>This seems to be the easiest way to break utilitarianism &#8211; point to a bottomless pit, real or imagined, and make everyone in the world lose utility to solve it, forever. It&#8217;s not always easy to come up with solutions that successfully rule out these problems, while preserving our intuition that we should continue to worry about people in nursing homes or jails.</p>
<p>Contractualism scares me a little because it offers <i>too easy</i> an out from bottomless-pit type dilemmas. It seems really easy to say &#8220;All of us people not in jail, we&#8217;ll agree to look out for one another, and as for those guys, screw them&#8221;. You would need to have something like a veil of ignorance, or at least a <A HREF="http://slatestarcodex.com/2014/09/04/cooperation-un-veiled/">good simulation of one</A>, to even begin to care.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/09/27/bottomless-pits-of-suffering/feed/</wfw:commentRss>
		<slash:comments>458</slash:comments>
		</item>
		<item>
		<title>Cooperation Un-Veiled</title>
		<link>http://slatestarcodex.com/2014/09/04/cooperation-un-veiled/</link>
		<comments>http://slatestarcodex.com/2014/09/04/cooperation-un-veiled/#comments</comments>
		<pubDate>Fri, 05 Sep 2014 02:13:40 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[morality]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=2773</guid>
		<description><![CDATA[Related to: The Invisible Nation &#8211; Reconciling Utilitarianism And Contractualism Contractualism tries to derive morality from an agreement that even selfish agents would willingly sign if they knew about it. In theory, you would gain from such an agreement, since &#8230; <a href="http://slatestarcodex.com/2014/09/04/cooperation-un-veiled/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><b>Related to:</b> <A HREF="http://slatestarcodex.com/2014/08/24/the-invisible-nation-reconciling-utilitarianism-and-contractualism/">The Invisible Nation &#8211; Reconciling Utilitarianism And Contractualism</A></p>
<p>Contractualism tries to derive morality from an agreement that even selfish agents would willingly sign if they knew about it. In theory, you would gain from such an agreement, since the costs of not being able to behave unethically towards others would be at least balanced by the benefits of other people not behaving unethically to you.</p>
<p>Such attempts crash into the brick wall that not everybody would, in fact, sign such an agreement. For example, the King might reasonably argue that he is able to reap the benefits of oppressing lots of people, but almost nobody can oppress him. To give another example, rich people might feel no need to give to charity, since they don&#8217;t need anyone else to give charity to them.</p>
<p>One classic solution to the problem is Rawls&#8217; &#8220;veil of ignorance&#8221;. Rawls asks: what if we have to make the agreement before we know who exactly we&#8217;re going to be? The future King, not knowing he will be born a King, will agree oppression is bad along with everyone else; the future rich, not knowing they will be rich, will want to create a strong social safety net and tradition of charitable giving.</p>
<p>The great thing about this thought experiment is that it works pretty well to get us what we want &#8211; assuming a veil at just the right spot, we end up with something like utilitarianism being in everyone&#8217;s best interests.</p>
<p>The bad thing about the thought experiment is that there is not, in fact, a veil of ignorance. There&#8217;s just a King, who when asked will tell you he knows perfectly well he&#8217;s a King and would like to keep on oppressing people. So what can we do with the universe we actually have?</p>
<p>Here&#8217;s a model I have been playing around with recently.</p>
<p>Suppose there is a society of one hundred men, conveniently named Mr. 1, Mr. 2, and so on to Mr. 100. Higher-numbered people are stronger than lower-numbered people, such that a higher-numbered person can always win fights against a lower-numbered person at no danger to themselves. Further, suppose this society has a god who enforces all oaths and agreements, but who otherwise stays out of the picture.</p>
<p>(in order to avoid finicky math distinctions between choosing with replacement and choosing without replacement, it might help to think of these as arbitrarily large clans of people with specified strength instead. Whatever.)</p>
<p>This society is marked by interactions where two randomly selected people meet each other. Sometimes the people nod at each other and pass each other by. Other times, the stronger of the two people overpowers the weaker one and oppresses them in some way, where an oppression is an interaction where the stronger person gains and the weaker person loses some utility.</p>
<p>One person proposes a rule: &#8220;no oppressing anyone else.&#8221; How much support does the rule get?</p>
<p>Well, that depends on the character of the oppression. Some oppression can give the oppressor exactly as much utility as it costs the victim &#8211; for example, I steal $10 from you, making me $10 richer and you $10 poorer. Other oppression can cost the victim more than it benefits the oppressor &#8211; for example, I steal your wallet, which gives me only whatever small change you have in there, but you have to replace all your credit cards and licenses and so on. Still other oppression could help the oppressor more than it hurts the victim &#8211; for example, starving Jean Valjean steals a loaf of bread from a rich man.</p>
<p>So let&#8217;s be more specific. One person proposes a rule: &#8220;No zero-sum oppression.&#8221; Who agrees?</p>
<p>Naively &#8211; and I&#8217;ll challenge this later &#8211; Mr. 1 through Mr. 50 agree,  but Mr. 51 through Mr. 100 refuse. Analyzing Mr. 25&#8217;s thought process should explain: &#8220;In 25% of interactions, I will be the oppressor. In 75%, I will be oppressed. Assuming one of my utils for one of their utils, that means in a hundred interactions I will on average lose fifty utils. Therefore, I should ban this type of interaction.&#8221;</p>
<p>Mr. 99, on the other hand, likes this kind of oppression. He thinks &#8220;In 99% of interactions, I will gain. In 1%, I will lose. So in a hundred zero-sum interactions, I will on average gain 98 utils. Therefore, I like this type of interaction.&#8221;</p>
<p>But Mr. 99 might have a different rule he <i>would</i> agree to. He might say &#8220;No oppression so bad that it hurts the victim >100x as much as it helps the oppressor.&#8221;</p>
<p>It&#8217;s easy to think of examples of this kind of oppression. For example, if I&#8217;m having a really bad day and just want to beat someone up, breaking your ribs might make me feel a <i>little</i> bit better, but probably not even one percent as much as it makes you feel worse.</p>
<p>Mr. 99 thinks &#8220;In 99% of interactions I will be the oppressor; in 1% I will be the victim. Each time I am the oppressor, I gain one util; each time I am the victim, I lose 100. Therefore, in 100 interactions I will lose on average one util. Therefore, I don&#8217;t like this kind of oppression.&#8221;</p>
<p>And it&#8217;s easy to see that Mr. 1 through Mr. 98 will agree with him and be able to sign this contract.</p>
<p>The logical conclusion is a hierarchy of agreements. Mr. 1 signs an agreement banning all oppression, Mr. 1 and 2 together sign an agreement banning oppression that helps the oppressor less than 50 times as much as it hurts the victim, Mr. 1 and 2 and 3 together sign an agreement banning oppression that helps the oppressor less than 33 times as much as it hurts the victim, and so on all the way to everyone except Mr. 100 signing an agreement banning oppression that helps the oppressor less than 1/100 as much as it hurts the victim. Mr. 100 signs no agreements &#8211; why would he?</p>
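This hierarchy can be checked with a minimal sketch, assuming (as the post does for Mr. 25 and Mr. 99) that Mr. k is the oppressor in k% of his interactions, with utils normalized so the victim always loses 1:

```python
# Sketch of the society-of-100 model: Mr. k is the oppressor in k% of
# his interactions and the victim in the rest.
def expected_utility(k, r, n=100):
    """Mr. k's expected utils per interaction if oppression with
    help:hurt ratio r (oppressor gains r, victim loses 1) is allowed."""
    p_oppressor = k / n
    return p_oppressor * r - (1 - p_oppressor)

def ban_threshold(k, n=100):
    """Mr. k wants to ban any oppression whose ratio r makes his expected
    utility negative, i.e. every r below (n - k) / k."""
    return (n - k) / k

assert expected_utility(25, 1) == -0.5  # Mr. 25 loses 50 utils per 100 zero-sum interactions
assert expected_utility(99, 1) > 0      # Mr. 99 profits from zero-sum oppression

print(ban_threshold(2))   # 49.0  (the post's "50 times")
print(ban_threshold(3))   # ~32.3 (the post's "33 times")
print(ban_threshold(99))  # ~0.01 (the post's "1/100 as much")
```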
<p>Before I explain why this <i>doesn&#8217;t work</i>, I want to think about what it means in real world terms.</p>
<p>It would replace the one-size-fits-all principle of utilitarianism with the idea of power-based utility ratios. This seems to <i>kind of</i> map on to real life experience. For example, the King may order his servant to spend hours getting the floor polished absolutely spotlessly. Having a perfectly spotless floor (rather than a very clean floor with exactly one spot) gives the king only a tiny utility gain, but may require many more hours of the servant&#8217;s time and labor. That the King can command a large amount of the servant&#8217;s utility to improve his own utility only a tiny bit seems a lot like what it <i>means</i> to say there&#8217;s a power differential between the King and the servant. If the servant tried to reduce the King&#8217;s utility by a large amount in order to improve his own utility by a tiny amount, he would be in <i>big trouble</i>.</p>
<p>I notice this in my own life as well. Last year I worked under a doctor who was consistently late. The way it would work was that he would say &#8220;I have a meeting at 8 AM every morning, so you should be in by 9 so we can start work together.&#8221; Then his meeting would invariably run to 10, and I would be left sitting around for an hour doing nothing. It might seem that the smart choice would have been for me to just sleep late and arrive at 10 anyway, but suppose one day a week, my boss&#8217; meeting finishes exactly on time. Then if I&#8217;m not there, he has to wait for <i>me</i>, and he considers this unacceptable. So if my boss and I value an hour of our time the same amount, it would seem this arrangement implies my boss&#8217; utility is worth at least seven times as much as my own.</p>
<p>There are some features of this power-ratio utilitarianism that are repugnant: the rich seem to be held to a very low standard, whereas the poorer you are, the more exacting a moral standard you&#8217;ve got to live up to. That seems like if anything the opposite of how it should be. But other features actually seem better than our current morality &#8211; if giving charity to the poor improves their utility 100x as much as it decreases yours, then the 1% have to donate, probably quite a lot.</p>
<p>Enough of that. The reason this doesn&#8217;t work is simple. Mr. 1 through Mr. 50 would want to sign the zero-sum agreement. But if he knows the rules of the thought experiment, Mr. 50 can predict that Mr. 51 through Mr. 100 <i>won&#8217;t</i> sign the agreement. None of the people who could conceivably oppress him will consider themselves bound by the rule. So he&#8217;s not trading his right to oppress others in exchange for others&#8217; right to oppress him, he&#8217;s giving up his right to oppress others but should still expect exactly the same amount of oppression as he had before. Therefore, he does not sign.</p>
<p>But now Mr. 49 is in the same position. He knows nobody stronger than he is, including Mr. 50, will sign the agreement. Thus the agreement is useless to him.</p>
<p>And so on by induction all the way to Mr. 2 refusing to sign (it doesn&#8217;t matter much for poor Mr. 1 either way).</p>
<p>This produces some weird results. Mr. 99 is no longer willing to accept his &#8220;No breaking people&#8217;s ribs just to let out some stress&#8221; agreement that banned utility exchanges worse than 1:100, because the only person whose help he wants, Mr. 100, isn&#8217;t going to sign. That means Mr. 98 won&#8217;t sign, Mr. 97 won&#8217;t sign, and again, so on all the way down to Mr. 2.</p>
<p>In other words, even the second weakest person in a society has no interest in signing an agreement not to punch people weaker than you when you&#8217;re having a bad day.</p>
<p>But this is a <i>stupid</i> result!</p>
<p>It reminds me of a problem noticed in Iterated Prisoner&#8217;s Dilemma. Conventional wisdom says the best thing to do is to cooperate on a tit-for-tat basis &#8211; that is, we both keep cooperating, because if we don&#8217;t the other person will punish us next turn by defecting.</p>
<p>But it has been pointed out there&#8217;s a flaw here. Suppose we are iterating for one hundred games. On Turn 100, you might as well defect, because there&#8217;s no way your opponent can punish you later. But that means both sides should always play (D,D) on Turn 100. But since you know on Turn 99 that your opponent <i>must</i> defect next turn, they can&#8217;t punish you any worse if you defect now. So both sides should always play (D,D) on turn 99. And so on by induction to <i>everyone defecting the entire game</i>. I don&#8217;t know of any good way to solve this problem, although it often doesn&#8217;t turn up in the real world because no one knows exactly how many interactions they will have with another person. Which suggests one possible solution to the original problem is for nobody to know the exact number of people.</p>
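The first step of that unravelling can be made concrete. A sketch, using the standard payoff values (3 for mutual cooperation, 5/0 for exploitation, 1 for mutual defection), which are my assumption rather than anything stated in the post: against tit-for-tat with a known 100-game horizon, defecting on the final round is a strict improvement.

```python
# Payoffs: (my score, opponent score). Standard values, assumed here.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    """Run two strategies against each other for a known number of rounds."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for t in range(rounds):
        a = strat_a(t, rounds, hist_b)
        b = strat_b(t, rounds, hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def tit_for_tat(t, rounds, opp_hist):
    return "C" if t == 0 else opp_hist[-1]

def defect_last(t, rounds, opp_hist):
    # Identical to tit-for-tat, except on the final, unpunishable round.
    return "D" if t == rounds - 1 else tit_for_tat(t, rounds, opp_hist)

print(play(tit_for_tat, tit_for_tat))  # (300, 300)
print(play(defect_last, tit_for_tat))  # (302, 297): last-round defection strictly wins
```

Once both sides know round 100 will be (D,D), the same comparison applies to round 99, and the induction runs all the way down.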
<p>(now I want to write a science fiction novel about a planet full of aliens who are perfect game theorists, but who always behave kindly and respectfully to one another. Then some idiot performs a census, and the whole place collapses into apocalyptic total war.)</p>
<p>It seems like there ought to be some kind of superrational basis on which the two sides in the iterated-100 prisoner&#8217;s dilemma can cooperate. And along the same lines there ought to be some kind of superrational basis upon which everyone in the society of 100 people should stick to some basic utility-ratio principles. But I&#8217;m not sure what it would be.</p>
<p>Some other variations of this problem might be more interesting, but I don&#8217;t think I&#8217;ve got the math ability or the time to think about them as carefully as they deserve:</p>
<p>1. What if all fights contained a random element? For example, suppose your chance of overpowering someone else (and thus being able to oppress them) was your_strength/(your_strength + opponent_strength)? In societies of this type, agreements to ban strongly negative-sum interactions would be more salient for everyone, since even Mr. 100 would have some chance of being beaten in a typical interaction.</p>
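<p>A quick sketch of this variation (the strength values 1&#8211;100 are the obvious instantiation; the specific numbers are mine):</p>

```python
# Contest success function from variation 1:
# P(you overpower them) = your_strength / (your_strength + opponent_strength)
def win_prob(mine, theirs):
    return mine / (mine + theirs)

# Even Mr. 100 faces real risk against near-peers...
assert round(win_prob(100, 99), 3) == 0.503  # barely better than a coin flip
assert round(win_prob(100, 1), 3) == 0.99    # but near-certain against Mr. 1

# ...and his chance of winning one fight against *every* other member is tiny,
# so even he has a stake in rules against strongly negative-sum interactions.
p_sweep = 1.0
for strength in range(1, 100):
    p_sweep *= win_prob(100, strength)
assert p_sweep < 1e-15  # effectively zero
```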
<p>2. How about a meta-agreement, in which people say &#8220;I agree to sign the agreements requested by people weaker than myself if and only if the people above me agree to sign the agreements benefitting people weaker than <i>they</i>?&#8221; Such an agreement wouldn&#8217;t make sense for Mr. 100, and so Mr. 99 would not sign, and so on down, but is there a superrational solution?</p>
<p>3. What if one type of agreement people were allowed to make was a coalition to gang up against opponents? This seems one of the most important real-world considerations &#8211; one of the things that <i>does</i> make Kings behave at least somewhat morally is the knowledge that they will be overthrown if they do not; likewise, some countries implement social welfare systems with the explicit goal of decreasing the poor&#8217;s incentive to overthrow the rich (I think Bismarck tried this). On the other hand, it also gives the powerful an incentive to band together to better oppress the weak. I&#8217;m pretty sure the effects of this would be impossible to really calculate, but might we lump them together into saying &#8220;This is so nondeterministic that no one can ever be sure they&#8217;ll end up in the winning as opposed to the losing coalition, therefore they are less certain of victory, therefore they should be more likely to agree to rules against oppression&#8221;?</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/09/04/cooperation-un-veiled/feed/</wfw:commentRss>
		<slash:comments>246</slash:comments>
		</item>
		<item>
		<title>The Invisible Nation &#8211; Reconciling Utilitarianism And Contractualism</title>
		<link>http://slatestarcodex.com/2014/08/24/the-invisible-nation-reconciling-utilitarianism-and-contractualism/</link>
		<comments>http://slatestarcodex.com/2014/08/24/the-invisible-nation-reconciling-utilitarianism-and-contractualism/#comments</comments>
		<pubDate>Sun, 24 Aug 2014 21:31:45 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[long post is long]]></category>
		<category><![CDATA[morality]]></category>
		<category><![CDATA[philosophy]]></category>
		<category><![CDATA[politics]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=2681</guid>
		<description><![CDATA[[Attempt to derive morality from first principles, totally ignoring that this should be impossible. Based on economics and game theory, both of which I have only a minimal understanding of. And mixes complicated chains of argument with poetry without warning. &#8230; <a href="http://slatestarcodex.com/2014/08/24/the-invisible-nation-reconciling-utilitarianism-and-contractualism/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><i><font size="1">[Attempt to derive morality from first principles, totally ignoring that this should be impossible. Based on economics and game theory, both of which I have only a minimal understanding of. And mixes complicated chains of argument with poetry without warning. So, basically, it&#8217;s philosophy. And it&#8217;s philosophy I get the feeling David Gauthier may have already done much better, but I haven&#8217;t read him yet and wanted to get this down first to avoid bias towards consensus]</font></i></p>
<p><b>Related to:</b> <A HREF="http://slatestarcodex.com/2013/04/08/whose-utilitarianism/">Whose Utilitarianism?</A>, <A HREF="http://slatestarcodex.com/2014/05/16/you-kant-dismiss-universalizability/">You Kant Dismiss Universalizability</A>, <A HREF="http://slatestarcodex.com/2014/07/30/meditations-on-moloch/">Meditations on Moloch</A></p>
<p>Imagine the Economists&#8217; Paradise.</p>
<p>In the Economists&#8217; Paradise, all transactions are voluntary and honest. All game-theoretic problems are solved. All Pareto improvements get made. All Kaldor-Hicks improvements get converted into Pareto improvements by distributing appropriate compensation, and then get made. In all cases where people could gain by cooperating, they cooperate. In all tragedies of the commons, everyone agrees to share the commons according to some reasonable plan. Nobody uses force, everyone keeps their agreements. Multipolar traps turn to gardens, <A HREF="http://slatestarcodex.com/2014/07/30/meditations-on-moloch/">Moloch is defeated</A> for all time.</p>
<p>The Economists&#8217; Paradise is stronger than the Libertarians&#8217; Paradise, which is just a place where no one initiates force and all economic transactions are legal, because the Libertarians&#8217; Paradise might still have a bunch of Prisoner&#8217;s Dilemmas and the Economists&#8217; Paradise wouldn&#8217;t. But it is weaker than the Utilitarians&#8217; Paradise, because people with more power and money still get more of the eventual utility.</p>
<p>From a god&#8217;s-eye view, it seems relatively easy to create the Economists&#8217; Paradise. It might be hard to figure out how to solve game theoretic problems in absolutely ideal ways, but it&#8217;s often very easy to figure out how to solve them in a much better way than the uncoordinated participants are doing right now (see the beginning of Part III of <A HREF="http://slatestarcodex.com/2014/07/30/meditations-on-moloch/">Meditations on Moloch</A>). At the extreme of this way of thinking, we have Formalism, where just solving the problem, even in a very silly way, is still better than having the question remain open.</p>
<p>(a coin flip is the epitome of unintelligent problem solving, but flipping a coin to decide whether the Senkaku/Diaoyu Islands go to Japan or China still beats having World War III, by a large margin)</p>
<p>The Economists&#8217; Paradise is a pretty big step of the way toward actual paradise. Certainly there won&#8217;t be any wars or crime. But can we get more ambitious?</p>
<p>Will the Economists&#8217; Paradise solve world hunger? I say it will. The argument is essentially the one in Part 2.4 of <A HREF="http://raikoth.net/libertarian.html#coordination_problems">the Non-Libertarian FAQ</A>. Suppose solving world hunger costs $50 billion per year, which I think is people&#8217;s actual best-guess estimate. And suppose that half the one billion people in the First World are willing to make some minimal contribution to solving world hunger. If each of those people can contribute $2 per week, that suffices to raise the necessary amount. On the other hand, the $50 billion cost is the cost in <i>our</i> world. In the Economists&#8217; Paradise, where there are no corrupt warlords or bribe-seeking bureaucrats, and where we can just trust people to line themselves up in order of neediest to least needy, the whole task gets that much easier. In fact, it&#8217;s not obvious that the First World wouldn&#8217;t come up with their $50 billion only to have the Third World say &#8220;Thanks, but we kind of sorted out our problems and became an economic powerhouse.&#8221;</p>
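<p>The arithmetic checks out:</p>

```python
# Back-of-the-envelope check of the figures above.
donors = 1_000_000_000 // 2        # half of the First World's billion people
raised_per_year = donors * 2 * 52  # $2 per week, 52 weeks a year
assert raised_per_year == 52_000_000_000  # $52B, clearing the $50B estimate
```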
<p>Let&#8217;s get <i>more</i> ambitious. Will there be bullying in the Economists&#8217; Paradise? I just mean your basic bullying, walking over to someone who&#8217;s ugly and saying &#8220;You&#8217;re ugly, you ugly ugly person!&#8221; I say there won&#8217;t be. How would a perfect solution to all coordination problems end bullying? Simple! If the majority of the population disagrees with bullying, they can sign an agreement among themselves not to bully, and to ostracize anyone who does. Everyone will of course keep their agreement (by the definition of Economists&#8217; Paradise) and anyone who reports to the collective that Bob is a bully will always be telling the truth (by the definition of Economists&#8217; Paradise). The collective will therefore ostracize Bob, and faced with the prospect of never being able to interact with the majority of human beings ever again, Bob will apologize and sign an agreement never to bully again (which he will keep, by the definition of Economists&#8217; Paradise). Since everyone knows this will happen, no one bullies in the first place.</p>
<p>So the Economists&#8217; Paradise is actually a <i>very</i> big step of the way toward actual paradise, to the point where the differences start to look like splitting hairs.</p>
<p>The difference between us and the Economists&#8217; Paradise isn&#8217;t increased wealth or fancy technology or immortality. It&#8217;s rule-following. If God were to tell everybody the rules they needed to follow to create the Economists&#8217; Paradise, and everyone were to follow them, that would suffice to create it.</p>
<p>That suggests two problems with setting up Economists&#8217; Paradise. We need to know what the rules are, and we need to convince people to follow them.</p>
<p>These are more closely linked than one would think. For example, both Japan and China might prefer that the Senkaku Islands be clearly given to the other according to a fair set of rules which might benefit them next time, rather than fight World War III over the issue. So if the rules existed, people might follow them <i>for the very reason that they exist</i>. This is why, despite the Senkaku Island conflict, <i>most</i> islands are not the object of international tension &#8211; because there are clear rules about who should have them and everybody prefers following the rules to the sorts of conflicts that would happen if the rules didn&#8217;t exist.</p>
<p><b>II.</b></p>
<p>There&#8217;s a hilarious tactic one can use to defend consequentialism. Someone says &#8220;Consequentialism must be wrong, because if we acted in a consequentialist manner, it would cause Horrible Thing X.&#8221; Maybe X is half the population enslaving the other half, or everyone wireheading, or people being murdered for their organs. You answer &#8220;Is Horrible Thing X good?&#8221; They say &#8220;Of course not!&#8221;. You answer &#8220;Then good consequentialists wouldn&#8217;t act in such a way as to cause it, would they?&#8221;</p>
<p>In the same spirit: should the State legislate morality?</p>
<p>&#8220;Of course not! I don&#8217;t want the State telling me whom I can and can&#8217;t sleep with.&#8221;</p>
<p>So do you believe that it&#8217;s immoral, genuinely immoral, to sleep with the people whom you want to sleep with? Do you think sleeping with people is morally wrong?</p>
<p>&#8220;What? No! Of course not!&#8221;</p>
<p>Then the State legislating morality isn&#8217;t going to restrict whom you can sleep with, is it?</p>
<p>&#8220;But if the State legislated everything, I would have no freedom left!&#8221;</p>
<p>Is taking away all your freedom moral?</p>
<p>&#8220;No!&#8221;</p>
<p>Then the State&#8217;s not going to do that, is it?</p>
<p>By this sort of argument, it seems to me like there are no good philosophical objections to a perfect State legislating the correct morality. Indeed, this seems like an ideal situation; the good are rewarded, the wicked punished, and society behaves in a perfectly moral way (whatever that is).</p>
<p>The arguments against the State legislating morality are in my opinion entirely contingent ones, based around the fact that the State <i>isn&#8217;t</i> perfect and the correct morality <i>isn&#8217;t</i> known with certainty. Get rid of these caveats, and moral law and state law would be one and the same.</p>
<p>Letting the State enforce moral laws has some really big advantages. It means the rules are publicly known (you can look them up in a lawbook somewhere) and effectively enforced (by scary men with guns). This is great.</p>
<p>But using the State to enforce rules also fails in some very important ways.</p>
<p>First, it means someone has to decide in what cases the rules were broken. That means you either need to depend on fallible, easily biased human judgment &#8211; subject to all its racism, nepotism, tribalism, and whatever &#8211; or algorithmize the rules so that &#8220;be nice&#8221; gets formalized into a two thousand page definition of niceness so rigorous that even a racist nepotist tribalist judge doesn&#8217;t have any leeway to let your characteristics bias her assessment of whether you broke the niceness rules.</p>
<p>Second, transaction costs. Suppose in every interaction you had with another person, you needed to check a two thousand page algorithm to see if their actions corresponded to the Legal Definition of Niceness. Then if they didn&#8217;t, you needed to call the police to get them arrested, have them sit in jail for two weeks (or pay the appropriate bail) until they can get to trial. The trial itself is a drawn-out affair with celebrity lawyers on both sides. Finally, the judge pronounces verdict: you <i>really</i> should have said &#8220;please&#8221; when you asked her to pass the salt. Sentence: twelve milliseconds of jail time.</p>
<p>Third, it is written: &#8220;If you like laws and sausages, you should never watch either one being made.&#8221; The law-making apparatus of most states &#8211; stick four hundred heavily-bribed people who hate each other&#8217;s guts in a room and see what happens &#8211; fails to inspire full confidence that its results will perfectly conform to ideal game theoretic principles.</p>
<p>Fourth, most states are somewhere on a spectrum between &#8220;socially contracted regimes enforcing correct game theoretic principles among their citizens&#8221; and &#8220;violent psychopaths killing everybody and stealing their stuff&#8221;, and it has been historically kind of hard to get the first part right without also empowering the proponents of the second.</p>
<p>So it&#8217;s &#8211; surprise, surprise &#8211; a tradeoff.</p>
<p>There&#8217;s a bunch of rules which, followed universally, would lead to the Economists&#8217; Paradise. If the importance of keeping these rules agreed-upon and well-enforced outweighs the dangers of algorithmization, transaction costs, poor implementation, and tyranny, we make them State Laws. In an ideal state with very low transaction costs, minimal risk of tyranny, and legislative excellence, the cost of the tradeoff goes down and we can reap gains by making more of them State Laws. In a terrible state with high transaction costs that has been completely hijacked by self-interest, the cost of the tradeoff goes up and fewer of them are State Laws.</p>
<p><b>III.</b></p>
<p>Let&#8217;s return to the bullying example from Part I.</p>
<p>It would seem there ought not to be bullying in the Economists&#8217; Paradise. For if most people dislike bullying, they can coordinate an alliance to not bully one another, and to punish any bullies they find.</p>
<p>On the contrary, suppose there are two well-delineated groups of people, Jocks and Nerds. Jocks are bullies and have no fear of being bullied themselves; they also don&#8217;t care about social exclusion by the Nerds against them. Nerds are victims of bullies and never bully others; their exclusion does not harm the Jocks. Now it seems that there might be bullying, for although all the Nerds would agree not to bully, and to exclude all bullies, and although all the Jocks might coordinate an alliance not to bully other Jocks, there is nothing preventing the Jocks from bullying the Nerds.</p>
<p>I answer that there are several practical considerations that would prevent such a situation from coming up. The most important is that if bullying is negative-sum &#8211; that is, if it hurts the victim more than it helps the bully &#8211; then it&#8217;s an area ripe for Kaldor-Hicks improvement. Suppose there is <i>anything at all</i> the Nerds have that the Jocks want. For example, suppose that the Nerds are good at fixing people&#8217;s broken computers, and that a Jock gains more utility from knowing he can get his computer fixed whenever he needs it than from knowing he can bully Nerds if he wants. Now there is the opportunity for a deal in which the Nerds agree to fix the Jocks&#8217; computers in exchange for not being bullied. This is a Pareto improvement: the Nerds&#8217; lives are better because they avoid bullying, and the Jocks&#8217; lives are better because they get their computers fixed.</p>
<p>Objection: numerous problems prevent this from working in real life. Nerds and Jocks aren&#8217;t coherent blocs; bullies are bad negotiators. More fundamentally, this is essentially paying tribute, and on the &#8220;millions for defense, not one cent for tribute&#8221; principle, you should never pay tribute or else you encourage people who wouldn&#8217;t have threatened you otherwise to threaten you just for the tribute. But the assumption that Economists&#8217; Paradise solves all game theoretic problems solves these as well. We&#8217;re assuming everyone who should coordinate can coordinate, everyone who should negotiate does negotiate, and everyone who should make precommitments does make precommitments.</p>
<p>A more fundamental objection: what if Nerds can&#8217;t fix computers, or Jocks don&#8217;t have them? In this case, the tribute analogy saves us: Nerds can just pay Jocks a certain amount of money not to be bullied. Any advantage or power whatsoever that Nerds have can be converted to money and used to prevent bullying. This sounds morally repugnant to us, but in a world where blackmail and incentivizing bad behavior are assumed away by fiat, it&#8217;s just another kind of Pareto improvement, certainly better than the case where Nerds waste their money on things they want less than not being bullied yet are bullied anyway. And because of our Economists&#8217; Paradise assumption, Jocks charge a fair tribute rate &#8211; exactly the amount of money it really costs to compensate them for the utility they would get by beating up Nerds &#8211; and feel no temptation to extort more.</p>
<p>Now, I&#8217;m not sure bullying would even come up as an option in an Economists&#8217; Paradise, because if it&#8217;s a zero- or negative-sum game trying to get status among your fellow Jocks, the Jocks might ban it on their own as a waste of time. But even if Jocks do get some small amount of positive utility out of it directly, we should expect bullying to stop in an Economists&#8217; Paradise as long as Nerds control even a tiny amount of useful resources they can use to placate the Jocks. If Nerds control no resources whatsoever, or so few resources that they don&#8217;t have enough left to pay tribute after they&#8217;ve finished buying more important things, then we can&#8217;t be <i>sure</i> there won&#8217;t be bullying &#8211; this is where the Economists&#8217; Paradise starts to differ from the Utilitarians&#8217; Paradise &#8211; but we&#8217;ll return to this possibility later.</p>
<p>Now I want to highlight a phrase I just used in this argument.</p>
<p><i>&#8220;If bullying is negative-sum &#8211; that is, if it hurts the victim more than it helps the bully &#8211; then it&#8217;s an area ripe for Kaldor-Hicks improvement&#8221;</i></p>
<p>This looks a lot like (naive) utilitarianism!</p>
<p>What it&#8217;s saying is &#8220;If bullying decreases utility (by hurting the Nerd more than it helps the Jock) then bullying should not exist. If bullying increases utility (by helping the Jock more than it hurts the Nerd) then maybe bullying should exist.&#8221; Or, to simplify and generalize, &#8220;do actions that increase utility, but not other actions.&#8221;</p>
<p>Can we derive utilitarian results by assuming Economists&#8217; Paradise? In many cases, yes. Suppose trolley problems are a frequent problem in your society. In particular, about once a day there is a runaway trolley heading down Track A with ten people, but divertible to Track B with one person (explaining why this happens so often and so consistently is left as an exercise for the reader). Suppose you&#8217;re getting up in the morning and preparing to walk to work. You know a trolley problem will probably happen today, but you don&#8217;t know which track you&#8217;ll be on.</p>
<p>Eleven people in this position might agree to the following pact: &#8220;Each of us has a 91% chance of surviving if the driver chooses to flip the switch, but only a 9% chance of surviving if the driver chooses not to. Therefore, we all agree to this solemn pact that encourages the driver to flip the switch. Whichever of us will be on Track B hereby waives his right to life in this circumstance, and will encourage the driver to switch as loudly as all of the rest of us.&#8221;</p>
<p>If the driver were presented with this pact, it&#8217;s hard to imagine her not switching to Track B. But if the eleven Trolley Problem candidates were permitted to make such a pact before the dilemma started, it&#8217;s hard to imagine that they wouldn&#8217;t. Therefore, the Economists&#8217; Paradise assumption of perfect coordination produces the correct utilitarian result to the trolley problem. The same methodology can be extended to utilitarianism in a lot of other contexts.</p>
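<p>The pact&#8217;s numbers are just the odds of drawing each track (a quick check; the rounding to 91% and 9% is the pact&#8217;s):</p>

```python
from fractions import Fraction

# 11 pact members; each day 10 end up on Track A and 1 on Track B,
# with each member equally likely to be the one on Track B.
p_on_b = Fraction(1, 11)

p_survive_if_switch = 1 - p_on_b  # you die only if you drew Track B
p_survive_if_stay = p_on_b        # you survive only if you drew Track B

assert round(float(p_survive_if_switch), 2) == 0.91  # the pact's "91%"
assert round(float(p_survive_if_stay), 2) == 0.09    # the pact's "9%"
```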
<p>Now we can go back to that problem from before: what if Nerds have <i>literally</i> nothing Jocks want, and Jocks haven&#8217;t decided among themselves that bullying is a stupid status game that wastes their time, and we&#8217;re otherwise in the <A HREF="http://lesswrong.com/lw/2k/the_least_convenient_possible_world/">Least Convenient Possible World</A> with regard to stopping bullying? Is there any way assuming Economists&#8217; Paradise solves the problem <i>then</i>?</p>
<p>Maybe. Just go around to little kids, age two or so, and say &#8220;Look. At this point, you really don&#8217;t know whether you&#8217;re going to grow up to be a Jock or a Nerd. You want to sign this pact that everyone who grows up to be a Jock promises not to bully anyone who grows up to be a Nerd?&#8221; Keeping the same assumption that bullying is on net negative utility, we expect the toddlers to sign. Yeah, in the real world two-year-olds aren&#8217;t the best moral reasoners, but good thing we&#8217;re in Economists&#8217; Paradise where we assume such problems away by fiat.</p>
<p>Is there an Even Less Convenient Possible World? Suppose bullying is racist rather than popularity-based, with all the White kids bullying the Black kids. You go to the toddlers, and the White toddlers retort &#8220;Even at this age, we know very well that we&#8217;re White, thank you very much.&#8221;</p>
<p>So just approach them in the womb, where it&#8217;s too dark to see skin color. If we&#8217;re letting two year olds sign contracts, why not fetuses?</p>
<p>Okay. One reason might be that we&#8217;ve just locked ourselves into being fanatically pro-life merely by starting with weird assumptions. Another reason might be that we could counterfactually mug fetuses by saying stuff like &#8220;You&#8217;re definitely a human, but for all you know the world is ruled by Lizardmen with only a small human slave population, and if Lizardmen exist then they will torture any humans who did not agree in the womb that, upon being born and finding that Lizardmen did not exist, they would spend all their time and energy trying to create Lizardmen.&#8221;</p>
<p>(Frick. I think I just created a new basilisk by breeding the Rokolisk and <A HREF="http://raikoth.net/Stuff/story1.html">the story of 9-tsiak</A>. Good thing it only works on fetuses.)</p>
<p>(I wonder if this is the first time in history anyone has ever used the phrase &#8220;counterfactually mug fetuses&#8221; as part of a serious intellectual argument.)</p>
<p>So I&#8217;m not saying this theory doesn&#8217;t have any holes in it. I&#8217;m just saying that it seems, at least in principle, like the idea of Economists&#8217; Paradise might be sufficient to derive Rawls&#8217; Veil of Ignorance, which in turn bridges the chasm that separates it from Utilitarians&#8217; Paradise.</p>
<p><b>IV.</b></p>
<p>I think this is the solution to the various questions raised in <A HREF="http://slatestarcodex.com/2014/05/16/you-kant-dismiss-universalizability/">You Kant Dismiss Universalizability</A>. The reason universalizability is important is that the universal maxims are the agreements that everyone or nearly everyone would sign. This leads naturally to something like utilitarianism for the reasons mentioned in Part III. And it doesn&#8217;t produce the weird paradoxes like &#8220;If morality is universalizability, how do you know whether a policeman overpowering and imprisoning a criminal universalizes to &#8216;police should be able to overpower and imprison criminals&#8217; or &#8216;everyone should be able to overpower and imprison everyone else&#8217;?&#8221; Everyone would sign an agreement allowing the first, but not the second.</p>
<p>But before we <i>really</i> explore this, a few words on &#8220;everyone would sign&#8221;.</p>
<p>Suppose one very stubborn annoying person in Economists&#8217; Paradise refused to sign an agreement that police should be allowed to arrest criminals. Now what?</p>
<p>&#8220;All game theory is solved perfectly&#8221; is a <i>really</i> powerful assumption, and the rest of the world has a lot of leverage over this one person. Suppose everyone else said &#8220;You know, we&#8217;re all signing an agreement that none of us are going to murder one another, but we&#8217;re not going to let you into that agreement unless you also sign this agreement which is very important to us.&#8221;</p>
<p>Actually, that sounds too evil and blackmailing. There&#8217;s a better way to think of it. Suppose there are one hundred agreements. 99% of the population agrees to each, and in fact it&#8217;s a different 99% each time. That is, divide the population into one hundred sets of 1%, and each set will oppose exactly one of the agreements &#8211; there is no one who opposes two or more. Each agreement only works (or works best) when one hundred percent of the population agrees to it.</p>
<p>Very likely everyone will strike a deal in which each of the one hundred 1% blocs agrees to give up its resistance to the one agreement they don&#8217;t like, in exchange for each of the other ninety-nine 1% blocs giving up its resistance to the agreements <i>they</i> don&#8217;t like.</p>
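<p>A toy model of this logroll (the payoff numbers here are my own, purely for illustration):</p>

```python
# 100 blocs, 100 agreements; each bloc likes 99 of them and opposes exactly one.
GAIN = 1   # assumed utility a bloc gets from each agreement it likes
COST = 50  # assumed utility a bloc loses from the one agreement it opposes

def bloc_payoff(package_signed):
    """A representative bloc's payoff under the all-or-nothing package deal."""
    if not package_signed:
        return 0              # status quo: no agreements in force
    return 99 * GAIN - COST   # its 99 liked agreements pass, and so does
                              # the single one it dislikes

# Every bloc is symmetric, so the grand bargain is unanimously preferred
# whenever 99 * GAIN > COST:
assert bloc_payoff(True) > bloc_payoff(False)
```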
<p>Now we&#8217;re getting into meta-level Pareto improvements. If a pact would be positive-sum for people to agree on, the proponents of the pact can offer everyone else some compensation for them signing the pact. In theory it could be money or computer-fixing, but it might also be agreement with some of <i>their</i> preferred pacts.</p>
<p>There are a few possible outcomes of this process in Platonic Economists&#8217; Paradise, all of them interesting.</p>
<p>One is a patchwork of agreements, where everyone has to remember that they&#8217;ve signed agreements 5, 12, 98, and 12,671, but their next-door neighbor has signed agreements 6, 12, 40, and 4,660,102, so they and their neighbor are bound to cooperate on 12 but no others.</p>
<p>Another is that everyone is able to get their desired pacts to cohere into a single really big pact that they are all able to sign off upon. Maybe there are a few stragglers who reject it at first, but this ends up being a terrible idea because now they&#8217;re not bound by really important agreements like &#8220;don&#8217;t murder&#8221; or &#8220;don&#8217;t steal&#8221;, so eventually they give in.</p>
<p>A third possibility combining the other two offers a unifying principle behind <A HREF="http://slatestarcodex.com/2013/04/08/whose-utilitarianism/">Whose Utilitarianism</A> and <A HREF="http://slatestarcodex.com/2014/06/07/archipelago-and-atomic-communitarianism/">Archipelago and Atomic Communitarianism</A>. Everyone agrees to some very basic principles of respecting one another (call them &#8220;Noahide Laws&#8221;) but smaller communities agree to stricter rules that allow them to do their own thing.</p>
<p>But we don&#8217;t live in Platonic Economists&#8217; Paradise. We live in the real world, where transaction costs are high and people have limited brainpower. Even if we were to try to instantiate Economists&#8217; Paradise, it couldn&#8217;t be the one where we all have the complex interlocking patchwork agreements between one another. People wouldn&#8217;t sign off on it. Heck, <i>I</i> wouldn&#8217;t sign off on it. I would say &#8220;I&#8217;m not signing this until I have something that makes sense to me and can be implemented in a reasonable amount of time and doesn&#8217;t require me to check the List Of Everybody In The World before I know whether the guy next to me is going to murder me or not.&#8221; Practical concerns provide a very strong incentive to reject the patchwork solution and force everyone to cohere. So in practice &#8211; and I realize how hokey it is to keep talking about game-theoretically-perfect infinitely-rational infinitely-honest agents negotiating all possible agreements among one another, and then add on the term &#8220;in practice&#8221; to represent that they have trouble remembering what they decided &#8211; but in practice they would all have very large incentives to cohere upon a single solution that balances out all of their concerns.</p>
<p>We can think of this as moving along an axis from &#8220;Platonic&#8221; to &#8220;practical&#8221;. As we progress further, complicated agreements collapse into simpler agreements which are less perfect but easier to enforce and remember. We start to make judicious use of Schelling fences. We move from everyone in the world agreeing on exactly what people can and can&#8217;t do to things like &#8220;Well, you know your intuitive sense of niceness? You follow that with me, and I&#8217;ll follow that with you, and we&#8217;ll assume everyone else is in on the deal until they prove they aren&#8217;t.&#8221;</p>
<p>A metaphor: in a dream, your soul goes to Economists&#8217; Paradise and agrees on the perfect patchwork of maxims with all the other souls there. But as dawn approaches, you realize when you awaken you will never remember all of what you agreed upon, and even worse, all the other souls there are going to wake up and not remember what <i>they</i> agreed upon either. So all of you together frantically try to compress your wisdom into a couple of sentences that the waking mind will be able to recall and follow, and you end up with platitudes like &#8220;Use your intuitive sense of niceness&#8221; and &#8220;do unto others as you would have others do unto you&#8221; and &#8220;try to maximize utility&#8221; and &#8220;anybody who treats you badly, assume they&#8217;re not in on the deal and feel free to treat them badly too, but not so badly that you feel like you can murder them or something.&#8221;</p>
<p>A particularly good platitude/compression might be &#8220;Work very hard to cultivate the mysterious skill of figuring out what people in the Economists&#8217; Paradise would agree to, then do those things.&#8221; If you&#8217;re Greek, you can even compress it into a single word: <i>phronesis</i>.</p>
<p><b>V.</b></p>
<p>So by now it&#8217;s probably pretty obvious that this is an attempt to ground morality. I think the general term for the philosophical school involved is &#8220;contractualism&#8221;.</p>
<p>Many rationalists seem to operate on something like R.M. Hare&#8217;s <A HREF="http://en.wikipedia.org/wiki/Two-level_utilitarianism">two-level utilitarianism</A>. That is, utilitarianism is the correct base level of morality, but it&#8217;s very hard to do, so in reality you&#8217;ve got to make do with less precise but more computationally tractable heuristics, like deontology and virtue ethics. Occasionally, when deontology or virtue ethics contradict themselves, each other, or your intuitions, you may have to sit down and actually do the utilitarianism as best you can, even though it will be inconvenient and very philosophically difficult.</p>
<p>For example, deontology may say things like &#8220;You must never kill another human being.&#8221; But in the trolley problem, the correct deontological action seems to violate our moral intuitions. So we go up a level, calculate the utility (which in this case is very easy, because it&#8217;s a toy problem invented entirely for the purposes of having easy utility calculation) and say &#8220;Huh, this appears to be one of those rare places where our deontological heuristics go wrong.&#8221; Then you switch the trolley.</p>
<p>But utilitarianism famously has problems of its own. You need a working definition of utility, which means not only distinguishing between hedonic utilitarianism, preference utilitarianism, etc, but coming up with a consistent model for measuring the strength of happiness and preferences. You need to distinguish between total utilitarianism, average utilitarianism, and a couple of other options I forget right now. You need a discount rate. You need to know whether creating new people counts as a utility gain or not, and whether removing people (isn&#8217;t <i>that</i> a nice euphemism) can even be counted as a negative if you make sure to do it painlessly and without any grief to those who remain alive. You need a generalized solution to Pascal&#8217;s Wagers and utility monsters. You need to know whether to accept or fudge away weird results like that you may be morally obligated to live your entire life to maximize anti-malaria donations. All of this is easy at the tails and near-impossible at the margins.</p>
<p>My previous philosophy was &#8220;Yeah, it&#8217;s hard, but I bet with sufficient intelligence, we can think up a consistent version of utilitarianism with enough epicycles that it produces an answer to all of these issues that most people would recognize as at least kind of sane. Then we can just go with that one.&#8221;</p>
<p>I still believe this. But that consistent version would probably fill a book. The question is: what is the person who decides what to put in this book doing? On what grounds are they saying &#8220;total utilitarianism is a better choice than average utilitarianism&#8221;? It can&#8217;t be on <i>utilitarian</i> grounds, because you can&#8217;t use utilitarian grounds until you&#8217;ve figured out utilitarianism, which you haven&#8217;t done until you&#8217;ve got the book. When God was deciding what to put in the Bible, He needed some criteria other than &#8220;make the decision according to Biblical principles&#8221;.</p>
<p>The standard answer is &#8220;we are starting with our moral intuitions, then simplifying them to a smaller number of axioms which eventually produce them&#8221;. But if the axioms fill a book and are full of epicycles to address individual problems, we&#8217;re not doing a very good job.</p>
<p>I mean, it&#8217;s still better than just trying to sort out all individual issues like &#8220;what is a just war?&#8221; on their own, because people will answer that question according to their personal prejudices (is my tribe winning it? Then it is <i>so, so just</i>) and if we force them to write the utilitarianism book at least they&#8217;ve got to come up with consistent principles and stick to them. But it is <i>highly suboptimal</i>.</p>
<p>And I wonder whether maybe the base level, the one that actually grounds utilitarianism, is contractualism. The idea of a Platonic parliament in which we try to enact all beneficial agreements. Under this model, utilitarianism, deontology, and virtue ethics would all be <i>different</i> heuristics that we use to approximate contractualism, the fragments we remember from our beautiful dream of Paradise.</p>
<p>I realize this is kind of annoying, especially in the sense of &#8220;the next person who comes along can say that utilitarianism, deontology, virtue ethics, <i>and contractualism</i> are heuristics for whatever moral theory <i>they</i> like, which is The Real Thing&#8221;. But the idea can do work! In particular, it might help resolve some of the standard paradoxes of utilitarianism.</p>
<p>First, are we morally obligated to wirehead everyone and convert the entire universe into hedonium? Well, would <i>you</i> sign that contract?</p>
<p>Second, is there anything wrong with killing people painlessly if they won&#8217;t be missed? After all, it doesn&#8217;t seem to cause any pain or suffering, or even violate any preferences &#8211; at least insofar as your victim isn&#8217;t around to have their preferences violated. Well, would you sign a contract in which everyone agrees not to do that?</p>
<p>Third, are we morally obligated to create more and more people with slightly above zero utility, until we are in an overcrowded slum world with everyone stuck at just-above-subsistence level (the <A HREF="http://en.wikipedia.org/wiki/Repugnant_Conclusion">Repugnant Conclusion</A>)? Well, if you were making an agreement with everyone else about what the population level should be, would you suggest we do that? Or would you suggest we avoid it?</p>
<p>(this can be complicated by asking whether potential people get a seat in this negotiation, but Carl Shulman has <A HREF="http://reflectivedisequilibrium.blogspot.com/2012/07/rawls-original-position-potential.html">a neat way to solve that problem</A>)</p>
<p>Fourth, the classic problem of defining utility. If utility can be defined ordinally but not cardinally (ie you can declare that stubbing your toe is worse than a dust speck in the eye, but you can&#8217;t say something like it&#8217;s exactly 2.6 negative utilons) then utilitarianism becomes very hard. But contractualism doesn&#8217;t become any harder, except insofar as it&#8217;s harder to use utilitarianism as a heuristic for it.</p>
<p>I am not actually sure these problems are being solved, and I&#8217;m not just being led astray by contractualism being harder to model than utilitarianism and so it is easier for me to <i>imagine</i> them solved. But at the very least, it might be that contractualism is a different angle from which to attack these problems.</p>
<p>Of course, contractualism has problems of its own. It might be that different ways of doing the negotiations would lead to very different results. It might also be that the results would be very path-dependent, so that making one agreement first would end with a totally different result than making another agreement first. And this would be a good time to admit I don&#8217;t know that much formal game theory, but I do know there are multiple Nash equilibria and Pareto-optimal endpoints in a lot of problems and that in general there&#8217;s no such thing as &#8220;the correct game theoretic solution to this problem&#8221;, only solutions that fit more or fewer desirability criteria.</p>
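<p>The multiple-equilibria point is easy to see concretely. Here is a minimal sketch (my own toy example, not from the post): a 2&#215;2 coordination game in which both &#8220;everyone picks A&#8221; and &#8220;everyone picks B&#8221; are Nash equilibria, which is the sense in which there is no single &#8220;correct game-theoretic solution&#8221;, only multiple stable outcomes.</p>

```python
# Toy 2x2 coordination game. payoffs[(row, col)] = (row_payoff, col_payoff).
# Strategy 0 = "A", strategy 1 = "B". Coordinating on A beats coordinating
# on B, but both are stable: neither player gains by deviating alone.
payoffs = {
    (0, 0): (2, 2),   # both pick A: best outcome
    (0, 1): (0, 0),   # miscoordination
    (1, 0): (0, 0),   # miscoordination
    (1, 1): (1, 1),   # both pick B: worse, but still stable
}

def pure_nash_equilibria(payoffs):
    """Return all strategy pairs where no player profits from a unilateral deviation."""
    equilibria = []
    for (r, c), (pr, pc) in payoffs.items():
        row_if_deviate = payoffs[(1 - r, c)][0]   # row player switches strategy
        col_if_deviate = payoffs[(r, 1 - c)][1]   # column player switches strategy
        if pr >= row_if_deviate and pc >= col_if_deviate:
            equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs))  # -> [(0, 0), (1, 1)]
```

<p>Two equilibria, one strictly better than the other: exactly the situation where which outcome you land in depends on how the negotiation went rather than on the game alone.</p>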
<p>But to some degree this maps onto our intuitions about morality. One of the harder to believe things about utilitarianism was that it suggested there was exactly one best state of the universe. Our intuitions are very good at saying that certain hellish dystopias are very bad, and certain paradises are very good, but extrapolating them out to say there&#8217;s a single best state is iffy at best. So maybe the ability of rigorous game theory to end in a multitude of possible good outcomes is a feature and not a bug.</p>
<p>I don&#8217;t know if it&#8217;s possible for certain negotiation techniques to end in extreme local minima where things don&#8217;t end out as a paradise <i>at all</i>. I mean, I know there&#8217;s lots of horrible game theory like the Prisoner&#8217;s Dilemma and the Pirate&#8217;s Dilemma and so on, but I&#8217;m defining the &#8220;good game theory&#8221; of the Economists&#8217; Paradise to mean exactly the rules and coordination power you need to not do those kinds of things.</p>
<p>But there&#8217;s also a meta-level escape vent. If a certain set of negotiation techniques would lead to a local minimum where everything is Pareto-optimal but nobody is happy, then everyone would coordinate to sign a pact <i>not to use those negotiation techniques</i>.</p>
<p><b>VI.</b></p>
<p>To sum up:</p>
<p>The Economists&#8217; Paradise of solved coordination problems would be enough to keep everyone happy and prosperous and free. We ourselves could live in that paradise if we followed its rules, which involve negotiation of and adherence to agreements according to good economics and game theory, but these rules are hard to determine and hard to enforce.</p>
<p>We can sort of guess at what some of these rules can be, and when we do that we can try to follow them. Some rules lend themselves to State enforcement. Others don&#8217;t and we have to follow them quietly in the privacy of our own hearts. Sometimes the rules include rules about ostracizing or criticizing those who don&#8217;t follow the rules effectively, and so even the ones the State can&#8217;t enforce are sorta kinda enforceable. Then we can spread them through <A HREF="http://slatestarcodex.com/2014/02/23/in-favor-of-niceness-community-and-civilization/">a series of walled gardens and <s>spontaneous order</s> divine intervention.</A></p>
<p>The exact nature of the rules is computationally intractable and so we use heuristics most of the time. Through practical wisdom, game theory, and moral philosophy, we can improve our heuristics and get to the rules more closely, with corresponding benefits for society. Utilitarianism is one especially good heuristic for the rules, but it&#8217;s <i>also</i> kind of computationally intractable. Utilitarianism helps us approximate contractualism, and contractualism helps us resolve some of the problems of utilitarianism.</p>
<p>One problem of utilitarianism I didn&#8217;t talk about is that it isn&#8217;t very inspirational. Following divine law is inspirational. Trying to become a better person, a heroic person, is inspirational. Utilitarianism sounds too much like <i>math</i>. I think contractualism solves this problem too.</p>
<p>Consider. There is an Invisible Nation. It is not a democracy, per se, but it is something of a republic, where each of us is represented by a wiser, stronger version of ourselves who fights for our preferences to be enacted into law. Its legislature is untainted by partisanship, perfectly efficient, incorruptible, without greed, without tyranny. Its bylaws are the laws of mathematics; its Capitol Building stands at the center of Platonia.</p>
<p>All good people are patriots of the Invisible Nation. All the visible nations of the world &#8211; America, Canada, Russia &#8211; are properly understood to be its provinces, tasked with executing its laws as best they can, and with proper consideration to the unique needs of the local populace. Some provinces are more loyal than others. Some seem to be in outright rebellion. The laws of the Invisible Nation contain provisions about what to do with provinces in rebellion, but they are vague and difficult to interpret, and its patriots can disagree on what they are.</p>
<p>Maybe one day we will create a superintelligence that tries something like Coherent Extrapolated Volition &#8211; which I think we have just rederived, kind of by accident. The various viceroys and regents will hand over their scepters, and the Invisible Nation will stand suddenly revealed to the mortal eye. Until then, we see through a glass darkly. As we learn more about our fellow citizens, as we gain new modalities of interacting with them like writing, television, the Internet &#8211; as we start crystallizing concepts like rights and utility and coordination &#8211; we become a little better able to guess.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/08/24/the-invisible-nation-reconciling-utilitarianism-and-contractualism/feed/</wfw:commentRss>
		<slash:comments>216</slash:comments>
		</item>
		<item>
		<title>Nobody Likes A Tattletale</title>
		<link>http://slatestarcodex.com/2014/08/23/nobody-likes-a-tattletale/</link>
		<comments>http://slatestarcodex.com/2014/08/23/nobody-likes-a-tattletale/#comments</comments>
		<pubDate>Sat, 23 Aug 2014 08:26:23 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[morality]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=2675</guid>
		<description><![CDATA[Today at work, one of my heroin addict patients getting treated in inpatient rehab for heroin addiction managed to smuggle&#8230;well, you want to take a wild guess? Yeah, he smuggled in some heroin and got high in the hospital. Another &#8230; <a href="http://slatestarcodex.com/2014/08/23/nobody-likes-a-tattletale/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Today at work, one of my heroin addict patients getting treated in inpatient rehab for heroin addiction managed to smuggle&#8230;well, you want to take a wild guess? Yeah, he smuggled in some heroin and got high in the hospital. Another patient saw him do it and told me. I had a long talk with him and took measures to make sure it wouldn&#8217;t happen again.</p>
<p>I wouldn&#8217;t say I&#8217;m disappointed in the addict. Anyone who expects heroin addicts to follow rules that result in them getting less heroin is going to be so consistently frustrated that they will eventually lower their standards.</p>
<p>But I <i>am</i> a little disappointed in the patient who told me. Come on, man! Nobody likes a tattletale!</p>
<p>I realize this feeling is totally one hundred percent irrational. The patient was absolutely correct that using heroin in rehab is bad, we enforce our anti-heroin rules fairly and don&#8217;t have any draconian punishments when people break them, and most of these people come into rehab at least sorta-voluntarily and agree to the rules. Telling me was absolutely the correct decision.</p>
<p>But I <i>still</i> feel a little disappointed in him.</p>
<p>This feeling is not born of any kind of personal experience. I&#8217;ve never had much trouble with authority. I follow most of the important laws, I never got in trouble in school. No one&#8217;s ever tattled on <i>me</i>. Although my association with many libertarians provides me with a lot of examples of authority overreaching itself, I&#8217;m pretty sure the rule &#8220;don&#8217;t use heroin in a drug rehab&#8221; isn&#8217;t one of those.</p>
<p>As far as I can tell, my only two consistent positions are &#8220;disagree with the existence of rules&#8221; and &#8220;agree with rules and be happy when people help enforce those rules&#8221;, and I&#8217;m definitely not pushing for the first.</p>
<p>And yet I&#8217;m <i>still</i> kind of annoyed with that guy.</p>
<p>Dislike of tattletales seems to be, if not a human universal, at least a human very-common, arising in the absence of obvious social pressure and seeming attractive even to people whose social position ought to naturally turn them against it. My impression from old mobster movies is that even the police had contempt for people who ratted on organized crime, even though those people were obviously doing good for society.</p>
<p>This seems to be a clear case of virtue ethics versus utilitarianism. A rat who betrays the mob is helping society by getting rid of criminals, but he&#8217;s also proving himself an untrustworthy person who betrays his friends and who might not be a good choice to associate with. Fine.</p>
<p>But my patient? He never promised anybody he wasn&#8217;t going to tell on them. He had no association with the addict besides being a patient in the same hospital as him. If he had any duty at all, it was to his doctors, who were working really hard to help him, and he discharged this duty admirably by helping them enforce their rules.</p>
<p>So in this case I think it is just a flaw in my brain. I am acting as if all my patients had made some kind of implied deal to respect each other&#8217;s privacy, and the one tattletale was being a dealbreaker by defecting. But since no such deal was made &#8211; and since indeed people in a rehab facility should not expect such a deal &#8211; there was no deal-breaking involved.</p>
<p>One cannot say the same for the position endorsed by Leah Libresco, who wrote about a similar episode of tattling in her blog post <A HREF="http://www.patheos.com/blogs/unequallyyoked/2014/08/the-ethicist-endorses-omerta.html">The Ethicist Endorses Omertà</A><br />
<blockquote>The NYT‘s Ethicist has taken a very strange approach to wrongdoing in this weekend’s column.  A student wrote in to say that ze saw a friend take someone’s car keys and throw them into a lake.  The friend offered the letterwriter $50 as an implicit bribe in order to stay quiet.  The bribe worked.  Later, someone came by looking for his keys, and the letterwriter kept mum.  But ze felt queasy about zer choice, and asked the Ethicist for his advice.</p></blockquote>
<p>Assume that for some reason he can&#8217;t just give the guy his $50 back before talking. His only two choices are to keep the money and stay silent, or keep the money and talk. The first choice fails to right a wrong, the second breaks his contract. Which is better?</p>
<p>The Ethicist said the writer was wrong to take the deal, but having taken it, he is compelled to respect it. Leah disagreed, saying that he was wrong to take the bribe, and having realized that he should break his deal and tell the victim everything. She says: &#8220;Sticking by an immoral compact wrongs yourself and your accomplice. . . it’s clear that we don’t want people to hew to unethical agreements, simply because breaking promises is bad.&#8221;</p>
<p>I&#8217;m about halfway between these two positions. One should try one&#8217;s hardest to get out of an immoral contract. But if that&#8217;s impossible, I think one needs to weigh the moral cost of breaking a promise against the moral cost of carrying out the immoral contract, with a bias towards keeping your word unless it&#8217;s totally repugnant.</p>
<p>Let me try to give an example Leah will be especially able to understand.</p>
<p>Suppose that I become a Catholic priest and hear confessions. I swear not to break the seal of the confessional and not to go tell the secular authorities what I hear.</p>
<p>My first client (I bet there&#8217;s a better word for that!) is a child molester who confesses all the child molestation he&#8217;s doing. I tell him to stop, and he says unconvincingly that he&#8217;ll think about it.</p>
<p>I think &#8220;Holy f@#k, I was just expecting people to talk about sleeping in on Sundays, this is way worse than I could have expected&#8221;. I decide that my original promise not to tell the secular authorities was immoral, and I go off and tell the secular authorities. They arrest the child molester. Everyone lives happily ever after except that no one confesses things at Church ever again.</p>
<p>Both Leah and myself agree that some sort of a confessional-type institution is useful (even if I as an atheist think of it more in terms of psychiatric confidentiality). But such an institution is impossible without people being able to <i>really mean</i> promises. A credible promise can&#8217;t just be &#8220;I promise to do this thing unless I later decide it is bad, in which case I won&#8217;t&#8221;. You have to be able to <i>really</i> trust someone.</p>
<p>As Leah herself very correctly puts it in a <i>different</i> <A HREF="http://www.patheos.com/blogs/unequallyyoked/2013/06/a-terrible-consequence-of-consequentialism.html">blog post</A> on a <i>different</i> botched Ethicist decision:<br />
<blockquote>The Ethicist is crippling his own ability (and that of anyone suspected to subscribe to his philosophy) to make a promise.  A promise is not an indication of present beliefs (“I don’t plan to repeat anything you say in this room”) it is a bind on future action (“I won’t repeat what you say, even if I wish I hadn’t made this promise later”).  If he isn’t comfortable making that kind of promise, he has the option to tell patients and others up front, but treating promises as breakable upon reflection dilutes them for him and everyone else.</p>
<p>The covenant marriage movement is meant to counteract this kind of thinking in one sphere.  In an age of no-fault divorce, they’re trying to carve out a special niche, clearly differentiated from mainstream marriage, where a change of heart isn’t sufficient justification to break a promise.  But there isn’t an equivalent in most other spheres of life.  One can say only “I really mean this promise,” and a reader of the Ethicist’s column might reasonably hear a silent “right now” at the end of that phrase.</p></blockquote>
<p>But now it&#8217;s Leah adding the &#8220;right now&#8221; and The Ethicist enforcing covenants. Leah points out that the Ethicist has changed his mind on this point, but doesn&#8217;t explain why, now that he has taken the opposite position, she continues to disagree.</p>
<p>Promises are useful because they allow beneficial Pareto-optimal deals to be made. If promises are untrustworthy, beneficial deals become impossible and everyone loses out. The principle &#8220;Break any promise that upholds an immoral deal&#8221; not only makes immoral deals impossible, but also any moral deals where there is a <i>risk</i> of either participant deciding they are immoral, or even moral deals where one participant can credibly <i>claim</i> to have decided they are immoral and so back out of their obligation punishment-free. This is a pretty big set of deals and so we should not lightly endorse people&#8217;s ability to break promises they believe are immoral.</p>
<p>I should probably clarify here that all my promises usually contain an implied &#8220;unless following this promise is much more difficult than I could reasonably have expected&#8221; and I assume my interlocutor knows this. So if I promise someone to get them milk from the store, and then I go to the store and there&#8217;s only one carton of milk and a guy has just taken it and tells me he won&#8217;t give it to me, I don&#8217;t feel morally obligated to beat him up and steal it from him. If somebody wants a promise from me <i>without</i> the implied &#8220;unless&#8221; they are welcome to ask me for it. Or in certain cases where it is obvious that is what they want, I will assume it without being asked. And in <i>those</i> sorts of cases if I make it I will keep it, beating-up and all. But I would think <i>much</i> harder before making a promise like that, and I would lawyer its wording the same way I would a wish from a genie with a known mean streak.</p>
<p>Much simpler and perhaps best of all were those ancient promises, where people were like &#8220;If ever I betray your trust, then may the ravens of Odin peck out my right eye!&#8221; There&#8217;s no ambiguity here. You know exactly what&#8217;s enforcing the deal &#8211; getting your right eye pecked out by ravens. If you later decide your deal was unethical, you are welcome to assuage your conscience by cancelling it, but you should still expect to have your right eye pecked out by ravens. Since the enforcement mechanism is bloodthirsty heavenly birds rather than morality per se, you don&#8217;t get these weird questions about whether other, different morality can ride in and free you from it. It&#8217;s not even a question of &#8220;freeing&#8221; so much as of trade-offs. If you want to break your promise for money, you can get the money &#8211; but the ravens will peck out your eye. If you want to break your promise for love, you can get the love &#8211; but the ravens will peck out your eye. And if you want to break your promise for a greater moral cause, you can get your greater moral cause &#8211; but your eye still gets pecked out. </p>
<p>You know exactly where you stand with eye-pecking ravens, which is a hell of a lot better than you can ever say about morality.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/08/23/nobody-likes-a-tattletale/feed/</wfw:commentRss>
		<slash:comments>113</slash:comments>
		</item>
		<item>
		<title>Ground Morality In Party Politics</title>
		<link>http://slatestarcodex.com/2014/06/20/ground-morality-in-party-politics/</link>
		<comments>http://slatestarcodex.com/2014/06/20/ground-morality-in-party-politics/#comments</comments>
		<pubDate>Fri, 20 Jun 2014 18:49:53 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[morality]]></category>
		<category><![CDATA[philosophy]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=2283</guid>
		<description><![CDATA[My name sounds a lot like Scott Aaronson&#8217;s and I get confused for him a lot. I try to encourage this confusion, since it can only increase people&#8217;s opinion of me. So let me propose a tool for investigating morality &#8230; <a href="http://slatestarcodex.com/2014/06/20/ground-morality-in-party-politics/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>My name sounds a lot like Scott Aaronson&#8217;s and I get confused for him a lot. I try to encourage this confusion, since it can only increase people&#8217;s opinion of me. So let me propose a tool for investigating morality through algorithmic systems very similar to <A HREF="http://www.scottaaronson.com/blog/?p=1820">Aaronson&#8217;s recent post on eigenmorality</A>.</p>
<p>I say we use DW-Nominate.</p>
<p>I <A HREF="http://slatestarcodex.com/2013/09/21/the-thin-blue-line-that-stays-bizarrely-horizontal/">wrote about DW-Nominate before</A>. It&#8217;s the tool political scientists use to calculate Congresspeople&#8217;s position on the political spectrum. Whenever you hear an alarmed-sounding voice on a black-and-white attack ad say something like &#8220;Senator Schmendrick is the third most liberal senator in Congress,&#8221; chances are they used DW-Nominate to calculate &#8220;third most liberal&#8221;.</p>
<p>The system is beautifully elegant. They take a Congressperson&#8217;s votes on all the issues and compare them to other Congresspeople&#8217;s votes to find blocs of Congresspeople who tend to vote together. Then they do factor analysis stuff to see how many dimensions of &#8220;similar voting&#8221; there are. They end up with something that looks a <i>lot</i> like the traditional left-right dimension and <i>occasionally</i> a Northern-US-vs.-Southern-US-dimension that doesn&#8217;t always matter that much. Then they use Congresspeople with multi-decade careers to bridge the gap between current Congresses and past Congresses, and use dead Congresspeople with multi-decade careers to bridge the gap between past Congresses and even-further-in-the-past Congresses, so that they can compare any Congressperson, living or dead, to any other Congressperson, living or dead. They also get the opportunity to evaluate <i>bills</i> as liberal or conservative, based on whether liberal or conservative Congresspeople support them.</p>
<p>And the neat thing about it is that at no point did they enter into the system that it was supposed to give &#8220;left&#8221; vs. &#8220;right&#8221; &#8211; or even that it was supposed to come out with only one major grouping. It could have found that actually Democrats and Republicans vote much the same, but men always vote with other men and women with other women, or that the real difference was a religious worldview versus a secular worldview or whatever. Instead they found that our notion of left and right emerges naturally from the data, even if you&#8217;re not looking for it, and that this transcends party lines &#8211; ie some Democrats are further left than other Democrats, and this has consistent effects. If you really wanted, you could use this to rate whether, say, cutting carbon emissions vs. gun control is a more truly leftist cause, by seeing which bills get more heavily supported by leftists.</p>
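<p>The core of the idea can be sketched in a few lines. This is an illustrative simplification, not the real DW-Nominate algorithm (which fits a more elaborate spatial model): simulate legislators with hidden left-right positions, record only their votes, and watch a decomposition that was never told &#8220;left&#8221; or &#8220;right&#8221; exists recover the dimension anyway.</p>

```python
import numpy as np

# Simplified sketch of the DW-Nominate idea: an ideological dimension
# falls out of a vote matrix even though the analysis never mentions it.
rng = np.random.default_rng(0)
n_legislators, n_bills = 50, 200

ideology = rng.uniform(-1, 1, n_legislators)   # hidden left-right positions
cutpoints = rng.uniform(-1, 1, n_bills)        # each bill splits the spectrum

# Vote "yea" (1) if you sit to the right of the bill's cutpoint, else "nay" (0).
votes = (ideology[:, None] > cutpoints[None, :]).astype(float)

# Center the vote matrix and take its first principal direction.
centered = votes - votes.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
recovered = u[:, 0] * s[0]                     # one score per legislator

# The recovered dimension orders legislators almost exactly by ideology.
corr = abs(np.corrcoef(recovered, ideology)[0, 1])
print(round(corr, 3))  # typically well above 0.9 on this synthetic data
```

<p>The sign of the recovered axis is arbitrary &#8211; the math gives you &#8220;a dimension&#8221;, and it takes a human looking at who lands where to call one end &#8220;left&#8221; and the other &#8220;right&#8221;, which is exactly what happens with the real thing.</p>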
<p>And I wonder what would happen if you tried DW-Nominate with <i>moral</i> decisions.</p>
<p>Now there&#8217;s a very boring interpretation of that proposal, which is that we hand a hundred people a hundred different multiple-choice questions on moral dilemmas, and then use factor analysis stuff to see if we can divide them into groups. I bet we&#8217;d come out with something a lot like &#8220;utilitarians vs. deontologists vs. virtue ethics&#8221;, or maybe something more like &#8220;religious vs. secular&#8221; or maybe &#8220;people who use all Haidtian foundations vs. people who just use care and fairness&#8221;. Actually, forget what I said before, this would already be quite an interesting thing to do and somebody should do it.</p>
<p>But I would be much more interested in a (much harder) naturalistic experiment. What if we took the <i>real</i> decisions people engage in? I&#8217;m not even talking about obviously morally charged decisions like whether to have an abortion, I&#8217;m talking about things from &#8220;what college major should I have?&#8221; all the way down to &#8220;do I drink alcohol at age 17?&#8221; to &#8220;do I call my parents tonight like I promised I would, even though I&#8217;m very tired?&#8221;</p>
<p>A lot of people have to go through very similar decisions, which allows DW-Nominate style ranking. There would be some wiggle room in deciding which two decisions were equivalent (is the person who decides not to call loving parents making the same decision as the person who decides not to call abusive parents?), but let&#8217;s say we get a panel of raters to decide among themselves which decisions are equivalent and throw out any they can&#8217;t agree upon. This isn&#8217;t meant to be Pure And Objective here, only statistically useful.</p>
<p>In the same way that ten thousand Congressional votes, suitably analyzed, naturally group people into two categories that look to our trained eyes like Left and Right, would ten thousand little life decisions, suitably analyzed, naturally group people into two categories that look to our trained eyes like Good and Bad?</p>
<p>If so, it would be pretty easy to tell who the best person was, in the same way we can identify the most liberal member of Congress. We could give them a nice little award. Even better, it would be pretty easy to tell which option on each decision is more moral, for the same reason DW-Nominate can tell us that supporting gun control is more liberal than opposing it.</p>
<p>Suppose that we learned that one factor that naturally fell out of the data included giving money to the poor, supporting one&#8217;s aging parents, never committing violent crimes, avoiding ethnic slurs, conserving water and electricity, helping one&#8217;s friends when they were in trouble, and everything else we traditionally think of as good moral choices. And suppose this factor was heavily, <i>heavily</i> associated with being pro-life, to the same degree that the &#8220;liberal&#8221; factor in DW-Nominate is heavily associated with gun control. Would this provide some evidence in the debate over abortion? I&#8217;m not sure, but it would sure get me thinking long and hard about it.</p>
<p>I mean, we would probably also find some really silly things. Like that our moral factor loads on not getting tattoos of flaming skulls, ie the decision to get a tattoo of a flaming skull clusters with lots of immoral decisions. Presumably we would want to be able to say that getting a flaming skull tattoo is not itself immoral, but is correlated with immorality. But then we might as well say the same thing about being pro-life. Indeed, maybe everything religious will end out correlated with morality for religious reasons. We&#8217;d probably have to sort through this and fight a bunch of interminable correlation vs. causation debates.</p>
<p>But then there are areas where this could really shine.</p>
<p>I think this might solve a problem that Aaronson thought was unsolvable in his proposed algorithms. He said that in a world that was completely backwards &#8211; for example Nazi Germany &#8211; where everybody thought right was wrong and wrong was right, any moral sorting algorithm would give backwards results, because it has to start with majority opinion in some sense. His example was that a PageRank-style algorithm, where people-believed-to-be-moral are the ones people-believed-to-be-moral believe are moral, would fail: most Nazis would believe that the high-ranking Nazi authorities were moral, and then the circle would complete with the high-ranking Nazi authorities getting to determine who the moral people were.</p>
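<p>Aaronson&#8217;s worry is easy to demonstrate with a toy sketch. In a PageRank-style scheme, &#8220;moral&#8221; just means &#8220;endorsed by people who are themselves endorsed&#8221;, so the fixed point inherits whatever the majority already believes. The little endorsement graph below is entirely made up for illustration:</p>

```python
import numpy as np

# Toy endorsement graph: entry [i, j] = 1 means person i regards person j
# as moral. The setup is invented: 0 and 1 are the regime's authorities,
# 2-7 are a conforming majority, and 8 is a dissenter.
n = 9
endorse = np.zeros((n, n))
endorse[0, 1] = endorse[1, 0] = 1.0       # authorities endorse each other
for i in range(2, 8):
    endorse[i, 0] = endorse[i, 1] = 1.0   # majority endorses the authorities
endorse[2, 8] = 1.0                       # one lone endorsement of the dissenter
# Person 8 endorses nobody, so their row stays empty (a "dangling node").

# PageRank-style power iteration with the usual damping factor.
damping = 0.85
row_sums = endorse.sum(axis=1, keepdims=True)
transition = np.where(row_sums > 0,
                      endorse / np.where(row_sums == 0, 1, row_sums),
                      1.0 / n)            # dangling nodes spread rank uniformly
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - damping) / n + damping * transition.T @ rank

# The fixed point crowns whoever the majority already endorses: the two
# authorities come out "most moral", the dissenter near the bottom.
print(np.argsort(rank)[::-1])
```

<p>Run on this graph, the top two ranks go to the authorities, because the iteration has no information beyond who endorses whom. That circularity is exactly the problem, and the reason factoring the decisions themselves, rather than reputations, might help.</p>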
<p>I think DW-Nominate might go part of the way toward solving that problem. Consider three different things we might find if we DW-Nominated Nazi Germany:</p>
<p>1. There is a General Factor of Morality, which includes giving to the poor, caring for your aged parents, cooperating with your neighbors, et cetera. People high in this General Factor of Morality are much more likely to oppose Nazi policies and hide Jews in their attics.</p>
<p>2. There is a General Factor of Morality, which includes giving to the poor, caring for your aged parents, cooperating with your neighbors, et cetera. People high in this General Factor of Morality are no more likely to hide Jews than anyone else, or maybe <i>less</i> likely to hide Jews.</p>
<p>3. There are multiple dimensions of morality. One dimension is something like &#8220;prosocial in-group patriotism&#8221; and captures things like paying your taxes on time, going without luxuries in order to help the war effort, and sending nice care packages to the troops. Another dimension is something like &#8220;willingness to go against consensus when it&#8217;s the right thing to do&#8221; and would include whistleblowing against corruption and being a passive resister to unjust wars. Hiding Jews in your attic might be negatively correlated with the first factor but positively correlated with the second factor. Universally beloved things like giving to the poor and caring for your aged parents might load about equally on both factors, or be a third factor, or whatever.</p>
<p>If Hypothesis 1 were true, that would be <i>super interesting</i>. It would suggest there&#8217;s something kind of objective about morality. Also, we could make it do <i>work</i>. Like we could go around to the Nazis, and say &#8220;Look, you agree that helping the poor is moral, right? And caring for your aged parents? Well, now that we&#8217;ve established what morality is, we have bad news for you. You don&#8217;t have it. Moral people are much more likely to oppose you. So stop doing what you&#8217;re doing.&#8221; This might actually work. Or if it didn&#8217;t, then when World War II ended and everyone agreed they <i>should have</i> listened to the General Factor Of Morality, then maybe after ten or twenty iterations of this people would start listening <i>eventually</i>.</p>
<p>If Hypothesis 2 were true, that would also be super interesting, albeit disappointing. It would mean that morality probably isn&#8217;t very objective, and that our moral positions are a lot closer to random than we want to believe. If being moral in every other way we can think of had minimal correlation with being moral in the particular way of saving Jews from the Nazis, it would mean that there was no consistent basis to morality and it was just a hodgepodge of popular positions. Or that if there was a philosophically consistent basis, it has little to do with how it&#8217;s practiced in the real world.</p>
<p>If Hypothesis 3 were true, that would be very boring, but possibly still worthwhile. Like we could have debates on whether Factor I Morality is more important than Factor II morality, and what to do when they contradict each other, and these debates would probably be more interesting than our current more vague debates on things like &#8220;what do you do when your duty and your moral intuitions conflict?&#8221;</p>
<p>I don&#8217;t really have some grand plan for how this could be used to solve everything or how a utopia could be created around it (although now that I mention it, if we can easily identify the most moral people in a population, they would make good candidates for judges and other high officials, though perhaps not legislators or executives). </p>
<p>I just think it would be fun to study.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/06/20/ground-morality-in-party-politics/feed/</wfw:commentRss>
		<slash:comments>108</slash:comments>
		</item>
		<item>
		<title>Who By Very Slow Decay</title>
		<link>http://slatestarcodex.com/2013/07/17/who-by-very-slow-decay/</link>
		<comments>http://slatestarcodex.com/2013/07/17/who-by-very-slow-decay/#comments</comments>
		<pubDate>Thu, 18 Jul 2013 01:48:09 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[long post is long]]></category>
		<category><![CDATA[medicine]]></category>
		<category><![CDATA[morality]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=865</guid>
		<description><![CDATA[[Trigger warning: Death, pain, suffering, sadness] I. Some people, having completed the traditional forms of empty speculation &#8211; &#8220;What do you want to be when you grow up?&#8221;, &#8220;If you could bang any celebrity who would it be?&#8221; &#8211; turn &#8230; <a href="http://slatestarcodex.com/2013/07/17/who-by-very-slow-decay/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>[<i>Trigger warning: Death, pain, suffering, sadness</i>]</p>
<p><b>I.</b></p>
<p>Some people, having completed the traditional forms of empty speculation &#8211; &#8220;What do you want to be when you grow up?&#8221;, &#8220;If you could bang any celebrity who would it be?&#8221; &#8211; turn to &#8220;What will you say as your last words?&#8221;</p>
<p>Sounds like a valid question. You can go out with a wisecrack, like Oscar Wilde (&#8220;Either this wallpaper goes or I do&#8221;). Or with piety and humility, like Jesus (&#8220;Into thy hands, o Father, I commend my spirit.&#8221;) Or burning with defiance, like Karl Marx (&#8220;Last words are for fools who haven&#8217;t said enough.&#8221;)</p>
<p>Well, this is an atheist/skeptic blog, so let me do my job of puncturing all your pleasant dreams. You&#8217;ll probably never become an astronaut. You&#8217;re not going to bang Emma Watson. And your last words will probably be something like &#8220;mmmrrrgggg graaaaaaaaaaaHAAACK!&#8221;</p>
<p>I guess I always pictured dying as &#8211; unless you got hit by a truck or something &#8211; a bittersweet and strangely beautiful process. You&#8217;d grow older and weaker and gradually get some disease and feel your time was upon you. You&#8217;d be in a nice big bed at home with all your friends and family gathered around. You&#8217;d gradually feel the darkness closing in. You&#8217;d tell them all how much you loved them, there would be tears, you would say something witty or pious or defiant, and then you would close your eyes and drift away into a dreamless sleep.</p>
<p>And I think this happens sometimes. For all I know, maybe it happens quite a lot. If it does, I never see these people. They very wisely stay far away from hospitals and the medical system in general. I see the other kind of people.</p>
<p>If you are like the patients I see dying, then here is how you will go.</p>
<p>You will grow old. When you were young, you would go to institutions and gradually gather letters after your name: BA, MD, PhD. Now that you are old, you do the same thing, but they are different institutions and different letters. Your doctors will introduce you to their colleagues as &#8220;Mary Smith, COPD, PVD, ESRD, IDDM&#8221;. With each set of letters comes another decrease in quality of life. </p>
<p>At first these sacrifices will be minor. The COPD means you have to breathe from an oxygen tank you carry around wherever you go. The PVD will prevent you from walking more than a few feet at a time. The ESRD will require three hours of dialysis in a hospital or outpatient dialysis center three times a week. The IDDM will require insulin shots after every meal. Not fun, but hardly inconsistent with a life worth living.</p>
<p>Eventually these will add up beyond your ability to manage them on your own, and you will be sent off to a nursing home. This will seem like a reasonable enough idea, and sometimes it goes well. Other times it gives you freedom to develop a completely new set of morbidities totally unconstrained by what a person in any other situation could possibly be expected to survive.</p>
<p>You will become bedridden, unable to walk or even to turn yourself over. You will become completely dependent on nurse assistants to intermittently shift your position to avoid pressure ulcers. When they inevitably slip up, your skin will develop huge incurable sores that can sometimes erode all the way to the bone, and which are perpetually infected with foul-smelling bacteria. Your limbs will become practically vestigial organs, like the appendix, and when your vascular disease gets too bad, one or more will be amputated, sacrifices to save the host. Urinary and fecal continence disappear somewhere in the process, so you&#8217;re either connected to catheters or else spend a while every day lying in a puddle of your own wastes until the nurses can help you out. The digestive system isn&#8217;t too happy either by this point, so you can either have a tube plugged directly into your stomach or just skip the middleman and have an IV line feeding nutrients into your bloodstream.</p>
<p>Somewhere in the process your mind very quietly and without fanfare gives up the ghost. It starts with forgetting a couple of little things, and progresses until you have no idea what&#8217;s going on ever. In medical jargon, healthy people are &#8220;alert and oriented x 3&#8221;, which means oriented to person (you know your name), oriented to time (you know what day/month/year it is), and oriented to place (you know you&#8217;re in a hospital). My patients who have the sorts of issues I mentioned in the last paragraph are generally alert and oriented x 0. They don&#8217;t remember their own names, they don&#8217;t know where they are or what they&#8217;re doing there, and they think it&#8217;s the 1930s or the 1950s or don&#8217;t even have a concept of years at all. When you&#8217;re alert and oriented x 0, the world becomes this terrifying place where you are stuck in some kind of bed and can&#8217;t move and people are sticking you with very large needles and forcing tubes down your throat and you have no idea why or what&#8217;s going on.</p>
<p>So of course you start screaming and trying to attack people and trying to pull the tubes and IV lines out. Every morning when I come in to work I have to check the nurses&#8217; notes for what happened the previous night, and every morning a couple of my patients have tried to pull all of their tubes and lines out. If it&#8217;s especially bad they try to attack the staff, and although the extremely elderly are <i>really</i> bad at attacking people this is nevertheless Unacceptable Behavior and they have to be restrained, i.e. tied down to the bed. A presumably more humane alternative sometimes used instead or in addition is to just drug you up on all of those old-timey psychiatric medications that actual psychiatrists don&#8217;t use anymore because of their bad reputation.</p>
<p>After a while of this, your doctors will call a meeting with your family and very gingerly raise the possibility of going to &#8220;comfort care only&#8221;, which means they disconnect the machines and stop the treatments and put you on painkillers so that you die peacefully. Your family will start yelling at the doctors, asking how the hell these quacks were ever allowed to practice when for God&#8217;s sake they&#8217;re trying to kill off Grandma just so they can avoid doing a tiny bit of work. They will demand the doctors find some kind of complicated surgery that will fix all your problems, add on new pills to the thirteen you&#8217;re already being force-fed every day, call in the most expensive consultants from Europe, figure out some extraordinary effort that can keep you living another few days.</p>
<p>(then these people will go home and log onto the Internet and yell at cryonics advocates for being selfish for wanting to live longer. Don&#8217;t those stupid cryonicists realize all that money could be spent on charity, instead of chasing after fantastically unlikely chances?)</p>
<p>Robin Hanson sometimes writes about how health care is a form of signaling, trying to spend money to show you care about someone else. I think he&#8217;s wrong in the general case &#8211; most people pay their own health insurance &#8211; but I think he&#8217;s spot on in the case of families caring for their elderly relatives. The hospital lawyer mentioned during orientation that it never fails that the family members who live in the area and have spent lots of time with their mother/father/grandparent over the past few years are willing to let them go, but someone from 2000 miles away flies in at the last second and makes ostentatious demands that EVERYTHING POSSIBLE must be done for the patient.</p>
<p>Your doctors will nod their heads and tell your family they respect their wishes. It will be a lie. Oh, sure, they will <i>carry out</i> the family&#8217;s wishes, in terms of continuing to provide the care. But <i>respect</i>? In the cafeteria at lunch, they will &#8211; despite medical confidentiality laws that totally prohibit this &#8211; compare stories of the most ridiculous families. &#8220;I have a blind 90 year old patient with stage 4 lung cancer with brain mets and no kidney function, and the family is <i>demanding</i> I enroll her in a clinical trial from Sri Lanka.&#8221; &#8220;Oh, that&#8217;s nothing. <i>I</i> have a patient who can&#8217;t walk or speak who&#8217;s breathing from a ventilator and has anoxic brain injury, and the family is insisting I try to get him a liver transplant.&#8221;</p>
<p>Every day, your doctors will meet with your family another time, and eventually, as your condition worsens and your family has more time to be hit on the head with a big club marked &#8216;REALITY&#8217;, they will start to relent. Finally, they will allow your doctors to take you off of the machines, and you will be transferred to Palliative Care, whose job I do not envy even though <i>every single palliative care doctor I have ever met is relentlessly cheerful and upbeat and this is a total mystery to me</i>.</p>
<p>And you will die, but not quickly. It takes time for the heart to give up, for the lungs to fill with water and stop breathing, for the toxic wastes to build up. It is generally considered wise for the patient to be on epic doses of morphine throughout the process, both to spare them the inevitable pain as their disease takes their course and to spare their family from having to watch them.</p>
<p>&#8230;not that they always do. It can take anywhere from a day to several weeks for someone to die. Sometimes your family wants to wait at the bedside for a week. But a lot of the time they have work and things to do. Maybe they live thousands of miles away. You haven&#8217;t recognized them in years, you haven&#8217;t spoken a coherent word in months, and even if for some reason your brain chose this moment to recover lucidity you&#8217;re on enough morphine to be <i>well</i> inside the borders of la-la-land. A lot of families, faced with the prospect of missing work and school to sit by what&#8217;s basically a living corpse day in and day out for weeks just to watch it turn into a non-living corpse, politely decline. I absolutely 100% cannot blame them.</p>
<p>There is a national volunteer program called No One Dies Alone. Nice people from the community go into hospitals to spend time with dying people who don&#8217;t have anyone else there for them. It makes me happy that this program exists.</p>
<p>Nevertheless, this is the way many of my patients die. Old, limbless, bedridden, ulcerated, in a puddle of waste, gasping for breath, loopy on morphine, hopelessly demented, in a sterile hospital room with someone from a volunteer program who just met them sitting by their bed.</p>
<p>And let me just emphasize again, not everyone dies this way. I am hugely selection biased by my position in a hospital. But enough people die this way. I&#8217;m in a small community. There can&#8217;t be too many deaths here. Of the ones there are, I see a lot of them. And they&#8217;re not pretty.</p>
<p><i>[<b>EDIT:</b> Just looked up <A HREF="http://www.pbs.org/wgbh/pages/frontline/facing-death/facts-and-figures/">statistics</A>. Only about a quarter of old people die at home. The rest are split between hospitals (disproportionately ICUs), nursing homes, and hospices.]</i></p>
<p><b>II.</b></p>
<p>Hospital poetry is notoriously bad.</p>
<p>I mean, practically all modern poetry is bad. Modern poetry by complete amateurs could be expected to be even worse. But hospital poetry is in a league all of its own as far as badness goes.</p>
<p>When I search &#8220;hospital poetry&#8221;, Google brings up examples like the following:</p>
<blockquote><p>Pain… searing<br />
Belly… throbbing<br />
There is no baby.<br />
There will be no baby.<br />
Endometriosis.</p></blockquote>
<p>I feel bad making fun of it, because it is clearly heartfelt. This is part of the problem with hospital poetry. It is very heartfelt, whereas I think most popular poetry comes from people who have strong emotions but also some distance from them and a little bit of post-processing. And unfortunately doctors, who are on this decades-long quest to prove they are actual people with real feelings and not just arrogant robot-like people in white coats who know a very large number of facts about thyroiditis, just eat this sort of thing up.</p>
<p>But I&#8217;m not really complaining about those sorts of endometriosis poems. The ones I&#8217;m really complaining about are worse. The epitome of the genre I can&#8217;t find on Google, because it was presented as some kind of event at the hospital where I trained in Ireland. I don&#8217;t remember it, but let me just make up some doggerel approximately faithful to the spirit of the original:</p>
<blockquote><p>When my doctor told me that I had cancer<br />
I knew that despair was not the answer<br />
It felt like the darkness was closing in<br />
But to give up would have been a sin<br />
Everyone here helped me so much<br />
And nothing is like a helping hand&#8217;s touch<br />
Thanks, Dr. Connell, and everyone in Cork<br />
I really appreciate all your hard work</p></blockquote>
<p>Doctors and nurses eat this kind of thing up and put it on shiny plaques that go on the walls of the hospital. (I suggest a wall near the gastroenterology unit, to expedite care for people who start vomiting.)</p>
<p>Wittgenstein said that &#8220;if anyone ever wrote a book of ethics, that really was a book of ethics, it would destroy all the other books in the world with a bang.&#8221; I&#8217;m not really sure what he meant. But if anyone ever wrote a book of hospital poetry, that really was a book of hospital poetry&#8230;well, I don&#8217;t know what would happen, but I bet it would be loud and angry, and that it wouldn&#8217;t be put on shiny plaques on anybody&#8217;s walls, except maybe the same people who hang Hieronymous Bosch paintings on their walls.</p>
<p>Wait, am I calling hospitals hellish? Sure am. It has nothing to do with the decor, which has actually gotten much nicer in your newer hospitals until it&#8217;s hard to tell them apart from a stylish office building. It&#8217;s nothing to do with the staff, either &#8211; most doctors and some nurses seem pretty happy and trade banter around the water coolers like everyone else. It&#8217;s mostly the screams.</p>
<p>The screams are coming about 33% from the confused demented old people I mentioned, 33% from people having minor procedures performed without anaesthetics for one or another good reason, and 33% from people who just have very painful diseases (plus 1% from me sitting in the break room looking up examples of hospital poetry for this post). They run the gamut of human screams. There are wordless shrieks. There are some angry screams, like &#8220;$#%! YOU GET ME OUT OF HERE!&#8221;. There are a lot of people screaming &#8220;SOMEBODY HELP ME!&#8221; And there are some religious screams, like &#8220;OH GOD!&#8221; or &#8220;JESUS HELP ME!&#8221; or &#8220;CHRIST NO!&#8221;.</p>
<p>When I first started working in hospitals, I would not only inevitably run over to these screams, but I would feel contempt and anger at the rest of the hospital staff who would just continue their daily routine. I soon learned better. Not only would I be unable to do anything &#8211; I can&#8217;t single-handedly cure their painful illness, or make their procedure go any faster, or explain to them that the year is 2013 and they&#8217;re no longer on their childhood farm in Oklahoma &#8211; but as soon as they saw me I would be the one they started screaming at and expecting to save them. The bystander effect, my last defense, disappeared. Sometimes I would make a stand by asking the nurse to increase their pain medication or something, and be politely told all the reasons why that was a bad idea from a medical perspective (pain medication has lots of side effects which doctors monitor carefully). In the end I would just slink out of the room, wishing I had never come in.</p>
<p>So the constant screams being completely ignored by a bunch of happy people going through their day is pretty hellish. But there&#8217;s also the bodies. Usually we are able to avoid thinking about people as bodies except to briefly note that certain people like Emma Watson are really hot. In a hospital, this filter disappears. Some people have gigantic swollen legs the size of your waist. Others have huge ulcerated sores all over. Still others have skin covered with the sorts of bacterial colonies you usually only see on a petri dish. And body sizes range from so thin that you can see their organs bulging out of their skin and use them as a grisly impromptu anatomy lesson, to so morbidly obese that you have to search through the fat folds to find the body part you&#8217;re looking for.</p>
<p>The senses are under constant assault. Smell is the worst. There are some people who can identify different infections by smell. <i>Pseudomonas aeruginosa</i> is supposed to smell fruity. <i>Gardnerella</i> is supposed to smell fishy. <i>Clostridium</i> is supposed to smell like the worst thing you can possibly imagine, if it were then covered in feces and left to rot on a warm summer day.</p>
<p>But the other senses get their time too. The sight is vexed by flashing call lights. And the hearing is battered with incessant beeping from IV lines which have hard-coded alarms to alert doctors of critically important events such as &#8220;Look at me! I am an IV line!&#8221; The end result is something it would take a first-rate poet to describe. I&#8217;m tempted to nominate Oscar Wilde. He did a good job on prisons in <i>Ballad of Reading Gaol</i>, and I feel like the skill would transfer:</p>
<blockquote><p>He does not rise in piteous haste<br />
To put on convict-clothes,<br />
While some coarse-mouthed doctor gloats,<br />
and notes each new and nerve-twitched pose,<br />
Fingering a watch whose little ticks<br />
Are like horrible hammer-blows [&#8230;]</p>
<p>He does not stare upon the air<br />
Through a little roof of glass;<br />
He does not pray with lips of clay<br />
For his agony to pass;<br />
Nor feel upon his shuddering cheek<br />
The kiss of Caiaphas.</p></blockquote>
<p>But after some more thought, I think I&#8217;m going to go with Wilfred Owen:</p>
<blockquote><p>If in some smothering dreams you too could pace<br />
Behind the wagon that we flung him in,<br />
And watch the white eyes writhing in his face,<br />
His hanging face, like a devil&#8217;s sack of sin;<br />
If you could hear, at every jolt, the blood<br />
Come gargling from the froth-corrupted lungs,<br />
Obscene as cancer, bitter as the cud<br />
Of vile, incurable sores on innocent tongues [&#8230;]</p></blockquote>
<p>Or better yet, if Oscar Wilde&#8217;s muse when he was writing <i>Reading Gaol</i> were to bear Wilfred Owen&#8217;s children, then those kids would be competent to write hospital poetry that was actually hospital poetry.</p>
<p>Dante would also be an acceptable choice.</p>
<p><b>III.</b></p>
<p>You may have read the excellent article <A HREF="http://www.saturdayeveningpost.com/2013/03/06/in-the-magazine/health-in-the-magazine/how-doctors-die.html">How Doctors Die</A>. If you haven&#8217;t, do it now. It says that most doctors, knowing everything I&#8217;ve just mentioned above, choose to die quickly and with very limited engagement with the health system. </p>
<p>I (and the doctors in my family whom I&#8217;ve asked) am pretty much like the doctors in the article. If I get a terminal disease, I want to wring what I can out of the few months of life I have left and totally avoid any surgery, chemotherapy, amputations, ventilators, and the like. It would be a clean death. It would be okay.</p>
<p>My big fear, though, is that I <i>won&#8217;t</i> get a terminal disease.</p>
<p>If I just start accumulating damage, growing more and more bedridden and demented and pain-riddled until I want out &#8211; well, there won&#8217;t <i>be</i> a way out. If there&#8217;s not some very specific life-saving treatment that can be withdrawn, I&#8217;m stuck above ground, not just in the &#8220;unless I want to risk the danger and shame of suicide&#8221; way I am now, but &#8211; if I&#8217;m too debilitated to access means of suicide on my own &#8211; in an absolute way.</p>
<p>Even if my doctors and nurses and caretakers are sympathetic, my only legal option, without exposing <i>them</i> to jail time, is to starve myself to death &#8211; something both painful and difficult, and itself not really the way I want to go.</p>
<p>I was sitting in an ICU room yesterday where a patient&#8217;s body had just been brought out after their death. My attending was taking care of the paperwork in the other room, and I was sitting there reflecting, and I started thinking about what it would be like to die in that room. There was a big window, and it was a sunny day, and although I mostly had a spectacular view of the hospital parking lot, a bit further in the distance I could see a park full of really big trees. And I knew that if I were dying in that room my last thought would be that I wanted to be outside.</p>
<p>I think if I were very debilitated and knew I would die soon, I would want to go to that park or one like it on a very sunny day, surround myself with my friends and family, say some last words, and give myself an injection of potassium chloride.</p>
<p>(this originally read &#8220;morphine&#8221;, but just today the palliative care doctor at my hospital gave an impassioned lecture about how people need to stop auto-associating morphine with euthanasia, because it makes it really hard for him to offer morphine painkillers to patients who need them without them freaking out. So potassium chloride it is.)</p>
<p>This will never happen. Or if it did, it would be some kind of huge scandal, and whoever gave me the potassium chloride would be fired or something. But the people dying demented and hopeless connected to half a dozen tubes in ICU rooms aren&#8217;t considered scandals by anybody. That&#8217;s just &#8220;the natural way of things&#8221;.</p>
<p>I work in a Catholic hospital. People here say the phrase &#8220;culture of life&#8221; a lot, as in &#8220;we need to cultivate a culture of life.&#8221; They say it almost as often as they say &#8220;patient-centered&#8221;. At my hospital orientation, a whole bunch of nuns and executives and people like that got up and told us how we had to do our part to &#8220;cultivate a culture of life.&#8221;</p>
<p>And now every time I hear that phrase I want to scream. 21st century American hospitals do not need to &#8220;cultivate a culture of life&#8221;. We have enough life. We have life up the wazoo. We have more life than we know what to do with. We have life far beyond the point where it becomes a sick caricature of itself. We prolong life until it becomes a sickness, an abomination, a miserable and pathetic flight from death that saps out and mocks everything that made life desirable in the first place. 21st century American hospitals need to cultivate a culture of life the same way that Newcastle needs to cultivate a culture of coal, the same way a man who is burning to death needs to cultivate a culture of fire.</p>
<p>And so every time I hear that phrase I want to scream, or if I cannot scream, to find some book of hospital poetry that really is a book of hospital poetry and shove it at them, make them read it until they understand.</p>
<p>There is no such book, so I hope it will be acceptable if I just rip off of Wilfred Owen directly:</p>
<blockquote><p>If in some smothering dreams you too could pace<br />
Behind the gurney that we flung him in,<br />
And watch the white eyes writhing in his face,<br />
His hanging face, like a devil&#8217;s sack of sin;<br />
If you could hear, at every jolt, the blood<br />
Come gargling from the froth-corrupted lungs,<br />
Obscene with cancer, bitter with the cud<br />
Of vile, incurable sores on innocent tongues<br />
My friend, you would not so pontificate<br />
To reasoners beset by moral strife<br />
The old lie: we must try to cultivate<br />
A culture of life.</p></blockquote>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2013/07/17/who-by-very-slow-decay/feed/</wfw:commentRss>
		<slash:comments>95</slash:comments>
		</item>
	</channel>
</rss>
