<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Whose Utilitarianism?</title>
	<atom:link href="http://slatestarcodex.com/2013/04/08/whose-utilitarianism/feed/" rel="self" type="application/rss+xml" />
	<link>http://slatestarcodex.com/2013/04/08/whose-utilitarianism/</link>
	<description>In a mad world, all blogging is psychiatry blogging</description>
	<lastBuildDate>Thu, 06 Aug 2015 00:00:22 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.2.4</generator>
	<item>
		<title>By: Lorxus</title>
		<link>http://slatestarcodex.com/2013/04/08/whose-utilitarianism/#comment-52295</link>
		<dc:creator><![CDATA[Lorxus]]></dc:creator>
		<pubDate>Wed, 16 Apr 2014 04:26:59 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=387#comment-52295</guid>
		<description><![CDATA[You know... this whole idea of an abstract Platonic pretend contract sounds a hell of a lot like coherent extrapolated volition. Also, there is nothing wrong with wanting to become an immortal god-king(/queen/Grand High Poobah/non-/other-gendered term of royalty of choice) and remake large parts of your local universe in your image. I myself want to do so when I grow up.]]></description>
		<content:encoded><![CDATA[<p>You know&#8230; this whole idea of an abstract Platonic pretend contract sounds a hell of a lot like coherent extrapolated volition. Also, there is nothing wrong with wanting to become an immortal god-king(/queen/Grand High Poobah/non-/other-gendered term of royalty of choice) and remake large parts of your local universe in your image. I myself want to do so when I grow up.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: blacktrance</title>
		<link>http://slatestarcodex.com/2013/04/08/whose-utilitarianism/#comment-47466</link>
		<dc:creator><![CDATA[blacktrance]]></dc:creator>
		<pubDate>Wed, 26 Mar 2014 22:45:02 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=387#comment-47466</guid>
		<description><![CDATA[It&#039;s worth noting that while there are some similarities between utilitarianism and contractarianism (what you&#039;re describing sounds more like contractarianism than contractualism, but I may be wrong) in that they&#039;re both consequentialist, they sometimes lead to wildly different conclusions. In your example of the Alicians and the Bobbites, they agree to mutual non-aggression because they both gain from it. A utilitarian would perhaps come to the same conclusion. However, when there aren&#039;t benefits from cooperation, the contractarian and utilitarian conclusions may be strongly at odds. In particular, when it comes to animal suffering - unlike the Alicians and the Bobbites, humans don&#039;t have anything to gain from trying to cooperate with cows/pigs/etc, so they can go on eating them if they want, while a utilitarian would say that even if humans have to give up some utility, eating meat would be wrong because it reduces net utility.

In short, contractarianism is about maximizing your utility, specifically about when you should restrict yourself in exchange for others restricting themselves. Utilitarianism is about maximizing world utility.]]></description>
		<content:encoded><![CDATA[<p>It&#8217;s worth noting that while there are some similarities between utilitarianism and contractarianism (what you&#8217;re describing sounds more like contractarianism than contractualism, but I may be wrong) in that they&#8217;re both consequentialist, they sometimes lead to wildly different conclusions. In your example of the Alicians and the Bobbites, they agree to mutual non-aggression because they both gain from it. A utilitarian would perhaps come to the same conclusion. However, when there aren&#8217;t benefits from cooperation, the contractarian and utilitarian conclusions may be strongly at odds. In particular, when it comes to animal suffering &#8211; unlike the Alicians and the Bobbites, humans don&#8217;t have anything to gain from trying to cooperate with cows/pigs/etc, so they can go on eating them if they want, while a utilitarian would say that even if humans have to give up some utility, eating meat would be wrong because it reduces net utility.</p>
<p>In short, contractarianism is about maximizing your utility, specifically about when you should restrict yourself in exchange for others restricting themselves. Utilitarianism is about maximizing world utility.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: blacktrance</title>
		<link>http://slatestarcodex.com/2013/04/08/whose-utilitarianism/#comment-47460</link>
		<dc:creator><![CDATA[blacktrance]]></dc:creator>
		<pubDate>Wed, 26 Mar 2014 22:20:07 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=387#comment-47460</guid>
		<description><![CDATA[For more on this, I recommend reading David Gauthier&#039;s &quot;Morals by Agreement&quot;. It gets a little mathy at times, and I don&#039;t agree with all of its conclusions, but it does something like adapting Hobbesianism to the present day. You may find it of interest.]]></description>
		<content:encoded><![CDATA[<p>For more on this, I recommend reading David Gauthier&#8217;s &#8220;Morals by Agreement&#8221;. It gets a little mathy at times, and I don&#8217;t agree with all of its conclusions, but it does something like adapting Hobbesianism to the present day. You may find it of interest.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: blacktrance</title>
		<link>http://slatestarcodex.com/2013/04/08/whose-utilitarianism/#comment-47459</link>
		<dc:creator><![CDATA[blacktrance]]></dc:creator>
		<pubDate>Wed, 26 Mar 2014 22:19:40 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=387#comment-47459</guid>
		<description><![CDATA[deleted]]></description>
		<content:encoded><![CDATA[<p>deleted</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: James</title>
		<link>http://slatestarcodex.com/2013/04/08/whose-utilitarianism/#comment-47452</link>
		<dc:creator><![CDATA[James]]></dc:creator>
		<pubDate>Wed, 26 Mar 2014 21:04:57 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=387#comment-47452</guid>
		<description><![CDATA[I wonder whether you&#039;d get anything out of reading Richard Rorty. In particular, his book Contingency, Irony and Solidarity deals in large part with this kind of issue.

He makes the case that we ought to acknowledge that our moral intuitions aren&#039;t amenable to grounding in irrefutable, logical arguments (he calls this antifoundationalism). Having done so, we can instead focus on honing and broadening those intuitions -- an exercise more likely to benefit from things like novels or documentaries and more in the realm of &#039;judgement&#039; (even &#039;taste&#039;) than logical argument.

He doesn&#039;t seem to get much mention in the rationalist/LW part of town, perhaps because he&#039;s coming from more of a humanities-ish, &#039;theory&#039;-ish background - more continental than analytic. I think he&#039;s sound, though.]]></description>
		<content:encoded><![CDATA[<p>I wonder whether you&#8217;d get anything out of reading Richard Rorty. In particular, his book Contingency, Irony and Solidarity deals in large part with this kind of issue.</p>
<p>He makes the case that we ought to acknowledge that our moral intuitions aren&#8217;t amenable to grounding in irrefutable, logical arguments (he calls this antifoundationalism). Having done so, we can instead focus on honing and broadening those intuitions &#8212; an exercise more likely to benefit from things like novels or documentaries and more in the realm of &#8216;judgement&#8217; (even &#8216;taste&#8217;) than logical argument.</p>
<p>He doesn&#8217;t seem to get much mention in the rationalist/LW part of town, perhaps because he&#8217;s coming from more of a humanities-ish, &#8216;theory&#8217;-ish background &#8211; more continental than analytic. I think he&#8217;s sound, though.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: peterdjones</title>
		<link>http://slatestarcodex.com/2013/04/08/whose-utilitarianism/#comment-42608</link>
		<dc:creator><![CDATA[peterdjones]]></dc:creator>
		<pubDate>Thu, 27 Feb 2014 12:58:30 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=387#comment-42608</guid>
		<description><![CDATA[Re uncomputability: do you realise that in tackling uncomputability by approximations and rules of thumb you are heading at least half way towards deontology?]]></description>
		<content:encoded><![CDATA[<p>Re uncomputability: do you realise that in tackling uncomputability by approximations and rules of thumb you are heading at least half way towards deontology?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: peterdjones</title>
		<link>http://slatestarcodex.com/2013/04/08/whose-utilitarianism/#comment-42603</link>
		<dc:creator><![CDATA[peterdjones]]></dc:creator>
		<pubDate>Thu, 27 Feb 2014 11:57:15 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=387#comment-42603</guid>
		<description><![CDATA[You say morality isn&#039;t objective because it isn&#039;t an empirically detectable property.
But it could be objective logically.

You say there are no universally compelling arguments. But an argument only has to compel sane and rational minds, and rationality is a value.

You say good is a shorthand for what you want, but it isn&#039;t, because you can want what is not good.]]></description>
		<content:encoded><![CDATA[<p>You say morality isn&#8217;t objective because it isn&#8217;t an empirically detectable property.<br />
But it could be objective logically.</p>
<p>You say there are no universally compelling arguments. But an argument only has to compel sane and rational minds, and rationality is a value.</p>
<p>You say good is a shorthand for what you want, but it isn&#8217;t, because you can want what is not good.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: peterdjones</title>
		<link>http://slatestarcodex.com/2013/04/08/whose-utilitarianism/#comment-42601</link>
		<dc:creator><![CDATA[peterdjones]]></dc:creator>
		<pubDate>Thu, 27 Feb 2014 11:17:46 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=387#comment-42601</guid>
		<description><![CDATA[What if we discover the True Physics, and it goes against our intuitions? We did, and it did, and we accepted our intuitions are wrong. You should be worrying about the rightness of your intuitions, not about the intuitiveness of the truth.]]></description>
		<content:encoded><![CDATA[<p>What if we discover the True Physics, and it goes against our intuitions? We did, and it did, and we accepted our intuitions are wrong. You should be worrying about the rightness of your intuitions, not about the intuitiveness of the truth.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: peterdjones</title>
		<link>http://slatestarcodex.com/2013/04/08/whose-utilitarianism/#comment-42600</link>
		<dc:creator><![CDATA[peterdjones]]></dc:creator>
		<pubDate>Thu, 27 Feb 2014 11:02:19 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=387#comment-42600</guid>
		<description><![CDATA[Moral systems have a practical job to do, which is to ground and justify the apportioning of resources, awards and sanctions. Since these are objective&#8230;you are either in jail or out, you either get the whole cake or half&#8230;it is desirable for moral systems to be objective.]]></description>
		<content:encoded><![CDATA[<p>Moral systems have a practical job to do, which is to ground and justify the apportioning of resources, awards and sanctions. Since these are objective&#8230;you are either in jail or out, you either get the whole cake or half&#8230;it is desirable for moral systems to be objective.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: peterdjones</title>
		<link>http://slatestarcodex.com/2013/04/08/whose-utilitarianism/#comment-42599</link>
		<dc:creator><![CDATA[peterdjones]]></dc:creator>
		<pubDate>Thu, 27 Feb 2014 10:53:07 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=387#comment-42599</guid>
		<description><![CDATA[You can&#039;t have multiple kinds of objective right or wrong. That&#039;s kind of intrinsic to the meaning of &quot;objective&quot;.]]></description>
		<content:encoded><![CDATA[<p>You can&#8217;t have multiple kinds of objective right or wrong. That&#8217;s kind of intrinsic to the meaning of &#8220;objective&#8221;.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
