<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Extremism In Thought Experiment Is No Vice</title>
	<atom:link href="http://slatestarcodex.com/2015/03/26/high-energy-ethics/feed/" rel="self" type="application/rss+xml" />
	<link>http://slatestarcodex.com/2015/03/26/high-energy-ethics/</link>
	<description>In a mad world, all blogging is psychiatry blogging</description>
	<lastBuildDate>Fri, 24 Jul 2015 16:25:50 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.2.3</generator>
	<item>
		<title>By: Bryan Hann</title>
		<link>http://slatestarcodex.com/2015/03/26/high-energy-ethics/#comment-198494</link>
		<dc:creator><![CDATA[Bryan Hann]]></dc:creator>
		<pubDate>Mon, 20 Apr 2015 07:53:42 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3590#comment-198494</guid>
		<description><![CDATA[As always: Clear to &lt;b&gt;whom?&lt;/b&gt;]]></description>
		<content:encoded><![CDATA[<p>As always: Clear to <b>whom?</b></p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '198494', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Bryan Hann</title>
		<link>http://slatestarcodex.com/2015/03/26/high-energy-ethics/#comment-198484</link>
		<dc:creator><![CDATA[Bryan Hann]]></dc:creator>
		<pubDate>Mon, 20 Apr 2015 06:59:32 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3590#comment-198484</guid>
		<description><![CDATA[The sociopath taking the care he did in the situation makes this sound *not* like &quot;shooting in random directions&quot;.]]></description>
		<content:encoded><![CDATA[<p>The sociopath taking the care he did in the situation makes this sound *not* like &#8220;shooting in random directions&#8221;.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '198484', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Bryan Hann</title>
		<link>http://slatestarcodex.com/2015/03/26/high-energy-ethics/#comment-198480</link>
		<dc:creator><![CDATA[Bryan Hann]]></dc:creator>
		<pubDate>Mon, 20 Apr 2015 06:46:47 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3590#comment-198480</guid>
		<description><![CDATA[We have a different sense of what is charitable. I don&#039;t mind someone pointing out that I am untrained, or even speculating that I might be untrainable. But I do mind someone suggesting that I am acting in bad faith.]]></description>
		<content:encoded><![CDATA[<p>We have a different sense of what is charitable. I don&#8217;t mind someone pointing out that I am untrained, or even speculating that I might be untrainable. But I do mind someone suggesting that I am acting in bad faith.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '198480', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Peter Gerdes</title>
		<link>http://slatestarcodex.com/2015/03/26/high-energy-ethics/#comment-197047</link>
		<dc:creator><![CDATA[Peter Gerdes]]></dc:creator>
		<pubDate>Sat, 11 Apr 2015 18:49:59 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3590#comment-197047</guid>
		<description><![CDATA[As someone with a philosophy degree, I think you should acknowledge more of the complexity inherent in moral anti-realism.

For instance, I&#039;m both a moral anti-realist AND a utilitarian.  I&#039;m a utilitarian in that I have strong moral feelings that it is better to do those things that increase overall utility.  Indeed, I have no trouble saying that people *should* choose to act in ways that result in more utility (I don&#039;t acknowledge any notion of duty or responsibility...just a partial order on the set of possible worlds).

However, utilitarianism is not my *belief* about what some natural kind &#039;the morally good&#039; attaches to.  Rather, what I mean by good or bad just *is* something like increases/decreases utility (really relative to some background expectation about what is usual in such a situation given the actor&#039;s limitations).

The error made by Robertson is to identify our willingness to condemn people/call them immoral for acting in certain ways with a belief that there is some objective feature of the universe (beyond our preference for certain kinds of worlds) that our word &#039;morality&#039; tracks.  This isn&#039;t surprising; unfortunately, we usually associate the word &#039;preference&#039; with selfish desire that shouldn&#039;t be projected on others, but we can have preferences, like that for more utility, that we prefer in the strong sense of being willing to jail, kill or condemn others for that reason.]]></description>
		<content:encoded><![CDATA[<p>As someone with a philosophy degree, I think you should acknowledge more of the complexity inherent in moral anti-realism.</p>
<p>For instance, I&#8217;m both a moral anti-realist AND a utilitarian.  I&#8217;m a utilitarian in that I have strong moral feelings that it is better to do those things that increase overall utility.  Indeed, I have no trouble saying that people *should* choose to act in ways that result in more utility (I don&#8217;t acknowledge any notion of duty or responsibility&#8230;just a partial order on the set of possible worlds).</p>
<p>However, utilitarianism is not my *belief* about what some natural kind &#8216;the morally good&#8217; attaches to.  Rather, what I mean by good or bad just *is* something like increases/decreases utility (really relative to some background expectation about what is usual in such a situation given the actor&#8217;s limitations).</p>
<p>The error made by Robertson is to identify our willingness to condemn people/call them immoral for acting in certain ways with a belief that there is some objective feature of the universe (beyond our preference for certain kinds of worlds) that our word &#8216;morality&#8217; tracks.  This isn&#8217;t surprising; unfortunately, we usually associate the word &#8216;preference&#8217; with selfish desire that shouldn&#8217;t be projected on others, but we can have preferences, like that for more utility, that we prefer in the strong sense of being willing to jail, kill or condemn others for that reason.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '197047', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Dan</title>
		<link>http://slatestarcodex.com/2015/03/26/high-energy-ethics/#comment-195138</link>
		<dc:creator><![CDATA[Dan]]></dc:creator>
		<pubDate>Thu, 02 Apr 2015 21:41:42 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3590#comment-195138</guid>
		<description><![CDATA[Robertson is attempting the same moral argument for God that has been tried and failed millions of times. We don&#039;t need divine authority over moral decisions; &lt;i&gt;we&lt;/i&gt; make the moral decisions.

It&#039;s hardwired into our brains, through millennia of biological and social evolution, that happiness is good and suffering is bad. No, those aren&#039;t &quot;rational&quot; positions in the context of the entire universe. They don&#039;t have to be. They only have to matter to us. And given those axioms, further moral laws can logically follow, starting from &quot;don&#039;t torture and murder people.&quot; There&#039;s no mystery here, and certainly no hypocrisy on the part of atheism.]]></description>
		<content:encoded><![CDATA[<p>Robertson is attempting the same moral argument for God that has been tried and failed millions of times. We don&#8217;t need divine authority over moral decisions; <i>we</i> make the moral decisions.</p>
<p>It&#8217;s hardwired into our brains, through millennia of biological and social evolution, that happiness is good and suffering is bad. No, those aren&#8217;t &#8220;rational&#8221; positions in the context of the entire universe. They don&#8217;t have to be. They only have to matter to us. And given those axioms, further moral laws can logically follow, starting from &#8220;don&#8217;t torture and murder people.&#8221; There&#8217;s no mystery here, and certainly no hypocrisy on the part of atheism.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '195138', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Unaussprechlichen</title>
		<link>http://slatestarcodex.com/2015/03/26/high-energy-ethics/#comment-195133</link>
		<dc:creator><![CDATA[Unaussprechlichen]]></dc:creator>
		<pubDate>Thu, 02 Apr 2015 20:01:54 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3590#comment-195133</guid>
		<description><![CDATA[&quot;Torture or dust specks&quot; can be resolved by postulating that the codomain of the utility function is not the real line. It doesn&#039;t need to support all operations that are defined on real numbers; all it really needs is addition (to aggregate outcomes) and comparison (to choose between them).

For &quot;torture or dust specks&quot; to be applicable, this set should support natural number-valued partial division, that is, for any two outcomes A and B there exists an n such that A*n &gt; B. So there exists an n such that n dust specks are worse than torture.

Yet there are many ways to construct a set that doesn&#039;t satisfy this axiom but does satisfy the ones needed to construct a moral system. It can be postulated that while 1 torture and 1 dust speck is worse than 1 torture alone, 1 torture is worse than any number of dust specks.

In fact, by demanding different axioms for the set of outcomes, or taking the &quot;set&quot; of outcomes from objects of different categories, we can specify different meta-ethical systems.

For example:
category of totally ordered abelian groups — utilitarianism: there&#039;s always the right strategy of action, and outcomes accumulate;
category of additive abelian groups with total order that are both subgroups and suborders of the real line — the kind of utilitarianism where &quot;torture or dust specks&quot; holds;
category of posets — deontology: not all situations have the most moral strategy of action, but some actions are better than others;
category of preorders — non-cognitivism: some actions are better than others, but it doesn&#039;t need to be consistent;
category of sets — moral nihilism: anything goes.]]></description>
		<content:encoded><![CDATA[<p>&#8220;Torture or dust specks&#8221; can be resolved by postulating that the codomain of the utility function is not the real line. It doesn&#8217;t need to support all operations that are defined on real numbers; all it really needs is addition (to aggregate outcomes) and comparison (to choose between them).</p>
<p>For &#8220;torture or dust specks&#8221; to be applicable, this set should support natural number-valued partial division, that is, for any two outcomes A and B there exists an n such that A*n &gt; B. So there exists an n such that n dust specks are worse than torture.</p>
<p>Yet there are many ways to construct a set that doesn&#8217;t satisfy this axiom but does satisfy the ones needed to construct a moral system. It can be postulated that while 1 torture and 1 dust speck is worse than 1 torture alone, 1 torture is worse than any number of dust specks.</p>
<p>In fact, by demanding different axioms for the set of outcomes, or taking the &#8220;set&#8221; of outcomes from objects of different categories, we can specify different meta-ethical systems.</p>
<p>For example:<br />
category of totally ordered abelian groups — utilitarianism: there&#8217;s always the right strategy of action, and outcomes accumulate;<br />
category of additive abelian groups with total order that are both subgroups and suborders of the real line — the kind of utilitarianism where &#8220;torture or dust specks&#8221; holds;<br />
category of posets — deontology: not all situations have the most moral strategy of action, but some actions are better than others;<br />
category of preorders — non-cognitivism: some actions are better than others, but it doesn&#8217;t need to be consistent;<br />
category of sets — moral nihilism: anything goes.</p>
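<p>A minimal sketch of a codomain in which 1 torture outweighs any number of dust specks is a lexicographic ordering on pairs; the class and field names below are purely illustrative, not from any particular library:</p>
<pre><code># Illustrative sketch: a utility codomain with addition and comparison
# in which no number of dust specks ever adds up to one torture.
# An outcome is a (tortures, specks) pair, compared lexicographically.

class Outcome:
    def __init__(self, tortures=0, specks=0):
        self.tortures, self.specks = tortures, specks

    def __add__(self, other):   # aggregate outcomes
        return Outcome(self.tortures + other.tortures,
                       self.specks + other.specks)

    def __lt__(self, other):    # "less" means "less bad"; tortures dominate
        return ((self.tortures, self.specks) &lt;
                (other.tortures, other.specks))

torture = Outcome(tortures=1)
speck = Outcome(specks=1)

assert torture &lt; torture + speck          # adding one speck makes things worse
assert Outcome(specks=10**100) &lt; torture  # but specks never outweigh a torture
</code></pre>
<p>Because the comparison is lexicographic, the speck coordinate only breaks ties between outcomes with equally many tortures, which is exactly the &#8220;1 torture is worse than any number of dust specks&#8221; postulate, while the axiom above (for any A and B there exists an n with A*n &gt; B) fails.</p>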
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '195133', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Lightning Round &#8211; 2015/04/02 &#124; Free Northerner</title>
		<link>http://slatestarcodex.com/2015/03/26/high-energy-ethics/#comment-195085</link>
		<dc:creator><![CDATA[Lightning Round &#8211; 2015/04/02 &#124; Free Northerner]]></dc:creator>
		<pubDate>Thu, 02 Apr 2015 05:03:23 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3590#comment-195085</guid>
		<description><![CDATA[[&#8230;] The necessity of  extremes in thought experiments. [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] The necessity of  extremes in thought experiments. [&#8230;]</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '195085', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Aaron Brown</title>
		<link>http://slatestarcodex.com/2015/03/26/high-energy-ethics/#comment-195058</link>
		<dc:creator><![CDATA[Aaron Brown]]></dc:creator>
		<pubDate>Thu, 02 Apr 2015 00:12:12 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3590#comment-195058</guid>
		<description><![CDATA[&lt;blockquote&gt;An emergency is an unchosen, unexpected event, limited in time, that creates conditions under which human survival is impossible—such as a flood, an earthquake, a fire, a shipwreck. In an emergency situation, men&#039;s primary goal is to combat the disaster, escape the danger and restore normal conditions (to reach dry land, to put out the fire, etc.).

[...]

The principle that one should help men in an emergency cannot be extended to regard all human suffering as an emergency and to turn the misfortune of some into a first mortgage on the lives of others.&lt;/blockquote&gt;

-- &lt;a href=&quot;http://aynrandlexicon.com/lexicon/emergencies.html&quot; rel=&quot;nofollow&quot;&gt;Ayn Rand, &quot;The Ethics of Emergencies&quot;, &lt;i&gt;The Virtue of Selfishness&lt;/i&gt;&lt;/a&gt;]]></description>
		<content:encoded><![CDATA[<blockquote><p>An emergency is an unchosen, unexpected event, limited in time, that creates conditions under which human survival is impossible—such as a flood, an earthquake, a fire, a shipwreck. In an emergency situation, men&#8217;s primary goal is to combat the disaster, escape the danger and restore normal conditions (to reach dry land, to put out the fire, etc.).</p>
<p>[&#8230;]</p>
<p>The principle that one should help men in an emergency cannot be extended to regard all human suffering as an emergency and to turn the misfortune of some into a first mortgage on the lives of others.</p></blockquote>
<p>&#8212; <a href="http://aynrandlexicon.com/lexicon/emergencies.html" rel="nofollow">Ayn Rand, &#8220;The Ethics of Emergencies&#8221;, <i>The Virtue of Selfishness</i></a></p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '195058', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Princess Stargirl</title>
		<link>http://slatestarcodex.com/2015/03/26/high-energy-ethics/#comment-195032</link>
		<dc:creator><![CDATA[Princess Stargirl]]></dc:creator>
		<pubDate>Wed, 01 Apr 2015 21:29:09 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3590#comment-195032</guid>
		<description><![CDATA[&quot;What Rand wrote is that, however, life is not to be modeled as though we were always in that lifeboat – that the lifeboat situation is inapplicable to 99.99% of our human interaction.&quot;

Having read some of Rand&#039;s &quot;technical&quot; work (not novels) I do not recall this. Where does she make this argument? For example she seems to believe that &quot;rational agents&#039; goals do not conflict&quot; is a universal based on this:

&quot;There are no conflicts of interests among rational men. . . A man&#039;s &#039;interests&#039; depend on the kind of goals he chooses to pursue, his choice of goals depends on his desires, his desires depend on his values -- and, for a rational man, his values depend on the judgment of his mind. . . A rational man never holds a desire or pursues a goal which cannot be achieved directly or indirectly [i.e., by trading] by his own effort. . . He never seeks or desires the unearned. . . The mere fact that two men desire the same job does not constitute proof that either of them is entitled to it or deserves it, and that his interests are damaged if he does not obtain it. (The Virtue of Selfishness, p. 50-6)&quot;]]></description>
		<content:encoded><![CDATA[<p>&#8220;What Rand wrote is that, however, life is not to be modeled as though we were always in that lifeboat – that the lifeboat situation is inapplicable to 99.99% of our human interaction.&#8221;</p>
<p>Having read some of Rand&#8217;s &#8220;technical&#8221; work (not novels) I do not recall this. Where does she make this argument? For example she seems to believe that &#8220;rational agents&#8217; goals do not conflict&#8221; is a universal based on this:</p>
<p>&#8220;There are no conflicts of interests among rational men. . . A man&#8217;s &#8216;interests&#8217; depend on the kind of goals he chooses to pursue, his choice of goals depends on his desires, his desires depend on his values &#8212; and, for a rational man, his values depend on the judgment of his mind. . . A rational man never holds a desire or pursues a goal which cannot be achieved directly or indirectly [i.e., by trading] by his own effort. . . He never seeks or desires the unearned. . . The mere fact that two men desire the same job does not constitute proof that either of them is entitled to it or deserves it, and that his interests are damaged if he does not obtain it. (The Virtue of Selfishness, p. 50-6)&#8221;</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '195032', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Emm</title>
		<link>http://slatestarcodex.com/2015/03/26/high-energy-ethics/#comment-194832</link>
		<dc:creator><![CDATA[Emm]]></dc:creator>
		<pubDate>Wed, 01 Apr 2015 07:33:26 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3590#comment-194832</guid>
		<description><![CDATA[Agree with Scott that, although Phil Robertson is no philosopher, what he’s doing is (at least when written down, I didn’t watch the video) largely in line with one way moral philosophy is frequently discussed. At least, I recall several college philosophy classes where teachers made students take philosophical arguments they were prone to laugh off in a similar way (“You think there is no morality? Well, what about Hitler? There was nothing ‘wrong’ with that?”). Effective or no, it’s a pretty standard approach.

I actually thought the “Ha! Gotcha!” tone helped the example by making it clear that the hypothetical atheist knew that the brutal rape/murder/castration was for a stupid reason done by horrible malicious people, which would have the tendency of awakening the moral senses, and combined with the graphic description would help any atheists listening picture it and understand the horror of the situation. Since the thrust of the argument is to point out that the atheist doesn’t act on his belief that there is no objective truth in a situation of sufficient subjective awfulness, inducing moral poignancy into the situation by recreating on a small verbal scale the brutality of the event doesn&#039;t just help the argument, it in a way IS the argument if the listener is an atheist (this is not to say it is in good taste or a strong argument – I think it is neither*).

However, the usefulness of this kind of approach when dealing with questions of Objective Morality (by which I mean strict moral ethical standards that are True, something along the lines of “It’s Wrong to kill another human being” or “It’s Wrong to commit adultery” or “It is Right to maximize utility” rather than “I’d prefer to live in a society where killing was illegal” or “I feel like I would prefer to live in a society that maximized utility.”) is limited. The way an extreme example argument works is that it puts someone&#039;s theory of morality in a situation where the moral philosophy dictates an answer that is repugnant to the individual’s moral sense/intuition/whatever you want to call it (I’ll stick to ‘intuition’ for the remainder of this comment). The moral theory says you make Choice X but something inside you says Choice Y is the moral one, therefore the moral theory is suspect.

But the problem here is that, if you knew your moral intuition was 100% reliable, you wouldn’t need a moral theory. In fact, knowing what is Objectively Right is only valuable when the Objectively Correct thing contradicts your moral intuition. If adherence to your moral intuition is the standard by which a moral theory is judged and your moral intuition isn’t 100% spot-on (and it&#039;s unlikely to be, given the enormous variation between individuals and over time), under that standard you will reject the Objective Truth if you ever come into contact with it because it will violate your intuition in at least some respects.

For example, if you accept the tenets of Catholicism, it follows that birth control is a sin… no matter what you, personally, feel about it. Since you have accepted that your moral intuition is Wrong in some cases and Catholicism is Right, you cannot then backtrack and use your moral intuition as a way to argue against Catholicism. To flip to the other side, a lot of people throughout history felt that there was something just Wrong with homosexuality. They would have been at risk of discarding any moral theory that said punishing homosexuality was wrong… on the grounds of “well, I’m not comfortable with that.” Suppose it is Objectively Wrong to punish people for any sex act. If someone argued this in the Middle Ages, a plausible extreme scenario someone could have presented would have been, “Well, that means we can’t punish homosexuality… therefore this strange new morality must be Wrong.”

If an axe murderer asks where your friend is and you think Kant is right, you tell him where your friend is. The fact that every fiber in your being is screaming at you to not do that doesn’t make telling him the Wrong choice. The fact that a Kantian would have to tell the axe murderer where his friend is doesn’t mean the Kantian made the Wrong decision… unless Kant actually was Wrong. Similarly, evaluating this question on utilitarian grounds doesn’t make Kant wrong either, because all you’re doing is judging one moral theory based on how closely its outcomes map to the outcomes of a differing moral theory. 

Not to say there isn’t merit in the morality-by-example approach. It can help you understand your own moral intuitions better and see contradictions in your thinking. All I’m saying is that the felt moral absurdities the examples are meant to elicit are not proof that the absurd outcome is Wrong. So if the atheist feels the existence of God or a sense that he is being Wronged in the moments of his family&#039;s brutal murder, that doesn&#039;t necessarily mean the atheist isn&#039;t truly amoral or show that Morality Exists, because that experience is also compatible with the atheist feeling an incorrect intuition, something very likely compatible with his professed world views.

*Robertson’s example is a special case of bad because, if he’s serious, he’s trying to prove the existence of an Objective Truth superior to subjective truth (i.e. God) simply by showing the experience of subjective truth, an experience nobody seriously disputes. Most of the extreme scenario cases take the form of ‘If you follow your moral system, you have to make Repulsive Choice X when obviously Nice Choice Y is so much better, therefore your moral system is wrong,’ which at least forces an actual choice and could lead to fruitful contradiction.]]></description>
		<content:encoded><![CDATA[<p>Agree with Scott that, although Phil Robertson is no philosopher, what he’s doing is (at least when written down, I didn’t watch the video) largely in line with one way moral philosophy is frequently discussed. At least, I recall several college philosophy classes where teachers made students take philosophical arguments they were prone to laugh off in a similar way (“You think there is no morality? Well, what about Hitler? There was nothing ‘wrong’ with that?”). Effective or no, it’s a pretty standard approach.</p>
<p>I actually thought the “Ha! Gotcha!” tone helped the example by making it clear that the hypothetical atheist knew that the brutal rape/murder/castration was for a stupid reason done by horrible malicious people, which would have the tendency of awakening the moral senses, and combined with the graphic description would help any atheists listening picture it and understand the horror of the situation. Since the thrust of the argument is to point out that the atheist doesn’t act on his belief that there is no objective truth in a situation of sufficient subjective awfulness, inducing moral poignancy into the situation by recreating on a small verbal scale the brutality of the event doesn&#8217;t just help the argument, it in a way IS the argument if the listener is an atheist (this is not to say it is in good taste or a strong argument – I think it is neither*).</p>
<p>However, the usefulness of this kind of approach when dealing with questions of Objective Morality (by which I mean strict moral ethical standards that are True, something along the lines of “It’s Wrong to kill another human being” or “It’s Wrong to commit adultery” or “It is Right to maximize utility” rather than “I’d prefer to live in a society where killing was illegal” or “I feel like I would prefer to live in a society that maximized utility.”) is limited. The way an extreme example argument works is that it puts someone&#8217;s theory of morality in a situation where the moral philosophy dictates an answer that is repugnant to the individual’s moral sense/intuition/whatever you want to call it (I’ll stick to ‘intuition’ for the remainder of this comment). The moral theory says you make Choice X but something inside you says Choice Y is the moral one, therefore the moral theory is suspect.</p>
<p>But the problem here is that, if you knew your moral intuition was 100% reliable, you wouldn’t need a moral theory. In fact, knowing what is Objectively Right is only valuable when the Objectively Correct thing contradicts your moral intuition. If adherence to your moral intuition is the standard by which a moral theory is judged and your moral intuition isn’t 100% spot-on (and it&#8217;s unlikely to be, given the enormous variation between individuals and over time), under that standard you will reject the Objective Truth if you ever come into contact with it because it will violate your intuition in at least some respects.</p>
<p>For example, if you accept the tenets of Catholicism, it follows that birth control is a sin… no matter what you, personally, feel about it. Since you have accepted that your moral intuition is Wrong in some cases and Catholicism is Right, you cannot then backtrack and use your moral intuition as a way to argue against Catholicism. To flip to the other side, a lot of people throughout history felt that there was something just Wrong with homosexuality. They would have been at risk of discarding any moral theory that said punishing homosexuality was wrong… on the grounds of “well, I’m not comfortable with that.” Suppose it is Objectively Wrong to punish people for any sex act. If someone argued this in the Middle Ages, a plausible extreme scenario someone could have presented would have been, “Well, that means we can’t punish homosexuality… therefore this strange new morality must be Wrong.”</p>
<p>If an axe murderer asks where your friend is and you think Kant is right, you tell him where your friend is. The fact that every fiber in your being is screaming at you to not do that doesn’t make telling him the Wrong choice. The fact that a Kantian would have to tell the axe murderer where his friend is doesn’t mean the Kantian made the Wrong decision… unless Kant actually was Wrong. Similarly, evaluating this question on utilitarian grounds doesn’t make Kant wrong either, because all you’re doing is judging one moral theory based on how closely its outcomes map to the outcomes of a differing moral theory. </p>
<p>Not to say there isn’t merit in the morality-by-example approach. It can help you understand your own moral intuitions better and see contradictions in your thinking. All I’m saying is that the felt moral absurdities the examples are meant to elicit are not proof that the absurd outcome is Wrong. So if the atheist feels the existence of God or a sense that he is being Wronged in the moments of his family&#8217;s brutal murder, that doesn&#8217;t necessarily mean the atheist isn&#8217;t truly amoral or show that Morality Exists, because that experience is also compatible with the atheist feeling an incorrect intuition, something very likely compatible with his professed world views.</p>
<p>*Robertson’s example is a special case of bad because, if he’s serious, he’s trying to prove the existence of an Objective Truth superior to subjective truth (i.e. God) simply by showing the experience of subjective truth, an experience nobody seriously disputes. Most of the extreme scenario cases take the form of ‘If you follow your moral system, you have to make Repulsive Choice X when obviously Nice Choice Y is so much better, therefore your moral system is wrong,’ which at least forces an actual choice and could lead to fruitful contradiction.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '194832', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
</channel>
</rss>
