<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: OT9: The Thread Pirate Roberts</title>
	<atom:link href="http://slatestarcodex.com/2014/11/22/ot9-the-thread-pirate-roberts/feed/" rel="self" type="application/rss+xml" />
	<link>http://slatestarcodex.com/2014/11/22/ot9-the-thread-pirate-roberts/</link>
	<description>In a mad world, all blogging is psychiatry blogging</description>
	<lastBuildDate>Fri, 24 Jul 2015 23:08:50 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.2.3</generator>
	<item>
		<title>By: Susebron</title>
		<link>http://slatestarcodex.com/2014/11/22/ot9-the-thread-pirate-roberts/#comment-165497</link>
		<dc:creator><![CDATA[Susebron]]></dc:creator>
		<pubDate>Fri, 12 Dec 2014 02:28:53 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3268#comment-165497</guid>
		<description><![CDATA[No, you didn&#039;t say that. &lt;i&gt;I&lt;/i&gt; said that. It depends on what you want out of a moral system. Do you want a shining ideal, which in theory could be implemented universally, or do you want something for yourself that provides a reasonable prescription for your actions? It&#039;s a big question, and there&#039;s no right or wrong answer.]]></description>
		<content:encoded><![CDATA[<p>No, you didn&#8217;t say that. <i>I</i> said that. It depends on what you want out of a moral system. Do you want a shining ideal, which in theory could be implemented universally, or do you want something for yourself that provides a reasonable prescription for your actions? It&#8217;s a big question, and there&#8217;s no right or wrong answer.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: onyomi</title>
		<link>http://slatestarcodex.com/2014/11/22/ot9-the-thread-pirate-roberts/#comment-165495</link>
		<dc:creator><![CDATA[onyomi]]></dc:creator>
		<pubDate>Fri, 12 Dec 2014 02:15:53 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3268#comment-165495</guid>
		<description><![CDATA[I didn&#039;t say letting people die was immoral; I said killing people (or directly causing their deaths) was immoral. Not letting people die (assuming preventing their deaths imposes some cost on you) is supererogatory.

I would say that rather than including built-in acceptance of failures, a well-designed moral &quot;system&quot; might be one that anyone can reasonably follow without being a saint.

That said, since I am convinced by, and am arguing in favor of, the moral realist position, it doesn&#039;t really make sense for me to even say &quot;my&quot; moral system is x and &quot;your&quot; moral system is y. Morality isn&#039;t reasonable or unreasonable or well-designed or poorly designed; it just is. At least, that seems to be an unavoidable consequence of maintaining that it is objective.
		<content:encoded><![CDATA[<p>I didn&#8217;t say letting people die was immoral; I said killing people (or directly causing their deaths) was immoral. Not letting people die (assuming preventing their deaths imposes some cost on you) is supererogatory.</p>
<p>I would say that rather than including built-in acceptance of failures, a well-designed moral &#8220;system&#8221; might be one that anyone can reasonably follow without being a saint.</p>
<p>That said, since I am convinced by, and am arguing in favor of, the moral realist position, it doesn&#8217;t really make sense for me to even say &#8220;my&#8221; moral system is x and &#8220;your&#8221; moral system is y. Morality isn&#8217;t reasonable or unreasonable or well-designed or poorly designed; it just is. At least, that seems to be an unavoidable consequence of maintaining that it is objective.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Susebron</title>
		<link>http://slatestarcodex.com/2014/11/22/ot9-the-thread-pirate-roberts/#comment-165494</link>
		<dc:creator><![CDATA[Susebron]]></dc:creator>
		<pubDate>Fri, 12 Dec 2014 00:24:13 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3268#comment-165494</guid>
		<description><![CDATA[If it was me getting harvested, or me on the tracks, it would be better for me to die. Yes, by not donating all my money to malaria eradication I&#039;m letting people die. On the other hand, my moral system doesn&#039;t have to tell me that I&#039;m good and perfect all the time. If your moral system does, then either you are a saint or you should perhaps reconsider your moral system.]]></description>
		<content:encoded><![CDATA[<p>If it was me getting harvested, or me on the tracks, it would be better for me to die. Yes, by not donating all my money to malaria eradication I&#8217;m letting people die. On the other hand, my moral system doesn&#8217;t have to tell me that I&#8217;m good and perfect all the time. If your moral system does, then either you are a saint or you should perhaps reconsider your moral system.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: onyomi</title>
		<link>http://slatestarcodex.com/2014/11/22/ot9-the-thread-pirate-roberts/#comment-165490</link>
		<dc:creator><![CDATA[onyomi]]></dc:creator>
		<pubDate>Thu, 11 Dec 2014 23:21:25 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3268#comment-165490</guid>
		<description><![CDATA[My personal take on the trolley problem is that it IS wrong to flip the switch, because by flipping the switch you are KILLING one person, whereas in not flipping the switch you are LETTING five people die. 

To me, there is a vast ethical gulf separating &quot;killing&quot; and &quot;letting die.&quot; We all let people die all the time. Every time you don&#039;t donate all your money to malaria eradication efforts you are letting people die. It may be virtuous in a supererogatory way to donate all your money, but it&#039;s not unethical not to.

Related question: is it okay to steal the life savings of some random first-world person and donate it all to malaria eradication efforts? It seems to me obviously wrong, even though it produces &quot;better&quot; results: the first-world guy can&#039;t retire but is still alive, while thousands of third-world children survive. Yet it&#039;s still wrong.

You can give all of your own money to fight malaria--this is equivalent to flipping the switch so the train hits you in the version of the problem that allows the suicide option. 

Scott tends to take it to the extreme to prove the point: &quot;if you&#039;re not okay killing one to save five, what about one to save a million, or one to save a billion? etc.&quot; This is a good point, but I think it reveals the problematic aspect of his move to collapse &quot;right and wrong&quot; and &quot;good and bad.&quot;

I&#039;m pretty sure we can all think of an example where an ethically wrong action might produce a &quot;good&quot; result--as in, result in a world which everyone would agree was better. This means these two things are distinct. Maybe it&#039;s a GOOD idea to kill one innocent person if, in so doing, you can save one million innocent people, but that doesn&#039;t make it right.

If this seems counterintuitive, I&#039;d suggest thinking of it this way: imagine YOU are the one whose organs are going to be harvested to save many lives, that YOU are the one person sitting on the train track when a bystander switches the lines so that the train hits you instead of five others on another track. 

In such a case, I would not only feel that I was a victim of murder, but that I would even be justified in killing the organ harvester or switch flipper to save my life. It would be virtuous of me to voluntarily OFFER my life to save a greater number, but to force me to give it is always wrong, if not always bad in terms of results.]]></description>
		<content:encoded><![CDATA[<p>My personal take on the trolley problem is that it IS wrong to flip the switch, because by flipping the switch you are KILLING one person, whereas in not flipping the switch you are LETTING five people die. </p>
<p>To me, there is a vast ethical gulf separating &#8220;killing&#8221; and &#8220;letting die.&#8221; We all let people die all the time. Every time you don&#8217;t donate all your money to malaria eradication efforts you are letting people die. It may be virtuous in a supererogatory way to donate all your money, but it&#8217;s not unethical not to.</p>
<p>Related question: is it okay to steal the life savings of some random first-world person and donate it all to malaria eradication efforts? It seems to me obviously wrong, even though it produces &#8220;better&#8221; results: the first-world guy can&#8217;t retire but is still alive, while thousands of third-world children survive. Yet it&#8217;s still wrong.</p>
<p>You can give all of your own money to fight malaria&#8211;this is equivalent to flipping the switch so the train hits you in the version of the problem that allows the suicide option. </p>
<p>Scott tends to take it to the extreme to prove the point: &#8220;if you&#8217;re not okay killing one to save five, what about one to save a million, or one to save a billion? etc.&#8221; This is a good point, but I think it reveals the problematic aspect of his move to collapse &#8220;right and wrong&#8221; and &#8220;good and bad.&#8221;</p>
<p>I&#8217;m pretty sure we can all think of an example where an ethically wrong action might produce a &#8220;good&#8221; result&#8211;as in, result in a world which everyone would agree was better. This means these two things are distinct. Maybe it&#8217;s a GOOD idea to kill one innocent person if, in so doing, you can save one million innocent people, but that doesn&#8217;t make it right.</p>
<p>If this seems counterintuitive, I&#8217;d suggest thinking of it this way: imagine YOU are the one whose organs are going to be harvested to save many lives, that YOU are the one person sitting on the train track when a bystander switches the lines so that the train hits you instead of five others on another track. </p>
<p>In such a case, I would not only feel that I was a victim of murder, but that I would even be justified in killing the organ harvester or switch flipper to save my life. It would be virtuous of me to voluntarily OFFER my life to save a greater number, but to force me to give it is always wrong, if not always bad in terms of results.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mike</title>
		<link>http://slatestarcodex.com/2014/11/22/ot9-the-thread-pirate-roberts/#comment-165482</link>
		<dc:creator><![CDATA[Mike]]></dc:creator>
		<pubDate>Thu, 11 Dec 2014 20:28:27 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3268#comment-165482</guid>
		<description><![CDATA[I&#039;m really happy to see someone talking about Michael Huemer on here. I too am desperate for Scott to delve into Huemer and give us his take. I think Huemer&#039;s got it right, and Scott would be convinced by him. He&#039;s already hinted at the fact that there&#039;s something &lt;a href=&quot;http://slatestarcodex.com/2013/04/08/whose-utilitarianism/&quot; rel=&quot;nofollow&quot;&gt;intuitively wrong&lt;/a&gt; about Utilitarianism.
Unlike other times where our intuition is wrong, the evidence for Utilitarianism being the best way to structure the world does not seem to override that intuition. So, barring something far more convincing (that perhaps I simply haven&#039;t seen yet), Utilitarianism seems more like bullet-biting than the Repugnant Conclusion that Huemer accepts.]]></description>
		<content:encoded><![CDATA[<p>I&#8217;m really happy to see someone talking about Michael Huemer on here. I too am desperate for Scott to delve into Huemer and give us his take. I think Huemer&#8217;s got it right, and Scott would be convinced by him. He&#8217;s already hinted at the fact that there&#8217;s something <a href="http://slatestarcodex.com/2013/04/08/whose-utilitarianism/" rel="nofollow">intuitively wrong</a> about Utilitarianism.<br />
Unlike other times where our intuition is wrong, the evidence for Utilitarianism being the best way to structure the world does not seem to override that intuition. So, barring something far more convincing (that perhaps I simply haven&#8217;t seen yet), Utilitarianism seems more like bullet-biting than the Repugnant Conclusion that Huemer accepts.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: peterdjones</title>
		<link>http://slatestarcodex.com/2014/11/22/ot9-the-thread-pirate-roberts/#comment-162692</link>
		<dc:creator><![CDATA[peterdjones]]></dc:creator>
		<pubDate>Mon, 01 Dec 2014 22:52:04 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3268#comment-162692</guid>
		<description><![CDATA[&gt; Skepticism about the senses isn’t the only logical consequence of the Evil Demon. You don’t seem to have even considered skepticism about memory. If we can’t trust our memories (or our reasoning for that matter), how can we trust that even illusory experiences will remain the same?

If they do stay the same, then there is an instrumentally rational benefit in treating them as predictable. There is no benefit in treating them as unpredictable; in fact, there is little you can do about unpredictability. So there is a net benefit in treating them as predictable.

You seem to be implicitly assuming that only a guaranteed benefit will do.

&gt; There is a difference in my head between Truths of Faith and Truths of Reason.

There is a difference in my head between epistemological reason and instrumental reason.

&gt; I thought it would be obvious from my position which are which. As I said earlier, lack of groundedness means that anti-skeptical truth claims are basically self-refuting: if you assume them, you get to their contrary.

You do seem to be under the impression that insufficient justification, or circular justification, in an argument implies the falsity of the conclusion: it doesn&#039;t. &quot;2+2=4, therefore 2+2=4&quot; is not a valid way of arguing that &quot;2+2=4&quot;, but the conclusion is true.

Sceptical arguments, however, can be, and often are, self-undermining, because the sceptic argues against kinds of justification that they need to make their own points.

&gt; This is because the wrongness of circular arguments

You need to understand the soundness/validity distinction.

&gt; and the wrongness of infinitist arguments are clear by the fact they are no better than contrary arguments of the same kind, so believing them legitimate would lead to a contradiction. Reasoning is always about something outside the brain. “Reasoning about reasoning” is either (depending on whether a faith-based or rational perspective is taken) reasoning about the brain,

Then reasoning is sometimes about the brain.

&gt; or an attempt to discern a way to know something about external reality. “Instrumental rationality” with no connection to actual reality is not worthy of the name.

Well, it can keep you alive...

&gt; My two roads example was meant to be a clarification, not an argument. Deciding to act AS IF something is true without evidence might be rational, but BELIEVING said thing is true without evidence never is.

Never is instrumentally rational, or never is epistemically rational?

&gt; To ignore correspondence reality is to ignore all sorts of dangerous possibilities (that you might ‘blink’ out of existence at a moment’s notice, for example) implied by skepticism. It is irrational. I’m going to devote a new post to axioms.

So what am I supposed to do with that multitude of possibilities? The Matrix Lords might wipe me out for not eating fish on Friday, but they might equally punish me for not eating bananas. What can I do with a multitude of unknown possibilities? As far as I can see, they sum to &quot;don&#039;t be too sure about anything&quot;.]]></description>
		<content:encoded><![CDATA[<p>&gt; Skepticism about the senses isn’t the only logical consequence of the Evil Demon. You don’t seem to have even considered skepticism about memory. If we can’t trust our memories (or our reasoning for that matter), how can we trust that even illusory experiences will remain the same?</p>
<p>If they do stay the same, then there is an instrumentally rational benefit in treating them as predictable. There is no benefit in treating them as unpredictable; in fact, there is little you can do about unpredictability. So there is a net benefit in treating them as predictable.</p>
<p>You seem to be implicitly assuming that only a guaranteed benefit will do.</p>
<p>&gt; There is a difference in my head between Truths of Faith and Truths of Reason.</p>
<p>There is a difference in my head between epistemological reason and instrumental reason.</p>
<p>&gt; I thought it would be obvious from my position which are which. As I said earlier, lack of groundedness means that anti-skeptical truth claims are basically self-refuting: if you assume them, you get to their contrary.</p>
<p>You do seem to be under the impression that insufficient justification, or circular justification, in an argument implies the falsity of the conclusion: it doesn&#8217;t. &#8220;2+2=4, therefore 2+2=4&#8221; is not a valid way of arguing that &#8220;2+2=4&#8221;, but the conclusion is true.</p>
<p>Sceptical arguments, however, can be, and often are, self-undermining, because the sceptic argues against kinds of justification that they need to make their own points.</p>
<p>&gt; This is because the wrongness of circular arguments</p>
<p>You need to understand the soundness/validity distinction.</p>
<p>&gt; and the wrongness of infinitist arguments are clear by the fact they are no better than contrary arguments of the same kind, so believing them legitimate would lead to a contradiction. Reasoning is always about something outside the brain. “Reasoning about reasoning” is either (depending on whether a faith-based or rational perspective is taken) reasoning about the brain,</p>
<p>Then reasoning is sometimes about the brain.</p>
<p>&gt; or an attempt to discern a way to know something about external reality. “Instrumental rationality” with no connection to actual reality is not worthy of the name.</p>
<p>Well, it can keep you alive&#8230;</p>
<p>&gt; My two roads example was meant to be a clarification, not an argument. Deciding to act AS IF something is true without evidence might be rational, but BELIEVING said thing is true without evidence never is.</p>
<p>Never is instrumentally rational, or never is epistemically rational?</p>
<p>&gt; To ignore correspondence reality is to ignore all sorts of dangerous possibilities (that you might ‘blink’ out of existence at a moment’s notice, for example) implied by skepticism. It is irrational. I’m going to devote a new post to axioms.</p>
<p>So what am I supposed to do with that multitude of possibilities? The Matrix Lords might wipe me out for not eating fish on Friday, but they might equally punish me for not eating bananas. What can I do with a multitude of unknown possibilities? As far as I can see, they sum to &#8220;don&#8217;t be too sure about anything&#8221;.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Susebron</title>
		<link>http://slatestarcodex.com/2014/11/22/ot9-the-thread-pirate-roberts/#comment-162506</link>
		<dc:creator><![CDATA[Susebron]]></dc:creator>
		<pubDate>Sun, 30 Nov 2014 21:13:21 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3268#comment-162506</guid>
		<description><![CDATA[Well, one formulation of the trolley problem is that you&#039;re the person who flips the switch. In that case, you do know the outcome of your actions.]]></description>
		<content:encoded><![CDATA[<p>Well, one formulation of the trolley problem is that you&#8217;re the person who flips the switch. In that case, you do know the outcome of your actions.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: houseboatonstyx</title>
		<link>http://slatestarcodex.com/2014/11/22/ot9-the-thread-pirate-roberts/#comment-162502</link>
		<dc:creator><![CDATA[houseboatonstyx]]></dc:creator>
		<pubDate>Sun, 30 Nov 2014 20:00:42 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3268#comment-162502</guid>
		<description><![CDATA[&lt;I&gt;And I think this is the most truly relevant heuristic: “always do what is right unless you can achieve a vastly disproportionate good by doing something that is just a tiny bit wrong…but you are probably a terrible judge of when it is appropriate to make this call, so you should probably mostly stick to doing what is right.”&lt;/I&gt;

+ like 

Perhaps popularly known as &quot;Honesty/etc is the best policy&quot;. &#039;Policy&#039; seems very accurate here; usually applied routinely but subject to exception for good reason. It seems a good answer to the &#039;fat man problem&#039;: you can&#039;t be reasonably sure of the result, or of your own judgement, so better follow the usual policy/reflex of not causing certain harm for uncertain good.

Elsewhere in this thread, the term &#039;lesser evil&#039; might fit in, even if to be dismissed.]]></description>
		<content:encoded><![CDATA[<p><i>And I think this is the most truly relevant heuristic: “always do what is right unless you can achieve a vastly disproportionate good by doing something that is just a tiny bit wrong…but you are probably a terrible judge of when it is appropriate to make this call, so you should probably mostly stick to doing what is right.”</i></p>
<p>+ like </p>
<p>Perhaps popularly known as &#8220;Honesty/etc is the best policy&#8221;. &#8216;Policy&#8217; seems very accurate here; usually applied routinely but subject to exception for good reason. It seems a good answer to the &#8216;fat man problem&#8217;: you can&#8217;t be reasonably sure of the result, or of your own judgement, so better follow the usual policy/reflex of not causing certain harm for uncertain good.</p>
<p>Elsewhere in this thread, the term &#8216;lesser evil&#8217; might fit in, even if to be dismissed.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: peterdjones</title>
		<link>http://slatestarcodex.com/2014/11/22/ot9-the-thread-pirate-roberts/#comment-162485</link>
		<dc:creator><![CDATA[peterdjones]]></dc:creator>
		<pubDate>Sun, 30 Nov 2014 16:39:44 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3268#comment-162485</guid>
		<description><![CDATA[One way of being secure in power is to manipulate public opinion into not wanting to overthrow you. That&#039;s why monarchies tend to go with official state religions that preach that His Maj. was placed on the throne by God.]]></description>
		<content:encoded><![CDATA[<p>One way of being secure in power is to manipulate public opinion into not wanting to overthrow you. That&#8217;s why monarchies tend to go with official state religions that preach that His Maj. was placed on the throne by God.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Anonymous</title>
		<link>http://slatestarcodex.com/2014/11/22/ot9-the-thread-pirate-roberts/#comment-162483</link>
		<dc:creator><![CDATA[Anonymous]]></dc:creator>
		<pubDate>Sun, 30 Nov 2014 16:23:08 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3268#comment-162483</guid>
		<description><![CDATA[Your main point is your first sentence, which is correct, but &quot;The Little Rock School board voted to integrate&quot; is extremely misleading. The board did not spontaneously choose to integrate, but merely to comply with the &lt;em&gt;Brown&lt;/em&gt; ruling. It negotiated an integration plan with the NAACP, in a failed attempt to avoid court.]]></description>
		<content:encoded><![CDATA[<p>Your main point is your first sentence, which is correct, but &#8220;The Little Rock School board voted to integrate&#8221; is extremely misleading. The board did not spontaneously choose to integrate, but merely to comply with the <em>Brown</em> ruling. It negotiated an integration plan with the NAACP, in a failed attempt to avoid court.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
