<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: The Cowpox of Doubt</title>
	<atom:link href="http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/feed/" rel="self" type="application/rss+xml" />
	<link>http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/</link>
	<description>In a mad world, all blogging is psychiatry blogging</description>
	<lastBuildDate>Sat, 25 Jul 2015 00:16:58 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.2.3</generator>
	<item>
		<title>By: The &#34;Flying Car Fallacy&#34; and Why It&#039;s Wrong - Stratexist</title>
		<link>http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/#comment-127257</link>
		<dc:creator><![CDATA[The &#34;Flying Car Fallacy&#34; and Why It&#039;s Wrong - Stratexist]]></dc:creator>
		<pubDate>Mon, 28 Jul 2014 20:39:47 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=1854#comment-127257</guid>
		<description><![CDATA[[&#8230;] a bad one. However, while it&#8217;s not a precise corollary, I think Scott Alexander&#8217;s Inoculation Effect comes into play here. Because there&#8217;s this very basic, obviously failed prediction about the [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] a bad one. However, while it&#8217;s not a precise corollary, I think Scott Alexander&#8217;s Inoculation Effect comes into play here. Because there&#8217;s this very basic, obviously failed prediction about the [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: b person</title>
		<link>http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/#comment-123351</link>
		<dc:creator><![CDATA[b person]]></dc:creator>
		<pubDate>Fri, 18 Jul 2014 11:30:56 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=1854#comment-123351</guid>
		<description><![CDATA[Nice read. I would say, belief is something you choose for yourself. and it doesn&#039;t really matter what that belief is, as long as you know that it&#039;s just your belief, and it doesn&#039;t mean anything else than that. Beliefs have nothing to do with science. science is about technology and progress, belief (for me) is about comfort and stress relief, maybe even balance. the problem for me is that people now hold their beliefs as science, and their science as beliefs.]]></description>
		<content:encoded><![CDATA[<p>Nice read. I would say, belief is something you choose for yourself. and it doesn&#8217;t really matter what that belief is, as long as you know that it&#8217;s just your belief, and it doesn&#8217;t mean anything else than that. Beliefs have nothing to do with science. science is about technology and progress, belief (for me) is about comfort and stress relief, maybe even balance. the problem for me is that people now hold their beliefs as science, and their science as beliefs.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Maybe You’re Wrong (Or: Exercises in Humility) &#124; Andrew Glidden</title>
		<link>http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/#comment-80575</link>
		<dc:creator><![CDATA[Maybe You’re Wrong (Or: Exercises in Humility) &#124; Andrew Glidden]]></dc:creator>
		<pubDate>Sat, 17 May 2014 17:57:20 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=1854#comment-80575</guid>
		<description><![CDATA[[&#8230;] no Schelling Fences, the more likely consequence of entertaining arguments involving them is simple ideological inoculation: when we are exposed to a weak argument, just as with a weak virus, we become more resistant to [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] no Schelling Fences, the more likely consequence of entertaining arguments involving them is simple ideological inoculation: when we are exposed to a weak argument, just as with a weak virus, we become more resistant to [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: nyan_sandwich</title>
		<link>http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/#comment-64519</link>
		<dc:creator><![CDATA[nyan_sandwich]]></dc:creator>
		<pubDate>Sat, 26 Apr 2014 16:17:11 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=1854#comment-64519</guid>
		<description><![CDATA[&gt;Creationism is a pretty widespread belief; In the realm of ethics, that it is a good idea to deny people certain rights based on pigmentation was an extremely common belief until recently, and denying people rights based on sexual orientation is still pretty common.

Be careful with that. One of those things is not like the others.

To be clear, the odd one is Creationism. It&#039;s a belief about how the world works, and it&#039;s very wrong.

The other two are what you might call social technology, which are judged on entirely different criteria. They are social constructs that assume facts about the real world and try to produce some (hopefully prosocial) results. What would the world have to be like for those things (for concreteness, voting restricted to whites, and marriage restricted to male/female pairs) to meet the same standard as other accepted social technology?

Are you sure you could debunk them if I presented an argument for them?

Also, taboo &quot;rights&quot;, it&#039;s generally a confusing concept.]]></description>
		<content:encoded><![CDATA[<p>&gt;Creationism is a pretty widespread belief; In the realm of ethics, that it is a good idea to deny people certain rights based on pigmentation was an extremely common belief until recently, and denying people rights based on sexual orientation is still pretty common.</p>
<p>Be careful with that. One of those things is not like the others.</p>
<p>To be clear, the odd one is Creationism. It&#8217;s a belief about how the world works, and it&#8217;s very wrong.</p>
<p>The other two are what you might call social technology, which are judged on entirely different criteria. They are social constructs that assume facts about the real world and try to produce some (hopefully prosocial) results. What would the world have to be like for those things (for concreteness, voting restricted to whites, and marriage restricted to male/female pairs) to meet the same standard as other accepted social technology?</p>
<p>Are you sure you could debunk them if I presented an argument for them?</p>
<p>Also, taboo &#8220;rights&#8221;, it&#8217;s generally a confusing concept.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: nyan_sandwich</title>
		<link>http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/#comment-64513</link>
		<dc:creator><![CDATA[nyan_sandwich]]></dc:creator>
		<pubDate>Sat, 26 Apr 2014 16:00:19 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=1854#comment-64513</guid>
		<description><![CDATA[&gt;So, what should I use as an example of a popular belief that is obviously incorrect to my listener but feels correct to large group of people [that the listener is aware of]? Those seem to do the job pretty well and I don’t see what alternatives you provide (unless you also think that we shouldn’t try to convince people that false beliefs don’t feel false from the inside – then I see a bigger problem)

Conservatism.

Moldbug does this really well, IMO. He&#039;s like &quot;look at this 50% of america that is totally deluded and bonkers they must be infected by some kind of exotic memetic virus&quot; but then instead of leaving it at that and acting smug about how much better us progressives are than those dumb conservatives, he immediately turns it around and says &quot;but what if progressivism is a bonkers meme-virus as well?&quot; and then goes on to spend the next hundred thousand words ripping your entire political philosophy a new asshole.

One comes out of that with a new appreciation for being really careful about possible memetic viruses. I think that pattern could make a pretty good intro to rationality.]]></description>
		<content:encoded><![CDATA[<p>&gt;So, what should I use as an example of a popular belief that is obviously incorrect to my listener but feels correct to large group of people [that the listener is aware of]? Those seem to do the job pretty well and I don’t see what alternatives you provide (unless you also think that we shouldn’t try to convince people that false beliefs don’t feel false from the inside – then I see a bigger problem)</p>
<p>Conservatism.</p>
<p>Moldbug does this really well, IMO. He&#8217;s like &#8220;look at this 50% of america that is totally deluded and bonkers they must be infected by some kind of exotic memetic virus&#8221; but then instead of leaving it at that and acting smug about how much better us progressives are than those dumb conservatives, he immediately turns it around and says &#8220;but what if progressivism is a bonkers meme-virus as well?&#8221; and then goes on to spend the next hundred thousand words ripping your entire political philosophy a new asshole.</p>
<p>One comes out of that with a new appreciation for being really careful about possible memetic viruses. I think that pattern could make a pretty good intro to rationality.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: anon</title>
		<link>http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/#comment-57132</link>
		<dc:creator><![CDATA[anon]]></dc:creator>
		<pubDate>Sun, 20 Apr 2014 03:17:28 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=1854#comment-57132</guid>
		<description><![CDATA[2 is interesting. I kind of like the idea of intentionally abstaining from politics. Rather than attempting to engage with politics cautiously it might be best to back away entirely. I think I&#039;ll try to do that a bit more, if not completely.]]></description>
		<content:encoded><![CDATA[<p>2 is interesting. I kind of like the idea of intentionally abstaining from politics. Rather than attempting to engage with politics cautiously it might be best to back away entirely. I think I&#8217;ll try to do that a bit more, if not completely.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Sniffnoy</title>
		<link>http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/#comment-57093</link>
		<dc:creator><![CDATA[Sniffnoy]]></dc:creator>
		<pubDate>Sun, 20 Apr 2014 02:38:56 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=1854#comment-57093</guid>
		<description><![CDATA[Douglas Knight has already made the basic point I wanted to, but I wanted to expand on it a bit more.

In short, you are pretty mixed-up about what people are talking about when they talk about utility functions and linearity.  It is indeed the case, as you say, that utility functions are not required to be linear in money, nor in quantity of any other good; and the direction and degree to which they deviate from linearity determine to what extent you are risk-averse or risk-seeking.

Utility functions are, however, required to be &quot;linear in probability&quot; -- except that really, the preceding statement is actually an abuse of notation, as I&#039;ll get to in a moment.  Otherwise you don&#039;t have a utility function; you just have a preference (pre-)ordering.  (Some people do use the phrase &quot;utility function&quot; in this way, but seeing as this does not seem to be the sense under discussion, I&#039;m just going to ignore it.  Also, to me it seems silly to introduce a function to the real numbers when all you wanted to talk about was a preordering.)

The Allais paradox is a violation of &quot;linearity in probability&quot;; it does not in any way rely on linearity in money.  Your comments about the latter are basically irrelevant.

(Although, I must disagree with your final comment that a utility function which is linear in money is insane and irrational -- by what means are you judging another agent&#039;s terminal values?  I mean, sure, it seems nutty for a person to &lt;i&gt;build&lt;/i&gt; such an agent, and I&#039;d certainly question its builder&#039;s rationality, but I&#039;m not sure it&#039;s meaningful to state that the agent itself is &quot;irrational&quot;.)

But back to the point -- I have so far not justified why utility should be &quot;linear in probability&quot;, but in the case of the Allais paradox, where the independence assumption of the VNM theorem is violated, the problem can be seen via a &lt;a href=&quot;http://lesswrong.com/lw/my/the_allais_paradox/&quot; rel=&quot;nofollow&quot;&gt;Dutch Book argument&lt;/a&gt;.

And indeed I&#039;m not going to give any justification for this beyond A. pointing out the VNM theorem and, B. making some comments on why (in my opinion) the VNM axioms make sense.  (At least, they make sense if you already accept the notion of measuring strength-of-belief with probability; otherwise you need something like &lt;a href=&quot;http://lesswrong.com/lw/5te/a_summary_of_savages_foundations_for_probability/&quot; rel=&quot;nofollow&quot;&gt;Savage&#039;s Theorem&lt;/a&gt;.  But you seem to already be assuming that, so I&#039;ll use the VNM framework.)

Well, OK -- you haven&#039;t done anything to challenge totality, transitivity, or the Archimedean property, so let&#039;s just talk about independence.  But if you accept totality and transitivity, then independence becomes easy to justify, because of exactly the sort of Dutch book argument Eliezer makes in his &lt;a href=&quot;http://lesswrong.com/lw/my/the_allais_paradox/&quot; rel=&quot;nofollow&quot;&gt;post&lt;/a&gt; on it!  So, really, I don&#039;t know where you&#039;re coming from here.

Now I should note here that I&#039;ve really basically been counterarguing rather than refuting; I haven&#039;t actually addressed your argument that this choice is consistent.  But you haven&#039;t really given one -- you haven&#039;t actually given an example of a utility function that will make this consistent; your calculations are just of the form &quot;well, it could turn out this way&quot;.  But as has already been noted, because this violates the independence axiom, there can be no such utility function -- at least, not with the definition of &quot;utility function&quot; that&#039;s actually under discussion.

Finally, a note on the &quot;abuse of notation&quot; issue I&#039;ve mentioned -- properly speaking, a utility function is a function (call it f) from consequences to real numbers, not a function (call it F) from gambles to real numbers.  Obviously one can induce F from f (by making F &quot;linear in probability&quot;, though &quot;affine&quot; might be a better word than &quot;linear&quot;).  But what makes f a utility function is that the F determined in this way actually reflects the agent&#039;s preferences.  If there were not required to be any relation between the f describing the agent&#039;s preferences among outcomes and the F describing the agent&#039;s preferences among gambles, what would be the point of talking about f?  It would be useless except under conditions of certainty.  (And why would we use functions to the real numbers rather than just pre-orderings?)  Of course, by the VNM theorem, if our agent satisfies the appropriate axioms, then f is not useless because it in fact determines F entirely, obviating any need for talking about F as a separate object -- hence why the simpler object f, not its extension F, is referred to as the &quot;utility function&quot;.  (Although the abuse of notation is still perfectly understandable obviously.)]]></description>
		<content:encoded><![CDATA[<p>Douglas Knight has already made the basic point I wanted to, but I wanted to expand on it a bit more.</p>
<p>In short, you are pretty mixed-up about what people are talking about when they talk about utility functions and linearity.  It is indeed the case, as you say, that utility functions are not required to be linear in money, nor in quantity of any other good; and the direction and degree to which they deviate from linearity determine to what extent you are risk-averse or risk-seeking.</p>
<p>Utility functions are, however, required to be &#8220;linear in probability&#8221; &#8212; except that really, the preceding statement is actually an abuse of notation, as I&#8217;ll get to in a moment.  Otherwise you don&#8217;t have a utility function; you just have a preference (pre-)ordering.  (Some people do use the phrase &#8220;utility function&#8221; in this way, but seeing as this does not seem to be the sense under discussion, I&#8217;m just going to ignore it.  Also, to me it seems silly to introduce a function to the real numbers when all you wanted to talk about was a preordering.)</p>
<p>The Allais paradox is a violation of &#8220;linearity in probability&#8221;; it does not in any way rely on linearity in money.  Your comments about the latter are basically irrelevant.</p>
<p>(Although, I must disagree with your final comment that a utility function which is linear in money is insane and irrational &#8212; by what means are you judging another agent&#8217;s terminal values?  I mean, sure, it seems nutty for a person to <i>build</i> such an agent, and I&#8217;d certainly question its builder&#8217;s rationality, but I&#8217;m not sure it&#8217;s meaningful to state that the agent itself is &#8220;irrational&#8221;.)</p>
<p>But back to the point &#8212; I have so far not justified why utility should be &#8220;linear in probability&#8221;, but in the case of the Allais paradox, where the independence assumption of the VNM theorem is violated, the problem can be seen via a <a href="http://lesswrong.com/lw/my/the_allais_paradox/" rel="nofollow">Dutch Book argument</a>.</p>
<p>And indeed I&#8217;m not going to give any justification for this beyond A. pointing out the VNM theorem and, B. making some comments on why (in my opinion) the VNM axioms make sense.  (At least, they make sense if you already accept the notion of measuring strength-of-belief with probability; otherwise you need something like <a href="http://lesswrong.com/lw/5te/a_summary_of_savages_foundations_for_probability/" rel="nofollow">Savage&#8217;s Theorem</a>.  But you seem to already be assuming that, so I&#8217;ll use the VNM framework.)</p>
<p>Well, OK &#8212; you haven&#8217;t done anything to challenge totality, transitivity, or the Archimedean property, so let&#8217;s just talk about independence.  But if you accept totality and transitivity, then independence becomes easy to justify, because of exactly the sort of Dutch book argument Eliezer makes in his <a href="http://lesswrong.com/lw/my/the_allais_paradox/" rel="nofollow">post</a> on it!  So, really, I don&#8217;t know where you&#8217;re coming from here.</p>
<p>Now I should note here that I&#8217;ve really basically been counterarguing rather than refuting; I haven&#8217;t actually addressed your argument that this choice is consistent.  But you haven&#8217;t really given one &#8212; you haven&#8217;t actually given an example of a utility function that will make this consistent; your calculations are just of the form &#8220;well, it could turn out this way&#8221;.  But as has already been noted, because this violates the independence axiom, there can be no such utility function &#8212; at least, not with the definition of &#8220;utility function&#8221; that&#8217;s actually under discussion.</p>
<p>Finally, a note on the &#8220;abuse of notation&#8221; issue I&#8217;ve mentioned &#8212; properly speaking, a utility function is a function (call it f) from consequences to real numbers, not a function (call it F) from gambles to real numbers.  Obviously one can induce F from f (by making F &#8220;linear in probability&#8221;, though &#8220;affine&#8221; might be a better word than &#8220;linear&#8221;).  But what makes f a utility function is that the F determined in this way actually reflects the agent&#8217;s preferences.  If there were not required to be any relation between the f describing the agent&#8217;s preferences among outcomes and the F describing the agent&#8217;s preferences among gambles, what would be the point of talking about f?  It would be useless except under conditions of certainty.  (And why would we use functions to the real numbers rather than just pre-orderings?)  Of course, by the VNM theorem, if our agent satisfies the appropriate axioms, then f is not useless because it in fact determines F entirely, obviating any need for talking about F as a separate object &#8212; hence why the simpler object f, not its extension F, is referred to as the &#8220;utility function&#8221;.  (Although the abuse of notation is still perfectly understandable obviously.)</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: no</title>
		<link>http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/#comment-56364</link>
		<dc:creator><![CDATA[no]]></dc:creator>
		<pubDate>Sat, 19 Apr 2014 11:09:05 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=1854#comment-56364</guid>
		<description><![CDATA[For those of us who have blogs that run on daily [or even semi-daily] schedules, best-practice rationality does not mix well with daily schedules unless you have both free time and {superhuman focus&#124;a reliable source of speed}.

And every once in a while (this has only happened to me once, however, and it ran elsewhere than my blog), word comes down from on high that an article is needed that makes the best case possible for X, whether or not X is actually right.]]></description>
		<content:encoded><![CDATA[<p>For those of us who have blogs that run on daily [or even semi-daily] schedules, best-practice rationality does not mix well with daily schedules unless you have both free time and {superhuman focus|a reliable source of speed}.</p>
<p>And every once in a while (this has only happened to me once, however, and it ran elsewhere than my blog), word comes down from on high that an article is needed that makes the best case possible for X, whether or not X is actually right.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: peterdjones</title>
		<link>http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/#comment-56357</link>
		<dc:creator><![CDATA[peterdjones]]></dc:creator>
		<pubDate>Sat, 19 Apr 2014 10:57:40 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=1854#comment-56357</guid>
		<description><![CDATA[To be precise, there&#039;s no extra non-physical  information.]]></description>
		<content:encoded><![CDATA[<p>To be precise, there&#8217;s no extra non-physical  information.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: peterdjones</title>
		<link>http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/#comment-56331</link>
		<dc:creator><![CDATA[peterdjones]]></dc:creator>
		<pubDate>Sat, 19 Apr 2014 10:25:00 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=1854#comment-56331</guid>
		<description><![CDATA[Plus there&#039;s usually material available in a debate or pro and anti  format.]]></description>
		<content:encoded><![CDATA[<p>Plus there&#8217;s usually material available in a debate or pro and anti  format.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
