<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Bicameral Reasoning</title>
	<atom:link href="http://slatestarcodex.com/2015/05/17/bicameral-reasoning/feed/" rel="self" type="application/rss+xml" />
	<link>http://slatestarcodex.com/2015/05/17/bicameral-reasoning/</link>
	<description>In a mad world, all blogging is psychiatry blogging</description>
	<lastBuildDate>Fri, 24 Jul 2015 18:36:06 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.2.3</generator>
	<item>
		<title>By: g</title>
		<link>http://slatestarcodex.com/2015/05/17/bicameral-reasoning/#comment-207374</link>
		<dc:creator><![CDATA[g]]></dc:creator>
		<pubDate>Mon, 01 Jun 2015 13:00:20 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3641#comment-207374</guid>
		<description><![CDATA[It should be &quot;with y copies of x&quot;, so:

x*y = x+...+x with y copies of x

x^y = x*...*x with y copies of x

x^^y = x^(x^...(x^x)...) with y copies of x

(it&#039;s only from x^^y onwards that we need to put the parentheses in, because addition and multiplication are associative).]]></description>
		<content:encoded><![CDATA[<p>It should be &#8220;with y copies of x&#8221;, so:</p>
<p>x*y = x+&#8230;+x with y copies of x</p>
<p>x^y = x*&#8230;*x with y copies of x</p>
<p>x^^y = x^(x^&#8230;(x^x)&#8230;) with y copies of x</p>
<p>(it&#8217;s only from x^^y onwards that we need to put the parentheses in, because addition and multiplication are associative).</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '207374', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: 27chaos</title>
		<link>http://slatestarcodex.com/2015/05/17/bicameral-reasoning/#comment-207052</link>
		<dc:creator><![CDATA[27chaos]]></dc:creator>
		<pubDate>Sun, 31 May 2015 02:53:49 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3641#comment-207052</guid>
		<description><![CDATA[Here&#039;s a similar problem resulting from the same underlying flawed thinking: http://stanford.edu/~dbroock/papers/broockman_approaches_to_studying_representation.pdf]]></description>
		<content:encoded><![CDATA[<p>Here&#8217;s a similar problem resulting from the same underlying flawed thinking: <a href="http://stanford.edu/~dbroock/papers/broockman_approaches_to_studying_representation.pdf" rel="nofollow">http://stanford.edu/~dbroock/papers/broockman_approaches_to_studying_representation.pdf</a></p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '207052', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Anonymous</title>
		<link>http://slatestarcodex.com/2015/05/17/bicameral-reasoning/#comment-206540</link>
		<dc:creator><![CDATA[Anonymous]]></dc:creator>
		<pubDate>Fri, 29 May 2015 13:31:04 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3641#comment-206540</guid>
		<description><![CDATA[&quot;x^^^y means “do x^^x y times”, so 2^^^3 becomes 2^^(2^^2)&quot;

Are you sure that&#039;s right? Subbing in numbers to your sentence:

2^^^1 means “do 2^^2 1 times” = 2^^2
and following on from this,
2^^^2 means “do 2^^2 2 times” = 2^^(2^^2)
but you have
2^^^3 means “do 2^^2 3 times” = 2^^(2^^2)

Also by my logic 8^^^4 should become 8^^(8^^(8^^(8^^8)))]]></description>
		<content:encoded><![CDATA[<p>&#8220;x^^^y means “do x^^x y times”, so 2^^^3 becomes 2^^(2^^2)&#8221;</p>
<p>Are you sure that&#8217;s right? Subbing in numbers to your sentence:</p>
<p>2^^^1 means “do 2^^2 1 times” = 2^^2<br />
and following on from this,<br />
2^^^2 means “do 2^^2 2 times” = 2^^(2^^2)<br />
but you have<br />
2^^^3 means “do 2^^2 3 times” = 2^^(2^^2)</p>
<p>Also by my logic 8^^^4 should become 8^^(8^^(8^^(8^^8)))</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '206540', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
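	<!--
	A minimal sketch, in Python, of the Knuth up-arrow recursion that the two
	up-arrow comments in this feed are working through. The helper knuth() and
	its signature are this sketch's own naming (hypothetical), not anything
	from the post or the comments.

	def knuth(x: int, n: int, y: int) -> int:
	    """x followed by n up-arrows, then y, via the standard recursion."""
	    if n == 1:
	        return x ** y          # a single arrow is plain exponentiation
	    if y == 0:
	        return 1               # by convention, x (n arrows) 0 = 1
	    # x (n arrows) y = x (n-1 arrows) (x (n arrows) (y-1)),
	    # i.e. y copies of x, grouped from the right
	    return knuth(x, n - 1, knuth(x, n, y - 1))

	assert knuth(2, 2, 4) == 65536  # 2^^4 = 2^(2^(2^2))
	assert knuth(2, 3, 3) == 65536  # 2^^^3 = 2^^(2^^2) = 2^^4
	-->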
	<item>
		<title>By: Matt Goldenberg</title>
		<link>http://slatestarcodex.com/2015/05/17/bicameral-reasoning/#comment-206431</link>
		<dc:creator><![CDATA[Matt Goldenberg]]></dc:creator>
		<pubDate>Thu, 28 May 2015 05:40:51 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3641#comment-206431</guid>
		<description><![CDATA[I think the issue with 9/11 is that it&#039;s a black swan; Nassim Taleb would say it exists in Extremistan. Sure, it wasn&#039;t that bad, but terrorist attacks COULD be EXTREMELY bad. On the other hand, car accidents could never be devastating on that scale.]]></description>
		<content:encoded><![CDATA[<p>I think the issue with 9/11 is that it&#8217;s a black swan; Nassim Taleb would say it exists in Extremistan. Sure, it wasn&#8217;t that bad, but terrorist attacks COULD be EXTREMELY bad. On the other hand, car accidents could never be devastating on that scale.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '206431', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Uniqueness</title>
		<link>http://slatestarcodex.com/2015/05/17/bicameral-reasoning/#comment-206409</link>
		<dc:creator><![CDATA[Uniqueness]]></dc:creator>
		<pubDate>Wed, 27 May 2015 22:40:07 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3641#comment-206409</guid>
		<description><![CDATA[Isn&#039;t your attitude to chickens entirely consistent with your attitude toward *yourself*?
http://lesswrong.com/lw/bg0/cryonics_without_freezers_resurrection/
Your value goes down when there are close substitutes available. If there are trillions of Yvains, that&#039;s not much better than having one Yvain. Why should the same not hold for chickens?]]></description>
		<content:encoded><![CDATA[<p>Isn&#8217;t your attitude to chickens entirely consistent with your attitude toward *yourself*?<br />
<a href="http://lesswrong.com/lw/bg0/cryonics_without_freezers_resurrection/" rel="nofollow">http://lesswrong.com/lw/bg0/cryonics_without_freezers_resurrection/</a><br />
Your value goes down when there are close substitutes available. If there are trillions of Yvains, that&#8217;s not much better than having one Yvain. Why should the same not hold for chickens?</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '206409', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: g</title>
		<link>http://slatestarcodex.com/2015/05/17/bicameral-reasoning/#comment-206257</link>
		<dc:creator><![CDATA[g]]></dc:creator>
		<pubDate>Tue, 26 May 2015 22:22:39 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3641#comment-206257</guid>
		<description><![CDATA[I am not a strict utilitarian. I am pretty much a strict consequentialist, but among the actions that have consequences are, e.g., committing always to act in a particular way whether or not it seems to maximize utility, and both for game-theoretic reasons and because of human fallibility such actions may sometimes be best.

I haven&#039;t seen that Star Trek episode. Having read the summary it&#039;s not clear to me whether we are supposed to understand that the ambassador&#039;s actions really &lt;em&gt;did&lt;/em&gt; do the good he claims they did. In the real world, people making such claims are very commonly lying or mistaken and one does well to discount them heavily. But -- embracing the hypothetical for the sake of argument -- &lt;em&gt;if&lt;/em&gt; I were fully convinced that his actions predictably prevented (say) a major war at the cost of a couple of lives, and that any less-horrible action would predictably have led to war, then I would be glad he did it; and if I were fully convinced that it was only for that reason that he was willing to do it, I would approve of his doing it.]]></description>
		<content:encoded><![CDATA[<p>I am not a strict utilitarian. I am pretty much a strict consequentialist, but among the actions that have consequences are, e.g., committing always to act in a particular way whether or not it seems to maximize utility, and both for game-theoretic reasons and because of human fallibility such actions may sometimes be best.</p>
<p>I haven&#8217;t seen that Star Trek episode. Having read the summary it&#8217;s not clear to me whether we are supposed to understand that the ambassador&#8217;s actions really <em>did</em> do the good he claims they did. In the real world, people making such claims are very commonly lying or mistaken and one does well to discount them heavily. But &#8212; embracing the hypothetical for the sake of argument &#8212; <em>if</em> I were fully convinced that his actions predictably prevented (say) a major war at the cost of a couple of lives, and that any less-horrible action would predictably have led to war, then I would be glad he did it; and if I were fully convinced that it was only for that reason that he was willing to do it, I would approve of his doing it.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '206257', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: onyomi</title>
		<link>http://slatestarcodex.com/2015/05/17/bicameral-reasoning/#comment-206201</link>
		<dc:creator><![CDATA[onyomi]]></dc:creator>
		<pubDate>Tue, 26 May 2015 18:47:55 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3641#comment-206201</guid>
		<description><![CDATA[I can highly recommend Huemer&#039;s &quot;Problem of Political Authority.&quot; In fact, if I could suggest only one book for all non-libertarians to read, it would be that book. The first half makes an ethical case for why &quot;at most&quot; a night watchman state may be justified, and the second half makes an empirical case for why pure anarchy would actually be a good thing. It also addresses questions like why belief in government&#039;s ethical legitimacy is so widespread if, in fact, it isn&#039;t. 

If you are truly a pure, committed utilitarian then it may be impossible for us to reach any agreement on ethical issues. 

A bit out of left field, but have you ever seen this episode of Star Trek: http://en.memory-alpha.wikia.com/wiki/Man_of_the_People_(episode)

And if so (or less ideally, if you have read the summary), do you think that the ambassador acted ethically? I bring it up because I happened to rewatch this episode recently and it struck me then that it seemed a good example of why I am not a utilitarian (because I found the ambassador&#039;s actions to be obviously wrong, even though we are told he saved many people).]]></description>
		<content:encoded><![CDATA[<p>I can highly recommend Huemer&#8217;s &#8220;Problem of Political Authority.&#8221; In fact, if I could suggest only one book for all non-libertarians to read, it would be that book. The first half makes an ethical case for why &#8220;at most&#8221; a night watchman state may be justified, and the second half makes an empirical case for why pure anarchy would actually be a good thing. It also addresses questions like why belief in government&#8217;s ethical legitimacy is so widespread if, in fact, it isn&#8217;t. </p>
<p>If you are truly a pure, committed utilitarian then it may be impossible for us to reach any agreement on ethical issues. </p>
<p>A bit out of left field, but have you ever seen this episode of Star Trek: <a href="http://en.memory-alpha.wikia.com/wiki/Man_of_the_People_(episode)" rel="nofollow">http://en.memory-alpha.wikia.com/wiki/Man_of_the_People_(episode)</a></p>
<p>And if so (or less ideally, if you have read the summary), do you think that the ambassador acted ethically? I bring it up because I happened to rewatch this episode recently and it struck me then that it seemed a good example of why I am not a utilitarian (because I found the ambassador&#8217;s actions to be obviously wrong, even though we are told he saved many people).</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '206201', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: g</title>
		<link>http://slatestarcodex.com/2015/05/17/bicameral-reasoning/#comment-206169</link>
		<dc:creator><![CDATA[g]]></dc:creator>
		<pubDate>Tue, 26 May 2015 17:14:57 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3641#comment-206169</guid>
		<description><![CDATA[I haven&#039;t read Friedman&#039;s book, but I have read &lt;a href=&quot;http://slatestarcodex.com/2015/03/18/book-review-the-machinery-of-freedom/&quot; rel=&quot;nofollow&quot;&gt;Scott&#039;s review&lt;/a&gt; of it; his concerns after reading the book overlap a lot with my concerns before reading it, which I take as an indication that the book doesn&#039;t address them convincingly. Since Scott is at pains to point out that DF is smart and thorough and attempts to rebut objections, that doesn&#039;t seem like a good sign that the things that look to me like near-insurmountable problems will all turn out fine. Perhaps Michael Huemer does better?

Thanks for going into more detail about the ethical position you&#039;re arguing from. I&#039;m glad to hear that you &lt;em&gt;do&lt;/em&gt; care (and admit to caring!) about other things besides physical coercion and taking of property, and that you do pay (and admit to paying!) some attention to consequences. But I&#039;m having trouble seeing how, in practice, your moral judgements differ much from what they would be if you didn&#039;t. For instance, you concede that maybe in some weird counterfactual world it would be necessary to have a government that applies coercion -- but only &lt;em&gt;to avoid disaster&lt;/em&gt;, and the only specific disaster you mention is &lt;em&gt;a much greater level of violence and coercion&lt;/em&gt;.

So we differ in (at least) two ways. Firstly, if we have two possible ways to organize a society, one of which has a potentially-coercive government and the other of which doesn&#039;t but will produce somewhat more violence and coercion than the first, I prefer the first: I will generally take less violence and coercion in preference to more, even if it&#039;s not &lt;em&gt;much&lt;/em&gt; more, and the fact that the first option involves a coercive government doesn&#039;t change that. Whereas you, I think, will countenance a coercive government only if it saves &lt;em&gt;much more&lt;/em&gt; violence and coercion. I don&#039;t think that makes any sense.

Secondly, I don&#039;t see coercion as infinitely worse, or even vastly worse, than every other misfortune that can befall a person, and I therefore don&#039;t see any reason to say that coercion can only be acceptable when the alternative is catastrophe. I would rather experience coercion than cancer, for instance, and if some government regulation coerces N people in order to prevent 10N people getting cancer then that sounds like a clear net win to me. (Of course it may depend a bit on exactly what sort of coercion and exactly what sort of cancer.) And this applies even if the regulation doesn&#039;t &quot;prevent blood running in the streets&quot;. Such a regulation really doesn&#039;t seem to me very accurately modelled by &quot;forcing someone to make you a sandwich at gunpoint&quot;.

(Specific example: I&#039;m sure many smokers really like to smoke; I&#039;m sure tobacco company executives really don&#039;t like being obliged to put health warnings on their products, or forbidden to lie about the consequences of smoking; but I will &lt;em&gt;very happily&lt;/em&gt; trade their unhappiness, and all the other negative consequences of tobacco regulation, for the tens of millions of premature deaths prevented by tobacco regulation since, say, 1960. I take it you disagree; I would be interested to know whether you deny that tens of millions of premature deaths have been prevented by tobacco regulation, or consider that the harm done by that regulation outweighs them, or just don&#039;t care about the deaths if it takes government action to prevent them.)

I agree that this isn&#039;t the place for a substantial debate on metaethics, but I will remark that your argument against moral nonrealism is wrong. Nonrealism doesn&#039;t imply that &quot;if most people think something is right then it is right&quot; (since, e.g., the conclusion there appears to presuppose moral realism); rather, what it means is that words like &quot;right&quot; always need (implicitly or explicitly) a specification of &lt;em&gt;whose&lt;/em&gt; values you&#039;re considering; if most people think something is right then it&#039;s right-in-their-value-system, but that doesn&#039;t mean it has to be right-in-your-value-system. But I&#039;ve no particular objection to intuitionism -- though it seems that your moral intuitions may just be extremely different from mine and it&#039;s not clear how to proceed when that happens.]]></description>
		<content:encoded><![CDATA[<p>I haven&#8217;t read Friedman&#8217;s book, but I have read <a href="http://slatestarcodex.com/2015/03/18/book-review-the-machinery-of-freedom/" rel="nofollow">Scott&#8217;s review</a> of it; his concerns after reading the book overlap a lot with my concerns before reading it, which I take as an indication that the book doesn&#8217;t address them convincingly. Since Scott is at pains to point out that DF is smart and thorough and attempts to rebut objections, that doesn&#8217;t seem like a good sign that the things that look to me like near-insurmountable problems will all turn out fine. Perhaps Michael Huemer does better?</p>
<p>Thanks for going into more detail about the ethical position you&#8217;re arguing from. I&#8217;m glad to hear that you <em>do</em> care (and admit to caring!) about other things besides physical coercion and taking of property, and that you do pay (and admit to paying!) some attention to consequences. But I&#8217;m having trouble seeing how, in practice, your moral judgements differ much from what they would be if you didn&#8217;t. For instance, you concede that maybe in some weird counterfactual world it would be necessary to have a government that applies coercion &#8212; but only <em>to avoid disaster</em>, and the only specific disaster you mention is <em>a much greater level of violence and coercion</em>.</p>
<p>So we differ in (at least) two ways. Firstly, if we have two possible ways to organize a society, one of which has a potentially-coercive government and the other of which doesn&#8217;t but will produce somewhat more violence and coercion than the first, I prefer the first: I will generally take less violence and coercion in preference to more, even if it&#8217;s not <em>much</em> more, and the fact that the first option involves a coercive government doesn&#8217;t change that. Whereas you, I think, will countenance a coercive government only if it saves <em>much more</em> violence and coercion. I don&#8217;t think that makes any sense.</p>
<p>Secondly, I don&#8217;t see coercion as infinitely worse, or even vastly worse, than every other misfortune that can befall a person, and I therefore don&#8217;t see any reason to say that coercion can only be acceptable when the alternative is catastrophe. I would rather experience coercion than cancer, for instance, and if some government regulation coerces N people in order to prevent 10N people getting cancer then that sounds like a clear net win to me. (Of course it may depend a bit on exactly what sort of coercion and exactly what sort of cancer.) And this applies even if the regulation doesn&#8217;t &#8220;prevent blood running in the streets&#8221;. Such a regulation really doesn&#8217;t seem to me very accurately modelled by &#8220;forcing someone to make you a sandwich at gunpoint&#8221;.</p>
<p>(Specific example: I&#8217;m sure many smokers really like to smoke; I&#8217;m sure tobacco company executives really don&#8217;t like being obliged to put health warnings on their products, or forbidden to lie about the consequences of smoking; but I will <em>very happily</em> trade their unhappiness, and all the other negative consequences of tobacco regulation, for the tens of millions of premature deaths prevented by tobacco regulation since, say, 1960. I take it you disagree; I would be interested to know whether you deny that tens of millions of premature deaths have been prevented by tobacco regulation, or consider that the harm done by that regulation outweighs them, or just don&#8217;t care about the deaths if it takes government action to prevent them.)</p>
<p>I agree that this isn&#8217;t the place for a substantial debate on metaethics, but I will remark that your argument against moral nonrealism is wrong. Nonrealism doesn&#8217;t imply that &#8220;if most people think something is right then it is right&#8221; (since, e.g., the conclusion there appears to presuppose moral realism); rather, what it means is that words like &#8220;right&#8221; always need (implicitly or explicitly) a specification of <em>whose</em> values you&#8217;re considering; if most people think something is right then it&#8217;s right-in-their-value-system, but that doesn&#8217;t mean it has to be right-in-your-value-system. But I&#8217;ve no particular objection to intuitionism &#8212; though it seems that your moral intuitions may just be extremely different from mine and it&#8217;s not clear how to proceed when that happens.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '206169', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: onyomi</title>
		<link>http://slatestarcodex.com/2015/05/17/bicameral-reasoning/#comment-205861</link>
		<dc:creator><![CDATA[onyomi]]></dc:creator>
		<pubDate>Mon, 25 May 2015 14:48:36 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3641#comment-205861</guid>
		<description><![CDATA[You are correct that I would prefer to eventually abolish government entirely, though I could see that happening gradually, as part of a wave of successively smaller secessions and/or privatizations of the functions government now performs. I do not, of course, think this would be a less pleasant world, but rather a much nicer and more functional one. For reasons why this may be the case and examples of how it might plausibly come about, see, for example, David Friedman&#039;s book, or the second half of Michael Huemer&#039;s book on political authority.

My ethical view is not consequentialist or utilitarian, but that doesn&#039;t mean it ignores consequences entirely. As I have said before, IF we assumed that not having a government would produce disastrous consequences (I don&#039;t believe that, but if), then I would concede that a government might be morally justifiable in the same way that stealing can be: stealing is wrong, but if you&#039;re literally starving, and there is no other non-coercive way to obtain food (say, you are lost in the woods, haven&#039;t eaten for days and pass an empty cabin with a refrigerator full of food, with no way of contacting the owner to obtain permission), then, in those extreme circumstances, it may be right to steal food.

What does not follow, however, is that, having established that it&#039;s okay to steal food when you&#039;re starving and are absolutely out of options, therefore it is okay to steal food whenever you want food. 

What I am saying is that IF government, whose use of coercion, like stealing food, is prima facie morally suspect, is necessary to prevent much greater moral ills than the government itself perpetrates, then and *only to the extent necessary to prevent those greater ills* are its actions morally defensible.

Let&#039;s say having a taxation-funded police force which coerces people and throws them in jail is absolutely necessary to prevent a much greater level of violence and coercion. And let&#039;s say that private security forces or voluntarily-funded community policing are just not a viable option for whatever reason (I think they are, but for the sake of argument). Then, in that case, it would be permissible for the government to tax and establish a police force. It does not follow, however, that because the government needs a police force to prevent violent crime, it can therefore heavily tax cigarettes to discourage bad habits, require that hairdressers have a license to work, kick out immigrants who have committed no violent or coercive crimes, etc.

Maybe some of these laws make society marginally better (though I think they usually make it worse), but they are obviously not required to prevent blood from running in the streets. They are not required to prevent disastrous consequences or to deal with extreme situations, and so the use of force for these purposes is not morally justifiable, just as forcing someone to make you a sandwich at gunpoint is not justifiable. 

So clearly, I DO care about consequences. It&#039;s just that consequences are not the only thing I care about. I think killing one healthy patient against his will to save five sick patients, for example, is obviously wrong. 

My own ethical view is a species of realist, rational ethical intuitionism, but I don&#039;t know if I want to get into a debate about the specifics of that, as I have previously described the view at length in older threads. The basic idea is that some things really are right or wrong, and we may perceive and debate that rightness or wrongness with our rational faculty. Morality is not a pure social construct, because if it were, that would imply that if most people in a society think something is right, then it is right, which is obviously false.

Ethical intuitionism can take consequences into consideration, but it does not attempt to reduce ethics to that. The view is rather that what makes something right or wrong is complex and that attempting to systematize a grand moral system based on just one parameter is likely to fail, precisely because everyone judges the success or failure of a moral system against their own intuition of what is right and wrong, and intuition takes into account many factors. When a system produces absurd consequences (say, that you should kill one patient to save five), then adherents generally make concessions or logical contortions to square it with the intuition (well, maybe killing one to save five is a net negative because it makes society feel uneasy, etc.) rather than accepting the counterintuitive conclusion.

So given that everyone ultimately contorts the logic of their chosen ethical system to square it with their intuitions, why not simply say that moral intuition IS the basis by which we must judge morality? It may seem too subjective (though I am claiming it is perceiving an objective quality), but it still allows for rational debate, and is, in any case, no less subjective than utilitarianism, which requires an arbitrary determination of whose utility judgments are better.]]></description>
		<content:encoded><![CDATA[<p>You are correct that I would prefer to eventually abolish government entirely, though I could see that happening gradually, as part of a wave of successively smaller secessions and/or privatizations of the functions government now performs. I do not, of course, think this would be a less pleasant world, but rather a much nicer and more functional one. For reasons why this may be the case and examples of how it might plausibly come about, see, for example, David Friedman&#8217;s book, or the second half of Michael Huemer&#8217;s book on political authority.</p>
<p>My ethical view is not consequentialist or utilitarian, but that doesn&#8217;t mean it ignores consequences entirely. As I have said before, IF we assumed that not having a government would produce disastrous consequences (I don&#8217;t believe that, but if), then I would concede that a government might be morally justifiable in the same way that stealing can be: stealing is wrong, but if you&#8217;re literally starving, and there is no other non-coercive way to obtain food (say, you are lost in the woods, haven&#8217;t eaten for days and pass an empty cabin with a refrigerator full of food, with no way of contacting the owner to obtain permission), then, in those extreme circumstances, it may be right to steal food.</p>
<p>What does not follow, however, is that, having established that it&#8217;s okay to steal food when you&#8217;re starving and are absolutely out of options, therefore it is okay to steal food whenever you want food. </p>
<p>What I am saying is that IF government, whose use of coercion, like stealing food, is prima facie morally suspect, is necessary to prevent much greater moral ills than the government itself perpetrates, then and *only to the extent necessary to prevent those greater ills* are its actions morally defensible.</p>
<p>Let&#8217;s say having a taxation-funded police force which coerces people and throws them in jail is absolutely necessary to prevent a much greater level of violence and coercion. And let&#8217;s say that private security forces or voluntarily-funded community policing are just not a viable option for whatever reason (I think they are, but for the sake of argument). Then, in that case, it would be permissible for the government to tax and establish a police force. It does not follow, however, that because the government needs a police force to prevent violent crime, it can therefore heavily tax cigarettes to discourage bad habits, require that hairdressers have a license to work, kick out immigrants who have committed no violent or coercive crimes, etc.</p>
<p>Maybe some of these laws make society marginally better (though I think they usually make it worse), but they are obviously not required to prevent blood from running in the streets. They are not required to prevent disastrous consequences or to deal with extreme situations, and so the use of force for these purposes is not morally justifiable, just as forcing someone to make you a sandwich at gunpoint is not justifiable. </p>
<p>So clearly, I DO care about consequences. It&#8217;s just that consequences are not the only thing I care about. I think killing one healthy patient against his will to save five sick patients, for example, is obviously wrong. </p>
<p>My own ethical view is a species of realist, rational ethical intuitionism, but I don&#8217;t know if I want to get into a debate about the specifics of that, as I have previously described the view at length in older threads. The basic idea is that some things really are right or wrong, and we may perceive and debate that rightness or wrongness with our rational faculty. Morality is not a pure social construct, because if it were, that would imply that if most people in a society think something is right, then it is right, which is obviously false.</p>
<p>Ethical intuitionism can take consequences into consideration, but it does not attempt to reduce ethics to that. The view is rather that what makes something right or wrong is complex and that attempting to systematize a grand moral system based on just one parameter is likely to fail, precisely because everyone judges the success or failure of a moral system against their own intuition of what is right and wrong, and intuition takes into account many factors. When a system produces absurd consequences (say, that you should kill one patient to save five), then adherents generally make concessions or logical contortions to square it with the intuition (well, maybe killing one to save five is a net negative because it makes society feel uneasy, etc.) rather than accepting the counterintuitive conclusion.</p>
<p>So given that everyone ultimately contorts the logic of their chosen ethical system to square it with their intuitions, why not simply say that moral intuition IS the basis by which we must judge morality? It may seem too subjective (though I am claiming it is perceiving an objective quality), but it still allows for rational debate, and is, in any case, no less subjective than utilitarianism, which requires an arbitrary determination of whose utility judgments are better.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '205861', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: g</title>
		<link>http://slatestarcodex.com/2015/05/17/bicameral-reasoning/#comment-205813</link>
		<dc:creator><![CDATA[g]]></dc:creator>
		<pubDate>Mon, 25 May 2015 10:06:56 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3641#comment-205813</guid>
		<description><![CDATA[No, you haven&#039;t offered an alternative, you&#039;ve described one feature of an alternative. What do you think society would look like after making your change (which amounts, so far as I can see, to abolishing government altogether)? 

I have described some ways in which I think what we have now is better than what we&#039;d get then. (And that &lt;em&gt;is&lt;/em&gt; an ethical argument, whether you accept it as such or not.) You seem almost completely uninterested in what the consequences would be.

Incidentally, while I&#039;ve said a fair amount about how I see ethics, so far all I can tell is that you think the use of force is bad and that you regard taking away a person&#039;s property as a species of force. Which is hard to square with your indignation at the idea that rights might be socially constructed -- if anything is socially constructed, &lt;em&gt;property&lt;/em&gt; is. But never mind consistency for now: surely that can&#039;t be the entirety of your ethical system?

The most plausible system I can think of that looks like that says this: the only thing that really matters ethically is that no one should have anything done to them without their consent. I think this has two serious problems: first, it&#039;s not at all clear what counts as &quot;done &lt;em&gt;to them&lt;/em&gt;&quot; (e.g., apparently modifying the charge levels in some tiny capacitors inside computers at your bank&#039;s headquarters can constitute doing something to you, and I suspect that any non-gerrymandered definition broad enough to encompass that is also going to be broad enough to include things you won&#039;t want, such as consuming a good you also use and thereby increasing the price you pay); second, it isn&#039;t merely not consequentialist (that&#039;s fair enough) but &lt;em&gt;completely ignores consequences&lt;/em&gt;, which in turn has two problems. The first is that an action may, while not in any way applying any force to you directly, have consequences that include other people later doing so. (Obvious example: an already-existing government decides to make something you want to do illegal. Less obvious example: someone abolishes the government where you are, and in the ensuing anarchy you get forced to do things by people with more guns or more minions than you have.) The second is that it seems to commit you to preferring a world in which you have $10k and no one takes it from you over an otherwise similar world in which you have $100k and then someone takes $10k of it, which seems to me like a very odd preference.

[EDITED to add: of course the last few paragraphs are criticizing not what I know your opinions to be, but one guess at what they might be, so they may well not be directly applicable. But I guess that your actual opinions, if you choose to share them here, are likely to be subject to similar criticisms.]]]></description>
		<content:encoded><![CDATA[<p>No, you haven&#8217;t offered an alternative, you&#8217;ve described one feature of an alternative. What do you think society would look like after making your change (which amounts, so far as I can see, to abolishing government altogether)? </p>
<p>I have described some ways in which I think what we have now is better than what we&#8217;d get then. (And that <em>is</em> an ethical argument, whether you accept it as such or not.) You seem almost completely uninterested in what the consequences would be.</p>
<p>Incidentally, while I&#8217;ve said a fair amount about how I see ethics, so far all I can tell is that you think the use of force is bad and that you regard taking away a person&#8217;s property as a species of force. Which is hard to square with your indignation at the idea that rights might be socially constructed &#8212; if anything is socially constructed, <em>property</em> is. But never mind consistency for now: surely that can&#8217;t be the entirety of your ethical system?</p>
<p>The most plausible system I can think of that looks like that says this: the only thing that really matters ethically is that no one should have anything done to them without their consent. I think this has two serious problems: first, it&#8217;s not at all clear what counts as &#8220;done <em>to them</em>&#8221; (e.g., apparently modifying the charge levels in some tiny capacitors inside computers at your bank&#8217;s headquarters can constitute doing something to you, and I suspect that any non-gerrymandered definition broad enough to encompass that is also going to be broad enough to include things you won&#8217;t want, such as consuming a good you also use and thereby increasing the price you pay); second, it isn&#8217;t merely not consequentialist (that&#8217;s fair enough) but <em>completely ignores consequences</em>, which in turn has two problems. The first is that an action may, while not in any way applying any force to you directly, have consequences that include other people later doing so. (Obvious example: an already-existing government decides to make something you want to do illegal. Less obvious example: someone abolishes the government where you are, and in the ensuing anarchy you get forced to do things by people with more guns or more minions than you have.) The second is that it seems to commit you to preferring a world in which you have $10k and no one takes it from you over an otherwise similar world in which you have $100k and then someone takes $10k of it, which seems to me like a very odd preference.</p>
<p>[EDITED to add: of course the last few paragraphs are criticizing not what I know your opinions to be, but one guess at what they might be, so they may well not be directly applicable. But I guess that your actual opinions, if you choose to share them here, are likely to be subject to similar criticisms.]</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '205813', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
</channel>
</rss>
