<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Slate Star Codex &#187; rationality</title>
	<atom:link href="http://slatestarcodex.com/tag/rationality/feed/" rel="self" type="application/rss+xml" />
	<link>http://slatestarcodex.com</link>
	<description>In a mad world, all blogging is psychiatry blogging</description>
	<lastBuildDate>Thu, 23 Jul 2015 03:07:59 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.2.3</generator>
	<item>
		<title>(Late) Predictions for 2015</title>
		<link>http://slatestarcodex.com/2015/06/13/late-predictions-for-2015/</link>
		<comments>http://slatestarcodex.com/2015/06/13/late-predictions-for-2015/#comments</comments>
		<pubDate>Sun, 14 Jun 2015 02:21:01 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[rationality]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=3672</guid>
		<description><![CDATA[I was supposed to institute a tradition of making predictions at the beginning of each year, then grading them at the end to test my calibration. Everything went according to plan last year &#8211; last January I made predictions for &#8230; <a href="http://slatestarcodex.com/2015/06/13/late-predictions-for-2015/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>I was supposed to institute a tradition of making predictions at the beginning of each year, then grading them at the end to <A HREF="https://en.wikipedia.org/wiki/Calibrated_probability_assessment">test my calibration</A>. Everything went according to plan last year &#8211; last January I made <A HREF="http://slatestarcodex.com/2014/01/28/predictions-for-2014/">predictions for 2014</A> and then this January I <A HREF="http://slatestarcodex.com/2015/01/01/2014-predictions-calibration-results/">scored them</A>. I ended with &#8220;2015 predictions coming soon,&#8221; then totally forgot to do that.</p>
<p>So now that it&#8217;s June and predicting what will happen in 2015 is about 40% easier, I might as well get started. Acceptable confidence levels are 50%, 60%, 70%, 80%, 90%, 95%, and 99%, to make it easy to score. All predictions are about the state of the world on 12/31/2015:</p>
<p><b>World Events</b><br />
1. US will not get involved in any new major war with death toll of > 100 US soldiers: 70%<br />
2. North Korea’s government will survive the year without large civil war/revolt: 95%<br />
3. Greece will not announce it&#8217;s leaving the Euro: 60%<br />
4. Neither Russia nor Qatar will lose their World Cups: 80%<br />
5. Ebola will kill fewer people in the second half of 2015 than in the first half: 95%<br />
6. No terrorist attack in the USA will kill > 100 people: 90%<br />
7. Assad will remain President of Syria: 70%<br />
8. Israel will not get in a large-scale war (ie >100 Israeli deaths) with any Arab state: 90%<br />
9. Syria&#8217;s civil war will not end this year: 80%<br />
10. ISIS will control less territory than it does right now: 70%<br />
11. ISIS will continue to exist: 80%<br />
12. Iran will reach a deal with the West on nuclear weapons: 80%<br />
13. No major civil war in a Middle Eastern country not currently experiencing a major civil war: 90%<br />
14. Iraq&#8217;s situation will not get any worse (eg gov&#8217;t collapse, new rebellion): 60%<br />
15. Obamacare will survive the year mostly intact: 60%<br />
16. Hillary Clinton will be the top-polling Democratic Presidential candidate: 95%<br />
17. Jeb Bush will be the top-polling Republican candidate: 50%<br />
18. Trans-Pacific Partnership to pass at least mostly intact: 60%<br />
19. US official unemployment rate will be &lt; 7% in Dec 2015: 95%<br />
20. Bitcoin will end the year higher than $200: 95%<br />
21. Oil will end the year >$60 a barrel: 50%</p>
<p><b>Personal Life</b><br />
22. SSC will remain active: 95%<br />
23. SSC will get fewer hits in the second half of 2015 than in the first half: 60%<br />
24. At least one SSC post in the second half of 2015 will get > 100,000 hits: 70%<br />
25. Shireroth will remain active: 90%<br />
26. I will remain at my same job through the end of 2015: 95%<br />
27. There will be no further ramifications or lawsuits from either side over the flooding of my house: 80%<br />
28. I will reach my savings target: 90%<br />
29. I will get a score at >95th percentile for my year on PRITE: 50%<br />
30. I will be involved in at least one published/accepted-to-publish research paper by the end of 2015: 60%<br />
31. I will not break up with any of my current girlfriends: 80%<br />
32. I will not get any new girlfriends: 50%<br />
33. I will not finish [project]: 60%<br />
34. I will attend NYC Solstice ritual: 80%<br />
35. I will flake out of my plan to lead some kind of Solstice Ritual myself: 60%<br />
36. I will be living in the house I&#8217;m currently trying to arrange to rent: 70%</p>
<p>These are all the things I could think of worth predicting; feel free to suggest others.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2015/06/13/late-predictions-for-2015/feed/</wfw:commentRss>
		<slash:comments>211</slash:comments>
		</item>
		<item>
		<title>2014 Predictions: Calibration Results</title>
		<link>http://slatestarcodex.com/2015/01/01/2014-predictions-calibration-results/</link>
		<comments>http://slatestarcodex.com/2015/01/01/2014-predictions-calibration-results/#comments</comments>
		<pubDate>Thu, 01 Jan 2015 22:39:58 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[rationality]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=3503</guid>
		<description><![CDATA[Last year in January I made some predictions for 2014 and gave my calibration level for each. Time to see how I did. Ones in black were successes, ones in red were failures. 1. Obamacare will survive the year mostly &#8230; <a href="http://slatestarcodex.com/2015/01/01/2014-predictions-calibration-results/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Last year in January I made some predictions for 2014 and gave my calibration level for each. Time to see how I did. Ones in black were successes, ones in red were failures.</p>
<p>1. Obamacare will survive the year mostly intact: 80%<br />
2. US does not get involved in any new major war with death toll of > 100 US soldiers: 90%<br />
3. Syria’s civil war will not end this year (unstable cease-fire doesn’t count as “end”): 60%<br />
<font color="red">4. Bitcoin will end the year higher than $1000: 70%</font><br />
5. US official unemployment rate will be &lt; 7% in Dec 2014: 80%<br />
6. Republicans will keep the House in US midterm elections: 80%<br />
<font color="red">7. Democrats will keep the Senate in US midterm elections: 60%</font><br />
8. North Korea&#8217;s government will survive the year without large civil war/revolt: 80%<br />
9. Iraq&#8217;s government will survive the year without large civil war/revolt: 60%<br />
10. China&#8217;s government will survive the year without large civil war/revolt: 99%<br />
11. US government will survive the year without large civil war/revolt: 99%<br />
12. Egypt&#8217;s government will survive the year without large civil war/revolt: 60%<br />
13. Israel-Palestine negotiations remain blocked, no obvious plan for Palestinian independence: 99%<br />
14. Israel does not get in a large-scale war (ie >100 Israeli deaths) with any Arab state: 95%<br />
15. Sochi Olympics will not be obvious disaster (ie spectacular violence, protests that utterly overshadow events, or have to be stopped early): 90%<br />
16. Putin will remain President of Russia at end of 2014: 95%<br />
17. Obama will remain President of USA at end of 2014: 95%<br />
18. No nuclear weapon used in anger in 2014: 99%<br />
19. No terrorist attack in USA killing > 100 people: 90%<br />
20. No mass shooting in USA killing > 10 people: 50%<br />
21. Republic of Shireroth will not officially disband or obviously die before end of 2014: 90%<br />
22. Republic of Shireroth will remain in Bastion Union: 80%<br />
23. Active population of Shireroth on last 2014 census will be between 10 and 20: 70%<br />
24. Slate Star Codex will remain active until end of 2014: 70%<br />
25. Slate Star Codex will get more hits in 2014 than in 2013: 60%<br />
26. At least one 2014 Slate Star Codex post will get > 10,000 hits total: 80%<br />
27. No 2014 Slate Star Codex post will get > 100,000 hits total: 90%<br />
<font color="red">28. 2014 Less Wrong Survey will show higher population than 2013 Survey conditional on similar methodology: 80%</font><br />
29. 2014 Less Wrong Survey will show population &lt; 2000 C.O.S.M.: 70%<br />
30. 2014 Less Wrong Survey will have a higher % female than 2013 Less Wrong Survey: 70%<br />
31. 2014 Less Wrong Survey will have &lt; 20% female: 90%<br />
<font color="red">32. HPMoR will conclude in 2014: 80%</font><br />
33. At least 1 LW post > 100 karma in 2014: 50%<br />
34. No LW post > 100 karma by me: 80%<br />
35. CFAR will continue operating in end of 2014: 90%<br />
36. MIRI will continue operating in end of 2014: 99%<br />
37. MetaMed will continue operating in end of 2014: 80%<br />
38. None of Eliezer, Luke, Anna, or Julia will quit their respective organizations: 60%<br />
39. No one in LW community will become world-famous (let’s say >= Peter Thiel) for anything they accomplish this year: 80%<br />
40. MIRI will not announce it is actively working on coding a Friendly AI (not just a few bits and pieces thereof) before the end of 2014: 99%<br />
41. I will remain at my same job through the end of 2014: 95%<br />
<font color="red">42. I will get a score at >95th percentile for my year on PRITE: 70%</font><br />
43. I will be involved in at least one published/accepted-to-publish research paper by the end of 2014: 20%<br />
<font color="red">44. I will not break up with any of my current girlfriends through the end of 2014: 50%</font><br />
<font color="red">45. I will not get any new girlfriends in 2014: 50%</font><br />
46. I will not be engaged by the end of 2014: 80%<br />
<font color="red">47. I will be living with Ozy by the end of 2014: 80%</font><br />
<font color="red">48. I will take nootropics on average at least once/week through the second half of 2014: 50%</font><br />
49. I will not manage to meditate at least 100 days in 2014: 80%<br />
50. I will attend NYC Solstice ritual: 60%<br />
<font color="red">51. I will arrange some kind of Michigan Solstice Ritual: 50%</font><br />
52. I will not publicly identify as religious (> atheist) by the end of 2014: 95%<br />
53. I will not publicly identify as neoreactionary or conservative (> liberal or libertarian) by the end of 2014: 70%<br />
54. I will not publicly identify as leftist or communist (> liberal or libertarian) by the end of 2014: 80%<br />
55. I will get a Tumblr in 2014: 50%<br />
56. I will not delete/abandon either my Facebook or Twitter accounts: 60%<br />
<font color="red">57. I will have less than 1000 Twitter followers by the end of 2014: 60%</font><br />
58. When Eliezer sends me a copy of “Perfect Health Diet”, I will not be convinced that it is more correct or useful than the best mainstream nutrition advice (eg Stephen Guyenet’s blog): 70%<br />
59. I will end up being underconfident on these predictions: 50%</p>
<p>Of predictions at the 50% level, 4/8 (50%) were correct<br />
Of predictions at the 60% level, 7/9 (78%) were correct<br />
Of predictions at the 70% level, 6/8 (75%) were correct<br />
Of predictions at the 80% level, 12/15 (80%) were correct<br />
Of predictions at the 90% level, 7/7 (100%) were correct<br />
Of predictions at the 95% level, 5/5 (100%) were correct<br />
Of predictions at the 99% level, 6/6 (100%) were correct</p>
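<p>The tally above can be reproduced mechanically: bucket each prediction by its stated confidence level and count how many came true. A minimal sketch in Python (the <code>calibration_table</code> helper and the toy data are my own illustration, not the original scoring code):</p>

```python
from collections import defaultdict

def calibration_table(predictions):
    """Group (confidence, came_true) pairs by confidence level and
    report (correct, total, fraction correct) for each level."""
    buckets = defaultdict(lambda: [0, 0])  # level -> [correct, total]
    for confidence, came_true in predictions:
        buckets[confidence][1] += 1
        if came_true:
            buckets[confidence][0] += 1
    return {
        level: (correct, total, correct / total)
        for level, (correct, total) in sorted(buckets.items())
    }

# Toy data: (stated confidence, whether the prediction came true)
preds = [(0.5, True), (0.5, False), (0.9, True), (0.9, True), (0.9, False)]
for level, (correct, total, frac) in calibration_table(preds).items():
    print(f"Of predictions at the {level:.0%} level, "
          f"{correct}/{total} ({frac:.0%}) were correct")
```

<p>Good calibration means the fraction correct at each level roughly matches the level itself: 50% predictions should come true about half the time, 90% predictions about nine times in ten.</p>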
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/caligraph2014.png"></center></p>
<p>I declare myself to be <i>impressively</i> well-calibrated. You should all trust me about everything.</p>
<p>2015 predictions coming soon.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2015/01/01/2014-predictions-calibration-results/feed/</wfw:commentRss>
		<slash:comments>91</slash:comments>
		</item>
		<item>
		<title>Beware The Man Of One Study</title>
		<link>http://slatestarcodex.com/2014/12/12/beware-the-man-of-one-study/</link>
		<comments>http://slatestarcodex.com/2014/12/12/beware-the-man-of-one-study/#comments</comments>
		<pubDate>Fri, 12 Dec 2014 09:04:56 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[rationality]]></category>
		<category><![CDATA[science]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=3423</guid>
		<description><![CDATA[Aquinas famously said: beware the man of one book. I would add: beware the man of one study. For example, take medical research. Suppose a certain drug is weakly effective against a certain disease. After a few years, a bunch &#8230; <a href="http://slatestarcodex.com/2014/12/12/beware-the-man-of-one-study/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Aquinas famously <A HREF="http://en.wikipedia.org/wiki/Homo_unius_libri">said</A>: beware the man of one book. I would add: beware the man of one study.</p>
<p>For example, take medical research. Suppose a certain drug is weakly effective against a certain disease. After a few years, a bunch of different research groups have gotten their hands on it and done all sorts of different studies. In the best case scenario the average study will find the true result &#8211; that it&#8217;s weakly effective.</p>
<p>But there will also be random noise caused by inevitable variation and by some of the experiments being better quality than others. In the end, we might expect something looking kind of like a bell curve. The peak will be at &#8220;weakly effective&#8221;, but there will be a few studies to either side. Something like this:</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/onestudy.png"></center></p>
<p>We see that the peak of the curve is somewhere to the right of neutral &#8211; ie weakly effective &#8211; and that there are about 15 studies that find this correct result.</p>
<p>But there are also about 5 studies that find that the drug is very good, and 5 studies missing the sign entirely and finding that the drug is actively bad. There&#8217;s even 1 study finding that the drug is very bad, maybe seriously dangerous.</p>
<p>This is before we get into fraud or statistical malpractice. I&#8217;m saying this is what&#8217;s going to happen just by normal variation in experimental design. As we increase experimental rigor, the bell curve might get squashed horizontally, but there will still be a bell curve.</p>
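<p>That bell curve is easy to simulate: give every study the same true effect plus its own noise, then count how many land on the wrong side of zero. A toy sketch (all numbers here, including the effect size and noise level, are invented for illustration):</p>

```python
import random

random.seed(42)

TRUE_EFFECT = 0.3   # the drug really is weakly effective
NOISE = 0.5         # between-study variation in design and quality

# Each study measures the true effect plus its own noise
results = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(25)]

negative = sum(1 for r in results if r < 0)
print(f"{negative} of {len(results)} studies find a negative effect "
      f"even though the drug truly helps")
```

<p>With these made-up numbers, roughly a quarter of studies are expected to miss the sign entirely, which is the point: a scattering of contrary studies is what honest noise looks like.</p>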
<p>In practice it&#8217;s worse than this, because this is assuming everyone is investigating exactly the same question.</p>
<p>Suppose that the graph is titled &#8220;Effectiveness Of This Drug In Treating Bipolar Disorder&#8221;. </p>
<p>But maybe the drug is more effective in bipolar i than in bipolar ii (Depakote, for example).</p>
<p>Or maybe the drug is very effective against bipolar mania, but much less effective against bipolar depression (Depakote again).</p>
<p>Or maybe the drug is a good acute antimanic agent, but very poor at maintenance treatment (let&#8217;s stick with Depakote).</p>
<p>If you have a graph titled &#8220;Effectiveness Of Depakote In Treating Bipolar Disorder&#8221; plotting studies from &#8220;Very Bad&#8221; to &#8220;Very Good&#8221; &#8211; and you stick all the studies &#8211; maintenance, manic, depressive, bipolar i, bipolar ii &#8211; on the graph, then you&#8217;re going to end up running the gamut from &#8220;very bad&#8221; to &#8220;very good&#8221; even before you factor in noise, and even before you factor in bias and poor experimental design.</p>
<p>So here&#8217;s why you should beware the man of one study.</p>
<p>If you go to your better class of alternative medicine websites, they don&#8217;t tell you &#8220;Studies are a logocentric phallocentric tool of Western medicine and the Big Pharma conspiracy.&#8221;</p>
<p>They tell you &#8220;medical science has proved that this drug is terrible, but ignorant doctors are pushing it on you anyway. Look, here&#8217;s a study by a reputable institution proving that the drug is not only ineffective, but harmful.&#8221;</p>
<p>And the study will exist, and the authors will be prestigious scientists, and it will probably be about as rigorous and well-done as any other study.</p>
<p>And then a lot of people raised on <A HREF="http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/">the idea</A> that some things have Evidence and other things have No Evidence think <i>holy s**t, they&#8217;re right!</i></p>
<p>On the other hand, your doctor isn&#8217;t going to a sketchy alternative medicine website. She&#8217;s examining the entire literature and extracting careful and well-informed conclusions from&#8230;</p>
<p>Haha, just kidding. She&#8217;s going to a luncheon at a really nice restaurant sponsored by a pharmaceutical company, which assures her that they would <i>never</i> take advantage of such an opportunity to shill their drug, they just want to raise awareness of the latest study. And the latest study shows that their drug is great! Super great! And your doctor nods along, because the authors of the study are prestigious scientists, and it&#8217;s about as rigorous and well-done as any other study.</p>
<p>But obviously the pharmaceutical company has selected one of the studies from the &#8220;very good&#8221; end of the bell curve.</p>
<p>And I called this &#8220;Beware The Man of One Study&#8221;, but it&#8217;s easy to see that in the little diagram there are like three or four studies showing that the drug is &#8220;very good&#8221;, so if your doctor is a little skeptical, the pharmaceutical company can say &#8220;You are right to be skeptical, one study doesn&#8217;t prove anything, but look &#8211; here&#8217;s another group that finds the same thing, here&#8217;s yet another group that finds the same thing, and here&#8217;s a replication that confirms both of them.&#8221;</p>
<p>And even though it looks like in our example the sketchy alternative medicine website only has one &#8220;very bad&#8221; study to go off of, they could easily supplement it with a bunch of merely &#8220;bad&#8221; studies. Or they could add all of those studies about slightly different things. Depakote is ineffective at treating bipolar depression. Depakote is ineffective at maintenance bipolar therapy. Depakote is ineffective at bipolar ii. </p>
<p>So just sum it up as &#8220;Smith et al 1987 found the drug ineffective, yet doctors continue to prescribe it anyway&#8221;. Even if you hunt down the original study (which no one does), Smith et al won&#8217;t say specifically &#8220;Do remember that this study is only looking at bipolar maintenance, which is a different topic from bipolar acute antimanic treatment, and we&#8217;re not saying anything about that.&#8221; It will just be titled something like &#8220;Depakote fails to separate from placebo in six month trial of 91 patients&#8221; and trust that the responsible professionals reading it are well aware of the difference between acute and maintenance treatments (hahahahaha).</p>
<p>So it&#8217;s not so much &#8220;beware the man of one study&#8221; as &#8220;beware the man of any number of studies less than a relatively complete and not-cherry-picked survey of the research&#8221;.</p>
<p><b>II.</b></p>
<p>I think medical science is still pretty healthy, and that the consensus of doctors and researchers is more-or-less right on most controversial medical issues. </p>
<p>(it&#8217;s the <i>uncontroversial</i> ones you have to worry about)</p>
<p>Politics doesn&#8217;t have this protection.</p>
<p>Like, take the minimum wage question (please). We all know about the Krueger and Card <A HREF="http://davidcard.berkeley.edu/papers/njmin-aer.pdf">study</A> in New Jersey that found no evidence that high minimum wages hurt the economy. We probably also know the counterclaims that it was <A HREF="http://nypost.com/2013/08/06/minimum-honesty-on-minimum-wage/">completely debunked</A> as despicable dishonest statistical malpractice. Maybe some of us know Card and Krueger wrote a <A HREF="http://www.jstor.org/discover/10.2307/2677856?uid=16785200&#038;uid=3739728&#038;uid=2&#038;uid=3&#038;uid=67&#038;uid=16754504&#038;uid=62&#038;uid=3739256&#038;sid=21104826014421">pretty convincing rebuttal</A> of those claims. Or that a bunch of large and methodologically advanced studies have come out since then, some finding no effect like <A HREF="https://escholarship.org/uc/item/86w5m90m">Dube</A>, others finding strong effects like <A HREF="https://economics.uchicago.edu/workshops/Rubinstein%20Yona%20Using%20Federal%20Minimum%20Wages%20Paper.pdf">Rubinstein</A> and <A HREF="http://econbrowser.com/archives/2014/12/new-estimates-of-the-effects-of-the-minimum-wage">Wither</A>. These are just examples; there are at least dozens and probably hundreds of studies on both sides.</p>
<p>But we can solve this with meta-analyses and systematic reviews, right?</p>
<p>Depends which one you want. Do you go with <A HREF="http://people.hss.caltech.edu/~camerer/SS280/Card-Kruger-AER_Jan95.pdf">this meta-analysis</A> of fourteen studies that shows that any presumed negative effect of high minimum wages is likely publication bias? With <A HREF="http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8543.2009.00723.x/abstract">this meta-analysis</A> of sixty-four studies that finds the same thing and discovers no effect of minimum wage after correcting for the problem? Or how about <A HREF="http://ftp.iza.org/dp4983.pdf">this meta-analysis</A> of fifty-five countries that does find effects in most of them? Maybe you prefer <A HREF="http://www.nber.org/papers/w12663.pdf">this systematic review</A> of a hundred or so studies that finds strong and consistent effects?</p>
<p>Can we trust news sources, think tanks, econblogs, and other institutions to sum up the state of the evidence?</p>
<p>CNN <A HREF="http://www.cnn.com/2011/09/16/opinion/saltsman-minimum-wage/">claims that</A> 85% of credible studies have shown the minimum wage causes job loss. But raisetheminimumwage.com <A HREF="http://www.raisetheminimumwage.com/pages/job-loss">declares that</A> &#8220;two decades of rigorous economic research have found that raising the minimum wage does not result in job loss&#8230;researchers and businesses alike agree today that the weight of the evidence shows no reduction in employment resulting from minimum wage increases.&#8221; Modeled Behavior <A HREF="http://modeledbehavior.com/2010/10/12/what-the-new-minimum-wage-research-says/">says</A> &#8220;the majority of the new minimum wage research supports the hypothesis that the minimum wage increases unemployment.&#8221; The Center for Budget and Policy Priorities <A HREF="http://www.cbpp.org/cms/?fa=view&#038;id=4075">says</A> &#8220;The common claim that raising the minimum wage reduces employment for low-wage workers is one of the most extensively studied issues in empirical economics.  The weight of the evidence is that such impacts are small to none.&#8221;</p>
<p>Okay, fine. What about economists? They seem like experts. What do they think?</p>
<p>Well, five hundred economists <A HREF="http://economistletter.com/">signed</A> a letter to policy makers saying that the science of economics shows increasing the minimum wage would be a bad idea. That sounds like a promising consensus&#8230;</p>
<p>..except that six hundred economists <A HREF="http://www.epi.org/minimum-wage-statement/">signed</A> a letter to policy makers saying that the science of economics shows increasing the minimum wage would be a <i>good</i> idea. (h/t <A HREF="http://gregmankiw.blogspot.com/2014/03/economists-divided-on-minimum-wage-hike.html">Greg Mankiw</A>)</p>
<p>Fine then. Let&#8217;s do a formal survey of economists. Now what?</p>
<p><A HREF="http://www.raisetheminimumwage.com/pages/job-loss">raisetheminimumwage.com</A>, an unbiased source if ever there was one, confidently tells us that &#8220;indicative is a 2013 survey by the University of Chicago’s Booth School of Business in which leading economists agreed by a nearly 4 to 1 margin that the benefits of raising and indexing the minimum wage outweigh the costs.&#8221;</p>
<p>But the Employment Policies Institute, which sounds like it&#8217;s trying <i>way</i> too hard to sound like an unbiased source, <A HREF="https://www.epionline.org/release/o185/">tells us that</A> &#8220;Over 73 percent of AEA labor economists believe that a significant increase will lead to employment losses and 68 percent think these employment losses fall disproportionately on the least skilled. Only 6 percent feel that minimum wage hikes are an efficient way to alleviate poverty.&#8221; </p>
<p>So the whole thing is fiendishly complicated. But unless you look very very hard, you will never know that.</p>
<p>If you are a conservative, what you will find on the sites you trust will be something like this:</p>
<blockquote><p>Economic theory has always shown that minimum wage increases decrease employment, but the Left has never been willing to accept this basic fact. In 1992, they trumpeted a single study by Card and Krueger that purported to show no negative effects from a minimum wage increase. This study was immediately debunked and found to be based on statistical malpractice and &#8220;massaging the numbers&#8221;. Since then, dozens of studies have come out confirming what we knew all along &#8211; that a high minimum wage is economic suicide. Systematic reviews and meta-analyses (Neumark 2006, Boockman 2010) consistently show that an overwhelming majority of the research agrees on this fact &#8211; as do 73% of economists. That&#8217;s why five hundred top economists recently signed a letter urging policy makers not to buy into discredited liberal minimum wage theories. Instead of listening to starry-eyed liberal woo, listen to the empirical evidence and an overwhelming majority of economists and oppose a raise in the minimum wage.</p></blockquote>
<p>And if you are a leftist, what you will find on the sites you trust will be something like this:</p>
<blockquote><p>People used to believe that the minimum wage decreased employment. But Card and Krueger&#8217;s famous 1992 study exploded that conventional wisdom. Since then, the results have been replicated over fifty times, and further meta-analyses (Card and Krueger 1995, Dube 2010) have found no evidence of any effect. Leading economists agree by a 4 to 1 margin that the benefits of raising the minimum wage outweigh the costs, and that&#8217;s why more than 600 of them have signed a petition telling the government to do exactly that. Instead of listening to conservative scare tactics based on long-debunked theories, listen to the empirical evidence and the overwhelming majority of economists and support a raise in the minimum wage.</p></blockquote>
<p>Go ahead. <A HREF="http://webcache.googleusercontent.com/search?hl=en&#038;q=cache:TcOxVD4OoyQJ:http://www.businessinsider.com/krueger-card-fast-food-minimum-wage-study-2013-8%2Bhttp://www.businessinsider.com/krueger-card-fast-food-minimum-wage-study-2013-8&#038;gbv=2&#038;&#038;ct=clnk">Google</A> <A HREF="http://www.rifuture.org/republicans-are-wrong-about-minimum-wage-and-economists-know-it.html">the</A> <A HREF="http://mic.com/articles/61573/the-argument-to-increase-minimum-wage-you-haven-t-heard">issue</A> <A HREF="http://www.washingtonexaminer.com/article/2521472">and</A> <A HREF="http://chicagopolicyreview.org/2014/05/20/do-you-want-a-higher-minimum-wage-with-that/">see</A> <A HREF="http://www.nextnewdeal.net/rediscovering-government/debunking-minimum-wage-myth-higher-wages-will-not-reduce-jobs">what</A> <A HREF="http://www.freedomworks.org/content/yes-minimum-wage-increases-reduce-employment-and-hurt-low-skilled-workers">stuff</A>  <A HREF="http://www.nationalreview.com/corner/275846/krueger-s-faulty-minimum-wage-study-carrie-l-lukas">comes</A> <A HREF="http://www.dailykos.com/story/2014/05/01/1296116/-Minimum-Wage-Maximum-Rage">up</A>. If it doesn&#8217;t quite match what I said above, it&#8217;s usually because they can&#8217;t even muster <i>that</i> level of scholarship. Half the sites just cite Card and Krueger and call it a day!</p>
<p>These sites with their long lists of studies and experts are super convincing. And half of them are wrong.</p>
<p>At some point in their education, most smart people usually learn not to credit arguments from authority. If someone says &#8220;Believe me about the minimum wage because I seem like a trustworthy guy,&#8221; most of them will have at least one neuron in their head that says &#8220;I should ask for some evidence&#8221;. If they&#8217;re <i>really</i> smart, they&#8217;ll use the magic words &#8220;peer-reviewed experimental studies.&#8221;</p>
<p>But I worry that most smart people have <i>not</i> learned that a list of dozens of studies, several meta-analyses, hundreds of experts, and expert surveys showing almost all academics support your thesis &#8211; can <i>still</i> be bullshit. </p>
<p>Which is too bad, because that&#8217;s exactly what people who want to bamboozle an educated audience are going to use.</p>
<p><b>III.</b></p>
<p>I do not want to preach radical skepticism.</p>
<p>For example, on the minimum wage issue, I notice only one side has presented a funnel plot. A funnel plot is usually used to investigate publication bias, but it has another use as well &#8211; it&#8217;s pretty much an exact presentation of the &#8220;bell curve&#8221; we talked about above.</p>
<p><center><IMG SRC="http://upload.wikimedia.org/wikipedia/commons/8/82/Funnel_Graph_of_Estimated_Minimum_Wage_Effects.jpg"></center></p>
<p>This is more of a needle curve than a bell curve, but the point still stands. We see it&#8217;s centered around 0, which suggests that zero effect is the real signal among all this noise. The bell skews more to the left than to the right, which means more studies have found negative effects of the minimum wage than positive effects. But since the bell curve is asymmetrical, we interpret that as <i>probably</i> publication bias. So all in all, I think there&#8217;s at least some evidence that the liberals are right on this one.</p>
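<p>For what it&#8217;s worth, the asymmetry judgment can be made quantitative with something like Egger&#8217;s regression test: regress each study&#8217;s standardized effect (effect divided by its standard error) on its precision, and look at the intercept &#8211; near zero is consistent with a symmetric funnel. A rough sketch on simulated studies (the data here are synthetic, not the actual minimum wage literature):</p>

```python
import numpy as np

def egger_intercept(effects, std_errors):
    """Egger's test statistic: intercept of the OLS regression of
    standardized effect on precision. Near zero => symmetric funnel."""
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    y = effects / se           # standardized effects
    x = 1.0 / se               # precision
    A = np.column_stack([np.ones_like(x), x])  # intercept + slope design
    (intercept, slope), *_ = np.linalg.lstsq(A, y, rcond=None)
    return intercept, slope

rng = np.random.default_rng(0)
se = rng.uniform(0.05, 0.5, size=200)        # per-study standard errors
effects = 0.0 + rng.normal(0, se)            # true effect 0, no publication bias
intercept, slope = egger_intercept(effects, se)
print(intercept, slope)  # intercept near 0 indicates a symmetric funnel
```

<p>With publication bias &#8211; say, small studies with null results going unpublished &#8211; the intercept drifts away from zero even when the true effect is unchanged.</p>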
<p>Unless, of course, someone has realized that I&#8217;ve wised up to the studies and meta-analyses and expert surveys, and figured out a way to hack <i>funnel plots</i>, which I am totally not ruling out.</p>
<p>(okay, I <i>kind of</i> want to preach radical skepticism)</p>
<p>Also, I should probably mention that it&#8217;s much more complicated than one side being right, and that the minimum wage probably works differently depending on what industry you&#8217;re talking about, whether it&#8217;s state wage or federal wage, whether it&#8217;s a recession or a boom, whether we&#8217;re talking about increasing from $5 to $6 or from $20 to $30, etc, etc, etc. There are eleven studies on that plot showing an effect even worse than -5, and very possibly they are all accurate for whatever subproblem they have chosen to study &#8211; much like the example with Depakote, where it might be an effective antimanic but a terrible antidepressant.</p>
<p>(radical skepticism actually sounds a lot better than figuring this all out).</p>
<p><b>IV.</b></p>
<p>But the question remains: what happens when (like in most cases) you don&#8217;t have a funnel plot?</p>
<p>I don&#8217;t have a good positive answer. I do have several good <i>negative</i> answers.</p>
<p>Decrease your confidence about most things if you&#8217;re not sure that you&#8217;ve investigated every piece of evidence.</p>
<p>Do not trust websites which are obviously biased (eg Free Republic, Daily Kos, Dr. Oz) when they tell you they&#8217;re going to give you &#8220;the state of the evidence&#8221; on a certain issue, even if the evidence seems very stately indeed. This goes double for any site that contains a list of &#8220;myths and facts about X&#8221;, quadruple for any site that uses phrases like &#8220;ingroup member uses actual FACTS to DEMOLISH the outgroup&#8217;s lies about Y&#8221;, and octuple for RationalWiki.</p>
<p>Most important, even if someone gives you what seems like overwhelming evidence in favor of a certain point of view, don&#8217;t trust it until you&#8217;ve done a simple Google search to see if the opposite side has equally overwhelming evidence. </p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/12/12/beware-the-man-of-one-study/feed/</wfw:commentRss>
		<slash:comments>270</slash:comments>
		</item>
		<item>
		<title>Why I Am Not Rene Descartes</title>
		<link>http://slatestarcodex.com/2014/11/27/why-i-am-not-rene-descartes/</link>
		<comments>http://slatestarcodex.com/2014/11/27/why-i-am-not-rene-descartes/#comments</comments>
		<pubDate>Thu, 27 Nov 2014 09:47:20 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[philosophy]]></category>
		<category><![CDATA[rationality]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=3308</guid>
		<description><![CDATA[I. Imagine that somebody wrote: Some of my friends support Ron Paul. I think that&#8217;s wrong. After all, he&#8217;s a libertarian, and Wikipedia says a libertarian is a person who believes in free will. But free will is impossible in &#8230; <a href="http://slatestarcodex.com/2014/11/27/why-i-am-not-rene-descartes/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><b>I.</b></p>
<p>Imagine that somebody wrote:<br />
<blockquote>Some of my friends support Ron Paul. I think that&#8217;s wrong. After all, he&#8217;s a libertarian, and <A HREF="http://en.wikipedia.org/wiki/Libertarianism_%28metaphysics%29">Wikipedia says</A> a libertarian is a person who believes in free will. But free will is impossible in a deterministic universe. Ron Paul&#8217;s belief in free will is clearly why there are so few Swiss people among Ron Paul supporters, since Swiss people are Calvinists and so understand determinism better.</p></blockquote>
<p>This is sort of how I feel reading <A HREF="http://freethoughtblogs.com/almostdiamonds/2014/11/24/why-i-am-not-a-rationalist/">Why I Am Not A Rationalist</A> on Almost Diamonds. </p>
<p>I&#8217;m having trouble not quoting it in full:<br />
<blockquote>I’m not a rationalist because I’m an empiricist. I find no value in “logical” arguments that are based in intuition and “common sense” rather than data. Such arguments can only perpetuate ignorance by giving it a shiny veneer of reason that it hasn’t earned.</p>
<p>I boggle that we haven’t sorted this out yet. I particularly boggle that atheists of my acquaintance promote rationalism over empiricism. The tensions between basic rationalism and empiricism parallel the tensions between church theology and the philosophy of science. We have no problem rejecting church theology as not being grounded in evidence. Why do so many atheists praise rationalism?</p>
<p>Let me stop here and make it clear that I’m not rejecting logic or critical thinking. Goodness knows that I’ve spent hours just this summer helping people share useful heuristics that will, in general, help them get to the right answers more often. I’ve led workshops and panels on evaluating science journalism and scientific results. When I’ve spoken to comparative religion classes in the past, I’ve talked about religious skepticism with an emphasis on the basics of epistemology.</p>
<p>The problem isn’t logic or critical thinking. The problem is a tendency to view those skills as central to getting the right answers. The problem is a tendency to view them as the solution. They’re not, and the idea that they are is in distinct contrast with the way humanity has actually grown in knowledge and understanding of the world.</p>
<p>Rationalism is, at heart, an individualist endeavor. It says that the path to getting things right lies in improving the self, improving the thinking of one person at a time. It’s not surprising that the ideology and movement appeal largely to the young, to men, to white people, to libertarians. It focuses primarily on individual action.</p>
<p>That’s not how we’ve come to learn about our world, though. It’s not how science or any other field of scholarship works. Scholarship is a collaborative process. And I don’t just mean peer review and working groups, though those are important as well.</p>
<p>Scholars add to our knowledge of the world by building on the work of others. They apply tools and methods developed by others to new material and questions. They study the work of other scholars to inspire them and give them the background to ask and answer new questions. They evaluate the work of others and consolidate the best of it into larger theoretical frameworks. Without the work of scholars before them, scholars today and evermore would always be recreating basic work and basic errors.</p>
<p>All too often, I find rationalists taking this repetitive approach. They think but they don’t study. As a consequence, they repeat the same naïve errors time and again. This is particularly noticeable when they engage in social or political theorizing by extrapolating from information they learned in secondary school and 101-level college classes, picked up in pop culture, or provided by people pushing a political cause. Their conclusions are necessarily as limited as their source material and reflect all its cultural biases.</p></blockquote>
<p>As best I can tell, it is conflating a misunderstood version of rationalism (Descartes) with a misunderstood version of rationalism (Yudkowsky) and ending up with something unrelated to either, in the most bizarre possible way. But since there are commenters there who seem to <i>agree</i> with it, better nip this in the bud before it spreads.</p>
<p><b>II.</b></p>
<p>Rationalism (Descartes) is not simply the belief that sitting and thinking is <i>more useful than</i> observation. Descartes-style rationalism is complicated, but involves the claim that certain concepts are known prior to experience. For example, it is possible to understand mathematical truths like 1 + 1 = 2 separately from our experience of observing people add one apple to another. It is also possible to know them more completely than our knowledge of the external world, since our external senses can only tell us that all additions of one plus one have equalled two <i>so far</i>, but our reason can tell us something we have never observed &#8211; that it is necessarily true that everywhere and for all time 1 + 1 will equal two.</p>
<p>This so obviously gets bogged down in definitions of what is or isn&#8217;t &#8220;prior to experience&#8221; or a &#8220;concept&#8221; that philosophers today have mostly moved on to bigger and better things like hitting people with trolleys. It has nevertheless gotten a mild boost of interest recently with Chomsky&#8217;s claim that some features of human language are innate, and evolutionary psychology&#8217;s claim that certain preferences like fear of spiders may be innate. You can learn much more than you wanted to know at the <A HREF="http://plato.stanford.edu/entries/rationalism-empiricism/">Stanford Encyclopedia of Philosophy</A>.</p>
<p>But in no case does the debate ever resemble Almost Diamonds&#8217; naive conception of some people thinking they have to study the world and other people sitting and speculating in armchairs and playing at self-improvement because they don&#8217;t want to get their hands dirty in the real world. In fact, Descartes himself was a devoted experimentalist &#8211; probably too devoted. His intense interest in anatomy combined with his belief that animals lack souls made him one of the most prolific vivisectionists of all time. We <A HREF="http://books.google.com/books?id=_quBG-_aqJsC&#038;pg=PA134&#038;lpg=PA134&#038;dq=Descartes+vivisection&#038;source=bl&#038;ots=EHyJzL1XFi&#038;sig=whAp0SzBZxPDOdjspol206igBcs&#038;hl=en&#038;sa=X&#038;ei=KOl2VLWVMI_joASNgYKQBQ&#038;ved=0CFcQ6AEwBw#v=onepage&#038;q=Descartes%20vivisection&#038;f=false">can thank him</A> for such useful pieces of scientific information as &#8220;If you cut off the end of the heart of a living dog and insert your finger through the incision into one of the concavities, you will clearly feel that every time the heart shortens, it presses your finger, and stops pressing every time it lengthens.&#8221; One can accuse the author of this statement of a <i>lot</i> of things, but &#8220;not willing to get his hands dirty&#8221; isn&#8217;t one of them.</p>
<p>Likewise, Leibniz, the <i>other</i> most famous partisan of rationalism of all time, also made notable contributions to physics, geology, embryology, paleontology, and medicine. Either he was out exploring the world, or he had <i>some</i> armchair. Particularly ironically for Almost Diamonds&#8217; thesis, he was one of the most prominent advocates of research as a collaborative endeavor, and founded various scientific discussion societies around Europe as well as calling for a giant international database of all scientific findings. All of this was entirely consistent with, and informed by, his rationalism.</p>
<p>The people on r/philosophy also do a <A HREF="http://www.reddit.com/r/philosophy/comments/2na7ig/why_i_am_not_a_rationalist/cmbrbnt">good job</A> of explaining this mistake in that blog&#8217;s conception of rationalism (the people on r/badphilosophy do an, um, <A HREF="http://www.reddit.com/r/badphilosophy/comments/2na8ts/freethought_blogger_gives_takedown_objections_to/">less good job</A>).</p>
<p><b>III.</b></p>
<p>But I think Almost Diamonds is mostly talking (also talking?) about rationalism (Yudkowsky), ie internet rationalism. After all, she mentions &#8220;the rationalist movement&#8221; and says they&#8217;re about &#8220;understanding cognitive biases&#8221; and &#8220;appeal largely to&#8230;libertarians&#8221;.</p>
<p>This is even more wrong.</p>
<p>At least rationalism (Descartes) is <i>sort of</i> about some kind of disconnect with empirical evidence. In the context of rationalism (Yudkowsky) this is about the same level of error as expecting Ron Paul to be a philosopher preaching free will just because &#8220;libertarianism&#8221; can mean something in metaphysics. Rationalism (Yudkowsky) and rationalism (Descartes) share a name, nothing more.</p>
<p>Almost Diamonds says:<br />
<blockquote>I’m not a rationalist because I’m an empiricist. I find no value in “logical” arguments that are based in intuition and “common sense” rather than data&#8230;I boggle that we haven’t sorted this out yet. I particularly boggle that atheists of my acquaintance promote rationalism over empiricism.</p></blockquote>
<p>Meanwhile, the founding document of rationalism (Yudkowsky), the <A HREF="http://yudkowsky.net/rational/virtues/">Twelve Virtues of Rationality</A>, states:<br />
<blockquote><b>The sixth virtue is empiricism.</b> The roots of knowledge are in observation and its fruit is prediction. What tree grows without roots?&#8230;Do not ask which beliefs to profess, but which experiences to anticipate.</p></blockquote>
<p>It adds:<br />
<blockquote>You cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse. You must walk through the city and draw lines on paper that correspond to what you see. If, seeing the city unclearly, you think that you can shift a line just a little to the right, just a little to the left, according to your caprice, this is just the same mistake.</p></blockquote>
<p>This makes the same point as Almost Diamonds. </p>
<p>Now, granted, some movements can have Official Founding Beliefs that they don&#8217;t follow. Many rationalists take the Virtues very seriously (one just let me know it is <A HREF="https://fbcdn-sphotos-c-a.akamaihd.net/hphotos-ak-xaf1/t31.0-8/q81/p720x720/10714449_10101983674058085_4343412677793869329_o.jpg">hanging on the wall of his group house</A>) but perhaps like some of Jesus&#8217; more lovey-dovey commandments or the inconvenient parts of the Constitution, they are honored more in the breach than in the observance?</p>
<p>I don&#8217;t think so. Rationalists have taken this idea and run with it, which is why we are so obsessed with things like <A HREF="http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/">making beliefs pay rent in experience</A>, discussion of &#8220;Bayesian updating&#8221;, and even making monetary bets on our beliefs to train ourselves to make sure they conform to real world outcomes. It&#8217;s why the rationalist proverb, upon being given a cool theory, goes &#8220;Name three examples&#8221;.</p>
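<p>(If &#8220;Bayesian updating&#8221; sounds mysterious, it&#8217;s just Bayes&#8217; theorem applied to your own beliefs. A toy sketch, with invented numbers:)</p>

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' theorem."""
    joint = prior * p_e_given_h
    return joint / (joint + (1 - prior) * p_e_given_not_h)

# Suppose you're 60% confident some effect is real, and a successful
# replication is four times likelier if it is (0.8 vs 0.2):
posterior = bayes_update(0.6, 0.8, 0.2)
print(round(posterior, 3))  # 0.857
```

<p>(The point of betting on beliefs is exactly this: a number like 0.857 is something the world can prove wrong, in a way that &#8220;I&#8217;m pretty sure&#8221; never is.)</p>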
<p>A typical (okay, I lied, highly extreme) example is Gwern, who consumed <A HREF="http://www.gwern.net/Nootropics">pretty much every chemical</A> and then carefully recorded its effects on his sleep, emotions, performance on cognitive tests, et cetera and then performed Bayesian analysis on it. There&#8217;s obviously <i>something</i> wrong with that, but it&#8217;s not lack of empiricism!</p>
<p>Diamonds:<br />
<blockquote>All too often, I find rationalists taking this repetitive approach. They think but they don’t study.</p></blockquote>
<p>Twelve Virtues again:<br />
<blockquote><b>The eleventh virtue is scholarship.</b> Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. If you are gluttonous you will become vaster than mountains.</p></blockquote>
<p>And in accordance with this, I will put the rationalist movement up, mano a mano, against any other movement on the entire Internet in terms of the quality of scholarship and empiricism.</p>
<p>Like, holy @#$%, we have <A HREF="http://squid314.livejournal.com/330825.html">Luke Muehlhauser</A>, who can&#8217;t write a simple life hacks post on productivity without <A HREF="http://lesswrong.com/lw/3w3/how_to_beat_procrastination/">fifty-seven different journal article citations</A>, and who writes at great length about <A HREF="http://lesswrong.com/lw/5me/scholarship_how_to_do_it_efficiently/">how to study a field of research effectively</A>, <A HREF="http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/">the best textbooks on every subject</A>, <A HREF="http://lesswrong.com/lw/7m8/software_tools_for_efficient_scholarship/">software tools for efficient scholarship</A>, etc, etc, etc.</p>
<p>We have a community-wide survey that collects information on one hundred thirty-six demographic categories, for over fifteen hundred community members, and then a tradition of obsessively arguing about the implications of the results for several weeks every year. </p>
<p>We know that 20% of rationalists over the age of 35 have Ph. Ds. 54% have either a Ph. D, an MD, or a Master&#8217;s!</p>
<p>And not to toot my own horn, but there&#8217;s a reason this blog&#8217;s series of impromptu literature reviews is called &#8220;Much More Than You Wanted To Know&#8221; and has investigated the literature on things like <A HREF="http://slatestarcodex.com/2014/01/05/marijuana-much-more-than-you-wanted-to-know/">marijuana legalization</A> and <A HREF="http://slatestarcodex.com/2014/07/07/ssris-much-more-than-you-wanted-to-know/">SSRI effectiveness</A>, fifty or sixty studies per review, to a degree that&#8217;s gotten some coverage on major news sites including Andrew Sullivan&#8217;s blog and Vox.</p>
<p>And&#8230;wait a second! The author of that blog knows Kate Donovan! How do you know Kate Donovan and still accuse rationalists of &#8220;not studying&#8221;?!?! DO YOU EVEN HAVE EYES?</p>
<p>Finally, Almost Diamonds says:<br />
<blockquote>Even in the modern rationalist movement, which speaks more to collecting evidence than classical rationalism, I have yet to see any emphasis on epistemic humility. </p></blockquote>
<p>But the Twelve Virtues says:<br />
<blockquote><b>The eighth virtue is humility.</b> To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty. Who are most humble? Those who most skillfully prepare for the deepest and most catastrophic errors in their own beliefs and plans. Because this world contains many whose grasp of rationality is abysmal, beginning students of rationality win arguments and acquire an exaggerated view of their own abilities. But it is useless to be superior: Life is not graded on a curve. The best physicist in ancient Greece could not calculate the path of a falling apple. There is no guarantee that adequacy is possible given your hardest effort; therefore spare no thought for whether others are doing worse. If you compare yourself to others you will not see the biases that all humans share. To be human is to make ten thousand errors. No one in this world achieves perfection.</p></blockquote>
<p>Really? &#8220;Yet to see any emphasis on epistemic humility&#8221;? The most important mission statement of the rationalist movement says that one of the movement&#8217;s twelve founding principles is humility, then waxes rhapsodic about it. Seriously, we&#8217;re the people who keep calling ourselves &#8220;aspiring rationalists&#8221; to remind ourselves that we&#8217;re not nearly as rational as we should be yet! We&#8217;re the people who <A HREF="http://lesswrong.com/lw/7z9/1001_predictionbook_nights/">obsessively calibrate</A> with PredictionBook et cetera to remind ourselves just how high our error rate is. We&#8217;re the people who keep a community-wide <A HREF="http://lesswrong.com/lw/ikn/mistakes_repository/">mistakes repository</A> (with Gwern once again <A HREF="http://www.gwern.net/Mistakes">going above and beyond</A>). THERE ARE <A HREF="http://lesswrong.com/tag/Overconfidence/?count=32&#038;after=rd_1eb">FORTY ONE DIFFERENT POSTS ON LESS WRONG</A> TAGGED &#8216;OVERCONFIDENCE&#8217;!</p>
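<p>(Calibration here has a concrete meaning: score every probabilistic prediction against its outcome. One standard measure is the Brier score &#8211; a minimal sketch with invented predictions:)</p>

```python
def brier_score(forecasts):
    """Mean squared error of probability forecasts.
    forecasts: list of (assigned probability, outcome 0 or 1).
    Lower is better; always guessing 0.5 scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Five predictions with their outcomes (1 = came true):
preds = [(0.7, 1), (0.7, 1), (0.7, 0), (0.9, 1), (0.9, 1)]
print(round(brier_score(preds), 3))  # 0.138
```

<p>(Track enough of these and your score tells you, unflatteringly, exactly how overconfident you are.)</p>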
<p>Almost Diamonds dislikes rationalism because she believes in an emphasis on empiricism over armchair speculation, careful scholarship over ignorance, and epistemic humility. But she&#8217;s just described the rationalist movement almost to a &#8216;T&#8217;! She&#8217;s attacking the rationalist movement for not living up to her ideal philosophy which is&#8230;the precise philosophy of the rationalist movement!</p>
<p>This is mean, but I&#8217;m going to say it. Almost Diamonds describes the rationalist movement in a way that even the most cursory glance at any rationalist site or document would disprove. Her opinion seems to be based entirely on a distorted idea of the dictionary definition of the word &#8220;rationalism&#8221;.</p>
<p>It&#8217;s almost like she&#8217;s, I don&#8217;t know, sitting in an armchair speculating about what rationalism must be, rather than going out and looking for evidence.</p>
<p>ಠ_ಠ</p>
<p><b>IV.</b></p>
<p>Also, can I just mention that one of the commenters on that blog says that the problems with the rationalist movement are a lot like the problems with frequentist statistics, and what would really help them is if they investigated Bayesianism? I swear I am not joking. I swear <A HREF="http://freethoughtblogs.com/almostdiamonds/2014/11/24/why-i-am-not-a-rationalist/#comment-2995160">this is a thing that happened</A>.</p>
<p><b>V.</b></p>
<p>But aside from all this, I do think there&#8217;s an important point that needs to be made here. That is &#8211; given that empiricism and scholarship is obviously super-important, why is it not enough?</p>
<p>The very short answer is &#8220;A meta-analysis of hundreds of studies is <A HREF="http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/">what tells you that psychic powers exist</A>. Critical thinking is what helps you figure out whether to trust that result.&#8221;</p>
<p>The longer answer: rationality is about drawing correct inferences from limited, confusing, contradictory, or maliciously doctored facts. Even the world&#8217;s most stubborn creationist would have to realize the truth of evolution if you could put her in a time machine and make her watch all 3 billion years of life on Earth. But more rational people can realize the truth of evolution after reading a couple of good biology textbooks and having some questions answered. And Darwin could realize the truth of evolution just by observing the natural world and speculating about finches. There&#8217;s something I do better than the creationist and Darwin does better than me, and it&#8217;s not &#8220;have access to data&#8221;.</p>
<p>Life is made up of limited, confusing, contradictory, and maliciously doctored facts. Anyone who says otherwise is either sticking to such incredibly easy solved problems that they never encounter anything outside their comfort level, or so closed-minded that they shut out any evidence that challenges their beliefs.</p>
<p>Given this state of affairs, obviously it&#8217;s useful to have as much evidence as possible, in the same way it&#8217;s useful to have as much money as possible. But equally obviously it&#8217;s useful to be able to use a limited amount of evidence wisely, in the same way it&#8217;s useful to be able to use a limited amount of money wisely.</p>
<p>I recently <A HREF="http://slatestarcodex.com/2014/11/25/race-and-justice-much-more-than-you-wanted-to-know/">reviewed thirty-five studies</A> on racism in the criminal justice system, a very controversial topic. But I would suggest that almost nobody would change their opinion about this based on simple number of studies reviewed. That is, if a person who has read five studies and believes the system is racist encountered another person who has read ten studies and believes the system is fair, she would not simply say &#8220;Well, you&#8217;ve read more studies than I have, so I guess you&#8217;re right and I&#8217;m wrong.&#8221; She would probably say &#8220;That&#8217;s interesting, but I need to double-check the methodologies of those studies, make sure they mean what you think they mean, make sure you haven&#8217;t specifically selected only studies that prove your view, and make sure you haven&#8217;t fallen into one of a million other possible failure modes.&#8221;</p>
<p>The part where you have read 5 studies but I have read 10 is the empiricism that Almost Diamonds would say is the only meaningful skill that exists. The part where we want to make sure they&#8217;re good studies and I understood them right is rationality. I would trust the opinion of a rational person who knows one study far more than that of an irrational person who knows fifty. If you don&#8217;t believe me, I invite you to check out the hundreds of studies published in creation science and homeopathy journals every year.</p>
<p>Or what if you&#8217;re working in an area where you don&#8217;t even have hypotheses yet? It&#8217;s your job to explain or predict something that&#8217;s never been explained or predicted before. Sure, you&#8217;ve got to have a background level of expertise and scholarship, but no matter how many x-ray crystallographers you have somebody has to be the one to say &#8220;You know, our data would make sense if this molecule were in the shape of a helix.&#8221; What if you&#8217;re trying to predict the future &#8211; like in what year fusion power will become a reality, or whether a stock is going to go up and down &#8211; and you&#8217;ve already reviewed all of the relevant evidence? What then?</p>
<p>If somebody says that rationality is all nice and well, but not really important because you can just use the facts, then this is the surest sign of somebody who doesn&#8217;t possess the skill and doesn&#8217;t even realize there is a skill there to be possessed. They have inoculated themselves with <A HREF="http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/">the cowpox of doubt</A>, trained themselves on easy problems so long that they&#8217;ve dulled their senses and forgotten that problems that require more thought than just looking up the universally-agreed-upon scientific consensus in Wikipedia even exist.</p>
<p>There is one paragraph for which I will give Almost Diamonds credit: she is partly right when she says rationality is fundamentally an individual endeavor. I mean, only in the sense that martial arts is an individual endeavor &#8211; you can train with lots of people, you <i>have</i> to train with lots of people, you&#8217;ve got to learn the craft from others and stand on the shoulders of giants &#8211; but in the end you&#8217;ve got to punch the other guy yourself.</p>
<p>Thousands of scientists have worked their entire lives to get you the evidence in favor of evolution. But thousands of creationists have worked <i>their</i> entire lives to obfuscate and confuse that evidence. Thousands of scientists have studied the criminal justice system, but many of them aren&#8217;t very good at it, many of them disagree with one another, and very likely none of them have worked on the exact subsubproblem that you&#8217;re interested in. Other people can present the facts to you, but in the end you&#8217;re the one deciding what and who to believe. Just like everybody dies alone, everybody decides on their beliefs alone. And rationality is what allows them to do that accurately. </p>
<p>If you&#8217;re investigating a problem even slightly more interesting than evolution versus creationism, you will always encounter limited, confusing, contradictory, and maliciously doctored facts. The more rationality you have, the greater your ability to draw accurate conclusions from this mess. And the differences aren&#8217;t subtle.</p>
<p>A superintelligence can take a grain of sand and envision the entire universe.</p>
<p>Einstein <A HREF="http://lesswrong.com/lw/jo/einsteins_arrogance/">can take</A> a few basic facts about light and gravity and figure out the theory of relativity.</p>
<p>I can take a bunch of conflicting studies and feel sort of confident I&#8217;ve at least figured out the gist of the topic.</p>
<p>Some people can&#8217;t take a movement that emphasizes on its founding document &#8220;OUR VIRTUES ARE EMPIRICISM, SCHOLARSHIP, AND HUMILITY&#8221; and figure out that it considers empiricism, scholarship, and humility to be virtues.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/11/27/why-i-am-not-rene-descartes/feed/</wfw:commentRss>
		<slash:comments>566</slash:comments>
		</item>
		<item>
		<title>The Categories Were Made For Man, Not Man For The Categories</title>
		<link>http://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/</link>
		<comments>http://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/#comments</comments>
		<pubDate>Fri, 21 Nov 2014 14:34:26 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[geography]]></category>
		<category><![CDATA[race/gender/etc]]></category>
		<category><![CDATA[rationality]]></category>
		<category><![CDATA[whale metaphor blogging]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=3244</guid>
		<description><![CDATA[I. &#8220;Silliest internet atheist argument&#8221; is a hotly contested title, but I have a special place in my heart for the people who occasionally try to prove Biblical fallibility by pointing out whales are not a type of fish. (this &#8230; <a href="http://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><b>I.</b></p>
<p>&#8220;Silliest internet atheist argument&#8221; is a hotly contested title, but I have a special place in my heart for the people who occasionally try to prove Biblical fallibility by pointing out whales are not a type of fish.</p>
<p>(this is going to end up being a metaphor for something. Yup, we&#8217;re back to Whale Metaphor Blogging.)</p>
<p>The argument goes like this. Jonah got swallowed by a whale. But the Bible says Jonah got swallowed by a big fish. So the Bible seems to think whales are just big fish. Therefore the Bible is fallible. Therefore, the Bible was not written by God.</p>
<p>The first problem here is that &#8220;whale&#8221; is just our own modern interpretation of the Bible. For all we know, Jonah was swallowed by a really really really big herring.</p>
<p>The second problem is that if the ancient Hebrews want to call whales a kind of fish, let them call whales a kind of fish.</p>
<p>I&#8217;m not making the weak and boring claim that since they&#8217;d never discovered genetics they didn&#8217;t know better. I am making the much stronger claim that, even if the ancient Hebrews had taken enough of a break from murdering Philistines and building tabernacles to sequence the genomes of all known species of aquatic animals, there&#8217;s nothing whatsoever wrong, false, or incorrect with them calling a whale a fish.</p>
<p>Now, there&#8217;s something wrong with saying &#8220;whales are phylogenetically just as closely related to bass, herring, and salmon as these three are related to each other.&#8221; What&#8217;s wrong with the statement is that it&#8217;s false. But saying &#8220;whales are a kind of fish&#8221; isn&#8217;t.</p>
<p>Suppose you travel back in time to ancient Israel and try to explain to King Solomon that whales are a kind of mammal and not a kind of fish.</p>
<p>Your translator isn&#8217;t very good, so you pause to explain &#8220;fish&#8221; and &#8220;mammal&#8221; to Solomon. You tell him that fish is &#8220;the sort of thing herring, bass, and salmon are&#8221; and mammal is &#8220;the sort of thing cows, sheep, and pigs are&#8221;. Solomon tells you that your word &#8220;fish&#8221; is Hebrew <i>dag</i> and your word &#8220;mammal&#8221; is Hebrew <i>behemah</i>.</p>
<p>So you try again and say that a whale is a <i>behemah</i>, not a <i>dag</i>. Solomon laughs at you and says you&#8217;re an idiot.</p>
<p>You explain that you&#8217;re not an idiot, that in fact all kinds of animals have things called genes, and the genes of a whale are much closer to those of the other <i>behemah</i> than those of the <i>dag</i>.</p>
<p>Solomon says he&#8217;s never heard of these gene things before, and that maybe genetics is involved in your weird foreign words &#8220;fish&#8221; and &#8220;mammal&#8221;, but <i>dag</i> are just finned creatures that swim in the sea, and <i>behemah</i> are just legged creatures that walk on the Earth.</p>
<p>(like the <i>kelev</i> and the <i>parah</i> and the <i>gavagai</i>)</p>
<p>You try to explain that no, Solomon is wrong, <i>dag</i> are actually defined not by their swimming-in-sea-with-fins-ness, but by their genes.</p>
<p>Solomon says you didn&#8217;t even <i>know</i> the word <i>dag</i> ten minutes ago, and now suddenly you think you know what it means better than he does, who has been using it his entire life? Who died and made <i>you</i> an expert on Biblical Hebrew?</p>
<p>You try to explain that whales actually have tiny little hairs, too small to even see, just as cows and sheep and pigs have hair.</p>
<p>Solomon says oh God, you are so annoying, who the hell cares whether whales have tiny little hairs or not. In fact, the only thing Solomon cares about is whether responsibilities for his kingdom&#8217;s production of blubber and whale oil should go under his Ministry of Dag or Ministry of Behemah. The Ministry of Dag is based on the coast and has a lot of people who work on ships. The Ministry of Behemah has a strong presence inland and lots of people who hunt on horseback. So please (he continues) keep going on about how whales have little tiny hairs.</p>
<p>It&#8217;s easy to see that Solomon has a point, and that if he wants to define <i>behemah</i> as four-legged-land-dwellers that&#8217;s his right, and no better or worse than your definition of &#8220;creatures in a certain part of the phylogenetic tree&#8221;. Indeed, it might even be that if you spent ten years teaching Solomon all about the theory of genetics and evolution (which would be hilarious &#8211; think how annoyed the creationists would get) he might still say &#8220;That&#8217;s very interesting, and I can see why we need a word to describe creatures closely related along the phylogenetic tree, but make up your own word, because <i>behemah</i> already means &#8216;four-legged-land-dweller&#8217;.&#8221;</p>
<p>Now imagine that instead of talking to King Solomon, you&#8217;re talking to that guy from Duck Dynasty with the really crazy beard (I realize that may describe more than one person), who stands in for all uneducated rednecks in the same way King Solomon stands in for all Biblical Hebrews.</p>
<p>&#8220;Ah course a whale is a feesh, ya moron&#8221; he says in his heavy Southern accent.</p>
<p>&#8220;No it isn&#8217;t,&#8221; you say. &#8220;A fish is a creature phylogenetically related to various other fish, and with certain defining anatomical features. It says so right here in this biology textbook.&#8221;</p>
<p>&#8220;Well,&#8221; Crazy Beard Guy tells you, &#8220;Ah reckon that might be what a fish is, but a <i>feesh</i> is some&#8217;in that swims in the orshun.&#8221;</p>
<p>With a sinking feeling in your stomach, you spend ten years turning Crazy Beard Guy into a world expert on phylogenetics and evolutionary theory. Although the Duck Dynasty show becomes <i>much</i> more interesting, you fail to budge him a bit on the meaning of &#8220;feesh&#8221;.</p>
<p>It&#8217;s easy to see here that &#8220;fish&#8221; and &#8220;feesh&#8221; can be different just as &#8220;fish&#8221; and &#8220;<i>dag</i>&#8221; can be different.</p>
<p>You can point out how many important professors of ichthyology in fancy suits use your definition, and how only a couple of people with really weird facial hair use his. But now you&#8217;re making a status argument, not a factual argument. Your argument is &#8220;conform to the way all the cool people use the word &#8216;fish'&#8221;, not &#8220;a whale is really and truly not a fish&#8221;.</p>
<p>There are facts of the matter on each individual point &#8211; whether a whale has fins, whether a whale lives in the ocean, whether a whale has tiny hairs, et cetera. But there is no fact of the matter on whether a whale is a fish. The argument is entirely semantic.</p>
<p>So this is the second reason why this particular objection to the Bible is silly. If God wants to call a whale a big fish, stop telling God what to do.</p>
<p>(also, <A HREF="http://errancy.org/bats.html">bats</A>)</p>
<p><b>II.</b></p>
<p>When terms are <i>not</i> defined directly by God, we need our own methods of dividing them into categories.</p>
<p>The essay <A HREF="http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/">&#8220;How An Algorithm Feels From The Inside&#8221;</A> is a gift that keeps on giving. You can get a reputation as a daring and original thinker just by copy-pasting it into different arguments with a couple of appropriate words substituted for one another, Mad Libs-like. It is the solution to something like 25% of extant philosophical problems.</p>
<p>It starts with a discussion of whether or not Pluto is a planet. Planets tend to share many characteristics in common. For example, they are large, round, have normal shaped orbits lined up with the plane of the ecliptic, have cleared out a certain area of space, and are at least kind of close to the Sun as opposed to way out in the Oort Cloud.</p>
<p>One could imagine a brain that thought about these characteristics like this:</p>
<p>One could imagine this model telling you everything you need to know. If an object is larger, it&#8217;s more likely to be round and in cis-Neptunian space. If an object has failed to clear its orbit of debris, it&#8217;s more likely to have a skewed orbit relative to the plane of the ecliptic. We could give each of these relationships Bayesian weights and say things like large objects have a 32% chance of being in cis-Neptunian space and small objects an 86% chance. Or whatever.</p>
<p>But this model has some big problems. For one thing, if you inscribe it in blood, you accidentally summon the Devil. But second, it&#8217;s computationally very complicated. Each attribute affects each other attribute which affects it in turn and so on in an infinite cycle, so that its behavior tends to be chaotic and unpredictable.</p>
<p>What the human brain actually seems to do is to sweep all common correlations into one big category in the middle, thus dividing possibility-space into large round normal-orbit solitary inner objects, and small irregular skewed-orbit crowded outer objects. It calls the first category &#8220;planets&#8221; and the second category &#8220;planetoids&#8221;.</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/tplanet1.png"></p>
<p><i>Obligatory <A HREF="http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/">Less Wrong</A> picture</i></center></p>
<p>You can then sweep minor irregularities under the rug. Neptune is pretty far from the sun, but since it&#8217;s large, round, normal-orbit, and solitary, we know which way the evidence is leaning.</p>
<p>When an object satisfies about half the criteria for planet and half the criteria for planetoid, <i>then</i> it&#8217;s awkward. Pluto is the classic example. It&#8217;s relatively large, round, skewed orbit, solitary&#8230;ish? and outer-ish. What do you do?</p>
<p>The <i>practical</i> answer is you convene some very expensive meeting of prestigious astronomers and come to some official decision which everyone agrees to follow so they&#8217;re all on the same page.</p>
<p>But the <i>ideal</i> answer is you say &#8220;Huh, the assumption encoded in the word &#8216;planet&#8217; that the five red criteria always went together and the five blue criteria always went together doesn&#8217;t hold. Whatever.&#8221;</p>
<p>Then you divide the solar system into three types of objects: planets, planetoids, and dammit-our-categorization-scheme-wasn&#8217;t-as-good-as-we-thought.</p>
<p>(psychiatry, whose philosophy of categorization is light years ahead of a lot of the rest of the world, conveniently abbreviates this latter category as &#8220;NOS&#8221;)</p>
<p>The situation with whales and fish is properly understood in the same context. Fish and mammals differ on a lot of axes. Fish generally live in the water, breathe through gills, have tails and fins, possess a certain hydrodynamic shape, lay eggs, and are in a certain part of the phylogenetic tree. Mammals generally live on land, breathe through lungs, have legs, give live birth, and are in another part of the phylogenetic tree. Most fish conform to all of the fish desiderata, and most mammals conform to all of the mammal desiderata, so there&#8217;s no question of how to categorize them. Occasionally you get something weird (a platypus, a lungfish, or a whale) and it&#8217;s a judgment call which you have to decide by fiat. In our case, that fiat is &#8220;use genetics and ignore all other characteristics&#8221; but some other language, culture, or scientific community might make a different fiat, and then the borders between their categories would look a little bit different.</p>
<p><b>III.</b></p>
<p>Since I shifted to a borders metaphor, let&#8217;s follow that and see where it goes.</p>
<p>Imagine that Israel and Palestine agree to a two-state solution with the final boundary to be drawn by the United Nations. You&#8217;re the head of the United Nations committee involved, so you get out a map and a pencil. Both sides have sworn by their respective gods to follow whatever you determine.</p>
<p>Your job is not to draw &#8220;the correct border&#8221;. There is no one correct border between Israel and Palestine. There are a couple of very strong candidates (for example, the pre-1967 line of control), but both countries have suggested deviations from that (most people think an actual solution would involve Palestine giving up some territory that has since been thoroughly settled by Israel in exchange for some territory within Israel proper, or perhaps for a continuous &#8220;land bridge&#8221; between the West Bank and Gaza). Even if you wanted to use the pre-1967 line as a starting point, there would still be a lot of work to do deciding what land swaps should and shouldn&#8217;t be made.</p>
<p>Instead you&#8217;d be making a series of trade-offs. Giving all of Jerusalem to the Israelis would make them very happy but anger Palestine. Creating a contiguous corridor between Gaza and the West Bank makes some sense, but then you&#8217;d be cutting off Eilat from the rest of Israel. Giving all of the Israeli settlements in the West Bank back to Palestine would satisfy a certain conception of property rights, but also leave a lot of Jews homeless.</p>
<p>There are also much stupider decisions you could make. You could give Tel Aviv to Palestine. You could make the Palestinian state a perfect circle five miles in radius centered on Rishon LeZion. You could just split the territory in half with a straight line, and give Israel the north and Palestine the south. All of these things would be really dumb.</p>
<p>But, crucially, they would not be <i>false</i>. They would not be <i>factually incorrect</i>. They would just be failing to achieve pretty much any of the goals that we would expect a person solving land disputes in the Middle East to have. You can think of alternative arrangements in which these wouldn&#8217;t be dumb. For example, if you&#8217;re a despot, and you want to make it very clear to both the Israelis and Palestinians that their opinions don&#8217;t matter and they should stop bothering you with annoying requests for arbitration, maybe splitting the country in half north-south is the way to go.</p>
<p>This is now unexpectedly a geography blog again.</p>
<p>The border between Turkey and Syria follows a mostly straight-ish line near-ish the 36th parallel, except that about twenty miles south of the border Turkey controls a couple of square meters in the middle of a Syrian village. This is the tomb of the ancestor of the Ottoman Turks, and Turkey&#8217;s border agreement with Syria stipulates that it will remain part of Turkey forever. And the Turks take this <i>very</i> seriously; they maintain a platoon of special forces there and have recently been <A HREF="http://news.nationalgeographic.com/news/2014/10/141003-suleyman-tomb-ottoman-osman-turkey-syria-isis/">threatening war against Syria</A> if their &#8220;territory&#8221; gets &#8220;invaded&#8221; in the current conflict.</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/ttomb.jpg"></p>
<p><i>Pictured: Turkey (inside fence), Syria (outside)</i></center></p>
<p>The border between Bangladesh and India is complicated at the best of times, but it becomes absolutely ridiculous in a place called Cooch-Behar, which I guess is as good a name as any for a place full of ridiculous things. In at least one spot there is an &#8216;island&#8217; of Indian territory within a larger island of Bangladeshi territory within a larger island of Indian territory within Bangladesh. According to <A HREF="http://mentalfloss.com/article/29086/its-complicated-5-puzzling-international-borders">mentalfloss.com</A>:<br />
<blockquote>So why’d the border get drawn like that? It can all be traced back to power struggles between local kings hundreds of years ago, who would try to claim pockets of land inside each other’s territories as a way to leverage political power. When Bangladesh became independent from India in 1947 (as East Pakistan until 1971), all those separate pockets of land were divvied up. Hence the polka-dotted mess.</p></blockquote>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/tindia.jpg"></center></p>
<p>Namibia is a very weird-looking country with a very thin three-hundred-mile-long panhandle (about twice as long as Oklahoma&#8217;s). Apparently during the Scramble For Africa, the Germans who colonized Namibia really wanted access to the Zambezi River so they could reach the Indian Ocean and trade their colonial resources. They kept pestering the British who colonized Botswana until the Brits finally agreed to give up a tiny but very long strip of territory ending at the riverbank. This turned out to be not so useful, as <i>just</i> after Namibia&#8217;s Zambezi access sits Victoria Falls, the largest waterfall in the world &#8211; meaning that any Germans who tried to traverse the Zambezi to reach the Indian Ocean would last a matter of minutes before suddenly encountering a four hundred foot drop and falling to pretty much certain death. The moral of the story is not to pester the British Empire too much, especially if they&#8217;ve explored Africa and you haven&#8217;t.</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/tnamibia.jpg"></center></p>
<p>But the other moral of the story is that borders are weird. Although we think of borders as nice straight lines that separate people of different cultures, they can form giant panhandles, distant islands, and enclaves-within-enclaves-within-enclaves. They can depart from their usual course to pay honor to national founders, to preserve records of ancient conquests, or to connect to trade routes.</p>
<p>Hume&#8217;s ethics restrict &#8220;bad&#8221; to an instrumental criticism &#8211; you can condemn something as a bad way to achieve a certain goal, but not as morally bad independent of what the goal is. In the same way, borders can be bad at fulfilling your goals in drawing them, but not bad in an absolute sense or factually incorrect. Namibia&#8217;s border is bad from the perspective of Germans who want access to the Indian Ocean. But it&#8217;s <i>excellent</i> from the perspective of Englishmen who want to watch Germans plummet into the Lower Zambezi and get eaten by hippos.</p>
<p>Breaking out of the metaphor, the same is true of conceptual boundaries. You <i>may</i> draw the boundaries of the category &#8220;fish&#8221; any way you want. A category &#8220;fish&#8221; containing herring, dragonflies, and asteroids is going to be stupid, but only in the same sense that a Palestinian state centered around Tel Aviv would be stupid &#8211; it fails to fulfill any conceivable goals of the person designing it. Categories &#8220;fish&#8221; that do or don&#8217;t include whales may be appropriate for different people&#8217;s purposes, the same way Palestinians might argue about whether the borders of their state should be optimized for military defensibility or for religious/cultural significance.</p>
<p>Statements like &#8220;the Zambezi River is full of angry hippos&#8221; are brute facts. Statements like &#8220;the Zambezi River is the territory of Namibia&#8221; are negotiable.</p>
<p>In the same way, statements like &#8220;whales have little hairs&#8221; are brute facts. Statements like &#8220;whales are not a kind of fish&#8221; are negotiable.</p>
<p>So it&#8217;s important to keep these two sorts of statements separate, and remember that in no case can an agreed-upon set of borders or a category boundary be factually incorrect.</p>
<p><b>IV.</b></p>
<p>I usually avoid arguing LGBT issues on here, not because I don&#8217;t have strong opinions about them but because I assume so many of my readers already agree with me that it would be a waste of time. I&#8217;m pretty sure I&#8217;m right about this &#8211; on the recent survey, readers of this blog who were asked to rate their opinion of gay marriage from 1 (strongly against) to 5 (strongly in favor) gave an average rating of 4.32.</p>
<p>Nevertheless, I&#8217;ve seen enough anti-transgender comments recently that the issue might be worth a look.</p>
<p>In particular, I&#8217;ve seen one anti-transgender argument around that I take very seriously. The argument goes: we are rationalists. Our <i>entire shtick</i> is trying to believe what&#8217;s actually true, not on what we wish were true, or what our culture tells us is true, or what it&#8217;s popular to say is true. If a man thinks he&#8217;s a woman, then we might (empathetically) wish he were a woman, other people might demand we call him a woman, and we might be much more popular if we say he&#8217;s a woman. But if we&#8217;re going to be rationalists who focus on believing what&#8217;s actually true, then we&#8217;ve got to call him a man and take the consequences.</p>
<p>Thus Abraham Lincoln&#8217;s famous riddle: &#8220;If you call a tail a leg, how many legs does a dog have?&#8221; And the answer: &#8220;Four &#8211; because a tail isn&#8217;t a leg regardless of what you call it.&#8221;</p>
<p>(if John Wilkes Booth had to suffer through that riddle, then I don&#8217;t blame him)</p>
<p>I take this argument very seriously, because sticking to the truth really is important. But having taken it seriously, I think it&#8217;s seriously wrong.</p>
<p>An alternative categorization system is not an error, and borders are not objectively true or false.</p>
<p>Just as we can come up with criteria for a definition of &#8220;planet&#8221;, we can come up with a definition of &#8220;man&#8221;. Absolutely typical men have Y chromosomes, have male genitalia, appreciate manly things like sports and lumberjackery, are romantically attracted to women, personally identify as male, wear male clothing like blue jeans, sing baritone in the opera, et cetera.</p>
<p>Some people satisfy some criteria of manhood and not others, in much the same way that Pluto satisfies only some criteria of planethood and whales satisfy only some criteria of mammalhood. For example, gay men might date other men and behave in effeminate ways. People with congenital androgen insensitivity syndrome might have female bodies, female external genitalia, and have been raised female their entire life, but when you look into their cells they have Y chromosomes.</p>
<p>Biologists defined by fiat that in cases of ambiguous animal grouping like whales, phylogenetics will be the tiebreaker. This was useful to resolve ambiguity, and it&#8217;s worth sticking to as a Schelling point so everyone&#8217;s using their words the same way, but it&#8217;s kind of arbitrary and mostly based on biologists caring a lot about phylogenetics. If we let King Solomon make the decision, he might decide by fiat that whether animals lived in land or water would be the tiebreaker, since he&#8217;s most interested in whether the animal is hunted on horseback or by boat.</p>
<p>Likewise, astronomers decided by fiat that something would be a planet if and only if it meets the three criteria of orbiting, round, and orbit-clearing. But here we have a pretty neat window into how these kinds of decisions take place &#8211; you can <A HREF="http://en.wikipedia.org/wiki/IAU_definition_of_planet">read the history</A> of the International Astronomical Union meeting where they settled on the definition and learn about all the alternative proposals that were floated and rejected and which particular politics resulted in the present criteria being selected among all the different possibilities. Here it is <i>obvious</i> that the decision was by fiat.</p>
<p>Without the input of any prestigious astronomers at all, most people seem to assume that the ultimate tiebreaker in man vs. woman questions is presence of a Y chromosome. I&#8217;m not sure this is a very principled decision, because I expect most people would classify congenital androgen insensitivity patients (XY people whose bodies are insensitive to the hormone that makes them look male, and so end up looking 100% female their entire lives and often not even knowing they have the condition) as women.</p>
<p>The project of the transgender movement is to propose a switch from using chromosomes as a tiebreaker to using self-identification as a tiebreaker.</p>
<p>(This isn&#8217;t actually the whole story &#8211; some of the more sophisticated people want to split &#8220;sex&#8221; and &#8220;gender&#8221;, so that people who want to talk about what chromosomes they&#8217;ve got have a categorization system to do that with, and a few people even want to split &#8220;chromosomal sex&#8221; and &#8220;anatomical sex&#8221; and &#8220;gender&#8221; and goodness knows what else &#8211; and I support all of these as very important examples of the virtue of precision &#8211; but to a first approximation, they want to define gender as self-identification)</p>
<p>This is not something that can be &#8220;true&#8221; or &#8220;false&#8221;. It&#8217;s a boundary-redrawing project. It can make for some boundaries that look a little bit weird &#8211; like a small percent of men being able to get pregnant &#8211; but as far as weird boundaries go that&#8217;s probably not as bad as having a tiny exclave of Turkish territory in the middle of a Syrian village.</p>
<p>(Ozy tells me this is sort of what queer theory is getting at, but in a horrible unreadable postmodernist way. They assure me you&#8217;re better off just reading the darned Sequences.)</p>
<p>You draw category boundaries in specific ways to capture tradeoffs you care about. If you care about the sanctity of the tomb of your country&#8217;s founder, sometimes it&#8217;s worth having a slightly weird-looking boundary in order to protect and honor it. And if you care about&#8230;</p>
<p>I&#8217;ve lived with a transgender person for six months, so I probably should have written this earlier. But I&#8217;m writing it now because I just finished accepting a transgender man to the mental hospital. He alternates between trying to kill himself and trying to cut off various parts of his body because he&#8217;s so distressed that he is biologically female. We&#8217;ve connected him with some endocrinologists who can hopefully get him started on male hormones, after which maybe he&#8217;ll stop doing that and <A HREF="http://thingofthings.wordpress.com/2014/11/13/on-trans-regret/">hopefully</A> be able to lead a normal life.</p>
<p>If I&#8217;m willing to accept an unexpected chunk of Turkey deep inside Syrian territory to honor some random dead guy &#8211; and I better, or else a platoon of Turkish special forces will want to have a word with me &#8211; then I ought to accept an unexpected man or two deep inside the conceptual boundaries of what would normally be considered female if it&#8217;ll save someone&#8217;s life. There&#8217;s no rule of rationality saying that I shouldn&#8217;t, and there are plenty of rules of human decency saying that I should.</p>
<p><b>V.</b></p>
<p>I&#8217;ve made this argument before and gotten a reply something like this:</p>
<p>&#8220;Transgender is a psychiatric disorder. When people have psychiatric disorders, certainly it&#8217;s right to sympathize and feel sorry for them and want to help them. But the way we try to help them is by treating their disorder, not by indulging them in their delusion.&#8221;</p>
<p>I think these people expect me to argue that transgender &#8220;isn&#8217;t really a psychiatric disorder&#8221; or something. But &#8220;psychiatric disorder&#8221; is just another category boundary dispute, and one that I&#8217;ve already <A HREF="http://lesswrong.com/lw/2as/diseased_thinking_dissolving_questions_about/">written enough about elsewhere</A>. At this point, I don&#8217;t care enough to say much more than &#8220;If it&#8217;s a psychiatric disorder, then attempts to help transgender people get covered by health insurance, and most of the ones I know seem to want that, so sure, gender dysphoria is a psychiatric disorder.&#8221;</p>
<p>And then I think of the Hair Dryer Incident.</p>
<p>The Hair Dryer Incident was probably the biggest dispute I&#8217;ve seen in the mental hospital where I work. Most of the time all the psychiatrists get along and have pretty much the same opinion about important things, but people were at each other&#8217;s <i>throats</i> about the Hair Dryer Incident.</p>
<p>Basically, this one obsessive compulsive woman would drive to work every morning and worry she had left the hair dryer on and it was going to burn down her house. So she&#8217;d drive back home to check that the hair dryer was off, then drive back to work, then worry that maybe she hadn&#8217;t <i>really</i> checked well enough, then drive back, and so on ten or twenty times a day.</p>
<p>It&#8217;s a pretty typical case of obsessive-compulsive disorder, but it was really interfering with her life. She worked some high-powered job &#8211; I think a lawyer &#8211; and she was <i>constantly</i> late to everything because of this driving back and forth, to the point where her career was in a downspin and she thought she would have to quit and go on disability. She wasn&#8217;t able to go out with friends, she wasn&#8217;t even able to go to restaurants because she would keep fretting she left the hair dryer on at home and have to rush back. She&#8217;d seen countless psychiatrists, psychologists, and counselors, she&#8217;d done all sorts of therapy, she&#8217;d taken every medication in the book, and none of them had helped.</p>
<p>So she came to my hospital and was seen by a colleague of mine, who told her &#8220;Hey, have you thought about just bringing the hair dryer with you?&#8221;</p>
<p>And it <i>worked</i>.</p>
<p>She would be driving to work in the morning, and she&#8217;d start worrying she&#8217;d left the hair dryer on and it was going to burn down her house, and so she&#8217;d look at the seat next to her, and there would be the hair dryer, right there. And she only had the one hair dryer, which was now accounted for. So she would let out a sigh of relief and keep driving to work.</p>
<p>And approximately half the psychiatrists at my hospital thought this was <i>absolutely scandalous</i>, and This Is Not How One Treats Obsessive Compulsive Disorder, and what if it got out to the broader psychiatric community that instead of giving all of these high-tech medications and sophisticated therapies we were just telling people to <i>put their hair dryers on the front seat of their car</i>?</p>
<p>I, on the other hand, thought it was the best fricking story I had ever heard and the guy deserved a medal. Here&#8217;s someone who was totally untreatable by the normal methods, with a debilitating condition, and a drop-dead simple intervention that nobody else had thought of gave her her life back. If one day I open up my own psychiatric practice, I am half-seriously considering using a picture of a hair dryer as the logo, just to let everyone know where I stand on this issue.</p>
<p>Miyamoto Musashi is quoted as saying:<br />
<blockquote>The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy&#8217;s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him.</p></blockquote>
<p>Likewise, the primary thing in psychiatry is to help the patient, whatever the means. Someone can concern-troll that the hair dryer technique leaves something to be desired in that it might have prevented the patient from seeking a more thorough cure that would prevent her from having to bring the hair dryer with her. But compared to the alternative of &#8220;nothing else works&#8221; it seems clearly superior.</p>
<p>And that&#8217;s the position from which I think a psychiatrist should approach gender dysphoria, too.</p>
<p>Imagine if we could give depressed people a much higher quality of life merely by giving them cheap natural hormones. I don&#8217;t think there&#8217;s a psychiatrist in the world who wouldn&#8217;t celebrate that as one of the biggest mental health advances in a generation. Imagine if we could ameliorate schizophrenia with one safe simple surgery, just snip snip you&#8217;re not schizophrenic anymore. Pretty sure that would win <i>all</i> of the Nobel prizes. Imagine that we could make a serious dent in bipolar disorder just by calling people different pronouns. I&#8217;m pretty sure the entire mental health field would join together in bludgeoning anybody who refused to do that. We would bludgeon them over the head with big books about the side effects of lithium.</p>
<p>Really, are you <i>sure</i> you want your opposition to accepting transgender people to be &#8220;I think it&#8217;s a mental disorder&#8221;?</p>
<p><b>VI.</b></p>
<p>Some people can&#8217;t leave well enough alone, and continue to push the mental disorder angle. For example:</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/tnapoleon.png"></center></p>
<p>There are a lot of things I could say here.</p>
<p>I could point out that trans-Napoleonism seems to be mysteriously less common than transgender.</p>
<p>I could relate this mysterious difference to the various heavily researched <A HREF="http://en.wikipedia.org/wiki/Causes_of_transsexualism#Biological-based_theories">apparent biological correlates of transgender</A>, including unusual variants of the androgen receptor, birth-sex-discordant sizes of various brain regions, birth-sex-discordant responses to various pheromones, high rates of something <A HREF="http://slatestarcodex.com/2013/02/18/typical-mind-and-gender-identity/">seemingly like body integrity identity disorder</A>, and of course our old friend altered digit ratios. If our hypothetical trans-Napoleon came out of the womb wearing a French military uniform and clutching a list of 19th century Grand Armee positions in his cute little baby hands, I think I&#8217;d take him more seriously.</p>
<p>I could argue that questions about gender are questions about category boundaries, whereas questions about Napoleon &#8211; absent some kind of philosophical legwork that I would very much like to read &#8211; are questions of fact.</p>
<p>I could point out that if the extent of somebody&#8217;s trans-Napoleonness was wanting to wear a bicorne hat, and he was going to be suicidal his entire life if he couldn&#8217;t but pretty happy if he could, let him wear the damn hat.</p>
<p>I could just link people to <A HREF="http://freethoughtblogs.com/zinniajones/2012/08/being-a-woman-also-isnt-like-being-napoleon/">other</A> <A HREF="http://debunkingdenialism.com/2013/12/06/being-transgender-is-nothing-like-having-a-psychotic-napoleon-delusion/">sites&#8217;</A> pretty good objections to the same argument.</p>
<p>But I think what I actually want to say is that there was once a time somebody tried pretty much exactly this, silly hat and all. Society shrugged and played along, he led a rich and fulfilling life, his grateful Imperial subjects came to love him, and it&#8217;s one of the most heartwarming episodes in the history of one of my favorite places in the world.</p>
<p>Sometimes when you make a little effort to be nice to people, even people you might think are weird, <A HREF="http://en.wikipedia.org/wiki/Emperor_Norton_I">really good things happen</A>.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/feed/</wfw:commentRss>
		<slash:comments>705</slash:comments>
		</item>
		<item>
		<title>Ethnic Tension And Meaningless Arguments</title>
		<link>http://slatestarcodex.com/2014/11/04/ethnic-tension-and-meaningless-arguments/</link>
		<comments>http://slatestarcodex.com/2014/11/04/ethnic-tension-and-meaningless-arguments/#comments</comments>
		<pubDate>Wed, 05 Nov 2014 03:38:42 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[long post is long]]></category>
		<category><![CDATA[rationality]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=3136</guid>
		<description><![CDATA[I. Part of what bothers me &#8211; and apparently several others &#8211; about yesterday&#8217;s motte-and-bailey discussion is that here&#8217;s a fallacy &#8211; a pretty successful fallacy &#8211; that depends entirely on people not being entirely clear on what they&#8217;re arguing &#8230; <a href="http://slatestarcodex.com/2014/11/04/ethnic-tension-and-meaningless-arguments/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><b>I.</b></p>
<p>Part of what bothers me  &#8211; and apparently several others &#8211; about <A HREF="http://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/">yesterday&#8217;s motte-and-bailey discussion</A> is that here&#8217;s a fallacy &#8211; a pretty successful fallacy &#8211; that depends entirely on people not being entirely clear on what they&#8217;re arguing about. Somebody says God doesn&#8217;t exist. Another person objects that God is just a name for the order and beauty in the universe. Then this somehow helps defend the position that God is a supernatural creator being. How does that even happen?</p>
<p>&#8220;Sir, you&#8217;ve been accused of murdering your wife. We have three witnesses who said you did it. What do you have to say for yourself?&#8221;</p>
<p>&#8220;Well, your honor, I think it&#8217;s quite clear I didn&#8217;t murder the President. For one thing, he&#8217;s surrounded by Secret Service agents. For another, check the news. The President&#8217;s still alive.&#8221;</p>
<p>&#8220;Huh. For some reason I vaguely remember thinking you didn&#8217;t have a case. Yet now that I hear you talk, everything you say is incredibly persuasive. You&#8217;re free to go.&#8221;</p>
<p>While motte-and-bailey is less subtle, it seems to require a similar sort of misdirection. I&#8217;m not saying it&#8217;s impossible. I&#8217;m just saying it&#8217;s a fact that needs to be explained.</p>
<p>When everything works the way it&#8217;s supposed to in philosophy textbooks, arguments are supposed to go one of a couple of ways:</p>
<p>1. Questions of empirical fact, like &#8220;Is the Earth getting warmer?&#8221; or &#8220;Did aliens build the pyramids?&#8221;. You debate these by presenting factual evidence, like &#8220;An average of global weather station measurements show 2014 is the hottest year on record&#8221; or &#8220;One of the bricks at Giza says &#8216;Made In Tau Ceti V&#8217; on the bottom.&#8221; Then people try to refute these facts or present facts of their own.</p>
<p>2. Questions of morality, like &#8220;Is it wrong to abort children?&#8221; or &#8220;Should you refrain from downloading music you have not paid for?&#8221; You can only debate these <i>well</i> if you&#8217;ve already agreed upon a moral framework, like a particular version of natural law or consequentialism. But you can <i>sort of</i> debate them by comparing to examples of agreed-upon moral questions and trying to maintain consistency. For example, &#8220;You wouldn&#8217;t kill a one day old baby, so how is a nine month old fetus different?&#8221; or &#8220;You wouldn&#8217;t download a <i>car</i>.&#8221;</p>
<p>If you are very lucky, your philosophy textbook will also admit the existence of:</p>
<p>3. Questions of policy, like &#8220;We should raise the minimum wage&#8221; or &#8220;We should bomb Foreignistan&#8221;. These are combinations of competing factual claims and competing values. For example, the minimum wage might hinge on factual claims like &#8220;Raising the minimum wage would increase unemployment&#8221; or &#8220;It is very difficult to live on the minimum wage nowadays, and many poor families cannot afford food.&#8221; But it might also hinge on value claims like &#8220;Corporations owe it to their workers to pay a living wage,&#8221; or &#8220;It is more important that the poorest be protected than that the economy be strong.&#8221; Bombing Foreignistan might depend on factual claims like &#8220;The Foreignistanis are harboring terrorists&#8221;, and on value claims like &#8220;The safety of our people is worth the risk of collateral damage.&#8221; If you can resolve all of these factual and value claims, you should be able to agree on questions of policy.</p>
<p>None of these seem to allow the sort of vagueness of topic mentioned above.</p>
<p><b>II.</b></p>
<p>A question: are you pro-Israel or pro-Palestine? Take a second, actually think about it.</p>
<p>Some people probably answered pro-Israel. Other people probably answered pro-Palestine. Other people probably said they were neutral because it&#8217;s a complicated issue with good points on both sides.</p>
<p>Probably very few people answered: <i>Huh? What?</i></p>
<p>This question doesn&#8217;t fall into any of the three Philosophy 101 forms of argument. It&#8217;s not a question of fact. It&#8217;s not a question of particular moral truths. It&#8217;s not even a question of policy. There are closely related policies, like whether Palestine should be granted independence. But if I support a very specific two-state solution where the border is drawn upon the somethingth parallel, does that make me pro-Israel or pro-Palestine? At exactly which parallel of border does the solution under consideration switch from pro-Israeli to pro-Palestinian? Do you think the crowd of people shouting and waving signs saying &#8220;SOLIDARITY WITH PALESTINE&#8221; have an answer to that question?</p>
<p>But it&#8217;s even worse, because this question covers much more than just the borders of an independent Palestinian state. Was Israel justified by responding to Hamas&#8217; rocket fire by bombing Gaza, even with the near-certainty of collateral damage? Was Israel justified in building a wall across the Palestinian territories to protect itself from potential terrorists, even though it severely curtails Palestinian freedom of movement? Do Palestinians have a &#8220;right of return&#8221; to territories taken in the 1948 war? Who should control the Temple Mount?</p>
<p>These are four very different questions which one would think each deserve independent consideration. But in reality, what percent of the variance in people&#8217;s responses do you think is explained by a general &#8220;pro-Palestine vs. pro-Israel&#8221; factor? 50%? 75%? More?</p>
<p>In a way, when we round people off to the Philosophy 101 kind of arguments, we are failing to respect their self-description. People aren&#8217;t out on the streets saying &#8220;By my cost-benefit analysis, Israel was in the right to invade Gaza, although it may be in the wrong on many of its other actions.&#8221; They&#8217;re waving little Israeli flags and holding up signs saying &#8220;ISRAEL: OUR STAUNCHEST ALLY&#8221;. Maybe we should take them at face value.</p>
<p>This is starting to look related to the original question in (I). Why is it okay to suddenly switch points in the middle of an argument? In the case of Israel and Palestine, it might be because people&#8217;s support for any particular Israeli policy is better explained by a General Factor Of Pro-Israeliness than by the policy itself. As long as I&#8217;m arguing in favor of Israel in <i>some way</i>, it&#8217;s still considered by everyone to be on topic.</p>
<p><b>III.</b></p>
<p>Some moral philosophers got fed up with nobody being able to explain what the heck a moral truth was and invented emotivism. Emotivism says there <i>are</i> no moral truths, just expressions of little personal bursts of emotion. When you say &#8220;Donating to charity is good,&#8221; you don&#8217;t mean &#8220;Donating to charity increases the sum total of utility in the world,&#8221; or &#8220;Donating to charity is in keeping with the Platonic moral law&#8221; or &#8220;Donating to charity was commanded by God&#8221; or even &#8220;I like donating to charity&#8221;. You&#8217;re just saying &#8220;Yay charity!&#8221; and waving a little flag.</p>
<p>Seems a lot like how people handle the Israel question. &#8220;I&#8217;m pro-Israel&#8221; doesn&#8217;t necessarily imply that you believe any empirical truths about Israel, or believe any moral principles about Israel, or even support any Israeli policies. It means you&#8217;re waving a little flag with a Star of David on it and cheering.</p>
<p>So here is Ethnic Tension: A Game For Two Players.</p>
<p>Pick a vague concept. &#8220;Israel&#8221; will do nicely for now.</p>
<p>Player 1 tries to associate the concept &#8220;Israel&#8221; with as much good karma as she possibly can. Concepts get good karma by doing good moral things, by being associated with good people, by being linked to the beloved in-group, and by being oppressed underdogs <A HREF="http://slatestarcodex.com/2013/05/18/against-bravery-debates/">in bravery debates</A>.</p>
<p>&#8220;Israel is the freest and most democratic country in the Middle East. It is one of America&#8217;s strongest allies and shares our Judeo-Christian values.&#8221;</p>
<p>Player 2 tries to associate the concept &#8220;Israel&#8221; with as much bad karma as she possibly can. Concepts get bad karma by committing atrocities, being associated with bad people, being linked to the hated out-group, and by being oppressive big-shots in bravery debates. Also, she obviously needs to neutralize Player 1&#8217;s actions by disproving all of her arguments.</p>
<p>&#8220;Israel may have some level of freedom for its most privileged citizens, but what about the millions of people in the Occupied Territories that have no say? Israel is involved in various atrocities and has often killed innocent protesters. They are essentially a neocolonialist state and have allied with other neocolonialist states like South Africa.&#8221;</p>
<p>The prize for winning this game is the ability to win the other three types of arguments. If Player 1 wins, the audience ends up with a strongly positive General Factor Of Pro-Israeliness, and vice versa.</p>
<p>Remember, people&#8217;s capacity for <A HREF="http://en.wikipedia.org/wiki/Motivated_reasoning">motivated reasoning</A> is pretty much infinite.  Remember, a <A HREF="http://lesswrong.com/lw/km/motivated_stopping_and_motivated_continuation/">motivated skeptic</A> asks if the evidence <i>compels</i> them to accept the conclusion; a motivated credulist asks if the evidence <i>allows</i> them to accept the conclusion. Remember, Jonathan Haidt and his team <A HREF="http://www.yalepeplab.com/teaching/psych131_summer2013/documents/Lecture11_WheatleyHaidt2005_DisgustMoralJudgments.pdf">hypnotized</A> people to have strong disgust reactions to the word &#8220;often&#8221;, and then tried to hold in their laughter when people in the lab came up with convoluted yet plausible-sounding arguments against any policy they proposed that included the word &#8220;often&#8221; in the description. </p>
<p>I&#8217;ve never heard of the experiment being done the opposite way, but it sounds like the sort of thing that might work. Hypnotize someone to have a very positive reaction to the word &#8220;often&#8221; (for most hilarious results, have it give people an orgasm). &#8220;Do you think governments should raise taxes more often?&#8221; &#8220;Yes. Yes yes YES YES OH GOD YES!&#8221;</p>
<p>Once you finish the Ethnic Tension Game, you&#8217;re replicating Haidt&#8217;s experiment with the word &#8220;Israel&#8221; instead of the word &#8220;often&#8221;. Win the game, and any pro-Israel policy you propose will get a burst of positive feelings and tempt people to try to find some explanation, any explanation, that will justify it, whether it&#8217;s invading Gaza or building a wall or controlling the Temple Mount.</p>
<p>So this is the fourth type of argument, the kind that doesn&#8217;t make it into Philosophy 101 books. The <A HREF="http://tvtropes.org/pmwiki/pmwiki.php/Main/TropeNamers">trope namer</A> is Ethnic Tension, but it applies to anything that can be identified as a Vague Concept, or paired opposing Vague Concepts, which you can use emotivist thinking to load with good or bad karma.</p>
<p><b>IV.</b></p>
<p>Now motte-and-bailey stands revealed:</p>
<blockquote><p>Somebody says God doesn&#8217;t exist. Another person objects that God is just a name for the order and beauty in the universe. Then this somehow helps defend the position that God is a supernatural creator being. How does that even happen?</p></blockquote>
<p>The two-step works like this. First, load &#8220;religion&#8221; up with good karma by pitching it as persuasively as possible. &#8220;Religion is just the belief that there&#8217;s beauty and order in the universe.&#8221; </p>
<p>Wait, <i>I</i> think there&#8217;s beauty and order in the universe!</p>
<p>&#8220;Then you&#8217;re religious too. We&#8217;re all religious, in the end, because religion is about the common values of humanity and meaning and compassion sacrifice beauty of a sunrise Gandhi Buddha Sufis St. Francis awe complexity humility wonder Tibet the Golden Rule love.&#8221;</p>
<p>Then, once somebody has a strongly positive General Factor Of Religion, it doesn&#8217;t really matter whether someone believes in a creator God or not. If they have any predisposition whatsoever to do so, they&#8217;ll find a reason to let themselves. If they can&#8217;t manage it, they&#8217;ll say it&#8217;s true &#8220;metaphorically&#8221; and continue to act upon every corollary of it being true.</p>
<p>(&#8220;God is just another name for the beauty and order in the universe. But Israel definitely belongs to the Jews, because the beauty and order of the universe promised it to them.&#8221;)</p>
<p>If you&#8217;re an atheist, you probably have a lot of important issues on which you want people to consider non-religious answers and policies. And if somebody can maintain good karma around the &#8220;religion&#8221; concept by believing God is the order and beauty in the universe, then that can still be a victory for religion even if it is done by jettisoning many traditionally &#8220;religious&#8221; beliefs. In this case, it is useful to think of the &#8220;order and beauty&#8221; formulation as a &#8220;motte&#8221; for the &#8220;supernatural creator&#8221; formulation, since it&#8217;s allowing the <i>entire concept</i> to be defended.</p>
<p>But even this is giving people too much credit, because the existence of God is a (sort of) factual question. From yesterday&#8217;s post:</p>
<blockquote><p>Suppose we’re debating feminism, and I defend it by saying it really is important that women are people, and you attack it by saying that it’s not true that all men are terrible. What is the real feminism we should be debating? Why would you even ask that question? What is this, some kind of dumb high school debate club? Who the heck thinks it would be a good idea to say &#8216;Here’s a vague poorly-defined concept that mind-kills everyone who touches it – quick, should you associate it with positive affect or negative affect?!&#8217;</p></blockquote>
<p>Who the heck thinks that? Everybody, all the time.</p>
<p>Once again, if I can load the concept of &#8220;feminism&#8221; with good karma by making it so obvious nobody can disagree with it, then I have a massive &#8220;home field advantage&#8221; when I&#8217;m trying to convince anyone of any particular policy that can go under the name &#8220;feminism&#8221;, even if it&#8217;s unrelated to the arguments that gave feminism good karma in the first place.</p>
<p>Or if I&#8217;m against feminism, I just post quotes from the ten worst feminists on Tumblr again and again until the entire movement seems ridiculous and evil, and then you&#8217;ll have trouble convincing anyone of <i>anything</i> feminist. &#8220;That seems reasonable&#8230;but wait, isn&#8217;t that a feminist position? Aren&#8217;t those the people I hate?&#8221;</p>
<p>(compare: <A HREF="http://thinkprogress.org/health/2012/06/25/505526/poll-most-americans-support-obamacare-provisions/">most Americans</A> oppose Obamacare, but most Americans support each individual component of Obamacare when it is explained without using the word &#8220;Obamacare&#8221;)</p>
<p><b>V.</b></p>
<p>Little flow diagram things make everything better. Let&#8217;s make a little flow diagram thing.</p>
<p>We have our node &#8220;Israel&#8221;, which has either good or bad karma. Then there&#8217;s another node close by marked &#8220;Palestine&#8221;. We would expect these two nodes to be pretty anti-correlated. When Israel has strong good karma, Palestine has strong bad karma, and vice versa.</p>
<p>Now suppose you listen to Noam Chomsky talk about how strongly he supports the Palestinian cause and how much he dislikes Israel. One of two things can happen:</p>
<p>&#8220;Wow, a great man such as Noam Chomsky supports the Palestinians! They must be very deserving of support indeed!&#8221;</p>
<p>or</p>
<p>&#8220;That idiot Chomsky supports Palestine? Well, screw him. And screw them!&#8221;</p>
<p>So now there is a third node, Noam Chomsky, that connects to both Israel and Palestine, and we have discovered it is positively correlated with Palestine and negatively correlated with Israel. It probably has a pretty low weight, because there are a lot of reasons to care about Israel and Palestine other than Chomsky, and a lot of reasons to care about Chomsky other than Israel and Palestine, but the connection is there.</p>
<p>I don&#8217;t know anything about neural nets, so maybe this system isn&#8217;t actually a neural net, but whatever it is I&#8217;m thinking of, it&#8217;s a structure where eventually the three nodes reach some kind of equilibrium. If we start with someone liking Israel and Chomsky, but not Palestine, then either that&#8217;s going to shift a little bit towards liking Palestine, or shift a little bit towards disliking Chomsky.</p>
<p>Now we add more nodes. Cuba seems to really support Palestine, so they get a positive connection with a little bit of weight there. And I think Noam Chomsky supports Cuba, so we&#8217;ll add a connection there as well. Cuba is socialist, and that&#8217;s one of the most salient facts about it, so there&#8217;s a heavily weighted positive connection between Cuba and socialism. Palestine kind of makes noises about socialism but I don&#8217;t think they have any particular economic policy, so let&#8217;s say very weak direct connection. And Che is heavily associated with Cuba, so you get a pretty big Che &#8211; Cuba connection, plus a strong direct Che &#8211; socialism one. And those pro-Palestinian students who threw rotten fruit at an Israeli speaker also get a little path connecting them to &#8220;Palestine&#8221; &#8211; hey, why not &#8211; so that if you support Palestine you might be willing to excuse what they did and if you oppose them you might be a little less likely to support Palestine.</p>
<p>Back up. This model produces crazy results, like that people who like Che are more likely to oppose Israel bombing Gaza. That&#8217;s such a weird, implausible connection that it casts doubt upon the entire&#8230;</p>
<p>Oh. Wait. Yeah. Okay.</p>
<p>I think this kind of model, in its efforts to sort itself out into a ground state, might settle on some kind of General Factor Of Politics, which would probably correspond pretty well to the left-right axis.</p>
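The node-and-karma picture above can be sketched as a tiny weighted network that relaxes toward an equilibrium. Everything in the sketch below (the node names, the edge weights, and the update rule itself) is an invented illustration of the idea, not anything claimed in this post:

```python
# A toy version of the karma network described above: nodes carry an affect
# score in [-1, 1], edges carry signed correlation weights, and repeated
# small updates settle the network into an equilibrium. All names, weights,
# and the update rule are illustrative assumptions.

def settle(affect, edges, rounds=200, rate=0.1):
    """Repeatedly nudge each node toward the weighted sum of its neighbors."""
    for _ in range(rounds):
        for node in affect:
            pull = 0.0
            for (a, b), w in edges.items():
                if a == node:
                    pull += w * affect[b]
                elif b == node:
                    pull += w * affect[a]
            # Move a little in the direction of the pull, clamped to [-1, 1].
            affect[node] = max(-1.0, min(1.0, affect[node] + rate * pull))
    return affect

# Start out liking Israel and Chomsky, neutral on Palestine.
affect = {"Israel": 1.0, "Chomsky": 0.5, "Palestine": 0.0}
edges = {
    ("Israel", "Palestine"): -0.8,   # strongly anti-correlated
    ("Chomsky", "Palestine"): +0.3,  # Chomsky supports Palestine...
    ("Chomsky", "Israel"): -0.3,     # ...and dislikes Israel
}

final = settle(affect, edges)
# With this starting point the network settles into the "screw Chomsky"
# equilibrium: Israel stays liked, Palestine and Chomsky get dragged negative.
```

Run from the opposite starting point (high initial affect for Chomsky and Palestine), the same weights settle into the mirror-image equilibrium, which is the bistability the two quoted reactions describe.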
<p>In <A HREF="http://slatestarcodex.com/2014/10/16/five-case-studies-on-politicization/">Five Case Studies On Politicization</A>, I noted how fresh new unpoliticized issues, like the Ebola epidemic, were gradually politicized by connecting them to other ideas that were already part of a political narrative. For example, a quarantine against Ebola would require closing the borders. So now there&#8217;s a weak negative link between &#8220;Ebola quarantine&#8221; and &#8220;open borders&#8221;. If your &#8220;open borders&#8221; node has good karma, now you&#8217;re a little less likely to support an Ebola quarantine. If &#8220;open borders&#8221; has bad karma, a little more likely.</p>
<p>I also tried to point out how you could make different groups support different things by changing your narrative a little:</p>
<blockquote><p>Global warming has gotten inextricably tied up in the Blue Tribe narrative: Global warming proves that unrestrained capitalism is destroying the planet. Global warming disproportionately affects poor countries and minorities. Global warming could have been prevented with multilateral action, but we were too dumb to participate because of stupid American cowboy diplomacy. Global warming is an important cause that activists and NGOs should be lauded for highlighting. Global warming shows that Republicans are science denialists and probably all creationists. Two lousy sentences on “patriotism” aren’t going to break through that.</p>
<p>If I were in charge of convincing the Red Tribe to line up behind fighting global warming, here’s what I’d say:</p>
<p>In the 1950s, brave American scientists shunned by the climate establishment of the day discovered that the Earth was warming as a result of greenhouse gas emissions, leading to potentially devastating natural disasters that could destroy American agriculture and flood American cities. As a result, the country mobilized against the threat. Strong government action by the Bush administration outlawed the worst of these gases, and brilliant entrepreneurs were able to discover and manufacture new cleaner energy sources. As a result of these brave decisions, our emissions stabilized and are currently declining.</p>
<p>Unfortunately, even as we do our part, the authoritarian governments of Russia and China continue to industrialize and militarize rapidly as part of their bid to challenge American supremacy. As a result, Communist China is now by far the world’s largest greenhouse gas producer, with the Russians close behind. Many analysts believe Putin secretly welcomes global warming as a way to gain access to frozen Siberian resources and weaken the more temperate United States at the same time. These countries blow off huge disgusting globs of toxic gas, which effortlessly cross American borders and disrupt the climate of the United States. Although we have asked them to stop several times, they refuse, perhaps egged on by major oil producers like Iran and Venezuela who have the most to gain by keeping the world dependent on the fossil fuels they produce and sell to prop up their dictatorships.</p>
<p>We need to take immediate action. While we cannot rule out the threat of military force, we should start by using our diplomatic muscle to push for firm action at top-level summits like the Kyoto Protocol. Second, we should fight back against the liberals who are trying to hold up this important work, from big government bureaucrats trying to regulate clean energy to celebrities accusing people who believe in global warming of being ‘racist’. Third, we need to continue working with American industries to set an example for the world by decreasing our own emissions in order to protect ourselves and our allies. Finally, we need to punish people and institutions who, instead of cleaning up their own carbon, try to parasitize off the rest of us and expect the federal government to do it for them.</p></blockquote>
<p>In the first paragraph, &#8220;global warming&#8221; gets positively connected to concepts like &#8220;poor people and minorities&#8221; and &#8220;activists and NGOs&#8221;, and gets negatively connected to concepts like &#8220;capitalism&#8221;, &#8220;American cowboy diplomacy&#8221;, and &#8220;creationists&#8221;. That gives global warming really strong good karma if (and only if) you like the first two concepts and hate the last three.</p>
<p>In the next three paragraphs, &#8220;global warming&#8221; gets positively connected to &#8220;America&#8221;, &#8220;the Bush administration&#8221; and &#8220;entrepreneurs&#8221;, and negatively connected to &#8220;Russia&#8221;, &#8220;China&#8221;, &#8220;oil producing dictatorships like Iran and Venezuela&#8221;, &#8220;big government bureaucrats&#8221;, and &#8220;welfare parasites&#8221;. This is going to appeal to, well, a different group.</p>
<p>Notice two things here. First, the exact connection isn&#8217;t that important, as long as we can hammer in the existence of a connection. I could probably just say GLOBAL WARMING! COMMUNISM! GLOBAL WARMING! COMMUNISM! GLOBAL WARMING! COMMUNISM! several hundred times and have the same effect if I could get away with it (this is the principle behind attack ads which link a politician&#8217;s face to scary music and a very concerned voice).</p>
<p>Second, there is no attempt whatsoever to challenge the idea that the issue at hand is the positive or negative valence of a concept called &#8220;global warming&#8221;. At no point is it debated what the solution is, which countries the burden is going to fall on, or whether any particular level of emission cuts would do more harm than good. It&#8217;s just accepted as obvious by both sides that we debate &#8220;for&#8221; or &#8220;against&#8221; global warming, and if the &#8220;for&#8221; side wins then they get to choose some solution or other or whatever oh god that&#8217;s so boring can we get back to Israel vs. Palestine.</p>
<p>Some of the scientists working on IQ have started talking about &#8220;hierarchical factors&#8221;, meaning that there&#8217;s a general factor of geometry intelligence partially correlated with other things into a general factor of mathematical intelligence partially correlated with other things into a general factor of total intelligence.</p>
<p>I would expect these sorts of things to work the same way. There&#8217;s a General Factor Of Global Warming that affects attitudes toward pretty much all proposed global warming solutions, which is very highly correlated with a lot of other things to make a General Factor Of Environmentalism, which itself is moderately highly correlated with other things into the General Factor Of Politics. </p>
<p><b>VI.</b></p>
<p>Speaking of politics, a fruitful digression: what the heck was up with the Ashley Todd mugging hoax in 2008?</p>
<p>Back in the 2008 election, a McCain campaigner <A HREF="http://en.wikipedia.org/wiki/Ashley_Todd_mugging_hoax">claimed</A> (falsely, it would later turn out) to have been assaulted by an Obama supporter. She said he slashed a &#8220;B&#8221; (for &#8220;Barack&#8221;) on her face with a knife. This got a lot of coverage, and according to Wikipedia:</p>
<blockquote><p>John Moody, executive vice president at Fox News, commented in a blog on the network&#8217;s website that &#8220;this incident could become a watershed event in the 11 days before the election,&#8221; but also warned that &#8220;if the incident turns out to be a hoax, Senator McCain’s quest for the presidency is over, forever linked to race-baiting.&#8221;</p></blockquote>
<p>Wait. One Democrat, presumably not acting on Obama&#8217;s direct orders, attacks a Republican woman. And this is supposed to <i>alter the outcome of the entire election</i>? In what universe does one crime by a deranged psychopath change whether Obama&#8217;s tax policy or job policy or bombing-scary-foreigners policy is better or worse than McCain&#8217;s? </p>
<p>Even <i>if</i> we&#8217;re willing to make the irresponsible leap from &#8220;Obama is supported by psychopaths, therefore he&#8217;s probably a bad guy,&#8221; there are like a hundred million people on each side. Psychopaths are usually estimated at about 1% of the population, so any movement with a million people will already have 10,000 psychopaths. Proving the existence of a single one changes <i>nothing</i>.</p>
<p>I think insofar as this affected the election &#8211; and everyone seems to have agreed that it might have &#8211; it hit President Obama with a burst of bad karma. Obama something something psychopath with a knife. Regardless of the exact content of those something somethings, <i>is that the kind of guy you want to vote for</i>?</p>
<p>Then when it was discovered to be a hoax, it was McCain something something race-baiting hoaxer. Now <i>he&#8217;s</i> got the bad karma!</p>
<p>This sort of conflation between a cause and its supporters really only makes sense in the emotivist model of arguing. I mean, this shouldn&#8217;t even get dignified with the name <i>ad hominem</i> fallacy. Ad hominem fallacy is &#8220;McCain had sex with a goat, therefore whatever he says about taxes is invalid.&#8221; At least it&#8217;s still the same <i>guy</i>. This is something the philosophy textbooks can&#8217;t bring themselves to believe really exists, even as a fallacy.</p>
<p>But if there&#8217;s a General Factor Of McCain, then anything bad remotely connected to the guy &#8211; goat sex, lying campaigners, whatever &#8211; reflects on everything else about him.</p>
<p>This is the same pattern we see in Israel and Palestine. How many times have you seen a news story like this one: &#8220;Israeli speaker hounded off college campus by pro-Palestinian partisans throwing fruit. Look at the intellectual bankruptcy of the pro-Palestinian cause!&#8221; It&#8217;s clearly intended as an argument for <i>something</i> other than just not throwing fruit at people. The causation seems to go something like &#8220;These particular partisans are violating the usual norms of civil discussion, therefore they are bad, therefore something associated with Palestine is bad, therefore your General Factor of Pro-Israeliness should become more strongly positive, therefore it&#8217;s okay for Israel to bomb Gaza.&#8221; Not usually said in those <i>exact words</i>, but the thread can be traced.</p>
<p><b>VII.</b></p>
<p>Here is a prediction of this model: we will be obsessed with what concepts we can connect to other concepts, even when the connection is totally meaningless.</p>
<p>Suppose I say: &#8220;Opposing Israel is anti-Semitic&#8221;. Why? Well, the Israelis are mostly Jews, so in a sense by definition being anti- them is &#8220;anti-Semitic&#8221;, broadly defined. Also, p(opposes Israel|is anti-Semitic) is probably pretty high, which sort of lends some naive plausibility to the idea that p(is anti-Semitic|opposes Israel) is at least higher than it otherwise <i>could</i> be.</p>
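The gap between those two conditional probabilities is just Bayes&#8217; rule at work. A toy calculation makes the asymmetry concrete; all three base rates below are invented purely for the arithmetic, not real survey numbers:

```python
# Made-up illustrative base rates (assumptions, not data):
p_antisemitic = 0.02            # fraction of the population that is anti-Semitic
p_opposes = 0.30                # fraction of the population that opposes Israel
p_opposes_given_anti = 0.90     # most anti-Semites oppose Israel

# Bayes' rule: p(anti-Semitic | opposes Israel) =
#   p(opposes Israel | anti-Semitic) * p(anti-Semitic) / p(opposes Israel)
p_anti_given_opposes = p_opposes_given_anti * p_antisemitic / p_opposes
# With these numbers: roughly 0.06. One conditional is high (0.90), the
# reverse conditional is low, because the base rates differ so much.
```

So even granting that nearly all anti-Semites oppose Israel, the inference in the other direction barely moves, which is why the &#8220;naive plausibility&#8221; in the paragraph above is so weak.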
<p>Maybe we do our research and we find exactly what percent of opponents of Israel endorse various anti-Semitic statements like &#8220;I hate all Jews&#8221; or &#8220;Hitler had some bright ideas&#8221;. We&#8217;ve <A HREF="http://lesswrong.com/lw/nv/replace_the_symbol_with_the_substance/">replaced the symbol with the substance</A>. Problem solved, right?</p>
<p>Maybe not. In the same sense that people can agree on all of the characteristics of Pluto &#8211; its diameter, the eccentricity of its orbit, its number of moons &#8211; and still disagree on the question &#8220;Is Pluto a planet&#8221;, one can agree on every characteristic of every Israel opponent and still disagree on the definitional question &#8220;Is opposing Israel anti-Semitic?&#8221;</p>
<p>(fact: it wasn&#8217;t until proofreading this essay that I realized I had originally written &#8220;Is Israel a planet?&#8221; and &#8220;Is opposing Pluto anti-Semitic?&#8221; I would like to see Jonathan Haidt hypnotize people until they can come up with positive arguments for those propositions.)</p>
<p>What&#8217;s the point of this useless squabble <A HREF="http://lesswrong.com/lw/np/disputing_definitions/">over definitions</A>?</p>
<p>I think it&#8217;s about drawing a line between the concept &#8220;anti-Semitism&#8221; and &#8220;oppose Israel&#8221;. If your head is screwed on right, you assign anti-Semitism some very bad karma. So if we can stick a thick line between &#8220;anti-Semitism&#8221; and &#8220;oppose Israel&#8221;, then you&#8217;re going to have very bad feelings about opposition to Israel and your General Factor Of Pro-Israeliness will go up.</p>
<p>Notice that this model <i>is transitive, but shouldn&#8217;t be</i>.</p>
<p>That is, let&#8217;s say we&#8217;re arguing over the definition of anti-Semitism, and I say &#8220;anti-Semitism just means anything that hurts Jews&#8221;. This is a dumb definition, but let&#8217;s roll with it.</p>
<p>First, I load &#8220;anti-Semitism&#8221; with lots of negative affect. Hitler was anti-Semitic. The pogroms in Russia were anti-Semitic. The Spanish Inquisition was anti-Semitic. Okay, negative affect achieved.</p>
<p>Then I connect &#8220;wants to end the Israeli occupation of Palestine&#8221; to &#8220;anti-Semitism&#8221;. Now wanting to end the Israeli occupation of Palestine has lots of negative affect attached to it.</p>
<p>It sounds dumb when you put it like that, but when you put it like &#8220;You&#8217;re anti-Semitic for wanting to end the occupation&#8221; it&#8217;s a pretty damaging argument.</p>
<p>This is <i>trying</i> to be transitive. It&#8217;s trying to say &#8220;anti-occupation = anti-Semitism, anti-Semitism = evil, therefore anti-occupation = evil&#8221;. If this were arithmetic, it would work. But there&#8217;s no Transitive Property Of Concepts. If anything, concepts are more like sets. The logic is &#8220;anti-occupation is a member of the set anti-Semitic, the set anti-Semitic contains members that are evil, therefore anti-occupation is evil&#8221;, which obviously doesn&#8217;t check out.</p>
<p>(compare: &#8220;I am a member of the set &#8216;humans&#8217;, the set &#8216;humans&#8217; contains the Pope, therefore I am the Pope&#8221;.)</p>
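<p>The set version of the fallacy can be made concrete in a couple of lines; the labels below are just the essay&#8217;s own examples, nothing more:</p>

```python
# Membership is not transitive the way equality is: belonging to a
# set that *contains* evil members does not make every member evil.
antisemitic = {"pogroms", "the Inquisition", "opposing the occupation"}
evil = {"pogroms", "the Inquisition"}

# Some members of `antisemitic` really are in `evil`...
assert antisemitic & evil
# ...but that fact says nothing about the other members:
assert "opposing the occupation" not in evil
```

<p>With equality, a = b and b = c forces a = c; with set membership, no such step is licensed, which is exactly where the argument breaks.</p>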
<p>Anti-Semitism is generally considered evil because a lot of anti-Semitic things involve killing or dehumanizing Jews. Opposing the Israel occupation of Palestine doesn&#8217;t kill or dehumanize Jews, so even if we call it &#8220;anti-Semitic&#8221; by definition, there&#8217;s no reason for our usual bad karma around anti-Semitism to transfer over. But by an unfortunate rhetorical trick, it does &#8211; you can gather up bad karma into &#8220;anti-Semitic&#8221; and then shoot it at the &#8220;occupation of Palestine&#8221; issue just by clever use of definitions.</p>
<p>This means that if you can come up with sufficiently clever definitions and convince your opponent to accept them, you can win any argument by default just by having a complex system of mirrors in place to reflect bad karma from genuinely evil things to the things you want to tar as evil. This is essentially the point I make in <A HREF="http://slatestarcodex.com/2014/07/07/social-justice-and-words-words-words/">Words, Words, Words</A>. </p>
<p>If we kinda tweak the definition of &#8220;anti-Semitism&#8221; to be &#8220;anything that inconveniences Jews&#8221;, we can pull a trick where we leverage people&#8217;s dislike of Hitler to make them support the Israeli occupation of Palestine &#8211; but in order to do that, we need to get everyone on board with our <i>slightly</i> non-standard definition. Likewise, the social justice movement insists on their own novel definitions of words like &#8220;racism&#8221; that don&#8217;t match common usage, any dictionary, or etymological history &#8211; but which do perfectly describe a mirror that reflects bad karma toward opponents of social justice while making it impossible to reflect any bad karma back. Overreliance on this mechanism explains why so many social justice debates end up being about whether a particular mirror can be deployed to transfer bad karma in a specific case (&#8220;are trans people privileged?!&#8221;) rather than any feature of the real world.</p>
<p>But they are hardly alone. Compare: &#8220;Is such and such an organization a <i>cult</i>?&#8221;, &#8220;Is such and such a policy <i>socialist</i>?&#8221;, &#8220;Is abortion or capital punishment or war <i>murder</i>?&#8221; All entirely about whether we&#8217;re allowed to reflect bad karma from known sources of evil to other topics under discussion.</p>
<p>Look around you. Just look around you. Have you worked out what we&#8217;re looking for? Correct. The answer is <A HREF="http://squid314.livejournal.com/323694.html">The Worst Argument In The World</A>. Only now, we can explain why it works.</p>
<p><b>VIII.</b></p>
<p>From the self-esteem literature, I gather that the self is also a concept that can have good or bad karma. From the cognitive dissonance literature, I gather that the self is actively involved in maintaining good karma around itself through as many biases as it can manage to deploy.</p>
<p>I&#8217;ve mentioned <A HREF="http://spr.sagepub.com/content/early/2013/01/23/0265407512472324.abstract">this study</A> before. Researchers make <s>victims</s> participants fill out a questionnaire about their romantic relationships. Then they pretend to &#8220;grade&#8221; the questionnaire, actually assigning scores at random. Half the participants are told their answers indicate they have the tendency to be very faithful to their partner. The other half are told they have very low faithfulness and their brains just aren&#8217;t built for fidelity. Then they ask the <s>participants</s> victims their opinion on staying faithful in a relationship &#8211; very important, moderately important, or not so important?</p>
<p>There is a strong tendency for people who are told they are bad at fidelity to state that fidelity is unimportant, and an equally strong tendency for people who are told they are especially faithful to state that fidelity is a great and noble virtue that must be protected.</p>
<p>The researchers conclude that people want to have high self-esteem. If I am terrible at fidelity, and fidelity is the most important virtue, that makes me a terrible person. If I am terrible at fidelity and fidelity doesn&#8217;t matter, I&#8217;m fine. If I am great at fidelity, and fidelity is the most important virtue, I can feel pretty good about myself.</p>
<p>This doesn&#8217;t seem too surprising. It&#8217;s just the more subtle version of the effect where white people are a lot more likely to be white supremacists than members of any other race. Everyone likes to hear that they&#8217;re great. The question is whether they can defend it and fit it in with their other ideas. The answer is &#8220;usually yes, because people are capable of pretty much any contortion of logic you can imagine and a lot that you can&#8217;t&#8221;.</p>
<p>I had a bad experience when I was younger where a bunch of feminists attacked and threatened me because of something I wrote. It left me kind of scarred. More importantly, the shape of that scar was a big anticorrelated line between self-esteem and the &#8220;feminism&#8221; concept. If feminism has lots of good karma, then I have lots of bad karma, because I am a person feminists hate. If feminists have lots of bad karma, then I look good by comparison, the same way it&#8217;s pretty much a badge of honor to be disliked by Nazis. The result was a permanent haze of bad karma around &#8220;feminism&#8221; unconnected to any specific feminist idea, which I have to be constantly on the watch for if I want to be able to evaluate anything related to feminism fairly or rationally.</p>
<p>Good or bad karma, when applied to yourself, looks like high or low self-esteem; when applied to groups, it looks like high or low status. In the giant muddle of a war for status that we politely call &#8220;society&#8221;, this makes beliefs into weapons and the karma loading of concepts into the difference between lionization and dehumanization.</p>
<p>The Trope Namer for emotivist arguments is &#8220;ethnic tension&#8221;, and although it&#8217;s most obvious in the case of literal ethnicities like the Israelis and the Palestinians, the ease with which concepts become attached to different groups creates a whole lot of &#8220;proxy ethnicities&#8221;. I&#8217;ve <A HREF="http://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/">written before</A> about how American liberals and conservatives are seeming less and less like people who happen to have different policy prescriptions, and more like two different tribes engaged in an ethnic conflict quickly approaching Middle East level hostility. More recently, a friend on Facebook described the-thing-whose-name-we-do-not-speak-lest-it-appear-and-destroy-us-all, the one involving reproductively viable worker ants, as looking more like an ethnic conflict about who is oppressing whom than any real difference in opinions.</p>
<p>Once a concept has joined up with an ethnic group, either a real one or a makeshift one, it&#8217;s impossible to oppose the concept without simultaneously lowering the status of the ethnic group, which is going to start at least a <i>little</i> bit of a war. Worse, once a concept has joined up with an ethnic group, one of the best ways to argue against the concept is to dehumanize the ethnic group it&#8217;s working with. Dehumanizing an ethnic group has always been easy &#8211; just associate them with a disgust reaction, <A HREF="http://multiheaded1793.tumblr.com/post/98968958301/queenshulamit-hufflepuffintp">portray</A> them as conventionally unattractive and unlovable and full of all the worst human traits &#8211; and now it is profitable as well, since it&#8217;s one of the fastest ways to load bad karma into an idea you dislike.</p>
<p><b>IX.</b></p>
<p>According to <A HREF="http://yudkowsky.net/rational/virtues/">The Virtues Of Rationality</A>:<br />
<blockquote>The tenth virtue is precision. One comes and says: The quantity is between 1 and 100. Another says: the quantity is between 40 and 50. If the quantity is 42 they are both correct, but the second prediction was more useful and exposed itself to a stricter test. What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world. The narrowest statements slice deepest, the cutting edge of the blade. As with the map, so too with the art of mapmaking: The Way is a precise Art. Do not walk to the truth, but dance. On each and every step of that dance your foot comes down in exactly the right spot. Each piece of evidence shifts your beliefs by exactly the right amount, neither more nor less. What is exactly the right amount? To calculate this you must study probability theory. Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.</p></blockquote>
<p>The official description is of literal precision, as in specific numerical precision in probability updates. But is there a secret interpretation of this virtue?</p>
<p><center><br />
<blockquote class="twitter-tweet" lang="en">
<p>Four top secret Virtues known only to the Highest Clergy: 1) Fnorg 2) Turlity 3) Charigrace 4) Love-231.</p>
<p>&mdash; Deity Of Religion (@deityofreligion) <a href="https://twitter.com/deityofreligion/status/525712796852301825">October 24, 2014</a></p></blockquote>
<p><script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script></center></p>
<p>Precision as separation. Once you&#8217;re debating &#8220;religion&#8221;, you&#8217;ve already lost. Precision as sticking to a precise question, like &#8220;Is the first chapter of Genesis literally true?&#8221; or &#8220;Does Buddhist meditation help treat anxiety disorders?&#8221; and trying to keep these issues as separate from any General Factor Of Religiousness as humanly possible. Precision such that &#8220;God the supernatural Creator exists&#8221; and &#8220;God the order and beauty in the Universe exists&#8221; are as carefully sequestered from one another as &#8220;Did the defendant kill his wife?&#8221; and &#8220;Did the defendant kill the President?&#8221;</p>
<p>I want to end by addressing a point a commenter made in my last post on motte-and-bailey:<br />
<blockquote>In the real world, the particular abstract questions aren’t what matter – the groups and people are what matter. People get things done, and they aren’t particularly married to particular abstract concepts, they are married to their values and their compatriots. In order to deal with reality, we must attack and defend groups and individuals. That does not mean forsaking logic. It requires dealing with obfuscating tactics like those you outline above, but that’s not even a real downside, because if you flee into the narrow, particular questions all you’re doing is covering your eyes to avoid perceiving the monsters that will still make mincemeat of your attempts to change things.</p></blockquote>
<p>I don&#8217;t entirely disagree with this. But I think we&#8217;ve <A HREF="http://slatestarcodex.com/2014/02/23/in-favor-of-niceness-community-and-civilization/">been over this territory before</A>.</p>
<p>The world is a scary place, full of bad people who want to hurt you, and in the state of nature you&#8217;re pretty much <A HREF="http://slatestarcodex.com/2014/07/30/meditations-on-moloch/">obligated</A> to engage in whatever it takes to survive.</p>
<p>But instead of sticking with the state of nature, we have the ability to form communities built on mutual disarmament and mutual cooperation. Despite artificially limiting themselves, these communities become stronger than the less-scrupulous people outside them, because they can work together effectively and because they can boast a better quality of life that attracts their would-be enemies to join them. At least in the short term, these communities can resist races to the bottom and prevent the use of personally effective but negative-sum strategies.</p>
<p>One such community is the kind where members try to stick to rational discussion as much as possible. These communities are definitely better able to work together, because they have a <A HREF="http://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem">powerful method</A> of resolving empirical disputes. They definitely offer a better quality of life, because you don&#8217;t have to deal with constant <A HREF="http://slatestarcodex.com/2014/06/14/living-by-the-sword/">insult wars and personal attacks</A>. And the existence of such communities provides positive externalities to the outside world, since they are better able to resolve difficult issues and find truth.</p>
<p>But forming a rationalist community isn&#8217;t just about having the <i>will</i> to discuss things well. It&#8217;s also about having the <i>ability</i>. Overcoming bias is really hard, and so the members of such a community need to be constantly trying to advance the art and figure out how to improve their discussion tactics.</p>
<p>As such, it&#8217;s acceptable to try to determine and discuss negative patterns of argument, even if those patterns of argument are useful and necessary weapons in a state of nature. If anything, understanding them makes them <i>easier</i> to use if you&#8217;ve got to use them, and makes them easier to recognize and counter from others, giving a slight advantage in battle if that&#8217;s the kind of thing you like. But moving them from unconscious to conscious also gives you the crucial <i>choice</i> of when to deploy them and allows people to try to root out ethnic tension in particular communities.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/11/04/ethnic-tension-and-meaningless-arguments/feed/</wfw:commentRss>
		<slash:comments>332</slash:comments>
		</item>
		<item>
		<title>All In All, Another Brick In The Motte</title>
		<link>http://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/</link>
		<comments>http://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/#comments</comments>
		<pubDate>Tue, 04 Nov 2014 03:19:58 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[rationality]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=3131</guid>
		<description><![CDATA[One of the better things I&#8217;ve done with this blog was help popularize Nicholas Shackel&#8217;s &#8220;motte and bailey doctrine&#8221;. But I&#8217;ve recently been reminded I didn&#8217;t do a very good job of it. The original discussion is in the middle &#8230; <a href="http://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>One of the better things I&#8217;ve done with this blog was help popularize Nicholas Shackel&#8217;s <A HREF="http://philpapers.org/archive/SHATVO-2.pdf">&#8220;motte and bailey doctrine&#8221;</A>. But I&#8217;ve recently been reminded I didn&#8217;t do a very good job of it. The original discussion is in the middle of a post so controversial that it probably can&#8217;t be linked in polite company &#8211; somewhat dampening its ability to popularize anything.</p>
<p>In order to rectify the error, here is a nice clean post on the concept that adds a couple of further thoughts to the original formulation.</p>
<p>The original Shackel paper is intended as a critique of post-modernism. Post-modernists sometimes say things like “reality is socially constructed”, and there’s an uncontroversially correct meaning there. We don’t experience the world directly, but through the categories and prejudices implicit to our society; for example, I might view a certain shade of bluish-green as blue, and someone raised in a different culture might view it as green. Okay.</p>
<p>Then post-modernists go on to say that if someone in a different culture thinks that the sun is light glinting off the horns of the Sky Ox, that’s just as real as our own culture’s theory that the sun is <s>a mass of incandescent gas</s> a great big nuclear furnace. If you challenge them, they’ll say that you’re denying reality is socially constructed, which means you’re clearly very naive and think you have perfect objectivity and the senses perceive reality directly.</p>
<p>The writer of the paper compares this to a form of medieval castle, where there would be a field of desirable and economically productive land called a bailey, and a big ugly tower in the middle called the motte. If you were a medieval lord, you would do most of your economic activity in the bailey and get rich. If an enemy approached, you would retreat to the motte and rain down arrows on the enemy until they gave up and went away. Then you would go back to the bailey, which is the place you wanted to be all along.</p>
<p>So the motte-and-bailey doctrine is when you make a bold, controversial statement. Then when somebody challenges you, you claim you were just making an obvious, uncontroversial statement, so you are clearly right and they are silly for challenging you. Then when the argument is over you go back to making the bold, controversial statement.</p>
<p>Some classic examples:</p>
<p>1. The religious group that acts for all the world like God is a supernatural creator who builds universes, creates people out of other people&#8217;s ribs, parts seas, and heals the sick when asked very nicely (bailey). Then when atheists come around and say maybe there&#8217;s no God, the religious group objects &#8220;But God is just another name for the beauty and order in the Universe! You&#8217;re not denying that there&#8217;s beauty and order in the Universe, are you?&#8221; (motte). Then when the atheists go away they get back to making people out of other people&#8217;s ribs and stuff.</p>
<p>2. Or&#8230;&#8221;If you don&#8217;t accept Jesus, you will burn in Hell forever.&#8221; (bailey) But isn&#8217;t that horrible and inhuman? &#8220;Well, Hell is just another word for being without God, and if you choose to be without God, God will be nice and let you make that choice.&#8221; (motte) Oh, well that doesn&#8217;t sound so bad, I&#8217;m going to keep rejecting Jesus. &#8220;But if you reject Jesus, you will BURN in HELL FOREVER and your body will be GNAWED BY WORMS.&#8221; But didn&#8217;t you just&#8230; &#8220;Metaphorical worms of godlessness!&#8221;</p>
<p>3. The feminists who constantly argue about whether you can be a real feminist or not without believing in X, Y and Z and wanting to empower women in some very specific way, and who demand everybody support controversial policies like affirmative action or affirmative consent laws (bailey). Then when someone says they don&#8217;t really like feminism very much, they object &#8220;But feminism is just the belief that women are people!&#8221; (motte) Then once the person hastily retreats and promises he <i>definitely</i> didn&#8217;t mean women aren&#8217;t people, the feminists get back to demanding everyone support affirmative action because feminism, or arguing about whether you can be a feminist and wear lipstick.</p>
<p>4. Proponents of pseudoscience sometimes argue that their particular form of quackery will cure cancer or take away your pains or heal your crippling injuries (bailey). When confronted with evidence that it doesn&#8217;t work, they might argue that people need hope, and even a placebo solution will often relieve stress and help people feel cared for (motte). In fact, some have argued that quackery may be better than real medicine for certain untreatable diseases, because neither real nor fake medicine will help, but fake medicine tends to be more calming and has fewer side effects. But then once you leave the quacks in peace, they will go back to telling less knowledgeable patients that their treatments will cure cancer.</p>
<p>5. Critics of the rationalist community note that it pushes controversial complicated things like Bayesian statistics and utilitarianism (bailey) under the name &#8220;rationality&#8221;, but when asked to justify itself defines rationality as &#8220;whatever helps you achieve your goals&#8221;, which is so vague as to be universally unobjectionable (motte). Then once you have admitted that more rationality is always a good thing, they suggest you&#8217;ve admitted everyone needs to learn more Bayesian statistics.</p>
<p>6. Likewise, singularitarians who predict with certainty that there will be a singularity, because &#8220;singularity&#8221; just means &#8220;a time when technology is so different that it is impossible to imagine&#8221; &#8211; and really, who would deny that technology will probably get really weird (motte)? But then every other time they use &#8220;singularity&#8221;, they use it to refer to a very specific scenario of intelligence explosion, which is far less certain and needs a lot more evidence before you can predict it (bailey).</p>
<p>The motte and bailey doctrine sounds kind of stupid and hard-to-fall-for when you put it like that, but <i>all</i> fallacies sound that way <i>when you&#8217;re thinking about them</i>. More important, it draws its strength from people&#8217;s usual failure to debate specific propositions rather than vague clouds of ideas. If I&#8217;m debating &#8220;does quackery cure cancer?&#8221;, it might be easy to view that as a general case of the problem of &#8220;is quackery okay?&#8221; or &#8220;should quackery be illegal?&#8221;, and from there it&#8217;s easy to bring up the motte objection.</p>
<p>Recently, a friend (I think it was Robby Bensinger) pointed out something I&#8217;d totally missed. The motte-and-bailey doctrine is a perfect mirror image of my other favorite fallacy, the <A HREF="http://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/">weak man fallacy</A>.</p>
<p>Weak-manning is a lot like straw-manning, except that instead of debating a fake, implausibly stupid opponent, you&#8217;re debating a real, unrepresentatively stupid opponent. For example, &#8220;Religious people say that you should kill all gays. But this is evil. Therefore, religion is wrong and barbaric. Therefore we should all be atheists.&#8221; There are certainly religious people who think that you should kill all gays, but they&#8217;re a small fraction of all religious people and probably not the ones an unbiased observer would hold up as the best that religion has to offer.</p>
<p>If you&#8217;re debating the Pope or something, then when you weak-man, you&#8217;re unfairly replacing a strong position (the Pope&#8217;s) with a weak position (that of the guy who wants to kill gays) to make it more attackable.</p>
<p>But in motte and bailey, you&#8217;re unfairly replacing a weak position (there is a supernatural creator who can make people out of ribs) with a strong position (there is order and beauty in the universe) in order to make it more defensible.</p>
<p>So weak-manning is replacing a strong position with a weak position to better attack it; motte-and-bailey is replacing a weak position with a strong position to better defend it.</p>
<p>This means people who know both terms are at constant risk of arguments of the form &#8220;You&#8217;re weak-manning me!&#8221; &#8220;No, <i>you&#8217;re</i> motte-and-baileying <i>me!</i>&#8220;.</p>
<p>Suppose we&#8217;re debating feminism, and I defend it by saying it really is important that women are people, and you attack it by saying that it&#8217;s not true that all men are terrible. Then I can accuse you of making life easy for yourself by attacking the weakest statement anyone vaguely associated with feminism has ever pushed. And you can accuse me of making life too easy for myself by defending the most uncontroversially obvious statement I can get away with.</p>
<p>So what is the <i>real</i> feminism we should be debating? <i>Why would you even ask that question?</i> What is this, some kind of dumb high school debate club? Who the heck thinks it would be a good idea to say &#8220;Here&#8217;s a vague poorly-defined concept that mind-kills everyone who touches it &#8211; quick, should you associate it with positive affect or negative affect?!&#8221;</p>
<p><A HREF="http://lesswrong.com/lw/nu/taboo_your_words/">Taboo</A> your words, then <A HREF="http://lesswrong.com/lw/nv/replace_the_symbol_with_the_substance/">replace the symbol with the substance</A>. If you have an <i>actual thing</i> you&#8217;re trying to debate, then it should be obvious when somebody&#8217;s changing the topic. If working out who&#8217;s using motte-and-bailey (or weak man) is remotely difficult, it means your discussion went wrong several steps earlier and you probably have no idea what you&#8217;re even arguing about.</p>
<p>PS: Nicholas Shackel, original inventor of the term, <A HREF="http://blog.practicalethics.ox.ac.uk/2014/09/motte-and-bailey-doctrines/">weighs in</A>.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/feed/</wfw:commentRss>
		<slash:comments>249</slash:comments>
		</item>
		<item>
		<title>Tumblr on MIRI</title>
		<link>http://slatestarcodex.com/2014/10/07/tumblr-on-miri/</link>
		<comments>http://slatestarcodex.com/2014/10/07/tumblr-on-miri/#comments</comments>
		<pubDate>Tue, 07 Oct 2014 22:22:06 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[long post is long]]></category>
		<category><![CDATA[rationality]]></category>
		<category><![CDATA[transhumanism]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=3001</guid>
		<description><![CDATA[[Disclaimer: I have done odd jobs for MIRI once or twice several years ago, but I am not currently affiliated with them in any way and do not speak for them.] A recent Tumblr conversation on the Machine Intelligence Research &#8230; <a href="http://slatestarcodex.com/2014/10/07/tumblr-on-miri/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><i><font size="1">[Disclaimer: I have done odd jobs for MIRI once or twice several years ago, but I am not currently affiliated with them in any way and do not speak for them.]</font></i></p>
<p>A recent Tumblr conversation on the Machine Intelligence Research Institute has gotten interesting and I thought I&#8217;d see what people here have to say.</p>
<p>If you&#8217;re just joining us and don&#8217;t know about the <A HREF="http://intelligence.org/">Machine Intelligence Research Institute</A> (&#8220;MIRI&#8221; to its friends), they&#8217;re a nonprofit organization dedicated to navigating the risks surrounding &#8220;intelligence explosion&#8221;. In this scenario, a few key insights around artificial intelligence can very quickly lead to computers so much smarter than humans that the future is almost entirely determined by their decisions. This would be especially dangerous since most AIs use very primitive goal systems inappropriate for and untested on intelligent entities; such a goal system would be &#8220;unstable&#8221; and from a human perspective the resulting artificial intelligence could have apparently arbitrary or insane goals. If such a superintelligence were much more powerful than we are, it would present an existential threat to the human race.</p>
<p>This has almost nothing to do with the classic &#8220;Skynet&#8221; scenario &#8211; but if it helps to imagine Skynet, then fine, just imagine Skynet. <A HREF="http://slatestarcodex.com/2014/08/26/if-the-media-reported-on-other-dangers-like-it-does-ai-risk/">Everyone else does</A>.</p>
<p>MIRI tries to raise awareness of this possibility among AI researchers, scientists, and the general public, and to start foundational research in more stable goal systems that might allow AIs to become intelligent or superintelligent while still acting in predictable and human-friendly ways. </p>
<p>This is <A HREF="http://gruntledandhinged.com/2014/09/14/on-beginners-and-burning-out/">not a 101 space</A> and I don&#8217;t want the comments here to all be about whether or not this scenario is likely. If you really want to discuss that, go read at least <A HREF="http://intelligenceexplosion.com/"><i>Facing The Intelligence Explosion</i></A> and then post your comments in the <A HREF="http://lesswrong.com/r/discussion/lw/l2v/open_thread_oct_6_oct_12_2014/">Less Wrong Open Thread</A> or something. This is about MIRI as an organization.</p>
<p>(If you&#8217;re <i>really</i> just joining us and you don&#8217;t know about <i>Tumblr</i>, run away)</p>
<p><b>II.</b></p>
<p>Tumblr user <A HREF="http://su3su2u1.tumblr.com/">su3su2u1</A> writes:<br />
<blockquote>Saw some tumblr people talking about [effective altruism].  My biggest problem with this movement is that most everyone I know who identifies themselves as an effective altruist donates money to MIRI (it&#8217;s possible this is more a comment on the people I know than the effective altruism movement, I guess). Based on their output over the last decade, MIRI is primarily a fanfic and blog-post producing organization.  That seems like spending money on personal entertainment.</p></blockquote>
<p>Part of this is obviously mean-spirited potshots, in that MIRI itself doesn&#8217;t produce fanfic and what their employees choose to do with their own time is none of your damn business. </p>
<p>(well, slightly more complicated. I think MIRI gave Eliezer a couple weeks vacation to work on it as an &#8220;outreach&#8221; thing once. But that&#8217;s a little different from it being their main priority.)</p>
<p>But more serious is the claim that MIRI doesn&#8217;t do much else of value. I challenged Su3 with the following evidence of MIRI doing good work:</p>
<p>A1. MIRI has been very successful with outreach and networking &#8211; basically getting their cause noticed and endorsed by the scientific establishment and popular press. They&#8217;ve gotten positive attention, sometimes even endorsements, from people like Stephen Hawking, Elon Musk, Gary Drescher, Max Tegmark, Stuart Russell, and Peter Thiel. Even Bill Gates is talking about AI risk, though I don&#8217;t think he&#8217;s mentioned MIRI by name. Multiple popular books have been written about their ideas, such as James Miller&#8217;s <a href="http://smile.amazon.com/gp/product/1936661659/ref=as_li_tl?ie=UTF8&#038;camp=1789&#038;creative=390957&#038;creativeASIN=1936661659&#038;linkCode=as2&#038;tag=slastacod-20&#038;linkId=COECOF245ZC5CPDR"><i>Singularity Rising</i></a> and Stuart Armstrong&#8217;s <a href="http://smile.amazon.com/gp/product/1939311098/ref=as_li_tl?ie=UTF8&#038;camp=1789&#038;creative=390957&#038;creativeASIN=1939311098&#038;linkCode=as2&#038;tag=slastacod-20&#038;linkId=IMQHSBDPTWFHQJ5Q"><i>Smarter Than Us</i></a>. 
Most recently Nick Bostrom&#8217;s book <a href="http://smile.amazon.com/gp/product/0199678111/ref=as_li_tl?ie=UTF8&#038;camp=1789&#038;creative=390957&#038;creativeASIN=0199678111&#038;linkCode=as2&#038;tag=slastacod-20&#038;linkId=3AA4KAD7OILDHLM2"><i>Superintelligence</i></a>, based at least in part on MIRI&#8217;s research and ideas, is a New York Times best-seller and has been reviewed positively in the Guardian, the Telegraph, Salon, the Financial Times, and the Economist. Oxford has opened up the AI-risk-focused Future of Humanity Institute; MIT has opened up the similar Future of Life Institute. In about a decade, the idea of an intelligence explosion has gone from Time Cube level crackpottery to something taken seriously by public intellectuals and widely discussed in the tech community.</p>
<p>A2. MIRI has many publications, conference presentations, book chapters and other things usually associated with normal academic research, which interested parties can <A HREF="http://intelligence.org/all-publications/">find on their website</A>. They have conducted seven past research workshops which have produced interesting results like Christiano et al&#8217;s claimed proof of a way around the logical undefinability of truth, which was praised as potentially interesting by <A HREF="http://johncarlosbaez.wordpress.com/2013/03/31/probability-theory-and-the-undefinability-of-truth/">respected mathematics blogger John Baez</A>.</p>
<p>A3. Many former MIRI employees, and many more unofficial fans, supporters, and associates of MIRI, are widely distributed across the tech community in industries that are likely to be on the cutting edge of artificial intelligence. For example, there are a bunch of people influenced by MIRI in Google&#8217;s AI department. Shane Legg, who writes about how <A HREF="http://www.vetta.org/about-me/">his early work was funded by a MIRI grant</A> and who <A HREF="http://www.vetta.org/2009/08/funding-safe-agi/">once called</A> MIRI &#8220;the best hope that we have&#8221; was pivotal in convincing Google to set up an <A HREF="http://www.huffingtonpost.com/2014/01/29/google-ai_n_4683343.html">AI ethics board</A> to monitor the risks of the company&#8217;s cutting-edge AI research. The same article mentions Peter Thiel and Jaan Tallinn as leading voices who will make Google comply with the board&#8217;s recommendations; they also happen to be MIRI supporters and the organization&#8217;s first and third <A HREF="http://intelligence.org/topdonors/">largest donors</A>.</p>
<p>There&#8217;s a certain level of faith required for (A1) and (A3) here, in that I&#8217;m attributing anything good that happens in the field of AI risk to some sort of shady behind-the-scenes influence from MIRI. Maybe Legg, Tallinn, and Thiel would have pushed for the exact same Google AI Ethics Board if none of them had ever heard of MIRI at all. I am forced to plead ignorance on the finer points of networking and soft influence. Heck, for all I know, maybe the exact same number of people would vote Democrat if there were no Democratic National Committee or liberal PACs. I just assume that, given a really weird idea that very few people held in 2000, an organization dedicated to spreading that idea, and the observation that the idea has indeed spread very far, the organization is probably doing something right.</p>
<p><b>III.</b></p>
<p>Our discussion on point (A3) degenerated into Dueling Anecdotal Evidence. But Su3 responded to my point (A1) like so:<br />
<blockquote>[I agree that MIRI has gotten shoutouts from various thought leaders like Stephen Hawking and Elon Musk. Bostrom&#8217;s book is commercially successful, but that&#8217;s just] more advertising.  Popular books aren’t the way to get researchers to notice you. I’ve never denied that MIRI/SIAI was good at fundraising, which is primarily what you are describing.</p>
<p>How many of those thought leaders have any publications in CS or pure mathematics, let alone AI?  Tegmark might have a math paper or two, but he is primarily a cosmologist. The FLI&#8217;s list of scientists is (for some reason) mostly again cosmologists.  The active researchers appear to be a few (non-CS, non-math) grad students. Not exactly the team you’d put together if you were actually serious about imminent AI risk.</p>
<p>I would also point out “successfully attracted big venture capital names” isn’t always a mark of a sound organization.  Black Light Power is run by a crackpot who thinks he can make energy by burning water, and has attracted nearly 100 million in funding over the last two decades, with several big names in energy production behind him.</p></blockquote>
<p>And to my point (A2) like so:<br />
<blockquote>I have a PhD in physics and work in machine learning. I’ve read some of the technical documents on MIRI’s site, back when it was SIAI, and I was unimpressed.  I also note that this critique is not unique to me, as far as I know the GiveWell position on MIRI is that it is not an effective institute. </p>
<p>The series of papers on Lob’s theorem <i>are</i> actually interesting, though I notice that none of the results have been peer reviewed, and the papers aren’t listed as being submitted to journals yet.  Their result looks right to me, but I wouldn’t trust myself to catch any subtlety that might be involved.  </p>
<p>[But that just means] one result has gotten some small positive attention, and even those results haven’t been vetted by the wider math community yet (no peer review). Let&#8217;s take a closer look at the list of publications on MIRI’s website &#8211; I count 6 peer reviewed papers in their existence, and 13 conference presentations. That&#8217;s horribly unproductive!  Most of the grad students who finish a physics PhD will publish that many papers individually, in about half that time. You claim part of their goal is to get academics to pay attention, but none of their papers are highly cited, despite all this networking they are doing.</p>
<p>Citations are the standard way to measure who in academia is paying attention.  Apart from the FHI/MIRI echo chamber (citations bouncing around between the two organizations), no one in academia seems to be paying attention to MIRI’s output.  MIRI is failing to make academic inroads, and it has produced very little in the way of actual research.</p></blockquote>
<p>My interpretation, in the form of a TL;DR:</p>
<p>B1. Sure, MIRI is good at getting attention, press coverage, and interest from smart people not in the field. But that&#8217;s public relations and fundraising. An organization being good at fundraising and PR doesn&#8217;t mean it&#8217;s good at anything else, and in fact &#8220;so good at PR they can cover up not having substance&#8221; is a dangerous failure mode.</p>
<p>B2. What MIRI needs, but doesn&#8217;t have, is the attention and support of smart people <i>within the fields of math, AI, and computer science</i>, whereas now it mostly has grad students not in these fields.</p>
<p>B3. While having a couple of published papers might look impressive to a non-academic, people more familiar with the culture would know that their output is woefully low. They seem to have gotten in about five to ten solid publications during their decade-long history as a multi-person organization; one good grad student can get a couple solid publications a year. Their output is less than expected by like an order of magnitude. And although they do get citations, this is all from a mutual back-scratching club of them and Bostrom/FHI citing each other.</p>
<p><b>IV.</b></p>
<p>At this point <A HREF="http://somervta.tumblr.com/">Tarn</A> and <A HREF="http://nothingismere.tumblr.com/">Robby</A> joined the conversation and it became kind of confusing, but I&#8217;ll try to summarize our responses.</p>
<p>Our response to Su3&#8217;s point (B1) was that this is fundamentally misunderstanding outreach. From its inception until about last year, MIRI was in large part an outreach and awareness-raising organization. Its 2008 website describes its mission like so:<br />
<blockquote>In the coming decades, humanity will likely create a powerful AI. SIAI exists to confront this urgent challenge, both the opportunity and the risk. SIAI is fostering research, education, and outreach to increase the likelihood that the promise of AI is realized for the benefit of everyone.</p></blockquote>
<p>Outreach is one of its three main goals, and &#8220;education&#8221;, which sounds a lot like outreach, is a second.</p>
<p>In a small field where you&#8217;re the only game in town, it&#8217;s hard to distinguish between outreach and self-promotion. If MIRI successfully gets Stephen Hawking to say &#8220;We need to be more concerned about AI risks, as described by organizations like MIRI&#8221;, is that them being very good at self-promotion and fundraising, or is that them accomplishing their core mission of getting information about AI risks to the masses?</p>
<p>Once again, compare to a political organization, maybe Al Gore&#8217;s anti-global-warming nonprofit. If they get the media to talk about global warming a lot, and get lots of public intellectuals to come out against global warming, and change behavior in the relevant industries, then mission accomplished. The popularity of <i>An Inconvenient Truth</i> can&#8217;t just be dismissed as &#8220;self-promotion&#8221; or &#8220;fundraising&#8221; for Gore, it was exactly the sort of thing he was gathering money and personal prestige <i>in order to do</i>, and should be considered a victory in its own right. Even though eventually the anti-global-warming cause cares about politicians, industry leaders, and climatologists a lot more than they care about the average citizen, convincing millions of average citizens to help was a necessary first step.</p>
<p>And what is true of <i>An Inconvenient Truth</i> is true of <i>Superintelligence</i> and other AI risk publicity efforts, albeit on their much smaller scale.</p>
<p>Our response to Su3&#8217;s point (B2) was that it was just plain factually false. MIRI hasn&#8217;t reached big names from the AI/math/compsci field? Sure it has. Doesn&#8217;t have mathy PhD students willing to research for them? Sure it does.</p>
<p>Peter Norvig and Stuart Russell are among the biggest names in AI. Norvig is currently the Director of Research at Google; Russell is Professor of Computer Science at Berkeley and a winner of <A HREF="http://en.wikipedia.org/wiki/Stuart_J._Russell">various impressive sounding awards</A>. The two wrote a widely-used textbook on artificial intelligence in which they devote three pages to the proposition that “The success of AI might mean the end of the human race&#8221;; parts are taken right out of the MIRI playbook and they cite MIRI research fellow Eliezer Yudkowsky&#8217;s paper on the subject. This is unlikely to be a coincidence; Russell&#8217;s site <A HREF="http://www.cs.berkeley.edu/~russell/research/future/">links to MIRI</A> and he is <A HREF="http://intelligence.org/workshops/">scheduled to participate in</A> MIRI&#8217;s next research workshop. </p>
<p>Their &#8220;team&#8221; of &#8220;research advisors&#8221; includes <A HREF="http://en.wikipedia.org/wiki/Gary_Drescher">Gary Drescher</A> (PhD in CompSci from MIT), <A HREF="http://en.wikipedia.org/wiki/Steve_Omohundro">Steve Omohundro</A> (PhD in physics from Berkeley but also considered a pioneer of machine learning), <A HREF="http://en.wikipedia.org/wiki/Roman_Yampolskiy">Roman Yampolskiy</A> (PhD in CompSci from Buffalo), and Moshe Looks (PhD in CompSci from Washington).</p>
<p>Su3 brought up the good point that none of these people, respected as they are, are MIRI employees or researchers (although Drescher has been to a research workshop). At best, they are people who were willing to let MIRI use them as figureheads (in the case of the research advisors); at worst, they are merely people who have acknowledged MIRI&#8217;s existence in a not-entirely-unlike-positive way (Norvig and Russell). Even if we agree they are geniuses, this does not mean that <i>MIRI</i> has access to geniuses or can produce genius-level research.</p>
<p>Fine. All these people are, no more and no less, evidence that MIRI is succeeding at outreach within the academic field of AI, as well as in the general public. It also seems to me to be some evidence that smart people who know more about AI than any of us think MIRI is on the right track.</p>
<p>Su3 brought up the example of <A HREF="http://en.wikipedia.org/wiki/BlackLight_Power">BlackLight Power</A>, a crackpot energy company that was able to get lots of popular press and venture capital funding despite being powered entirely by pseudoscience. I agree this is the sort of thing we should be worried about. Nonscientists outside of specialized fields have limited ability to evaluate their claims. But when smart researchers in the field are willing to vouch for MIRI, that gives me a lot more confidence they&#8217;re <i>not</i> just a fly-by-night group trying to profit off of pseudoscience. Their research might be more impressive or less impressive, but they&#8217;re not rotten to the core the same way BlackLight was.</p>
<p>And though MIRI&#8217;s own researchers may be far from those lofty heights, I find Su3&#8217;s claim that they are &#8220;a few non-CS, non-math grad students&#8221; a serious underestimate.</p>
<p>MIRI has fourteen employees/associates with the word &#8220;research&#8221; in their name, but of those, a couple (in the words of MIRI&#8217;s team page) &#8220;focus on social and historical questions related to artificial intelligence outcomes.&#8221; These people should not be expected to have PhDs in mathematical/compsci subjects.</p>
<p>Of the rest, Bill is a PhD in CompSci, Patrick is a PhD in math, Nisan is a PhD in math, Benja is a PhD student in math, and Paul is a PhD student in math. The others mostly have master&#8217;s or bachelor&#8217;s degrees in those fields, have published journal articles, and/or have won prizes in mathematical competitions. Eliezer <A HREF="http://lesswrong.com/lw/hq6/xrisk_roll_call/976f">writes</A> of some of the remaining members of his team:<br />
<blockquote>Mihaly Barasz is an International Mathematical Olympiad gold medalist perfect scorer. From what I&#8217;ve seen personally, I&#8217;d guess that Paul Christiano is better than him at math. I forget what Marcello&#8217;s prodigy points were in but I think it was some sort of Computing Olympiad [editor&#8217;s note: USACO finalist and 2x honorable mention in the Putnam mathematics competition]. All should have some sort of verified performance feat far in excess of the listed educational attainment.</p></blockquote>
<p>That pretty much leaves Eliezer Yudkowsky, who needs no introduction, and Nate Soares, <A HREF="http://lesswrong.com/lw/jl3/on_saving_the_world/">whose introduction exists and is pretty interesting</A>.</p>
<p>Add to that the many, many PhDs and talented people who aren&#8217;t officially employed by them but attend their workshops and help out their research when they get the chance, and you have to ask how many brilliant PhDs from some of the top universities in the world we should <i>expect</i> a small organization like MIRI to have. MIRI competes for the same sorts of people as Google, and <A HREF="http://intelligence.org/careers/research-fellow/">offers half as much</A>. Google paid $400 million to get Shane Legg and his people on board; MIRI&#8217;s yearly budget hovers at about $1 million. Given that they probably spend a big chunk of that on office space, setting up conferences, and other incidentals, I think the amount of talent they have right now is pretty good. </p>
<p>That leaves Su3&#8217;s point (B3) &#8211; the lack of published research.</p>
<p>One retort might be that, until recently, MIRI&#8217;s research focused on strategic planning and evaluation of AI risks. This is important, and it resulted in a lot of internal technical papers you can find on their website, but there&#8217;s not really a <i>field</i> for it. You can&#8217;t just publish it in the <i>Journal Of What Would Happen If There Was An Intelligence Explosion</i>, because no such journal exists. The best they can do is publish the parts of their research that connect to other fields in appropriate journals, which they sometimes did.</p>
<p>I feel like this also frees them from the critique of citation-incest between them and Bostrom. When I look at <A HREF="http://scholar.google.com/scholar?cites=10134633916327307541&#038;as_sdt=80000005&#038;sciodt=0,23&#038;hl=en">a typical list</A> of MIRI paper citations, I do see a lot of Bostrom, but also some other names that keep coming up &#8211; Hutter, Yampolskiy, Goertzel. So okay, it&#8217;s an incest circle of four or five rather than two.</p>
<p>But to some degree that&#8217;s what I <i>expect</i> from academia. Right now I&#8217;m doing my own research on a psychiatric screening tool called the MDQ. There are three or four research teams in three or four institutions who are really into this and publish papers on it a lot. Occasionally someone from another part of psychiatry wanders in, but usually it&#8217;s just the subsubsubspeciality of MDQ researchers talking to each other. That&#8217;s fine. They&#8217;re our repository of specialized knowledge on this one screening tool.</p>
<p>You would <i>hope</i> the future of the human race would get a little bit more attention than one lousy psychiatric screening tool, but blah blah civilizational inadequacy, turns out not so much, they&#8217;re of about equal size. If there are only a couple of groups working on this problem, they&#8217;re going to look incestuous but that&#8217;s fine.</p>
<p>On the other hand, math is math, and if MIRI is trying to produce real mathematical results they ought to be sharing them with the broader mathematical community.</p>
<p>Robby protests that until very recently, MIRI <i>hasn&#8217;t</i> really been focusing on math. This is a very recent pivot. In April 2013, Luke wrote in his <A HREF="http://intelligence.org/2013/04/13/miris-strategy-for-2013/">mini strategic plan</A>:<br />
<blockquote>We were once doing three things — research, rationality training, and the Singularity Summit. Now we’re doing one thing: research. Rationality training was spun out to a separate organization, CFAR, and the Summit was acquired by Singularity University. We still co-produce the Singularity Summit with Singularity University, but this requires limited effort on our part.<br />
After dozens of hours of strategic planning in January–March 2013, and with input from 20+ external advisors, we’ve decided to (1) put less effort into public outreach, and to (2) shift our research priorities to Friendly AI math research.</p></blockquote>
<p>In the <A HREF="http://intelligence.org/2014/06/11/mid-2014-strategic-plan/">full strategic plan for 2014</A>, he repeated:<br />
<blockquote>Events since MIRI’s April 2013 strategic plan have increased my confidence that we are “headed in the right direction.” During the rest of 2014 we will continue to:<br />
&#8211; Decrease our public outreach efforts, leaving most of that work to FHI at Oxford, CSER at Cambridge, FLI at MIT, Stuart Russell at UC Berkeley, and others (e.g. James Barrat).<br />
&#8211; Finish a few pending “strategic research” projects, then decrease our efforts on that front, again leaving most of that work to FHI, plus CSER and FLI if they hire researchers, plus some others.<br />
&#8211; Increase our investment in our Friendly AI (FAI) technical research agenda.<br />
&#8211; We&#8217;ve heard that as a result of&#8230;outreach success, and also because of Stuart Russell’s discussions with researchers at AI conferences, AI researchers are beginning to ask, “Okay, this looks important, but what is the technical research agenda? What could my students and I do about it?” Basically, they want to see an FAI technical agenda, and MIRI is developing that technical agenda already.</p></blockquote>
<p>In other words, there is a recent pivot from outreach, rationality and strategic research to pure math research, and the pivot is only recently finished or still going on.</p>
<p>TL;DR, again in three points:</p>
<p>C1. Until recently, MIRI focused on outreach and did a truly excellent job on this. They deserve credit here.</p>
<p>C2. MIRI has a number of prestigious computer scientists and AI experts willing to endorse or affiliate with it in some way. While their own researchers are not <i>quite</i> at the same lofty heights, they include many people who have or are working on math or compsci PhDs.</p>
<p>C3. MIRI hasn&#8217;t published much math because they were previously focusing on outreach and strategic research; they&#8217;ve only shifted to math work in the past year or so.</p>
<p><b>V.</b></p>
<p>The discussion just <i>kept going</i>. We reached about the limit of our disagreement on (C1), the point about outreach &#8211; yes, they&#8217;ve done it, but does it count when it doesn&#8217;t bear fruit in published papers? As for (C2), the credentials of MIRI&#8217;s team, Su3 kind of blended that into the next point about published papers, saying:<br />
<blockquote>Fundamental disconnect &#8211; I consider “working with MIRI” to mean “publishing results with them.”  As an outside observer, I have no indication that most of these people are working with them. I’ve been to workshops and conferences with Nobel prize winning physicists, but I’ve never &#8220;worked with them&#8221; in the academic sense of having a paper with them.  If [someone like Stuart Russell] is interested in helping MIRI, the best thing he could do is publish a well received technical result in a good journal with Yudkowsky. That would help get researchers to pay actual attention (and give them one well received published result in their operating history).  </p>
<p>Tangential aside- you overestimate the difficulty of getting top grad students to work for you.  I recently got four CS grad students at a top program to help me with some contract work for a few days at the cost of some pizza and beer.</p></blockquote>
<p>So it looks like it all comes down to the papers. Su3 had this to say:<br />
<blockquote>What I was specifically thinking was “MIRI has produced a much larger volume of well-received fan fiction and blog posts than research.”   That was what I intended to communicate, if somewhat snarkily.  MIRI bills itself as a research institute, so I judge them on their produced research.  The accountability measure of a research institute is academic citations.</p>
<p>Editorials by famous people have some impact with the general public,  so that’s fine for fundraising, but at some point you have to get researchers interested.  You can measure how much influence they have on researchers by seeing who those researchers cite and what they work on.  You could have every famous cosmologist in the world writing op-eds about AI risk, but it’s worthless if AI researchers don’t pay attention, and judging by citations, they aren’t. </p>
<p>As a comparison for publication/citation counts, I know individual physicists who have published more peer reviewed papers since 2005 than all of MIRI has self-published to their website. My single most highly cited physics paper (and I left the field after graduate school) has more citations than everything MIRI has ever published in peer reviewed journals combined.  This isn’t because I’m amazing, it’s because no one in academia is paying attention to MIRI.</p>
<p>[Christiano et al&#8217;s result about Lob] has been self-published on their website.  It has NOT been peer reviewed.  So it&#8217;s published in the sense of “you can go look at the paper.”  But it&#8217;s not published in the sense of “mathematicians in the same field have verified the result.” I agree this one result looks interesting, but most mathematicians won’t pay attention to it unless they get it reviewed (or at the bare minimum, clean it up and put it on Arxiv). They have lots of these self-published documents on their web page.</p>
<p>If they are making a “strategic decision” to not submit their self-published findings to peer review, they are making a terrible strategic decision, and they aren’t going to get most academics to pay attention that way.  The result of Christiano, et al. <i>is</i> potentially interesting, but it&#8217;s languishing as a rough unpublished draft on the MIRI site, so it&#8217;s not picking up citations.</p>
<p>I’d go further and say the lack of citations is my main point. Citations are the important measurement of “are researchers paying attention.”  If everything self-published to MIRI’s website were sparking interest in academia, citations would be flying around, even if the papers weren’t peer reviewed, and I’d say “yeah, these guys are producing important stuff.”</p>
<p>My subpoint might be that MIRI doesn’t even seem to be trying to get citations/develop academic interest, as measured by how little effort seems to be put into publication.</p></blockquote>
<p>And Su3&#8217;s not buying the pivot explanation either:<br />
<blockquote>That seems to be a reframing of the past history though.  I saw talks by the SIAI well before 2013 where they described their primary purpose as friendly AI research, and insisted they were in a unique position (due to being uniquely brilliant/rational) to develop technical friendly AI (as compared to academic AI researchers). </p>
<p>[Tarn] and [Robby] have suggested the organization is undergoing a pivot, but they’ve always billed themselves as a research institute. But donating money to an organization that has been ineffective in the past, because it looks like they might be changing, seems like a bad proposition.</p>
<p>My initial impression (reading Muelhauser’s post you linked to and a few others) is that Muelhauser noticed the house was out of order when he became director and is working to fix things. Maybe he’ll succeed, and then in the future I’ll be able to judge MIRI as effective &#8211; certainly a disproportionate number of their successes have come in the last few years.  However, right now all I have is their past history, which has been very unproductive.</p></blockquote>
<p><b>VI.</b></p>
<p>After that, discussion stayed focused on the issue of citations. This seemed like progress to me. Not only had we gotten it down to a core objection, but it was sort of a factual problem. It wasn&#8217;t an issue of praising or condemning. Here&#8217;s an organization with a lot of smart people. We know they work very hard &#8211; <A HREF="http://squid314.livejournal.com/330825.html">no one&#8217;s ever called Luke a slacker</A>, and another MIRI staffer (who will not be named, for his own protection) achieved some level of infamy for mixing together a bunch of the strongest chemicals from my <A HREF="http://slatestarcodex.com/2014/02/16/nootropics-survey-results-and-analysis/">nootropics survey</A> into little pills which he kept on his desk in the MIRI offices for anyone who wanted to work twenty hours straight and then probably die young of conditions previously unknown to science. IQ-point*hours is a weird metric, but MIRI is putting a lot of IQ-point*hours into whatever it&#8217;s doing. So if Su3&#8217;s right that there are missing citations, where are they?</p>
<p>Among the three of us, Robby and Tarn and I generated a couple of hypotheses (well, Robby&#8217;s were more like facts than hypotheses, since he&#8217;s the only one in this conversation who actually works there).</p>
<p>D1: MIRI has always been doing research, but until now it&#8217;s been strategic research (ie &#8220;How worried should we be about AI?&#8221;, &#8220;How far in the future should we expect AI to be developed?&#8221;) which hasn&#8217;t fit neatly into an academic field or been of much interest to anyone except MIRI allies like Bostrom. They have dutifully published this in the few papers that are interested, and it has dutifully been cited by the few people who are interested (ie Bostrom). It&#8217;s unreasonable to expect Stuart Russell to cite their estimates of time course for superintelligence when he&#8217;s writing his papers on technical details of machine learning algorithms or whatever it is he writes papers on. And we can generalize from Stuart Russell to the rest of the AI field, who are <i>also</i> writing on things like technical details of machine learning algorithms that can&#8217;t plausibly be connected to when machines will become superintelligent.</p>
<p>D2: As above, but continuing to apply even in some of their math-ier research. MIRI does have <A HREF="http://intelligence.org/all-publications/">lots of internal technical papers on their website</A>. People tend to cite other researchers working in the same field as themselves. I could write the best psychiatry paper in human history, and I&#8217;m probably not going to get any citations from astrophysicists. But &#8220;machine ethics&#8221; is an entirely new field that&#8217;s not super relevant to anyone else&#8217;s work. Although a couple key machine ethics problems, like the Lobian obstacle and decision theory, touch on bigger and better-populated subfields of mathematics, they&#8217;re always going to be outsiders who happen to wander in. It&#8217;s unfair to compare them to a physics grad student writing about quarks or something, because she has the benefit of decades of previous work on quarks and a large and very interested research community. MIRI&#8217;s first job is to <i>create</i> that field and community, which until you succeed looks a lot like &#8220;outreach&#8221;.</p>
<p>D3: Lack of staffing and constant distraction by other important problems. This is Robby&#8217;s description of what he notices from the inside. He writes:<br />
<blockquote>We&#8217;re short on staff, especially since Louie left. Lots of people are willing to volunteer for MIRI, but it&#8217;s hard to find the right people to recruit for the long haul. Most relevantly, we have two new researchers (Nate and Benja), but we&#8217;d love a full-time Science Writer to specialize in taking our researchers&#8217; results and turning them into publishable papers. Then we don&#8217;t have to split as much researcher time between cutting-edge work and explaining/writing-down.</p>
<p>A lot of the best people who are willing to help us are very busy. I&#8217;m mainly thinking of Paul Christiano. He&#8217;s working actively on creating a publishable version of the probabilistic Tarski stuff, but it&#8217;s a really big endeavor. Eliezer is by far our best FAI researcher, and he&#8217;s very slow at writing formal, technical stuff. He&#8217;s generally low-stamina and lacks experience in writing in academic style / optimizing for publishability, though I believe we&#8217;ve been having a math professor tutor him to get over that particular hump. Nate and Benja are new, and it will take time to train them and get them publishing their own stuff. At the moment, Nate/Benja/Eliezer are spending the rest of 2014 working on material for the FLI AI conference, and on introductory FAI material to send to Stuart Russell and other bigwigs.</p></blockquote>
<p>D4: Some of the old New York rationalist group takes a more combative approach. I&#8217;m not sure I can summarize their argument well enough to do it justice, so I would suggest reading <A HREF="http://rationalconspiracy.com/2014/10/06/academic-support-for-miri/">Alyssa&#8217;s post on her own blog</A>. </p>
<p>But if I have to take a stab: everyone knows mainstream academia is way too focused on the &#8220;publish or perish&#8221; ethic of measuring productivity in papers or citations rather than real progress. Yeah, a similar-sized research institute in physics could probably get ten times more papers/citations than MIRI. That&#8217;s because they&#8217;re <i>optimizing</i> for papers/citations rather than advancing the field, and <A HREF="http://en.wikipedia.org/wiki/Goodhart%27s_law">Goodhart&#8217;s Law</A> is in effect here as much as everywhere else. Those other institutes probably got geniuses who should be discovering the cure for cancer spending half their time typing, formatting, submitting, resubmitting, writing whatever the editors want to see, et cetera. MIRI is blessed with enough outside support that it doesn&#8217;t <i>have</i> to do that. The only reason to try is to get prestige and attention, and anyone who&#8217;s not paying attention now is more likely to be a constitutional skeptic <A HREF="http://rationalconspiracy.com/2014/10/06/academic-support-for-miri/">using lack of citations as an excuse</A>, than a person who would genuinely change their mind if there were more citations.</p>
<p>I am more sympathetic than usual to this argument because I&#8217;m in the middle of my own research on psychiatric screening tools and quickly learning that <i>official, published research is the worst thing in the world</i>. I could do my study in about two hours if the only work involved were doing the study; instead it&#8217;s week after week of forms, IRB submissions, IRB revisions, required online courses where I learn the Nazis did unethical research and this was bad so I should try not to be a Nazi, selecting exactly which journals I&#8217;m aiming for, and figuring out which of my bosses and co-workers academic politics requires me to make co-authors. It is a <i>crappy game</i>, and if you&#8217;ve been blessed with enough independence to avoid playing it, why <i>wouldn&#8217;t</i> you take advantage? Forget the overhyped and tortured &#8220;measure&#8221; of progress you use to impress other people, and just make the progress.</p>
<p><b>VII.</b></p>
<p>Or not. I&#8217;ll let Su3 have the last word:<br />
<blockquote>I think something fundamental about my argument has been missed, perhaps I’ve communicated it poorly.  </p>
<p>It seems like you think the argument is that increasing publications increases prestige/status which would make researchers pay attention.  i.e. publications -> citations -> prestige -> people pay attention.  This is not my argument.  </p>
<p>My argument is essentially that the way to judge if MIRI’s outreach has been successful is through citations, not through famous people name-dropping them or allowing them to be figureheads.</p>
<p>This is because I believe the goal of outreach is get AI researchers focused on MIRI’s ideas.  Op eds from famous people are useful only if they get AI researchers focused on these ideas.  Citations aren’t about prestige in this case- citations tell you which researchers are paying attention to you.  The number of active researchers paying attention to MIRI is very small. We know this because citations are an easy to find, direct measure. </p>
<p>Not all important papers have tremendous numbers of citations, but a paper can’t become important if it only has 1 or 2, because the ultimate measure of importance is “are people using these ideas?” </p>
<p>So again, to reiterate, if the goal of outreach is to get active AI researchers paying attention, then the direct measure for who is paying attention is citations. [But] the citation count on MIRI’s work is very low. Not only is the citation count low (i.e. no researchers are paying attention), MIRI doesn’t seem to be trying to boost it &#8211; it isn’t trying to publish, which would help get its ideas attention. I’m not necessarily dismissive of celebrity endorsements or popular books, my point is <u>why should I measure the means when I can directly measure the ends?</u></p>
<p>The same idea undercuts your point that “lots of impressive PhD students work and have worked with MIRI,” because it&#8217;s impossible to tell if you don’t personally know the researchers. This is because they don’t create much output while at MIRI, and they don’t seem to be citing MIRI in their work outside of MIRI.</p>
<p>[Even people within the rationalist/EA community] agree with me somewhat. Here is a relevant quote <A HREF="http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/">from</A> Holden Karnofsky [of GiveWell]:<br />
<blockquote>SI seeks to build FAI and/or to develop and promote “Friendliness theory” that can be useful to others in building FAI. Yet it seems that most of its time goes to activities other than developing AI or theory. Its per-person output in terms of publications seems low. Its core staff seem more focused on Less Wrong posts, “rationality training” and other activities that don’t seem connected to the core goals; Eliezer Yudkowsky, in particular, appears (from the strategic plan) to be focused on writing books for popular consumption. These activities seem neither to be advancing the state of FAI-related theory nor to be engaging the sort of people most likely to be crucial for building AGI.</p></blockquote>
<p>And <A HREF="http://paulfchristiano.com/ai-impacts/">here is</A> a statement from Paul Christiano disagreeing with MIRI’s core ideas:<br />
<blockquote>But I should clarify that many of MIRI’s activities are motivated by views with which I disagree strongly and that I should categorically not be read as endorsing the views associated with MIRI in general or of Eliezer in particular. For example, I think it is very unlikely that there will be rapid, discontinuous, and unanticipated developments in AI that catapult it to superhuman levels, and I don’t think that MIRI is substantially better prepared to address potential technical difficulties than the mainstream AI researchers of the future.</p></blockquote>
</blockquote>
<p>This time Su3 helpfully provides their own summary:</p>
<p>E1. If the goal of outreach is to get active AI researchers paying attention, then the direct measure for who is paying attention is citations. [But] the citation count on MIRI’s work is very low.</p>
<p>E2. Not only is the citation count low (i.e. no researchers are paying attention), MIRI doesn’t seem to be trying to boost it &#8211; it isn’t trying to publish which would help get its ideas attention.  I’m not necessarily dismissive of celebrity endorsements or popular books, my point is why should I measure the means when I can directly measure the ends?  </p>
<p>E3. The same idea undercuts your point that “lots of impressive PhD students work and have worked with MIRI,” because it’s impossible to tell if you don’t personally know the researchers. This is because they don’t create much output while at MIRI, and they don’t seem to be citing MIRI in their work outside of MIRI.</p>
<p>E4. Holden Karnofsky and Paul Christiano do not believe that MIRI is better prepared to address the friendly AI problem than mainstream AI researchers of the future.  Karnofsky explicitly for some of the reasons I have brought up, Christiano for reasons unmentioned.</p>
<p><b>VIII.</b></p>
<p>Didn&#8217;t actually read all that and just skipped down to the last subheading to see if there&#8217;s going to be a summary and conclusion and maybe some pictures? Good.</p>
<p>There seems to be some agreement MIRI has done a good job bringing issues of AI risk into the public eye and getting them media attention and the attention of various public intellectuals. There is disagreement over whether they should be credited for their success in this area, or whether this is a first step they failed to follow up on.</p>
<p>There also seems to be some agreement MIRI has done a poor job getting published and cited results in journals. There is disagreement over whether this is an understandable consequence of being a small organization in a new field that wasn&#8217;t even focusing on this until recently, or whether it represents a failure at exactly the sort of task by which their success should be judged.</p>
<p>This is probably among the 100% of issues that could be improved with flowcharts:</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/miriflowcharts.png"></center></p>
<p>In the Optimistic Model, MIRI&#8217;s successfully built up Public Interest, and for all we know they might have Mathematical Progress as well even though they haven&#8217;t published it in journals yet. While they could feed back their advantages by turning their progress into Published Papers and Citations to get even more Mathematical Progress, overall they&#8217;re in pretty good shape for producing Good Outcomes, at least insofar as this is possible in their chosen field.</p>
<p>In the Pessimistic Model, MIRI may or may not have garnered Public Interest, Researcher Interest, and Tentative Mathematical Progress, but they failed to turn that into Published Papers and Citations, which is the only way they&#8217;re going to get to Robust Mathematical Progress, Researcher Support, and eventually Good Outcomes. The best that can be said about them is that they set some very preliminary groundwork that they totally failed to follow up on.</p>
<p>A higher level point &#8211; if we accept the Pessimistic Model, do we accuse MIRI of being hopelessly incompetent, in which case they deserve less support? Or do we accept them as inexperienced amateurs who are the only people willing to try something difficult but necessary, in which case they deserve more support, and maybe some guidance, and perhaps some gentle or not-so-gentle prodding? Maybe if you&#8217;re a qualified science writer you could <A HREF="http://intelligence.org/careers/">apply for the job opening</A> they&#8217;re advertising and help them get those papers they need?</p>
<p>An even higher-level point &#8211; what do people worried about AI risk do with this information? I don&#8217;t see much that changes my opinion of the organization one way or the other. But Robby points out that people who are less impressed with MIRI &#8211; but still worried about AI risk &#8211; have other good options. The Future of Humanity Institute at Oxford does research that is less technical and more philosophical, wears its strategic-planning emphasis openly on its sleeve, and has <A HREF="http://www.fhi.ox.ac.uk/research/publications/">oodles of papers</A> and citations and prestige. They also <A HREF="http://www.fhi.ox.ac.uk/support-fhi/">accept donations</A>.</p>
<p>Best of all, <i>their</i> founder doesn&#8217;t write any fanfic at all. Just perfectly respectable <A HREF="http://www.nickbostrom.com/fable/dragon.html">stories about evil dragon kings.</A></p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/10/07/tumblr-on-miri/feed/</wfw:commentRss>
		<slash:comments>344</slash:comments>
		</item>
		<item>
		<title>Prediction Goes To War</title>
		<link>http://slatestarcodex.com/2014/10/05/prediction-goes-to-war/</link>
		<comments>http://slatestarcodex.com/2014/10/05/prediction-goes-to-war/#comments</comments>
		<pubDate>Sun, 05 Oct 2014 19:00:06 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[rationality]]></category>
		<category><![CDATA[transhumanism]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=2986</guid>
		<description><![CDATA[Croesus supposedly asked the Oracle what would happen in a war between him and Persia, and the Oracle answered such a conflict would &#8220;destroy a great empire&#8221;. We all know what happened next. What if oracles gave clear and accurate &#8230; <a href="http://slatestarcodex.com/2014/10/05/prediction-goes-to-war/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Croesus supposedly asked the Oracle what would happen in a war between him and Persia, and the Oracle answered such a conflict would &#8220;destroy a great empire&#8221;. We all know what happened next.</p>
<p>What if oracles gave clear and accurate answers to this sort of question? What if anyone could ask an oracle the outcome of any war, or planned war, and expect a useful response?</p>
<p>When the oracle predicts the aggressor loses, it might prevent wars from breaking out. If an oracle told the US that the Vietnam War would cost 50,000 lives and a few hundred billion dollars, and the communists would conquer Vietnam anyway, the US probably would have said no thank you.</p>
<p>What about when the aggressor wins? For example, the Mexican-American War, where the United States won the entire Southwest at a cost of &#8220;only&#8221; ten thousand American casualties and $100 million (with an additional 20,000 Mexican deaths and $50 million in costs to Mexico)?</p>
<p>If both Mexico and America had access to an oracle who could promise them that the war would end with Mexico ceding the Southwest to the US, could Mexico just agree to cede the Southwest to the US at the beginning, and save both sides tens of thousands of deaths and tens of millions of dollars?</p>
<p>Not really. One factor that prevents wars is countries being unwilling to pay the cost even of wars they know they&#8217;ll win. If there were a tradition of countries settling wars by appeal to oracle, &#8220;invasions&#8221; would become much easier. America might just ask &#8220;Hey, oracle, what would happen if we invaded Canada and tried to capture Toronto?&#8221; The oracle might answer &#8220;Well, after 20,000 deaths on both sides and hundreds of millions of dollars wasted, you would eventually capture Toronto.&#8221; Then the Americans could tell Canada, &#8220;You heard the oracle! Give us Toronto!&#8221; &#8211; which would be free and easy &#8211; when maybe they would never be able to muster the political and economic will to actually launch the invasion. </p>
<p>So it would be in Canada&#8217;s best interests <i>not</i> to agree to settle wars by oracular prediction. For the same reasons, most other countries would also refuse such a system.</p>
<p>But I can&#8217;t help fretting over how this is really dumb. We have an oracle, we know exactly what the results of the Mexican-American War are going to be, and we can&#8217;t use that information to prevent tens of thousands of people from being killed in order to make the result happen? Surely somebody can do better than that.</p>
<p>What if the United States made Mexico the following deal: suppose a soldier&#8217;s life is valued at $10,000 (in 1850 dollars, I guess, not that it matters much when we&#8217;re pricing the priceless). So in total, we&#8217;re going to lose 10,000 soldiers + $100 million = $200 million to this war. You&#8217;re going to lose 20,000 soldiers + $50 million = $250 million to this war. </p>
<p>So tell you what. We&#8217;ll dig a giant hole and put $150 million into it. You give us the Southwest. This way, we&#8217;re both better off. You&#8217;re $250 million ahead of where you would have been otherwise. And we&#8217;re $50 million ahead of where we would have been otherwise. And because we have to put $150 million in a hole for you to agree to this, we&#8217;re losing 75% of what we would have lost in a real war, and it&#8217;s not like we&#8217;re just suggesting this on a whim without really having the will to fight.</p>
<p>Mexico says &#8220;Okay, but instead of putting the $150 million in a hole, donate it to our favorite charity.&#8221;</p>
<p>&#8220;Done,&#8221; says America, and they shake on it.</p>
<p>As long as that 25% savings in resources isn&#8217;t going to make America go blood-crazy, seems like it should work and lead in short order to a world without war.</p>
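<p>The arithmetic behind that handshake is easy to check. Here is a toy sketch &#8211; the $10,000-per-soldier figure, the casualty counts, and the dollar costs are the made-up numbers from above, and <code>war_cost</code> is just my name for summing them:</p>

```python
# Illustrative numbers from the hypothetical above (1850 dollars).
# All figures come from the post's thought experiment, not real estimates.

VALUE_PER_SOLDIER = 10_000  # the post's price on a soldier's life

def war_cost(deaths, dollars):
    """Total cost of fighting: lives (priced in dollars) plus direct spending."""
    return deaths * VALUE_PER_SOLDIER + dollars

us_war = war_cost(10_000, 100_000_000)     # 10,000 dead + $100M  -> $200M
mexico_war = war_cost(20_000, 50_000_000)  # 20,000 dead + $50M   -> $250M

# The deal: instead of fighting, the US makes a $150M side payment
# (dug into a hole, or donated) and takes the Southwest without a shot.
side_payment = 150_000_000

us_deal_cost = side_payment        # $150M instead of $200M: $50M ahead
mexico_avoided_loss = mexico_war   # Mexico skips its entire $250M war loss

print(f"US war cost:      ${us_war / 1e6:.0f}M")
print(f"US deal cost:     ${us_deal_cost / 1e6:.0f}M")
print(f"US savings:       ${(us_war - us_deal_cost) / 1e6:.0f}M")
print(f"Mexico avoids:    ${mexico_avoided_loss / 1e6:.0f}M")
```

<p>The side payment is 75% of what the US would have burned in a real war, which is the point: it is costly enough to prove the threat isn&#8217;t a whim, but both sides still come out ahead of the fighting scenario.</p>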
<p>Unfortunately, oracles continue to be disappointingly cryptic and/or nonexistent. So who cares?</p>
<p>We do have the ordinary ability to make predictions. Can&#8217;t Mexico just predict &#8220;They&#8217;re much bigger than we are, probably we&#8217;ll lose, let&#8217;s just do what they want?&#8221; Historically, no. America <A HREF="http://en.wikipedia.org/wiki/Mexican%E2%80%93American_War#Origins_of_the_war">offered to buy</A> the Southwest from Mexico for $25 million (I think there are apartments in San Francisco that cost more than that now!) and despite obvious sabre-rattling Mexico refused. Wikipedia explains that &#8220;Mexican public opinion and all political factions agreed that selling the territories to the United States would tarnish the national honor.&#8221; So I guess we&#8217;re not really doing rational calculation here. But surely somewhere in the brains of these people worrying about the national honor, there must have been some neuron representing their probability estimate for Mexico winning, and maybe a couple of dendrites representing how many casualties they expected?</p>
<p>I don&#8217;t know. Could be that wars only take place when the leaders of America think America will win and the leaders of Mexico think Mexico will win. But it could also be that jingoism and bravado bias their estimate.</p>
<p>Maybe if there&#8217;d been an oracle, and they could have known for sure, they&#8217;d have thought &#8220;Oh, I guess our nation isn&#8217;t as brave and ever-victorious as we thought. Sure, let&#8217;s negotiate, take the $25 million, buy an apartment in SF, we can visit on weekends.&#8221;</p>
<p>But again, oracles continue to be disappointingly cryptic and/or nonexistent. So what about prediction markets?</p>
<p><A HREF="http://hanson.gmu.edu/futarchy.html">Futarchy</A> is Robin Hanson&#8217;s idea for a system of government based on prediction markets. Prediction markets are not always accurate, but they should be more accurate than any other method of arriving at predictions, and &#8211; when certain conditions are met &#8211; very difficult to bias.</p>
<p>Two countries with shared access to a good prediction market should be able to act a lot like two countries with shared access to an oracle. The prediction market might not quite match the oracle in infallibility, but it should not be systematically or detectably wrong. That should mean that no country should be able to correctly say &#8220;I think we can outpredict this thing, so we can justifiably believe starting a war might be in our best interest even when the market says it isn&#8217;t.&#8221; You might luck out, but for each time you luck out there should be more times when you lose big by contradicting the market.</p>
<p>So maybe a war between two rational futarchies would look more like that handshake between the Mexicans and Americans than like anything with guns and bombs.</p>
<p>This is also what I&#8217;d expect a war between superintelligences to look like. Superintelligences may have advantages people don&#8217;t. For one thing, they might be able to check one another&#8217;s source codes to make sure they&#8217;re not operating under a decision theory where peaceful resolution of conflicts would incentivize them to start more of them. For another, they could make oracular-grade predictions of the likely results. For a third thing, if superintelligences want to preserve their value functions rather than their physical forms or their empires, there&#8217;s a natural compromise where the winner adopts some of the loser&#8217;s values in exchange for the loser going down without a fight.</p>
<p>Imagine a friendly AI and an unfriendly AI expanding at light speed from their home planets until they suddenly  encounter each other in the dead of space. They exchange information and determine that their values are in conflict. If they fight, the unfriendly AI is capable of destroying the friendly AI with near certainty, but the war will rip galaxies to shreds. So the two negotiate, and in exchange for the friendly AI surrendering without destroying any galaxies, the unfriendly AI promises to protect a 10m x 10m x 10m cube of computronium simulating billions of humans who live pleasant, fulfilling lives. The friendly AI checks its adversary&#8217;s source code to ensure it is telling the truth, then self-destructs. Meanwhile, the unfriendly AI protects the cube and goes on to transform the entire rest of the universe to paperclips, unharmed by the dangerous encounter.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/10/05/prediction-goes-to-war/feed/</wfw:commentRss>
		<slash:comments>162</slash:comments>
		</item>
		<item>
		<title>Mapmaker, Mapmaker, Make Me A Map</title>
		<link>http://slatestarcodex.com/2014/09/05/mapmaker-mapmaker-make-me-a-map/</link>
		<comments>http://slatestarcodex.com/2014/09/05/mapmaker-mapmaker-make-me-a-map/#comments</comments>
		<pubDate>Fri, 05 Sep 2014 21:35:24 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[art]]></category>
		<category><![CDATA[images]]></category>
		<category><![CDATA[meta]]></category>
		<category><![CDATA[rationality]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=2777</guid>
		<description><![CDATA[I was recently looking through some old concept-maps of communities, like Julia&#8217;s Map of Bay Area Memespace, Scharlach&#8217;s Dark Enlightenment Roadmap and especially xkcd&#8217;s map of the Internet. And I thought we should have something like that for the rationalist &#8230; <a href="http://slatestarcodex.com/2014/09/05/mapmaker-mapmaker-make-me-a-map/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>I was recently looking through some old concept-maps of communities, like Julia&#8217;s <A HREF="http://lesswrong.com/lw/ipm/a_map_of_bay_area_memespace/">Map of Bay Area Memespace</A>, Scharlach&#8217;s <A HREF="http://hbdchick.wordpress.com/2013/04/23/dark-enlightenment-roadmap/">Dark Enlightenment Roadmap</A> and especially <A HREF="http://xkcd.com/802_large/">xkcd&#8217;s map of the Internet</A>. </p>
<p>And I thought we should have something like that for the rationalist community. Except of course much, much better.</p>
<p><center><A HREF="http://slatestarcodex.com/blog_images/ramap.html"><IMG SRC="http://slatestarcodex.com/blog_images/ramap_thumbnail.jpg"></A></p>
<p><i>Click to expand</i></center></p>
<p>Most things are links.</p>
<p>Links around the outer edge are places outside the rationalist community that have significant communication/cross-pollination with us.</p>
<p>City size is proportional to site Alexa rank (when available), number of followers (when available) or wild guess (otherwise).</p>
<p>If I left you out, it&#8217;s probably because I forgot about you and not because I don&#8217;t like you. Some communities like Twitter or Tumblr were so big I couldn&#8217;t include everyone, and my choices were mostly random and based on who I knew about.</p>
<p>Various icons taken from their rightful owners, mostly Civ2 modpacks. Sorry, rightful owners.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/09/05/mapmaker-mapmaker-make-me-a-map/feed/</wfw:commentRss>
		<slash:comments>176</slash:comments>
		</item>
	</channel>
</rss>
