<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Slate Star Codex &#187; studies</title>
	<atom:link href="http://slatestarcodex.com/tag/studies/feed/" rel="self" type="application/rss+xml" />
	<link>http://slatestarcodex.com</link>
	<description>In a mad world, all blogging is psychiatry blogging</description>
	<lastBuildDate>Fri, 24 Jul 2015 02:59:17 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.2.3</generator>
	<item>
		<title>Growth Mindset 3: A Pox On Growth Your Houses</title>
		<link>http://slatestarcodex.com/2015/04/22/growth-mindset-3-a-pox-on-growth-your-houses/</link>
		<comments>http://slatestarcodex.com/2015/04/22/growth-mindset-3-a-pox-on-growth-your-houses/#comments</comments>
		<pubDate>Thu, 23 Apr 2015 01:45:20 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[psychology]]></category>
		<category><![CDATA[studies]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=3623</guid>
		<description><![CDATA[[EDIT: The author of this paper has responded; I list his response here.] Jacques Derrida proposed a form of philosophical literary criticism called deconstruction. I&#8217;ll be the first to admit I don&#8217;t really understand it, but it seems to have &#8230; <a href="http://slatestarcodex.com/2015/04/22/growth-mindset-3-a-pox-on-growth-your-houses/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><b>[EDIT: The author of this paper has responded; I list his response <A HREF="slatestarcodex.com/2015/05/07/growth-mindset-4-growth-of-office/">here</A>.]</b></p>
<p>Jacques Derrida proposed a form of philosophical literary criticism called deconstruction. I&#8217;ll be the first to admit I don&#8217;t really understand it, but it seems to have something to do with assuming all texts secretly contradict their stated premise and apparent narrative, then hunting down and exposing the plastered-over areas where the author tries to hide this.</p>
<p>I have no idea whether this works for literature or not, but it&#8217;s a useful way to read scientific papers.</p>
<p>Consider a popular field &#8211; or, at least, a field where <i>a certain position</i> is popular. For example, we&#8217;ve been talking a lot about growth mindset recently. There seem to be a lot of researchers working to prove growth mindset and not a lot working to disprove it. Journals are pretty interested in studies showing growth mindset interventions work, and maybe not so interested in studies showing they don&#8217;t. I&#8217;ll admit that my strong suspicions of publication bias don&#8217;t seem to be borne out by the facts here &#8211; see <A HREF="http://faculty.wcas.northwestern.edu/eli-finkel/documents/InPress_BurnetteOBoyleVanEppsPollackFinkel_PsychBull.pdf">this meta-analysis</A> &#8211; but I bet its more sinister cousin &#8220;all experimenters believe the same thing and have the same experimenter effects&#8221; bias is alive and well.</p>
<p>In a field like that, you&#8217;re not going to get the contrarian studies you want, but one way to find the other side of the issue is to look a little more closely at the studies that do get published, the ones that say they&#8217;re in support of the thesis, and see if you can find anything incriminating.</p>
<p>Here&#8217;s a perfect example: <A HREF="http://slatestarcodex.com/Stuff/mindset3_paper.pdf">Mindset Interventions Are A Scalable Treatment For Academic Underachievement</A>, by a team of six researchers including Carol Dweck. </p>
<p>The abstract reads:</p>
<blockquote><p>The efficacy of academic-mind-set interventions has been demonstrated by small-scale, proof-of-concept interventions, generally delivered in person in one school at a time. Whether this approach could be a practical way to raise school achievement on a large scale remains unknown. We therefore delivered brief growth-mind-set and sense-of-purpose interventions through online modules to 1,594 students in 13 geographically diverse high schools. Both interventions were intended to help students persist when they experienced academic difficulty; thus, both were predicted to be most beneficial for poorly performing students. This was the case. Among students at risk of dropping out of high school (one third of the sample), each intervention raised students’ semester grade point averages in core academic courses and increased the rate at which students performed satisfactorily in core courses by 6.4 percentage points. We discuss implications for the pipeline from theory to practice and for education reform.</p></blockquote>
<p>This sounds really, really impressive! It&#8217;s hard to imagine any stronger evidence in growth mindset&#8217;s favor.</p>
<p>And then you make the mistake of reading the actual paper.</p>
<p>The paper asked 1,594 students from a bunch of different high schools to take a 45-minute online course.</p>
<p>A quarter of the students took a placebo course that just presented some science about how different parts of the brain do different stuff.</p>
<p>Another quarter took a course that was supposed to teach growth mindset. </p>
<p>Still another quarter took a course about &#8220;sense of purpose&#8221; which talked about how schoolwork was meaningful and would help them accomplish lots of goals and they should be happy to do it. This was also classified as a &#8220;mindset intervention&#8221;, though it seems pretty different.</p>
<p>And the final quarter took both the growth mindset course <i>and</i> the &#8220;sense of purpose&#8221; course.</p>
<p>Then they let all students continue taking their classes for the rest of the semester and saw what happened, which was this:</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/mindset3_1.png"></center></p>
<p><b>[EDIT: I totally bungled these graphs! See discussion of exactly how on the author&#8217;s reply above, without which the information below will be misleading at best]</b></p>
<p>Among ordinary students, the effect on the growth mindset group was completely indistinguishable from zero, and in fact they did nonsignificantly worse than the control group. This was the most basic test they performed, and it should have been the headline of the study. The study should have been titled &#8220;Growth Mindset Intervention Totally Fails To Affect GPA In Any Way&#8221;.</p>
<p>Instead they went to subgroup analysis. Subgroup analysis can be useful to find more specific patterns in the data, but if it&#8217;s done post hoc it can lead to what I previously called <A HREF="http://slatestarcodex.com/2014/01/02/two-dark-side-statistics-papers/">the Elderly Hispanic Woman Effect</A>, after medical papers that can&#8217;t find their drug has any effect on people at large, so they keep checking different subgroups &#8211; young white men&#8230;nothing. Old black men&#8230;nothing. Middle-aged Asian transgender people&#8230;nothing. Newborn Australian aboriginal butch lesbians&#8230;nothing. Elderly Hispanic women&#8230;p = 0.049&#8230;aha! And the study gets billed as &#8220;Scientists Find Exciting New Drug That Treats Diabetes In Elderly Hispanic Women.&#8221;</p>
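<p>The multiple-comparisons arithmetic behind this is easy to check with a short simulation (toy numbers, nothing from the paper): run twenty independent subgroup tests on pure noise and see how often at least one comes back &#8220;significant&#8221;.</p>

```python
import random

random.seed(0)

def chance_of_spurious_subgroup(n_subgroups=20, n_trials=2000):
    """Fraction of trials in which at least one of n_subgroups null
    comparisons lands below p = 0.05 purely by chance."""
    hits = 0
    for _ in range(n_trials):
        # Under the null, each test's p-value is uniform on [0, 1].
        if any(random.random() < 0.05 for _ in range(n_subgroups)):
            hits += 1
    return hits / n_trials

# Analytically this is 1 - 0.95**20, about 0.64.
print(chance_of_spurious_subgroup())
```

<p>With twenty subgroups, one spurious &#8220;significant&#8221; result is closer to the default outcome than to evidence.</p>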
<p>As per the abstract, the researchers decided to focus on an &#8220;at risk&#8221; subgroup because they had principled reasons to believe mindset interventions would work better on them. In their subgroup of 519 students who had a GPA of 2.0 or less last semester, or who failed one or more academic courses last semester:</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/mindset3_2.png"></center></p>
<p>Growth mindset <i>still</i> doesn&#8217;t differ from zero. And growth mindset does nonsignificantly <i>worse</i> than their &#8220;sense of purpose&#8221; intervention where they tell children to love school. In fact, the students who take both &#8220;sense of purpose&#8221; and growth mindset actually do (nonsignificantly) <i>worse</i> than sense-of-purpose alone!</p>
<p>But the control group mysteriously started doing much worse in all their classes right after the study started, so growth mindset is significantly better than the control group. Hooray!</p>
<p>Why would the control group&#8217;s GPA suddenly decline? The simplest answer would be that by coincidence the class got harder right after the study started, and only the intervention kids were resilient enough to deal with it &#8211; but that can&#8217;t be right, because this was done at eleven different schools, and they wouldn&#8217;t have all had their coursework get harder at the same time. </p>
<p>Another possibility is that sufficiently low-functioning kids are <i>always</i> declining &#8211; that is, as time goes on they get more and more behind in their coursework, so their grades at time t+1 are always less than at time t, and maybe growth mindset has arrested this decline. This is plausible and I&#8217;d be interested in seeing if other studies have found this.</p>
<p>Perhaps aware that this is not very convincing, the authors go on to do another analysis, this one of percent of students passing their classes.</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/mindset3_3.png"></center></p>
<p>This is the same group of at-risk students as the last one. It&#8217;s graphing what percent of these students pass versus fail their courses. The graph on the left shows that a significantly higher number of students in the intervention conditions pass their courses than in the control condition.</p>
<p>This is better, but one part still concerns me.</p>
<p>Did you catch that phrase &#8220;intervention conditions&#8221;? The authors of the study write: &#8220;Because our primary research question concerned the efficacy of academic mindset interventions in general when delivered via online modules, we then collapsed the intervention conditions into a single intervention dummy code (0 = control, 1 = intervention).&#8221;</p>
<p>We don&#8217;t know whether growth mindset did anything for even these students in this little subgroup, because it was collapsed together with the (more effective) &#8220;sense of purpose&#8221; intervention before any of these tests were done. I don&#8217;t know if this is just for convenience, or if it is to obfuscate that it didn&#8217;t work on its own.</p>
<p><i>[<b>EDIT:</b> Scott McGreal <A HREF="http://slatestarcodex.com/2015/04/22/growth-mindset-3-a-pox-on-growth-your-houses/#comment-199528">looks further</A> and finds in the supplementary material that growth mindset alone did NOT significantly improve pass rates!]</i></p>
<p>The abstract of this study tells you none of this. It just says: &#8220;Mindset Interventions Are A Scalable Treatment For Academic Underachievement&#8230;Among students at risk of dropping out of high school (one third of the sample), each intervention raised students’ semester grade point averages in core academic courses and increased the rate at which students performed satisfactorily in core courses by 6.4 percentage points.&#8221; From the abstract, this study is a triumph.</p>
<p>But my own summary of these results, as relevant to growth mindset, is as follows:</p>
<p>For students with above a 2.0 GPA, a growth mindset intervention did nothing.</p>
<p>For students with below a 2.0 GPA, the growth mindset interventions may not have improved GPA, but may have prevented GPA from falling, which for some reason it was otherwise going to do. </p>
<p>Even in those students, it didn&#8217;t do any better than a &#8220;sense-of-purpose&#8221; intervention where children were told platitudes about how doing well in school will &#8220;make their families proud&#8221; and &#8220;make a positive impact&#8221;.</p>
<p>In no group of students did it significantly increase the chance of passing any classes.</p>
<p>Haishan <A HREF="http://slatestarcodex.com/2015/04/22/growth-mindset-3-a-pox-on-growth-your-houses/#comment-199696">writes</A>:</p>
<blockquote><p>If ye read only the headlines, what reward have ye? Do not even the policymakers the same? And if ye take the abstract at its face, what do ye more than others? Do not even the science journalists so?</p></blockquote>
<p>Titles, abstracts, and media presentations are where authors can decide how to report a bunch of different, often contradictory results in a way that makes it look like they have completely proven their point. A careful look at the study may find that their emphasis is misplaced, and give you more than enough ammunition against a theory even where the stated results are glowingly positive.</p>
<p>The only reason we were told these results is that they were in the same place as a &#8220;sense of purpose mindset&#8221; intervention that looked a little better, so it was possible to publish the study and claim it as a victory for mindsets in general. How many studies that show similar results for growth mindset lack a similar way of spinning the data, and so never get seen at all?</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2015/04/22/growth-mindset-3-a-pox-on-growth-your-houses/feed/</wfw:commentRss>
		<slash:comments>254</slash:comments>
		</item>
		<item>
		<title>Early Intervention: You *Might* Get What You Pay For</title>
		<link>http://slatestarcodex.com/2015/02/28/early-intervention-you-might-get-what-you-pay-for/</link>
		<comments>http://slatestarcodex.com/2015/02/28/early-intervention-you-might-get-what-you-pay-for/#comments</comments>
		<pubDate>Sat, 28 Feb 2015 22:20:30 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[psychiatry]]></category>
		<category><![CDATA[studies]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=3567</guid>
		<description><![CDATA[I find myself caught between the genetics community &#8211; which takes it as a given that childhood experiences and education have a very limited role in shaping life outcomes &#8211; and the psychiatric community, which takes it as a given &#8230; <a href="http://slatestarcodex.com/2015/02/28/early-intervention-you-might-get-what-you-pay-for/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>I find myself caught between the genetics community &#8211; which takes it as a given that childhood experiences and education have a very limited role in shaping life outcomes &#8211; and the psychiatric community, which takes it as a given that childhood experiences and education are crucial in shaping life outcomes. Both sides have their favorite studies to cite supporting their positions. I&#8217;ve already talked about the genetics studies, so I thought I&#8217;d bring up a recent particularly good study from the other side.</p>
<p>Dodge et al&#8217;s <A HREF="http://slatestarcodex.com/blog_images/early_intervention.pdf">Impact Of Early Intervention On Psychopathology, Crime, And Well-Being At Age 25</A> is published in last month&#8217;s <i>American Journal Of Psychiatry</i>. Gratifyingly, it is a randomized controlled trial. Ten thousand kindergarteners in disadvantaged areas were screened for &#8220;conduct problems&#8221; until they found about 900 who looked like they were at high risk. 445 were randomly selected for the intervention. Another 446 stayed in the control group. The intervention was a bunch of extra classes and &#8216;enrichment programs&#8217; from elementary school (age 5) all the way through high school (age 16). The study mentions &#8220;social skills friendship groups&#8221;, &#8220;guided parent child interaction sessions&#8221;, &#8220;tutoring in reading&#8221;, &#8220;parent-youth groups on topics of adolescent development, alcohol, tobacco, and drugs&#8221;, &#8220;youth forums on vocational opportunities&#8221;, and &#8220;Oysterman&#8217;s School-To-Job possible selves intervention aimed at examining emerging identity&#8221;.</p>
<p>All of these sound so pretentious that I would have loved to be able to report that they had no effect, but in fact the opposite was true. When they caught up with these kids at age 25, the intervention group was found to have an odds ratio of around 0.6 to 0.7 of having developed various psychiatric disorders the study was testing for, including antisocial personality disorder, ADHD, depression, or anxiety. They had odds ratios around 0.7 of developing drug and alcohol abuse problems by various measures. They reported less risky sexual behavior, less domestic abuse, and fewer violent crimes. All of this was significant at the p < 0.05 level, and some of it was significant at much stricter levels like p = 0.001 or below. Subgroup analysis found the data were very similar when you restricted the analysis to various subgroups like boys, girls, whites, blacks, highest-risk, lowest-risk, and by study site (it was a multi-site study). As best I can tell there were <i>not</i> an equal number of analyses they did that came up negative that they covered up.</p>
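<p>For readers who don&#8217;t think in odds ratios: an odds ratio of 0.7 means the intervention group&#8217;s <i>odds</i> of a diagnosis were about 70% of the control group&#8217;s. A toy calculation with made-up cell counts (not the study&#8217;s actual numbers):</p>

```python
def odds_ratio(cases_a, noncases_a, cases_b, noncases_b):
    """Odds ratio from a 2x2 table: (cases_a/noncases_a) / (cases_b/noncases_b)."""
    return (cases_a / noncases_a) / (cases_b / noncases_b)

# Hypothetical: 100 of 445 intervention kids vs 130 of 446 control kids
# develop some disorder.  The risk ratio would be 0.225 / 0.291, about 0.77,
# but the odds ratio sits a bit further from 1.
print(round(odds_ratio(100, 345, 130, 316), 2))
```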
<p>The apparent conclusion is that intensive interventions can change children&#8217;s outcomes and personalities in important ways ten years down the road, even regarding things believed to be highly genetic like antisocial personality disorder.</p>
<p>A few weak attempts to rebut this. First, there were some things that study didn&#8217;t do that one might have expected it to. It didn&#8217;t change graduation rates or employment rates. The apparent decrease in domestic violence was mediated entirely by the intervention group being less likely to have relationships (!) &#8211; the rate of domestic violence among people in relationships was the same. There was no effect on health. There was no effect on self-reported satisfaction with their parents&#8217; parenting. There were (nonsignificantly) higher death rates and incarceration rates in the intervention group than the control group.</p>
<p>So if I wanted to be maximally mean to the study, I could say that whatever it&#8217;s doing to violent crime and drug use has to be compatible with a (nonsignificantly) <i>raised</i> incarceration rate, and whatever it&#8217;s doing to drug use and risky sexual behavior and criminality has to be compatible with a (nonsignificantly) <i>raised</i> death rate. This suggests the possibility of an attack based on their endpoints being screwy, though I&#8217;m not sure what form such an attack could take. One could argue that since many of their outcomes were based on self-report surveys maybe the kids who had been through all of the enrichment programs had grown to like the study people and had a stronger demand effect to say that they were doing great. But a lot of the survey data was backed up by court records confirming fewer drug and violence convictions. So that doesn&#8217;t really work.</p>
<p>If you&#8217;re less interested in the pure science of individual differences and more interested in policy, one fact that I forgot to mention was that this program cost $60,000 per kid. The paper points out that this is the same cost as a year or two of incarceration, so if it really changes children&#8217;s life outcomes and makes them less antisocial even that hefty price tag might be justified (although again, remember that it didn&#8217;t affect employment or incarceration when checked directly).</p>
<p>If you&#8217;re looking for an optimistic spin on that number, they freely admit they have no idea which part of their gigantic ten year intervention program produced the positive effects. It could be that all the youth forums and enrichment programs and friendship groups and so on had zero effect, and the entire benefit came from the &#8220;Oysterman&#8217;s School-To-Job possible selves intervention aimed at examining emerging identity&#8221;. And maybe that&#8217;s a piece of paper that can be copied on a copy machine for ten cents a sheet. All this suggests is that at least <i>some</i> part of the ten-year, $60,000 intervention did something.</p>
<p>If you&#8217;re looking for a pessimistic spin on that number, consider. Every so often I see things that claim to have completely shifted children from the most high-risk of high-risk groups to upstanding successful members of society by giving them a year of preschool, or a couple of after-school lessons, or something like that. And these studies always boast that they did it with only $1000 or $5000 or some number like that, so it&#8217;s nice and cost effective. So far, the studies I have seen like this have been wrong. And so far I have not been surprised, because we <i>already</i> spend between $100,000 to $200,000 per child on education and various social programs. If someone ever found a social program that really worked for $1,000, the first thing we would want to do is tar and feather everyone currently in our bureaucracy of social programs, for being so incompetent that changing their $200,000 in spending to $201,000 in spending (with the extra $1000 going to someone besides them) could completely revolutionize life outcomes.</p>
<p>This study seems more in line with everything else. By going from $200,000 to $260,000, we can slightly push a few things in a positive direction a little bit more, maybe. From a scientific view, it&#8217;s pretty interesting. From a policy view, it&#8217;s nothing to write home about.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2015/02/28/early-intervention-you-might-get-what-you-pay-for/feed/</wfw:commentRss>
		<slash:comments>289</slash:comments>
		</item>
		<item>
		<title>The Control Group Is Out Of Control</title>
		<link>http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/</link>
		<comments>http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/#comments</comments>
		<pubDate>Tue, 29 Apr 2014 00:46:27 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[long post is long]]></category>
		<category><![CDATA[science]]></category>
		<category><![CDATA[statistics]]></category>
		<category><![CDATA[studies]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=1921</guid>
		<description><![CDATA[I. Allan Crossman calls parapsychology the control group for science. That is, in let&#8217;s say a drug testing experiment, you give some people the drug and they recover. That doesn&#8217;t tell you much until you give some other people who &#8230; <a href="http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><b>I.</b></p>
<p>Allan Crossman calls parapsychology <A HREF="http://lesswrong.com/lw/1ib/parapsychology_the_control_group_for_science/">the control group for science</A>.</p>
<p>That is, in let&#8217;s say a drug testing experiment, you give some people the drug and they recover. That doesn&#8217;t tell you much until you give some other people a placebo drug you <i>know</i> doesn&#8217;t work &#8211; but which they themselves believe in &#8211; and see how many of <i>them</i> recover. That number tells you how many people will recover whether the drug works or not. Unless people on your real drug do significantly better than people on the placebo drug, you haven&#8217;t found anything.</p>
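<p>The concrete version of that comparison is just a two-proportion test. A minimal sketch with made-up recovery counts (no real trial data here):</p>

```python
from math import erf, sqrt

def two_proportion_p_value(recovered_a, n_a, recovered_b, n_b):
    """Two-sided z-test for the difference between two recovery rates."""
    p_a, p_b = recovered_a / n_a, recovered_b / n_b
    pooled = (recovered_a + recovered_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 60/100 recover on the drug vs 50/100 on placebo: p is about 0.16,
# so this "drug" hasn't demonstrated anything yet.
print(round(two_proportion_p_value(60, 100, 50, 100), 2))
```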
<p>On the meta-level, you&#8217;re studying some phenomenon and you get some positive findings. That doesn&#8217;t tell you much until you take some other researchers who are studying a phenomenon you <i>know</i> doesn&#8217;t exist &#8211; but which they themselves believe in &#8211; and see how many of <i>them</i> get positive findings. That number tells you how many studies will discover positive results whether the phenomenon is real or not. Unless studies of the real phenomenon do significantly better than studies of the placebo phenomenon, you haven&#8217;t found anything.</p>
<p>Trying to set up placebo science would be a logistical nightmare. You&#8217;d have to find a phenomenon that definitely doesn&#8217;t exist, somehow convince a whole community of scientists across the world that it does, and fund them to study it for a couple of decades without them figuring out the gig.</p>
<p>Luckily we have a natural experiment in terms of parapsychology &#8211; the study of psychic phenomena &#8211; which most reasonable people don&#8217;t believe exists but which a community of practicing scientists does and publishes papers on all the time.</p>
<p>The results are pretty dismal. Parapsychologists are able to produce experimental evidence for psychic phenomena about as easily as normal scientists are able to produce such evidence for normal, non-psychic phenomena. This suggests the existence of a very large &#8220;placebo effect&#8221; in science &#8211; ie with enough energy focused on a subject, you can <i>always</i> produce &#8220;experimental evidence&#8221; for it that meets the usual scientific standards. As Eliezer Yudkowsky puts it:</p>
<blockquote><p>Parapsychologists are constantly protesting that they are playing by all the standard scientific rules, and yet their results are being ignored &#8211; that they are unfairly being held to higher standards than everyone else. I&#8217;m willing to believe that. It just means that the standard statistical methods of science are so weak and flawed as to permit a field of study to sustain itself in the complete absence of any subject matter.</p></blockquote>
<p>These sorts of thoughts have become more common lately in different fields. Psychologists admit to a <A HREF="http://blogs.nature.com/news/2012/11/psychologists-do-some-soul-searching.html">crisis of replication</A> as some of their most interesting findings turn out to be spurious. And in medicine, John Ioannidis and others have been criticizing the research for a decade now and telling everyone they need to up their standards.</p>
<p>&#8220;Up your standards&#8221; has been a complicated demand that cashes out in a lot of technical ways. But there is broad agreement among the most intelligent voices I read (<A HREF="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/">1</A>, <A HREF="http://lesswrong.com/lw/ajj/how_to_fix_science/">2</A>, <A HREF="http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer">3</A>, <A HREF="http://blogs.plos.org/mindthebrain/author/jcyone/">4</A>, <A HREF="http://www.haas.berkeley.edu/groups/online_marketing/facultyCV/papers/nelson_false-positive.pdf">5</A>) about a couple of promising directions we could go:</p>
<p>1. Demand very large sample size.</p>
<p>2. Demand replication, preferably exact replication, most preferably multiple exact replications.</p>
<p>3. Trust systematic reviews and meta-analyses rather than individual studies. Meta-analyses must prove homogeneity of the studies they analyze.</p>
<p>4. Use Bayesian rather than frequentist analysis, or even combine both techniques.</p>
<p>5. Stricter p-value criteria. It is far too easy to massage p-values to get less than 0.05. Also, make meta-analyses look for &#8220;p-hacking&#8221; by examining the distribution of p-values in the included studies.</p>
<p>6. Require pre-registration of trials.</p>
<p>7. Address publication bias by searching for unpublished trials, displaying funnel plots, and using statistics like &#8220;fail-safe N&#8221; to investigate the possibility of suppressed research.</p>
<p>8. Do heterogeneity analyses or at least observe and account for differences in the studies you analyze.</p>
<p>9. Demand randomized controlled trials. None of this &#8220;correlated even after we adjust for confounders&#8221; BS.</p>
<p>10. Stricter effect size criteria. It&#8217;s easy to get small effect sizes in <i>anything</i>.</p>
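<p>Commandment 5&#8217;s p-curve check rests on a fact worth seeing for yourself: under a true null hypothesis p-values are uniformly distributed, while a real effect piles them up near zero. A quick simulation with toy z-tests (not data from any real study):</p>

```python
import random
from math import erf, sqrt

random.seed(1)

def one_sided_p(effect, n=30):
    """p-value of a one-sided z-test on n draws from Normal(effect, 1)."""
    mean = sum(random.gauss(effect, 1) for _ in range(n)) / n
    z = mean * sqrt(n)
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

null_ps = [one_sided_p(0.0) for _ in range(1000)]
real_ps = [one_sided_p(0.5) for _ in range(1000)]

# Null: about 5% of p-values fall below 0.05 (uniform distribution).
# Real effect: the great majority do (the right-skewed p-curve).
print(sum(p < 0.05 for p in null_ps) / 1000)
print(sum(p < 0.05 for p in real_ps) / 1000)
```

<p>A meta-analysis that finds most published p-values bunched just under 0.05, instead of skewed toward zero, is looking at p-hacking rather than a real effect.</p>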
<p>If we follow these ten commandments, then we avoid the problems that allowed parapsychology &#8211; and probably a whole host of other bogus fields we don&#8217;t know about &#8211; to sneak past the scientific gatekeepers.</p>
<p>Well, <A HREF="http://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID2427865_code1602198.pdf?abstractid=2423692&#038;mirid=1">what now, motherfuckers?</A></p>
<p><b>II.</b></p>
<p>Bem, Tressoldi, Rabeyron, and Duggan (2014), full text available for download at the top bar of the link above, is parapsychology&#8217;s way of saying &#8220;thanks but no thanks&#8221; to the idea of a more rigorous scientific paradigm making them quietly wither away.</p>
<p>You might remember Bem as the prestigious establishment psychologist who decided to try his hand at parapsychology and to his and everyone else&#8217;s surprise got positive results. Everyone had a lot of criticisms, some of which were <A HREF="http://www.talyarkoni.org/blog/2011/01/10/the-psychology-of-parapsychology-or-why-good-researchers-publishing-good-articles-in-good-journals-can-still-get-it-totally-wrong/">very very good</A>, and the study <A HREF="http://news.discovery.com/human/psychology/controversial-esp-study-fails-yet-again-120912.htm">failed replication several times</A>. Case closed, right?</p>
<p>Earlier this month Bem came back with a meta-analysis of ninety replications from tens of thousands of participants in thirty-three laboratories in fourteen countries confirming his original finding, p < 1.2 * 10<sup>-10</sup>, Bayes factor 7.4 * 10<sup>9</sup>, funnel plot beautifully symmetrical, p-hacking curve nice and right-skewed, Orwin fail-safe n of 559, et cetera, et cetera, et cetera.</p>
<p>By my count, Bem follows all of the commandments except [6] and [10]. He apologizes for not using pre-registration, but says it&#8217;s okay because the studies were exact replications of a previous study that makes it impossible for an unsavory researcher to change the parameters halfway through and does pretty much the same thing. And he apologizes for the small effect size but points out that some effect sizes are legitimately very small, this is no smaller than a lot of other commonly-accepted results, and that a low enough p-value ought to make up for a small effect size.</p>
<p>This is <i>far</i> better than the average meta-analysis. Bem has always been pretty careful and this is no exception.</p>
<p>So &#8211; once again &#8211; what now, motherfuckers?</p>
<p><b>III.</b></p>
<p>In retrospect, that list of ways to fix science above was a little optimistic.</p>
<p>The first eight items (large sample sizes, replications, low p-values, Bayesian statistics, meta-analysis, pre-registration, publication bias, heterogeneity) all try to solve the same problem: accidentally mistaking noise in the data for a signal.</p>
<p>We&#8217;ve placed so much emphasis on not mistaking noise for signal that when someone like Bem hands us a beautiful, perfectly clear signal on a silver platter, it briefly stuns us. &#8220;Wow, of the three hundred different terrible ways to mistake noise for signal, Bem has proven beyond a shadow of a doubt he hasn&#8217;t done any of them.&#8221; And we get so stunned we&#8217;re likely to forget that this is only part of the battle.</p>
<p>Bem definitely picked up a signal. The only question is whether it&#8217;s a signal of psi, or a signal of poor experimental technique.</p>
<p><i>None</i> of these techniques even <i>touch</i> poor experimental technique &#8211; or confounding, or whatever you want to call it. If an experiment is confounded, if it produces a strong signal even when its experimental hypothesis is false, then using a larger sample size will just make that signal even stronger.</p>
<p>Replicating it will just reproduce the confounded results again. </p>
<p>Low p-values will be easy to get if you perform the confounded experiment on a large enough scale.</p>
<p>Meta-analyses of confounded studies will obey the immortal law of &#8220;garbage in, garbage out&#8221;.</p>
<p>Pre-registration only assures that your study will not get any worse than it was the first time you thought of it, which may be very bad indeed.</p>
<p>Searching for publication bias only means you will get <i>all</i> of the confounded studies, instead of just some of them.</p>
<p>Heterogeneity just tells you whether all of the studies were confounded about the same amount. </p>
<p>Bayesian statistics, alone among these first eight, ought to be able to help with this problem. After all, a good Bayesian should be able to say &#8220;Well, I got some impressive results, but my prior for psi is very low, so this raises my belief in psi slightly, but raises my belief that the experiments were confounded <i>a lot</i>.&#8221;</p>
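<p>To make that concrete, here is the update a good Bayesian would perform, sketched with toy numbers (every prior and likelihood below is invented for illustration, not taken from anyone&#8217;s paper):</p>

```python
# Toy Bayesian update over three hypotheses about a "significant" psi result.
# Every number here is invented for illustration.
priors = {
    "psi": 1e-20,        # psi is real
    "confounded": 0.01,  # the experiments share some flaw
    "chance": 0.99,      # honest experiments, null is true
}

# How likely is a signal this strong under each hypothesis?
likelihoods = {"psi": 0.5, "confounded": 0.5, "chance": 1.2e-10}

unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: unnorm[h] / total for h in unnorm}

for h in posterior:
    print(f"{h}: {posterior[h]:.3g}")
```

<p>The signal nudges psi up by a couple orders of magnitude (from 10^-20 to roughly 10^-18), while virtually all of the probability mass lands on &#8220;confounded&#8221; &#8211; which is exactly the update described above.</p>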
<p>Unfortunately, good Bayesians are hard to come by. People like to mock Less Wrong, saying we&#8217;re amateurs getting all starry-eyed about Bayesian statistics even while real hard-headed researchers who have been experts in them for years understand both their uses and their limitations. Well, maybe that&#8217;s true of some researchers. But the particular ones I see talking about Bayes <i>here</i> could do with reading the Sequences. Here&#8217;s Bem:</p>
<blockquote><p>An opportunity to calculate an approximate answer to this question emerges from a Bayesian critique of Bem’s (2011) experiments by Wagenmakers, Wetzels, Borsboom, &#038; van der Maas (2011). Although Wagenmakers et al. did not explicitly claim psi to be impossible, they came very close by setting their prior odds at 10^20 against the psi hypothesis. The Bayes Factor for our full database is approximately 10^9 in favor of the psi hypothesis (Table 1), which implies that our meta-analysis should lower their posterior odds against the psi hypothesis to 10^11.</p></blockquote>
<p>Let me shame both participants in this debate.</p>
<p>Bem, you are abusing Bayes factor. If Wagenmakers uses your 10^9 Bayes factor to adjust from his prior of 10^-20 to 10^-11, then what happens the next time you come up with another database of studies supporting your hypothesis? We all know you will, because you&#8217;ve amply proven these results weren&#8217;t due to chance, so whatever factor produced these results &#8211; whether real psi or poor experimental technique &#8211; will no doubt keep producing them for the next hundred replication attempts. When those come in, does Wagenmakers have to adjust his probability from 10^-11 to 10^-2? When you get another hundred studies, does he have to go from 10^-2 to 10^7? If so, then by <A HREF="http://lesswrong.com/lw/ii/conservation_of_expected_evidence/">conservation of expected evidence</A> he should just update to 10^+7 right now &#8211; or really to infinity, since you can keep coming up with more studies till the cows come home. But in fact he shouldn&#8217;t do that, because at some point his thought process becomes &#8220;Okay, I already know that studies of this quality can consistently produce positive findings, so either psi is real or studies of this quality aren&#8217;t good enough to disprove it&#8221;. This point should probably happen well before he increases his probability by a factor of 10^9. See <A HREF="http://lesswrong.com/lw/3be/confidence_levels_inside_and_outside_an_argument/">Confidence Levels Inside And Outside An Argument</A> for this argument made in greater detail.</p>
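<p>The absurdity of stacking the same Bayes factor over and over is easy to see in log-odds form (the 10^-20 prior and 10^9 Bayes factor are the figures from the exchange above; the repeated batches are the hypothetical):</p>

```python
# Wagenmakers' prior odds on psi (10^-20) updated by Bem's Bayes factor
# (10^9), applied once per hypothetical batch of studies, in log10 space.
log10_prior_odds = -20.0
log10_bayes_factor = 9.0

for batches in range(4):
    odds = log10_prior_odds + batches * log10_bayes_factor
    print(f"after {batches} batches: posterior odds = 10^{odds:g}")
```

<p>Three batches in, the skeptic is supposedly at 10^7 odds <i>in favor</i> of psi &#8211; which is why at some point the correct update stops being &#8220;psi is more likely&#8221; and becomes &#8220;studies of this quality can&#8217;t move me any further&#8221;.</p>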
<p>Wagenmakers, you are overconfident. Suppose God came down from Heaven and said in a booming voice &#8220;EVERY SINGLE STUDY IN THIS META-ANALYSIS WAS CONDUCTED PERFECTLY WITHOUT FLAWS OR BIAS, AS WAS THE META-ANALYSIS ITSELF.&#8221; You would see a p-value of less than 1.2 * 10^-10 and think &#8220;I bet that was just coincidence&#8221;? And then they could do another study of the same size, also God-certified, returning exactly the same results, and you would say &#8220;I bet that was just coincidence too&#8221;? YOU ARE NOT THAT CERTAIN OF ANYTHING. Seriously, <i>read the @#!$ing Sequences</i>.</p>
<p>Bayesian statistics, at least the way they are done here, aren&#8217;t going to be of much use to anybody.</p>
<p>That leaves randomized controlled trials and effect sizes.</p>
<p>Randomized controlled trials are great. They eliminate most possible confounders in one fell swoop, and are excellent at keeping experimenters honest. Unfortunately, most of the studies in the Bem meta-analysis were already randomized controlled trials.</p>
<p>High effect sizes are really the only thing the Bem study lacks. And it is very hard to have experimental technique so bad that it consistently produces a result with a high effect size.</p>
<p>But as Bem points out, demanding high effect size limits our ability to detect real but low-effect phenomena. Just to give an example, many physics experiments &#8211; like the ones that detected the Higgs boson or neutrinos &#8211; rely on detecting extremely small perturbations in the natural order, over millions of different trials. Less esoterically, Bem mentions the example of aspirin decreasing heart attack risk, which it definitely does and which is very important, but which has an effect size lower than that of his psi results. If humans have some kind of <i>very weak</i> psionic faculty that under regular conditions operates poorly and inconsistently, but does indeed exist, then excluding it by definition from the realm of things science can discover would be a bad idea.</p>
<p>All of these techniques are about reducing the chance of confusing noise for signal. But when we think of them as the be-all and end-all of scientific legitimacy, we end up in awkward situations where they come out super-confident in a study&#8217;s accuracy simply because its flaws were ones they weren&#8217;t geared up to detect. Because a lot of the time the problem is something more than just noise.</p>
<p><b>IV.</b></p>
<p>Wiseman &#038; Schlitz&#8217;s <A HREF="http://www.richardwiseman.com/resources/staring1.pdf">Experimenter Effects And The Remote Detection Of Staring</A> is my favorite parapsychology paper ever and sends me into fits of nervous laughter every time I read it.</p>
<p>The backstory: there is a classic parapsychological experiment where a subject is placed in a room alone, hooked up to a video link. At random times, an experimenter stares at them menacingly through the video link. The hypothesis is that this causes their galvanic skin response (a physiological measure of subconscious anxiety) to increase, even though there is no non-psychic way the subject could know whether the experimenter was staring or not. </p>
<p>Schlitz is a psi believer whose staring experiments had consistently supported the presence of a psychic phenomenon. Wiseman, in accordance with <A HREF="http://en.wikipedia.org/wiki/Nominative_determinism">nominative determinism</A>, is a psi skeptic whose staring experiments kept showing nothing and disproving psi. Since they were apparently the only two people in all of parapsychology with a smidgen of curiosity or rationalist virtue, they decided to team up and figure out why they kept getting such different results.</p>
<p>The idea was to plan an experiment together, with both of them agreeing on every single tiny detail. They would then go to a laboratory and set it up, again both keeping close eyes on one another. Finally, they would conduct the experiment in a series of different batches. Half the batches (randomly assigned) would be conducted by Dr. Schlitz, the other half by Dr. Wiseman. Because the two authors had very carefully standardized the setting, apparatus and procedure beforehand, &#8220;conducted by&#8221; pretty much just meant greeting the participants, giving the experimental instructions, and doing the staring.</p>
<p>The results? Schlitz&#8217;s trials found strong evidence of psychic powers, Wiseman&#8217;s trials found no evidence whatsoever.</p>
<p>Take a second to reflect on how this <i>makes no sense</i>. Two experimenters in the same laboratory, using the same apparatus, having no contact with the subjects except to introduce themselves and flip a few switches &#8211; and whether one or the other was there that day completely altered the result. For a good time, watch the gymnastics they have to do in the paper to make this sound sufficiently sensical to even get published. This is the only journal article I&#8217;ve ever read where, in the part of the Discussion section where you&#8217;re supposed to propose possible reasons for your findings, both authors suggest maybe their co-author hacked into the computer and altered the results.</p>
<p>While it&#8217;s nice to see people exploring Bem&#8217;s findings further, <i>this</i> is the experiment people should be replicating ninety times. I expect <i>something</i> would turn up. </p>
<p>As it is, Kennedy and Taddonio <A HREF="http://jeksite.org/psi/jp76.pdf">list ten similar studies</A> with similar results. One cannot help wondering about publication bias (if the skeptic and the believer got similar results, who cares?). But the phenomenon is sufficiently well known in parapsychology that it has led to its own host of theories about how skeptics emit negative auras, or the enthusiasm of a proponent is a necessary kindling for psychic powers.</p>
<p>Other fields don&#8217;t have this excuse. In psychotherapy, for example, practically the only consistent finding is that whatever kind of psychotherapy the person running the study likes is most effective. Thirty different meta-analyses on the subject have confirmed this with strong effect size (d = 0.54) and good significance (p = .001).</p>
<p>Then there&#8217;s <A HREF="http://criticalscience.com/researcher-allegiance-psychotherapy-research-bias.html">Munder (2013)</A>, which is a meta-meta-analysis on whether meta-analyses of confounding by researcher allegiance effect were themselves meta-confounded by meta-researcher allegiance effect. He found that indeed, meta-researchers who believed in researcher allegiance effect were more likely to turn up positive results in their studies of researcher allegiance effect (p < .002).</p>
<p>It gets worse. There&#8217;s <A HREF="http://www.npr.org/blogs/health/2012/09/18/161159263/teachers-expectations-can-influence-how-students-perform">a famous story</A> about an experiment where a scientist told teachers that his advanced psychometric methods had predicted a couple of kids in their class were about to become geniuses (the students were actually chosen at random). He followed the students for the year and found that their intelligence actually increased. This was supposed to be a Cautionary Tale About How Teachers&#8217; Preconceptions Can Affect Children.</p>
<p>Less famous is that the same guy did the same thing with rats. He sent one laboratory a box of rats saying they were specially bred to be ultra-intelligent, and another lab a box of (identical) rats saying they were specially bred to be slow and dumb. Then he had them do standard rat learning tasks, and sure enough the first lab found very impressive results, the second lab very disappointing ones.</p>
<p>This scientist &#8211; let&#8217;s give his name, Robert Rosenthal &#8211; <A HREF="http://www.lscp.net/persons/dupoux/teaching/JOURNEE_AUTOMNE_CogMaster_2011-12/readings_deontology/Rosenthal_1994_interpersonal_expectancy_effects_a_review.pdf">then investigated three hundred forty five different studies</A> for evidence of the same phenomenon. He found effect sizes of anywhere from 0.15 to 1.7, depending on the type of experiment involved. Note that this could also be phrased as &#8220;between twice as strong and twenty times as strong as Bem&#8217;s psi effect&#8221;. Mysteriously, animal learning experiments displayed the highest effect size, supporting the folk belief that animals are hypersensitive to subtle emotional cues.</p>
<p>Okay, fine. Subtle emotional cues. That&#8217;s way more scientific than saying &#8220;negative auras&#8221;. But the question remains &#8211; what went wrong for Schlitz and Wiseman? Even if Schlitz had done everything short of saying &#8220;The hypothesis of this experiment is for your skin response to increase when you are being stared at, please increase your skin response at that time,&#8221; and subjects had tried to comply, the whole point was that they didn&#8217;t <i>know</i> when they were being stared at, because to find that out you&#8217;d have to be psychic. And how are these rats figuring out what the experimenters&#8217; subtle emotional cues mean anyway? <i>I</i> can&#8217;t figure out people&#8217;s subtle emotional cues half the time!</p>
<p>I know that standard practice here is to tell <A HREF="http://en.wikipedia.org/wiki/Clever_Hans">the story of Clever Hans</A> and then say That Is Why We Do Double-Blind Studies. But first of all, I&#8217;m pretty sure no one does double-blind studies with rats. Second of all, I think most social psych studies aren&#8217;t double-blind &#8211; I just checked the first one I thought of, Steele and Aronson on stereotype threat, and it certainly wasn&#8217;t. Third of all, this effect seems to be just as common in cases where it&#8217;s hard to imagine how the researchers&#8217; subtle emotional cues could make a difference. Like Schlitz and Wiseman. Or like the psychotherapy experiments, where most of the subjects were doing therapy with individual psychologists and never even saw whatever prestigious professor was running the study behind the scenes.</p>
<p>I think it&#8217;s a combination of subconscious emotional cues, subconscious statistical trickery, perfectly conscious fraud which for all we know happens much more often than detected, and things we haven&#8217;t discovered yet which are at least as weird as subconscious emotional cues. But rather than speculate, I prefer to take it as a brute fact. Studies are going to be confounded by the allegiance of the researcher. When researchers who don&#8217;t believe something discover it, that&#8217;s when it&#8217;s worth looking into.</p>
<p><b>V.</b></p>
<p>So what exactly happened to Bem?</p>
<p>Although Bem looked hard for unpublished material, I don&#8217;t know if he succeeded. Unpublished material, in this context, has to mean &#8220;material published enough for Bem to find it&#8221;, which in this case was mostly things presented at conferences. What about results so boring that they were never even mentioned?</p>
<p>And I predict people who believe in parapsychology are more likely to conduct parapsychology experiments than skeptics. Suppose this is true. And further suppose that for some reason, experimenter effect is real and powerful. That means most of the experiments conducted will support Bem&#8217;s result. But this is still a weird form of &#8220;publication bias&#8221; insofar as it ignores the contrary results of hypothetical experiments that were never conducted.</p>
<p>And worst of all, maybe Bem really did do an excellent job of finding every little two-bit experiment that no journal would take. How much can we trust these non-peer-reviewed procedures?</p>
<p>I looked through his list of ninety studies for all the ones that were both exact replications and had been peer-reviewed (with one caveat to be mentioned later). I found only seven:</p>
<p>Batthyany, Kranz, and Erber: 0.268<br />
Ritchie 1: 0.015<br />
Ritchie 2: -0.219<br />
Ritchie 3: -0.040<br />
Subbotsky 1: 0.279<br />
Subbotsky 2: 0.292<br />
Subbotsky 3: -0.399</p>
<p>Three find large positive effects, two find approximate zero effects, and two find large negative effects. Without doing any calculatin&#8217;, this seems pretty darned close to chance for me.</p>
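<p>For the record, here is the calculatin&#8217; &#8211; an unweighted one-sample test of those seven effect sizes against zero (this ignores the studies&#8217; different sample sizes, so treat it as a rough sanity check only):</p>

```python
import math

# The seven peer-reviewed exact-replication effect sizes listed above.
effects = [0.268, 0.015, -0.219, -0.040, 0.279, 0.292, -0.399]

n = len(effects)
mean_d = sum(effects) / n
sd = math.sqrt(sum((d - mean_d) ** 2 for d in effects) / (n - 1))
t_stat = mean_d / (sd / math.sqrt(n))  # one-sample t against zero

print(f"mean d = {mean_d:.3f}, t({n - 1}) = {t_stat:.2f}")
```

<p>A mean d of about 0.03 and a t-statistic well under 1 &#8211; nowhere near significance, consistent with chance.</p>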
<p>Okay, back to that caveat about replications. One of Bem&#8217;s strongest points was how many of the studies included were exact replications of his work. This is important because if you do your own novel experiment, it leaves a lot of wiggle room to keep changing the parameters and statistics a bunch of times until you get the effect you want. This is why lots of people want experiments to be preregistered with specific commitments about what you&#8217;re going to test and how you&#8217;re going to do it. These experiments weren&#8217;t preregistered, but conforming to a previously done experiment is a pretty good alternative.</p>
<p>Except that I think the criteria for &#8220;replication&#8221; here were exceptionally loose. For example, Savva et al was listed as an &#8220;exact replication&#8221; of Bem, but it was performed in 2004 &#8211; seven years before Bem&#8217;s original study took place. I know Bem believes in precognition, but that&#8217;s going <i>too far</i>. As far as I can tell &#8220;exact replication&#8221; here means &#8220;kinda similar psionic-y thing&#8221;. Also, Bem classily lists his own experiments as exact replications of themselves, which gives a big boost to the &#8220;exact replications return the same results as Bem&#8217;s original studies&#8221; line. I would want to see much stricter criteria for replication before I relax the &#8220;preregister your trials&#8221; requirement.</p>
<p>(Richard Wiseman &#8211; the same guy who provided the negative aura for the Wiseman and Schlitz experiment &#8211; has started <A HREF="http://www.richardwiseman.com/BemReplications.shtml">a pre-register site for Bem replications</A>. He says he has received five of them. This is very promising. There is also <A HREF="http://www.koestler-parapsychology.psy.ed.ac.uk/TrialRegistry.html">a separate pre-register for parapsychology trials in general</A>. I am both extremely pleased at this victory for good science, and ashamed that my own field is apparently behind parapsychology in the &#8220;scientific rigor&#8221; department.)</p>
<p>That is my best guess at what happened here &#8211; a bunch of poor-quality, peer-unreviewed studies that weren&#8217;t as exact replications as we would like to believe, all subject to mysterious experimenter effects.</p>
<p>This is not a criticism of Bem or a criticism of parapsychology. It&#8217;s something that is inherent to the practice of meta-analysis, and even more, inherent to the practice of science. Other than a few very exceptional large medical trials, there is not a study in the world that would survive the level of criticism I am throwing at Bem right now.</p>
<p>I think Bem is wrong. But the level of criticism it would take to prove a wrong study wrong is higher than almost any existing study can withstand. That is not encouraging for existing studies.</p>
<p><b>VI.</b></p>
<p>The motto of the Royal Society &#8211; Hooke, Boyle, Newton, some of the people who arguably invented modern science &#8211; was <i>nullius in verba</i>, &#8220;take no one&#8217;s word&#8221;.</p>
<p>This was a proper battle cry for seventeenth century scientists. Think about the (admittedly kind of mythologized) history of Science. The scholastics saying that matter was this, or that, and justifying themselves by long treatises about how based on A, B, C, the word of the Bible, Aristotle, self-evident first principles, and the Great Chain of Being all clearly proved their point. Then other scholastics would write different long treatises on how D, E, and F, Plato, St. Augustine, and the proper ordering of angels all indicated that clearly matter was something different. Both groups were pretty sure that the other had made a subtle error of reasoning somewhere, and both groups were perfectly happy to spend centuries debating exactly which one of them it was.</p>
<p>And then Galileo said &#8220;Wait a second, instead of debating exactly how objects fall, let&#8217;s just drop objects off of something really tall and see what happens&#8221;, and after that, Science.</p>
<p>Yes, it&#8217;s kind of mythologized. But like all myths, it contains a core of truth. People are terrible. If you let people debate things, they will do it forever, come up with horrible ideas, get them entrenched, play politics with them, and finally reach the point where they&#8217;re coming up with theories why people who disagree with them are probably secretly in the pay of the Devil. </p>
<p>Imagine having to conduct the global warming debate, except that you couldn&#8217;t appeal to scientific consensus and statistics because scientific consensus and statistics hadn&#8217;t been invented yet. In a world without science, <i>everything</i> would be like that.</p>
<p>Heck, just look at <i>philosophy</i>.</p>
<p>This is the principle behind the Pyramid of Scientific Evidence. The lowest level is your personal opinions, no matter how ironclad you think the logic behind them is. Just above that is expert opinion, because no matter how expert someone is they&#8217;re still only human. Above that is anecdotal evidence and case studies, because even though you&#8217;re finally getting out of people&#8217;s heads, it&#8217;s still possible for the content of people&#8217;s heads to influence which cases they pay attention to. At each level, we distill away more and more of the human element, until presumably at the top the dross of humanity has been purged away entirely and we end up with pure unadulterated reality.</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/se_pyramid.png"></p>
<p><i>The Pyramid of Scientific Evidence</i></center></p>
<p>And for a while this went <i>well</i>. People would drop things off towers, or see how quickly gases expanded, or observe chimpanzees, or whatever.</p>
<p>Then things started getting more complicated. People started investigating more subtle effects, or effects that shifted with the observer. The scientific community became bigger, everyone didn&#8217;t know everyone anymore, you needed more journals to find out what other people had done. Statistics became more complicated, allowing the study of noisier data but also bringing more peril. And a lot of science done by smart and honest people ended up being wrong, and we needed to figure out exactly which science that was.</p>
<p>And the result is a lot of essays like this one, where people who think they&#8217;re smart take one side of a scientific &#8220;controversy&#8221; and say which studies you should believe. And then other people take the other side and tell you why you should believe different studies than the first person thought you should believe. And there is much argument and many insults and citing of authorities and interminable debate for, if not centuries, at least a pretty long time.</p>
<p>The highest level of the Pyramid of Scientific Evidence is meta-analysis. But a lot of meta-analyses are crap. This meta-analysis got p < 1.2 * 10^-10 for a conclusion I'm pretty sure is false, and <i>it isn&#8217;t even one of the crap ones</i>. Crap meta-analyses look <A HREF="http://www.psychologytoday.com/blog/the-skeptical-sleuth/201112/editor-should-have-caught-bias-and-flaws-in-review-mental-health-ef">more like this</A>, or even worse. </p>
<p>How do I know it&#8217;s crap? Well, I use my personal judgment. How do I know my personal judgment is right? Well, a smart well-credentialed person like James Coyne agrees with me. How do I know James Coyne is smart? I can think of lots of cases where he&#8217;s been right before. How do I know those count? Well, John Ioannidis has published a lot of studies analyzing the problems with science, and confirmed that cases like the ones Coyne talks about are pretty common. Why can I believe Ioannidis&#8217; studies? Well, there have been good meta-analyses of them. But how do I know if those meta-analyses are crap or not? Well&#8230;</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/se_ouroboros.png"></p>
<p><i>The Ouroboros of Scientific Evidence</i></center></p>
<p>Science! YOU WERE THE CHOSEN ONE! It was said that you would destroy reliance on biased experts, not join them! Bring balance to epistemology, not leave it in darkness! </p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/se_obiwan.png"></p>
<p><i>I LOVED YOU!!!!</i></center></p>
<p><b>Edit:</b> <A HREF="http://andrewgelman.com/2013/08/25/a-new-bem-theory/">Conspiracy theory</A> by Andrew Gelman</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/feed/</wfw:commentRss>
		<slash:comments>191</slash:comments>
		</item>
		<item>
		<title>Stop Confounding Yourself! Stop Confounding Yourself!</title>
		<link>http://slatestarcodex.com/2014/04/26/stop-confounding-yourself-stop-confounding-yourself/</link>
		<comments>http://slatestarcodex.com/2014/04/26/stop-confounding-yourself-stop-confounding-yourself/#comments</comments>
		<pubDate>Sun, 27 Apr 2014 01:18:17 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[psychology]]></category>
		<category><![CDATA[statistics]]></category>
		<category><![CDATA[studies]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=1909</guid>
		<description><![CDATA[As a perk of my job, I get a free subscription to the American Journal of Psychiatry. I am still not used to this. No enraging struggles with paywalls. No &#8220;one year embargo on full text&#8221;. I just come home &#8230; <a href="http://slatestarcodex.com/2014/04/26/stop-confounding-yourself-stop-confounding-yourself/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>As a perk of my job, I get a free subscription to the <i>American Journal of Psychiatry</i>. I am still not used to this. No enraging struggles with paywalls. No &#8220;one year embargo on full text&#8221;. I just come home and find all of the latest and most interesting journal articles have been <i>shipped directly to my house</i>. Modern technology is truly amazing.</p>
<p>Its latest issue includes Takizawa et al&#8217;s <A HREF="http://journals.psychiatryonline.org/data/Journals/AJP/0/appi.ajp.2014.13101401.pdf"><b>Adult Health Outcomes of Childhood Bullying Victimization: Evidence From A Five-Decade Longitudinal British Birth Cohort</b></A>. It has since been picked up by <A HREF="http://www.foxnews.com/health/2014/04/18/effects-childhood-bullying-still-evident-40-years-later-study-says/">Fox</A>, <A HREF="http://www.washingtonpost.com/news/to-your-health/wp/2014/04/21/bullying-victims-suffer-psychological-impacts-for-decades/">the Washington Post</A>, and even <A HREF="http://news.xinhuanet.com/english/health/2014-04/19/c_133274051.htm">Xinhua</A>. I think that&#8217;s enough to qualify for &#8220;made world headlines&#8221;.</p>
<p>The study took some British kids in 1958, sorted them by how much they got bullied, and checked how they did forty years later. In fact, the frequently bullied kids had nearly twice as much psychiatric disease, were twice as likely to attempt suicide, were twice as likely to drop out of high school, and even had double the unemployment rate. Worse physical health, worse cognitive function, less likely to get married, et cetera, et cetera.</p>
<p>Those must be <i>some</i> bullies.</p>
<p>But correlation is not causation. There&#8217;s an alternative possibility. Maybe bullies only pick on unpopular disadvantaged kids. And maybe these kinds of things are stable, so that unpopular disadvantaged kids are more likely to grow up to be unpopular disadvantaged adults. The sort of adults who are more likely to have psychiatric disease, drop out of school, be unemployed, et cetera. <i>That</i> sure sounds plausible.</p>
<p>So the researchers &#8220;controlled for confounders&#8221;. They used a scale called the Bristol Social Adjustment Guide to figure out how socially well-adjusted the kids were, then added in their social class, their family&#8217;s level of contact with child protective services, their IQ, their attractiveness, and even how much their parents loved them (really! check the study!)</p>
<p>They controlled for all these things and found that the bullying-outcomes link was still robust. They concluded that this meant their finding wasn&#8217;t just that bullies were bullying kids with problems, it was that bullies were causing the damage themselves.</p>
<p>Do you believe that? It all comes down to one question.</p>
<p>Who is better able to look deep inside you and judge the mettle of your soul? A playground bully? Or the Bristol Social Adjustment Guide?</p>
<p>My money is on the bully. Bullies are like sharks: horrible pinnacles of evolution. Animals have been learning to navigate social dominance hierarchies through violence since pecking orders in chickens, on through wolf packs and chimpanzees, and up into humans &#8211; <A HREF="http://en.wikipedia.org/wiki/Machiavellian_intelligence">and we are very good at it</A>. The bully is the purest manifestation of the primal instinct, which is why he crops up untaught and unbidden in near-identical form in schoolyards from Los Angeles to London to Lanzhou. And like sharks, a good bully should be able to smell blood in the water and know when an opportunity to attack presents itself.</p>
<p>Most of the findings of this study were in the &#8220;frequently bullied&#8221; population, and part of the criteria for &#8220;frequently&#8221; was being bullied both at age 7 and age 11. Unless that&#8217;s just one <i>really</i> persistent guy, that means the child has been independently selected for targeting in two different environments. That could be bad luck but could also be the effect of high inter-bully reliability in what (persistent) qualities make a good victim.</p>
<p>So let&#8217;s take another look at those confounders we supposedly controlled for. Where&#8217;s height? You think short kids are bullied more often than tall kids? I do. Height is closely related to <A HREF="http://www.timothy-judge.com/Height%20paper--JAP%20published.pdf">career success</A>, to <A HREF="http://www.epjournal.net/wp-content/uploads/ep07477489.pdf">attractiveness to the opposite sex</A>, <A HREF="http://www.sciencedirect.com/science/article/pii/S1570677X0900046X">increased happiness and self-esteem</A>, and <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/15994722">decreased psychological morbidity</A>. This is something every bully knows intuitively, but which the Takizawa study didn&#8217;t think of and therefore couldn&#8217;t control for.</p>
<p>But it&#8217;s giving them too much credit to be bringing in weird stuff like height-mental-health correlations. What about social skills? Yeah, sure, they did that Bristol Social Adjustment Guide. I&#8217;m looking at it right now, and it&#8217;s asking the students&#8217; teachers to rate items like &#8220;hostility towards adults&#8221; and &#8220;depression&#8221;. I don&#8217;t believe that teachers filling numbers into hokey little boxes can assess a kid&#8217;s social skills as well as a bully deciding who can safely be picked on.</p>
<p>So I will come out and say it: I do not trust the practice of &#8220;adjusting for confounders&#8221;, at least not the way this study does it. You are adjusting for an imperfect measurement of the confounders you can think of. If you find that there is lingering correlation, then either your hypothesis is true, or you didn&#8217;t adjust for confounders well enough. Given extraordinary results, like being bullied at age seven making you 25% less likely to be married at age fifty, the &#8220;you didn&#8217;t adjust for confounders well enough&#8221; option starts to look really good.</p>
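<p>You can watch this failure mode happen in a toy simulation (the causal structure and noise levels below are all invented): a &#8220;vulnerability&#8221; trait drives both bullying and bad outcomes, bullying itself does nothing, and we &#8220;control for&#8221; a noisy questionnaire measure of the trait:</p>

```python
import math
import random

random.seed(0)

def pearson(xs, ys):
    """Plain Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

N = 20_000
vulnerability = [random.gauss(0, 1) for _ in range(N)]
# Bullies read vulnerability directly; outcomes are caused by vulnerability;
# being bullied has zero causal effect on outcomes in this simulation.
bullied = [v + random.gauss(0, 1) for v in vulnerability]
outcome = [v + random.gauss(0, 1) for v in vulnerability]
# The questionnaire we "control for" is only a noisy proxy for vulnerability.
questionnaire = [v + random.gauss(0, 1) for v in vulnerability]

r_bo = pearson(bullied, outcome)
r_bq = pearson(bullied, questionnaire)
r_oq = pearson(outcome, questionnaire)
# Partial correlation of bullying and outcome, controlling for the questionnaire.
partial = (r_bo - r_bq * r_oq) / math.sqrt((1 - r_bq**2) * (1 - r_oq**2))

print(f"raw r = {r_bo:.2f}, after 'controlling' = {partial:.2f}")
```

<p>Even with the confounder &#8220;controlled for&#8221;, about a third of the spurious bullying-outcome correlation survives, because the questionnaire only captures part of what the bully sees.</p>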
<p>I think the proper way to do this study would have been to do an anti-bullying intervention at a couple schools, leave a couple similar schools as controls, and if the anti-bullying intervention successfully decreases bullying, compare outcomes for children at the two schools. I understand this probably would be logistically impossible, plus you&#8217;d have to wait another forty years. But given that you cannot do the study right, I am not sure that doing the study this way adds anything, except of course widely-read articles in every news source in the world.</p>
<p>I would also compare to <A HREF="http://link.springer.com/article/10.1007/s00127-008-0395-0">Reming et al</A>, which attempts much the same study and finds no association after adjusting for <i>their</i> confounders of choice (which, oddly, are much fewer than in the current study). They also find that parent reports about bullying (the method Takizawa et al used) are wildly unreliable, with an inter-rater agreement of just 0.11 with reports by teachers or the children themselves (on a scale where 1.0 is perfect agreement and 0.0 is no agreement beyond chance). For a <i>completely false</i> measure of bullying to find such spectacular effects is <i>really suspicious</i>, and now we need to consider not only the differences between the types of kids who are and aren&#8217;t bullied, but the differences between the types of parents who do and don&#8217;t think their kids are being bullied.</p>
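<p>An agreement figure like 0.11 presumably comes from a chance-corrected statistic such as Cohen&#8217;s kappa (an assumption on my part; the paper just reports the scale). A minimal sketch, with made-up parent/teacher ratings, shows why raw percent agreement overstates reliability when almost everyone is rated &#8220;not bullied&#8221;:</p>

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: 1.0 = perfect, 0.0 = no better than chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected if each rater labelled independently at their base rates.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical data: parents and teachers mostly agree kids aren't bullied
# (1 = bullied, 0 = not), but rarely agree on WHO is.
parents  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
teachers = [0, 1, 0, 1, 0, 0, 0, 0, 0, 0]
print(round(cohens_kappa(parents, teachers), 2))  # → 0.21, despite 70% raw agreement
```

<p>Raw agreement here is 70%, but nearly all of it comes from jointly rating the unbullied majority, so the chance-corrected figure collapses toward zero &#8211; which is how two informants can &#8220;mostly agree&#8221; and still land at 0.11.</p>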
<p>Since I insisted on giving this post a silly title, I will now share with you the most interesting perspective on psychology and the &#8220;stop hitting yourself&#8221; phenomenon I have read all week. This is from Jonathan Haidt on Kohlberg&#8217;s moral stages:<br />
<blockquote>During elementary school, most children move on to the two conventional stages, becoming adept at understanding and even manipulating rules and social conventions. This is the age of petty legalism that most of us who grew up with siblings remember well (&#8220;I&#8217;m not hitting you. I&#8217;m using your hand to hit you. Stop hitting yourself!&#8221;). Kids at this stage rarely question the legitimacy of authority, but learn to maneuver within and around the constraints that adults impose on them.</p></blockquote>
<p>I always just thought that was a really dickish joke. I didn&#8217;t realize it had a <i>deep philosophical underpinning.</i></p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/hit_angel.png"></center></p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2015/04/22/growth-mindset-3-a-pox-on-growth-your-houses/feed/</wfw:commentRss>
		<slash:comments>91</slash:comments>
		</item>
		<item>
		<title>Wheat: Much More Than You Wanted To Know</title>
		<link>http://slatestarcodex.com/2014/03/30/wheat-much-more-than-you-wanted-to-know/</link>
		<comments>http://slatestarcodex.com/2014/03/30/wheat-much-more-than-you-wanted-to-know/#comments</comments>
		<pubDate>Mon, 31 Mar 2014 00:26:51 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[diet]]></category>
		<category><![CDATA[medicine]]></category>
		<category><![CDATA[studies]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=1791</guid>
		<description><![CDATA[After hearing conflicting advice from diet books and the medical community, I decided to look into wheat. There are two sets of arguments against including wheat in the diet. First, wheat is a carbohydrate, and some people support low carbohydrate &#8230; <a href="http://slatestarcodex.com/2014/03/30/wheat-much-more-than-you-wanted-to-know/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>After hearing conflicting advice from diet books and the medical community, I decided to look into wheat.</p>
<p>There are two sets of arguments against including wheat in the diet. First, wheat is a carbohydrate, and some people support low carbohydrate diets. Second, something might be especially dangerous about wheat itself.</p>
<p>It was much easier to figure out the state of the evidence on low-carbohydrate diets. They seem to be <A HREF="http://www.annualreviews.org/doi/full/10.1146/annurev-publhealth-032013-182351">at least as good and maybe a little better</A> for weight loss than traditional diets, but this might just be because there are lots of carbohydrates that taste very good and when forced to avoid them, people eat less stuff. They may or may not positively affect metabolic parameters and quality of life. (<A HREF="http://link.springer.com/article/10.1007%2Fs11883-009-0069-8">1</A>, <A HREF="http://press.endocrine.org/doi/abs/10.1210/jc.2002-021480">2</A>, <A HREF="http://jama.jamanetwork.com/article.aspx?articleid=200094">3</A>, <A HREF="http://link.springer.com/article/10.1007%2Fs11136-009-9444-8">4</A>). They don&#8217;t seem to cause either major health benefits or major health risks in the medium term, which is the longest term for which there is good data available &#8211; for example, they have <A HREF="http://www.nutritionj.com/content/12/1/58">no effect on cancer rates</A>. Overall they seem solid but unspectacular. But there&#8217;s a long way between &#8220;low carbohydrate diet&#8221; and &#8220;stop eating wheat&#8221;.</p>
<p>So I was more interested in figuring out what was going on with wheat in particular.</p>
<p>Wheat contains chemicals [citation needed]. The ones that keep cropping up (no pun intended) in these kinds of discussions are phytates, lectins, gluten, gliadin, and agglutinin, the last three of which for your convenience have been given names that all sound alike.</p>
<p>Various claims have been made about these chemicals&#8217; effects on health. These have some prima facie plausibility. Plants don&#8217;t want to be eaten [citation needed] and they sometimes fill their grains with toxins to discourage animals from eating them. Ricin, a lectin in the seeds of the castor oil plant so toxic it gets used in chemical warfare, is a pretty good example. Most toxins are less dramatic, and most animals have enzymes that break down the toxins in their preferred food sources effectively. But if humans are insufficiently good at this, maybe because they didn&#8217;t evolve to eat wheat, some of these chemicals could be toxic to humans.</p>
<p>On the other hand, this same argument covers pretty much every grain and vegetable and a lot of legumes &#8211; pretty much every plant-based food source except edible fruits. So we need a lot more evidence to start worrying about wheat.</p>
<p>I found the following claims about negative effects of wheat:</p>
<p>1. Some people without celiac disease are nevertheless sensitive to gluten.<br />
2. Wheat increases intestinal permeability, causing a leaky gut and autoimmune disease.<br />
3. Digestion of wheat produces opiates, which get you addicted to wheat.<br />
4. Wheat something something something autism and schizophrenia.<br />
5. Wheat has been genetically modified recently in ways that make it much worse for you.<br />
6. The lectins in wheat interfere with leptin receptors, making people leptin resistant and therefore obese.</p>
<p>I&#8217;ll try to look at each of those and then turn to the positive claims made about wheat to see if they&#8217;re strong enough to counteract them.</p>
<p><b>Some People Without Celiac Disease Are Sensitive To Gluten</b> &#8211; <i>Mostly true but of limited significance</i></p>
<p>Celiac disease is one source of concern. Everybody on all sides of the wheat debate agrees about the basic facts of this condition, which affects a little less than 1% of the population. People with celiac disease have severe reactions to the gluten in wheat. Celiac disease is mostly marked by gastroenterological complaints &#8211; diarrhea, bloating, abdominal pain &#8211; but it is also associated with vitamin deficiencies, anaemia, skin reactions, infertility, and &#8220;malaise&#8221;. It can be pretty straightforwardly detected by blood tests and gut biopsies and is not subtle.</p>
<p>People start to disagree about the existence of &#8220;gluten sensitivity&#8221;, which if it existed would be a bad reaction to gluten even in people who don&#8217;t test positive for celiac disease. Many people believe they have gastrointestinal (or other) symptoms that go away when they eat gluten-free diets, but science can&#8217;t find anything wrong with their intestines that could be causing the problems.</p>
<p>A recent study somewhat vindicated these people. <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/21224837">Biesiekierski 2011</A> describes a double-blind randomized controlled trial: people who said they had &#8220;gluten-sensitive&#8221; irritable bowel syndrome were put on otherwise gluten-free diets and then randomly given either gluten or a placebo. They found that the patients given gluten reported symptoms (mostly bowel-related and tiredness) much more than those given placebo (p = 0.0001) but did not demonstrate any of the chemical, immunological, or histological markers usually associated with celiac disease. A similar <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/22825366">Italian study</A> found the same thing, except that they did find a higher rate of anti-gluten antibodies in their patients. Another study found that non-celiacs with antibodies to gluten <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/17206762">had higher rates of mortality</A>. And another study <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/23357715">did find</A> a histological change in bowel barrier function on this group of patients with the introduction of gluten. And <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/23648697">another study from the same group</A> found that maybe FODMAPs, another component of wheat, are equally or more responsible. </p>
<p>The journal <i>Gastroenterology</i>, which you may not be surprised to learn is the leading journal in the field of gastroenterology, proclaims:<br />
<blockquote>The current working definition of nonceliac gluten sensitivity (NCGS) is the occurrence of irritable bowel syndrome (IBS)-like symptoms after the ingestion of gluten and improvement after gluten withdrawal from the diet after exclusion of celiac disease based on negative celiac serologies and/or normal intestinal architecture and negative immunoglobulin (Ig)E-mediated allergy tests to wheat. Symptoms reported to be consistent with NCGS are both intestinal (diarrhea, abdominal discomfort or pain, bloating, and flatulence) and extra-intestinal (headache, lethargy, poor concentration, ataxia, or recurrent oral ulceration). These criteria strongly and conveniently suggest that NCGS is best understood as a subset of IBS or perhaps a closely related but distinct functional disorder. Although the existence of NCGS has been slowly gaining ground with physicians and scientists, NCGS has enjoyed rapid and widespread adoption by the general public.</p></blockquote>
<p>But even this isn&#8217;t really that interesting. Maybe some people with irritable bowel syndrome or certain positive antibodies should try avoiding gluten to see if it helps their specific and very real symptoms. At most ten percent of people test positive on antibody testing, and not all of those even have symptoms. That&#8217;s still a far cry from saying no one should eat wheat.</p>
<p>But the anti-wheat crowd says an alternative, more sensitive antibody test could identify <A HREF="http://web.archive.org/web/20081214094000/http://wholehealthsource.blogspot.com/2008/12/gluten-sensitivity-celiac-disease-is.html">as much as a third of the population</A> as gluten-sensitive. The test seems to have been developed by a well-respected and legitimate doctor, but it hasn&#8217;t as far as I can tell been submitted for peer review or been confirmed by any other source. Meh.</p>
<p>That&#8217;s boring anyway. The real excitement comes from sweeping declarations that <i>the entire</i> population is sensitive to wheat.</p>
<p><b>Wheat Increases Intestinal Permeability Causing A Leaky Gut</b> &#8211; <i>Probably true, of uncertain significance</i></p>
<p>There are <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/6111631">gluten-induced mucosal changes in subjects without small bowel disease</A>. And <A HREF="http://informahealthcare.com/doi/abs/10.1080/00365520500235334">gliadin increases intestinal permeability in the test tube</A>, which should be extremely concerning to any test tubes reading this.</p>
<p>But probably the bigger worry here are lectins, which include wheat germ agglutinin. WGA <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/6207112">affects the intestinal permeability of rats</A>, which should be extremely concerning to any rats reading this. The same substance has been found to <A HREF="www.researchgate.net/publication/24244425_Effects_of_wheat_germ_agglutinin_on_human_gastrointestinal_epithelium_insights_from_an_experimental_model_of_immuneepithelial_cell_interaction/file/e0b49519c6c2ce8691.pdf">produce pro-inflammatory cytokines</A> and <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/8399111">interfere with the growth of various organs including the gut</A>.</p>
<p>So there&#8217;s pretty good evidence that chemicals in wheat can increase intestinal permeability. Who cares?</p>
<p>For years, &#8220;leaky gut syndrome&#8221; was an alternative medicine diagnosis that was soundly mocked by the mainstream medical establishment. Then the mainstream medical establishment confirmed it existed and did that thing where they totally excused their own mocking of it but were ABSOLUTELY OUTRAGED that the alternative medicine community might have in some cases been overenthusiastic about it.</p>
<p>Maybe I&#8217;m being too harsh. The alternative medicine community often does take &#8220;leaky gut syndrome&#8221; <A HREF="http://www.nhs.uk/conditions/leaky-gut-syndrome/Pages/Introduction.aspx">way too far</A>.</p>
<p>On the other hand, it&#8217;s <A HREF="http://www.thedailybeast.com/articles/2014/03/27/new-research-shows-poorly-understood-leaky-gut-syndrome-is-real-may-be-the-cause-of-several-diseases.html">probably real</A> and <i>Nature Clinical Practice</i> is now <A HREF="http://www.direct-ms.org/pdf/LeakyGutMS/Fasano%20intestinal%20barrier%20autoimmunity.pdf">publishing papers</A> saying it is &#8220;a key ingredient in the pathogenesis of autoimmune diseases&#8221; and &#8220;offers innovative, unexplored approaches for the treatment of these devastating diseases&#8221; and gut health <A HREF="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3065426/">has been deemed</A> &#8220;a new objective in medicine&#8221;. Preliminary changes to intestinal permeability have been found <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/8648009">in asthma</A>, <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/19122519">in diabetes</A>, and even <A HREF="http://www.scientificamerican.com/article/gut-bacteria-may-exacerbate-depress/">in depression</A>. </p>
<p>But it&#8217;s not yet clear if this is cause and effect. Maybe the stress of having asthma increases intestinal permeability somehow. Or maybe high intestinal permeability causes asthma somehow. It sure seems like the latter might work &#8211; all sorts of weird antigens and stuff from food can make it into the bloodstream and alarm the immune system &#8211; but right now this is all speculative.</p>
<p>So what we have is some preliminary evidence that wheat increases intestinal permeability, and some preliminary evidence that increased intestinal permeability is bad for you in a variety of ways.</p>
<p>And I don&#8217;t doubt that those two facts are true, but my knowledge of this whole area is so weak that I wonder how much to worry.</p>
<p>What other foods increase intestinal permeability? Do they do it more or less than wheat? Has anyone been investigating this? Are there common things that affect intestinal permeability a thousand times more than wheat does, such that everything done by wheat is totally irrelevant in comparison?</p>
<p>Do people without autoimmune diseases suffer any danger from increased intestinal permeability? How much? Is it enough to offset the many known benefits of eating wheat (to be discussed later)? Fiber seems <A HREF="http://europepmc.org/abstract/MED/3003293">to decrease intestinal permeability</A> and most people get their fiber from bread; would decreasing bread consumption make leaky gut even worse?</p>
<p>I find this topic really interesting, but in a &#8220;I hope they do more research&#8221; sort of way, not an &#8220;I shall never eat bread ever again&#8221; sort of way.</p>
<p><b>Digestion Of Wheat Produces Opiates, Which Get You Addicted To Wheat</b> &#8211; <i>Probably false, but just true enough to be weird</i></p>
<p>Dr. William Davis, a cardiologist, most famously makes this claim in his book <i>Wheat Belly</i>. He says that gliadin (a component of gluten) gets digested into opiates, chemicals similar to morphine and heroin with a variety of bioactive effects. This makes you addicted to food in general and wheat in particular, the same way you would get addicted to morphine or heroin. This is why people are getting fat nowadays &#8211; they&#8217;re eating not because they&#8217;re hungry, but because they&#8217;re addicted. He notes that drugs that block opiates make people want wheat less.</p>
<p><A HREF="http://www.sciencedirect.com/science/article/pii/S0733521013000969">Does Wheat Make Us Fat And Sick</A>, a review published in the <i>Journal of Cereal Science</i> (they have journals for <i>everything</i> nowadays) is a good rebuttal to some of Davis&#8217; claims and a good pro-wheat resource in general.</p>
<p>They say that although gliadin does digest into opiates, those opiates are seven-unit peptides and so too big to be absorbed from the gut to the bloodstream.</p>
<p>(note that having opiates <i>in your gut</i> isn&#8217;t a great idea either since there are lots of nerves there controlling digestion that can be affected by these drugs)</p>
<p>But I&#8217;m not sure this statement about absorption is even true. First, <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/4023632">large proteins can sometimes make it out of the gut intact</A>. Second, if all that leaky gut syndrome stuff above is right, maybe the gut is unusually permeable after wheat consumption. Third, <A HREF="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3747763/">there have been sporadically reported cases of gliadin-derived opiates found in the urine</A>, which implies they got absorbed somehow.</p>
<p>There&#8217;s a better counterargument on the blog <A HREF="http://thecuriouscoconut.com/blog/is-wheat-addictive-like-heroin">The Curious Coconut</A>. She notes that there&#8217;s no evidence these peptides can cross the blood-brain barrier, a precondition for having any psychological effects. And although the opiate-blocker naloxone does decrease appetite, this effect is not preferential for wheat, and probably more related to the fact that opiates are the way the brain reminds itself it&#8217;s enjoying itself (so that opiate-blocked people can&#8217;t enjoy eating as much).</p>
<p>And then there&#8217;s the usual absence of qualifiers. Lots of things are &#8220;chemically related&#8221; to other chemicals without having the same effect; are gliadin-derived opiates addictive? Are they produced in quantities high enough to be relevant in real life? Corn, spinach, and maybe meat can all get digested into opiates &#8211; is there any evidence wheat-derived opiates are worse? This is really sketchy.</p>
<p>The most convincing counterargument is that as far as anyone can tell, wheat <A HREF="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3078018/">makes people eat less, not more</A>:<br />
<blockquote>Prospective studies suggest that weight gain and increases in abdominal adiposity over time are lower in people who consume more whole grains. Analyses of the Physicians’ Health Study (27) and the Nurses’ Health Study (26) showed that those who consumed more whole grain foods consistently weighed less than those who consumed fewer whole grain foods at each follow-up period of the study. Koh-Banerjee et al. (27) estimated that for every 40-g increase in daily whole grain intake, the 8-y weight gain was lower by 1.1 kg.</p></blockquote>
<p>I&#8217;ll discuss this in more detail later, but it does seem like a nail in the coffin for the &#8220;people eat too much because they&#8217;re addicted to wheat&#8221; theory.</p>
<p>Still, who would have thought that wheat being digested into opiates was even a <i>little</i> true?</p>
<p><b>Wheat Something Something Something Autism And Schizophrenia</b> &#8211; <i>Definitely weird</i></p>
<p>Since gluten-free diets get tried for everything, and everything gets tried for autism, it was overdetermined that people would try gluten-free diets for autism.</p>
<p>All three of the issues mentioned above &#8211; immune reactivity to gluten, leaky guts, and gliadin-derived opiates &#8211; have been suggested as mechanisms for why gluten free diets might be useful in autism.</p>
<p>Of studies that have investigated, <A HREF="http://www.edb.utexas.edu/education/assets/files/ltc/gfcf_review.pdf">a review found</A> that seven reported positive results, four negative results, and two mixed results &#8211; but that all of the studies involved were terrible and the ones that were slightly less terrible seemed to be more negative. The authors described this as evidence against gluten-free diets for autism, although someone with the opposite bias could have equally well looked at the same review and described it as supportive.</p>
<p>However, a very large epidemiological study found (<A HREF="http://www.autismspeaks.org/science/science-news/autism-study-finds-no-link-celiac-disease-gluten-reactivity-real">popular article</A>, <A HREF="http://archpsyc.jamanetwork.com/article.aspx?articleid=1743008">study abstract</A>) that people with antibodies to gluten had three times the incidence of autism spectrum disease than people without, and that the antibodies preceded the development of the condition. </p>
<p>Also, those wheat-derived opioids from the last section &#8211; as well as milk-derived opioids called casomorphins &#8211; <A HREF="http://atcnts.com/wp-content/uploads/Pathophysiology_of_ASD_and_urinalysis.pdf">seem to be detected at much higher rates in autistic people</A>. </p>
<p>Both of these factors may have less to do with wheat in particular and more to do with some general dysregulation of peptide metabolism in autism. If for some reason the gut kept throwing peptides into the body inappropriately, this would disrupt neurodevelopment, lead to more peptides in the urine, and give the immune system more chance to react to gluten.</p>
<p>The most important thing to remember here is that it would be really wrong to say wheat might be &#8220;the cause&#8221; of autism. Most likely people do not improve on gluten-free diets. While there&#8217;s room to argue that people might have picked up a small signal of them improving <i>a little</i>, the idea that this totally removes the condition is right out. If we were doing this same study with celiac disease, we wouldn&#8217;t be wasting our time with marginally significant results. Besides, we know autism is multifactorial, and we know it probably begins in utero.</p>
<p>Schizophrenia right now is in a similar place. Schizophrenics are <A HREF="http://evolutionarypsychiatry.blogspot.com/2013/11/gluten-and-schizophrenia-again-with.html">five to seven times more likely</A> to have anti-gliadin antibodies as the general population. We can come up with all sorts of weird confounders &#8211; maybe antipsychotic medications increase gut permeability? &#8211; but that&#8217;s a really strong result. And schizophrenics have frank celiac disease at <A HREF="http://www.psychologytoday.com/blog/evolutionary-psychiatry/201103/wheat-and-schizophrenia-0">five to ten times</A> the rate of the general population. Furthermore, a certain subset of schizophrenics sees <A HREF="http://bioinformatics.pbf.hr/cms/images/jura/nutrigen13/seminars/schizophrenia_celiac.pdf">a dramatic reduction in symptoms</A> when put on a strict gluten-free diet (this is psychiatrically useless, both because we don&#8217;t know which subset, and because given how much trouble we have getting schizophrenics to swallow one lousy pill every morning, the chance we can get them to stick to a gluten-free diet is basically nil). And like those with autism, schizophrenics show increased levels of weird peptides in their urine.</p>
<p>But a lot of patients with schizophrenia don&#8217;t have reactions to gluten, a lot don&#8217;t improve on a gluten free diet, and other studies question the research showing that any of them at all do.</p>
<p>The situation here looks a lot like autism &#8211; a complex multifactorial process that probably isn&#8217;t caused by gluten but where we see interesting things going on in the vague territory of gluten/celiac/immune response/gut permeability/peptides, with goodness only knows which ones come first and which are causal.</p>
<p><b>Wheat Has Been Genetically Modified Recently In Ways That Make It Much Worse For You</b> &#8211; <i>Probably true, especially if genetically modified means &#8220;not genetically modified&#8221; and &#8220;recently&#8221; means &#8220;nine thousand years ago&#8221;</i></p>
<p>If you want to blame the &#8220;obesity epidemic&#8221; or &#8220;autism epidemic&#8221; or any other epidemic on wheat, at some point you have to deal with people eating wheat for nine thousand years and not getting epidemics of these things. Dr. Davis and other wheat opponents have turned to claims that wheat has been &#8220;genetically modified&#8221; in ways that improve crop yield but also make it more dangerous. Is this true?</p>
<p>Wheat has not been genetically modified in the classic sense, the one where mad scientists with a god complex inject genes from jellyfish into wheat and all of a sudden your bread has tentacles and every time you try to eat it it stings you. But it has been modified in the same way as all of our livestock, crops, and domestic pets &#8211; by selective breeding. Modern agricultural wheat doesn&#8217;t look much like its ancient wild ancestors.</p>
<p>The <i>Journal Of Cereal Science</i> folk don&#8217;t seem to think this is terribly relevant. They <A HREF="http://www.sciencedirect.com/science/article/pii/S0733521013000969">say</A>:<br />
<blockquote>Gliadins are present in all wheat lines and in related wild species. In addition, seeds of certain ancient types of tetraploid wheat have even greater amounts of total gliadin than modern accessions&#8230;There is no evidence that selective breeding has resulted in detrimental effects on the nutritional properties or health benefits of the wheat grain, with the exception that the dilution of other components with starch occurs in modern high yielding lines (starch comprising about 80% of the grain dry weight). Selection for high protein content has been carried out for bread making, with modern bread making varieties generally containing about 1–2% more protein (on a grain dry weight basis) than varieties bred for livestock feed when grown under the same conditions. However, this genetically determined difference in protein content is less than can be achieved by application of nitrogen fertilizer. We consider that statements made in the book of Davis, as well as in related interviews, cannot be substantiated based on published scientific studies.</p></blockquote>
<p>In support of this proposition, in the test tube ancient grains <A HREF="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3354720/">were just as bad</A> for celiac patients&#8217; immune systems as modern ones.</p>
<p>And yet in one double-blind randomized-controlled trial, people with irritable bowel syndrome <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/24521561?dopt=Citation">felt better</A> on a diet of ancient grains than modern ones (p < 0.0001); and in another, people on an ancient grain diet had <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/23299714">lower inflammatory markers and generally better nutritional parameters</A> than people on a modern grain one. Isn&#8217;t that interesting?</p>
<p>Even though it&#8217;s a little bit weird and I don&#8217;t think anyone understands the exact nutrients at work, sure, let&#8217;s give this one to the ancient grain people.</p>
<p><b>The Lectins In Wheat Interfere With Leptin Receptors, Making People Leptin Resistant And Therefore Obese</b> &#8211; <i>Currently at &#8220;mere assertion&#8221; level until I hear some evidence</i></p>
<p>So here&#8217;s the argument. Your brain has receptors for the hormone leptin, which tells you when to stop eating. But &#8220;lectin&#8221; sounds a lot like &#8220;leptin&#8221;, and this confuses the receptors, so they give up and tell you to just eat as much as you want.</p>
<p>Okay, this probably isn&#8217;t the real argument. But even though a lot of wheat opponents cite the heck out of this theory, the only presentation of evidence I can find is <A HREF="http://www.biomedcentral.com/1472-6823/5/10">Jonsson et al (2005)</A>, which points out that there are a lot of diseases of civilization, they seem to revolve around leptin, something common to civilization must be causing them, and maybe that thing could be lectin.</p>
<p>But civilization actually contains more things than a certain class of proteins found in grains! There&#8217;s poor evidence of lectin actually interfering with the leptin receptor in humans. The only piece of evidence they provide is a nonsignificant trend toward more cardiovascular disease in people who eat more whole grains in one study, and as we will see, that is wildly contradicted by all other studies.</p>
<p>This one does not impress me much.</p>
<p><b>Wheat Is Actually Super Good For You And You Should Have It All The Time</b> &#8211; <i>Probably more evidence than the other claims on this list</i></p>
<p>Before I mention any evidence, let me tell you what we&#8217;re going to find.</p>
<p>We&#8217;re going to find very, very many large studies finding conclusively that whole grains are great in a lot of different ways.</p>
<p>And we&#8217;re not going to know whether it&#8217;s at all applicable to the current question.</p>
<p>Pretty much all these studies show that people with some high level of &#8220;whole grain consumption&#8221; are much healthier than people with some lower level of same. That sounds impressive.</p>
<p>But what none of these studies are going to do a good job ruling out is that whole grain is just funging against refined grain which is even worse. Like maybe the people who report low whole grain consumption are eating lots of refined grain, and so more total grain, and the high-whole-grain-consumption people are actually eating less grain total.</p>
<p>They&#8217;re also not going to rule out the universal problem that if something is widely known to be healthy (like eating whole grains) then the same health-conscious people who exercise and eat lots of vegetables will start doing it, so when we find that the people doing it are healthier, for all we know it&#8217;s just that the people doing it are exercising and eating vegetables.</p>
<p>That having been said, eating lots of whole grain decreases BMI, metabolic risk factors, fasting insulin, and body weight (<A HREF="http://ajcn.nutrition.org/content/76/2/390.full">1</A>, <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/18005489/">2</A>, <A HREF="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3078018/#bib35">3</A>, <A HREF="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3078018/#bib37">4</A>, <A HREF="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3078018/#bib27">5</A>).</p>
<p>The American Society For Nutrition Symposium <A HREF="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3078018/">says</A>:<br />
<blockquote>Several mechanisms have been suggested to explain why whole grain intake may play a role in body weight management. Fiber content of whole grain foods may influence food volume and energy density, gastric emptying, and glycemic response. Whole grains has also been proposed to play an important role in promoting satiety; individuals who eat more whole grain foods may eat less because they feel satisfied with less food. Some studies comparing feelings of fullness or actual food intake after ingestion of certain whole grains, such as barley, oats, buckwheat, or quinoa, compared with refined grain controls indicated a trend toward increased satiety with whole grains. These data are in accordance with analyses determining the satiety index of a large number of foods, which showed that the satiety index of traditional white bread was lower than that of whole grain breads. However, in general, these satiety studies have not observed a reduction in energy intake; hence, further research is needed to better understand the satiety effects of whole grains and their impact on weight management.</p>
<p>Whole grains, in some studies, have also been observed to lower the glycemic and insulin responses, affect hunger hormones, and reduce subsequent food intake in adults. Ingestion of specific whole grains has been shown to influence hormones that affect appetite and fullness, such as ghrelin, peptide YY, glucose-dependent insulinotropic polypeptide, glucagon-like peptide 1, and cholecystokinin. Whole grain foods with fiber, such as wheat bran or functional doses of high molecular weight β-glucans, compared with lower fiber or refined counterparts have been observed to alter gastric emptying rates. Although it is likely that whole grains and dietary fiber may have similar effects on satiety, fullness, and energy intake, further research is needed to elucidate how, and to what degree, short-term satiety influences body weight in all age groups.</p>
<p>Differences in particle size of whole grain foods may have an effect on satiety, glycemic response, and other metabolic and biochemical (leptin, insulin, etc.) responses. Additionally, whole grains have been suggested to have prebiotic effects. For example, the presence of oligosaccharides, RS, and other fermentable carbohydrates may increase the number of fecal bifidobacteria and lactobacilli (49), thus potentially increasing the SCFA production and thereby potentially altering the metabolic and physiological responses that affect body weight regulation.</p>
<p>In summary, the current evidence among a predominantly Caucasian population suggests that consuming 3 or more servings of whole grains per day is associated with lower BMI, lower abdominal adiposity, and trends toward lower weight gain over time. However, intervention studies have been inconsistent regarding weight loss.</p></blockquote>
<p>The studies that combined whole and refined grains are notably fewer. But <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/16339127">Dietary Intake Of Whole And Refined Grain Breakfast Cereals And Weight Gain In Men</A> finds that among 18,000 male doctors, those who ate breakfast cereal (regardless of whether it was whole or refined) were less likely to become overweight several years later than those who did not (p = 0.01). A <A HREF="http://books.google.com/books?id=2ACjAgAAQBAJ&#038;pg=PA266&#038;lpg=PA266&#038;dq=whole+refined+grains+%3Dand+BMI&#038;source=bl&#038;ots=G_NOzHLmKL&#038;sig=46VKixvNv3JaYcWNd7ptuBe5otE&#038;hl=en&#038;sa=X&#038;ei=CKk4U4CPJIqqsQTpzYCAAw&#038;ved=0CC8Q6AEwATgK#v=onepage&#038;q=whole%20refined%20grains%20%3Dand%20BMI&#038;f=false">book with many international studies</A> reports several that find a health benefit of whole grains, several that find a health benefit of all grains (Swedes who ate more grains had lower abdominal obesity; Greeks who ate a grain-rich diet were less likely to become obese; Koreans who ate a &#8220;Westernized&#8221; bread-and-dairy diet were less likely to have abdominal obesity), and no studies that showed any positive association between grains and obesity, whether whole or refined.</p>
<p>I cannot find good interventional trials on what happens when a population replaces non-grain with grain.</p>
<p>On the other hand, Dr. Davis, in his book <i>Wheat Belly</i>, claims:<br />
<blockquote>Typically, people who say goodbye to wheat lose a pound a day for the first 10 days. Weight loss then slows to yield 25-30 pounds over the subsequent 3-6 months (differing depending on body size, quality of diet at the start, male vs. female, etc.) </p>
<p>Recall that people who are wheat-free consume, on average, 400 calories less per day and are not driven by the 90-120 minute cycle of hunger that is common to wheat. It means you eat when you are hungry and you eat less. It means a breakfast of 3 eggs with green peppers and sundried tomatoes, olive oil, and mozzarella cheese at 7 am and you’re not hungry until 1 pm. That’s an entirely different experience than the shredded wheat cereal in skim milk at 7 am, hungry for a snack at 9 am, hungry again at 11 am, counting the minutes until lunch. Eat lunch at noon, sleepy by 2 pm, etc. All of this goes away by banning wheat from the diet, provided the lost calories are replaced with real healthy foods.</p></blockquote>
<p>Needless to say, he has no studies supporting this assertion. But the weird thing is, his message board is full of people who report having exactly this experience, my friends who have gone paleo have reported exactly this experience, and when I experimented with it, I had pretty much exactly this experience. Even the blogger from whom I took some of the strongest evidence criticizing Davis says <A HREF="http://thecuriouscoconut.com/blog/is-wheat-addictive-like-heroin">she had exactly this experience</A>.</p>
<p>The first and most likely explanation is that anecdotal evidence sucks and we should shut the hell up. Are there other, less satisfying explanations?</p>
<p>Maybe completely removing wheat from the diet has a nonlinear effect relative to cutting down on it? For example, in celiac disease there is no such thing as &#8220;partially gluten free&#8221; &#8211; if you have any gluten at all, your disease comes back in full force. This probably wouldn&#8217;t explain Dr. Davis&#8217; observation &#8211; neither I nor my other wheatless-experimentation friends were as scrupulous as a celiac would have to be. But maybe there&#8217;s a nonlinear discrepancy between people who have 75% the wheat of a normal person and 10% the wheat of a normal person?</p>
<p>Maybe there&#8217;s an effect where people who like wheat but remove it from the diet are eating things they don&#8217;t like, and so eat less of them? But people who don&#8217;t like wheat like other stuff, and so eat lots of that?</p>
<p>Maybe wheat in those studies is totally 100% a confounder for whether people are generally healthy and follow their doctor&#8217;s advice, and the rest of the doctor&#8217;s advice is really good but the wheat itself is terrible?</p>
<p>Maybe cutting out wheat has really positive short-term effects, but neutral to negative long-term effects?</p>
<p>Maybe as usual in these sorts of situations, <A HREF="http://www.troll.me/images/alien-man/aliens.jpg">the simplest explanation</A> is best.</p>
<p><b>Final Thoughts</b></p>
<p>Non-celiac gluten sensitivity is clearly a real thing. It seems to produce irritable bowel type symptoms. If you have irritable bowel type symptoms, it might be worth trying a gluten-free diet for a while. But the excellent evidence for its existence doesn&#8217;t seem to carry over to the normal population who don&#8217;t experience bowel symptoms.</p>
<p>What these people have are vague strands of evidence. Something seems to be going on with autism and schizophrenia &#8211; but most people don&#8217;t have autism or schizophrenia. The intestinal barrier seems to become more permeable with possible implications for autoimmune diseases &#8211; but most people don&#8217;t have autoimmune disease. Some bad things seem to happen in rats and test tubes &#8211; but most people aren&#8217;t rats or test tubes.</p>
<p>You&#8217;d have to take a position of maximum caution &#8211; wheat seems to do all these things, and even though none of them in particular obviously hurts me directly, all of them together make it look like the body just doesn&#8217;t do very well with this substance, and probably other ways the body doesn&#8217;t do very well with this substance will turn up, and some of them probably affect me.</p>
<p>There&#8217;s honor in a position of maximum caution, especially in a field as confusing as nutrition. It would not surprise me if the leaky gut connection turned into something very big that had general implications for, for example, mental health. And then people who ate grain might regret it.</p>
<p>But stack that up against the pro-wheat studies. None of them are great, but they mostly do something the anti-wheat studies don&#8217;t: show direct effect on things that are important to you. Most people don&#8217;t have autism or schizophrenia, but most people <i>do</i> have to worry about cardiovascular disease. We <i>do</i> have medium-term data that wheat doesn&#8217;t cause cancer, or increase obesity, or contribute to diabetes, or any of that stuff, and at this point solely based on the empirical data it seems much more likely to help with those things than hurt.</p>
<p>I hope the role of intestinal permeability in autoimmune disease gets the attention it deserves &#8211; and when it does, I might have to change my mind. I hope people stop being jerks about gluten sensitivity, admit it exists, and find better ways to deal with it. And if people find that eliminating bread from their diet makes them feel better or lose weight faster, cool.</p>
<p>But as far as I can tell the best evidence is on the pro-wheat side of things for most people at most times.</p>
<p>[<b>EDIT:</b> An especially good summary of the anti-wheat position is <A HREF="http://authoritynutrition.com/6-ways-wheat-can-destroy-your-health/">6 Ways Wheat Can Destroy Your Health</A>. An especially good pro-wheat summary is <A HREF="http://www.sciencedirect.com/science/article/pii/S0733521013000969">Does Wheat Make Us Fat And Sick?</A>]</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/03/30/wheat-much-more-than-you-wanted-to-know/feed/</wfw:commentRss>
		<slash:comments>63</slash:comments>
		</item>
		<item>
		<title>E-Cig Study: Much Smoke, Little Light</title>
		<link>http://slatestarcodex.com/2014/03/25/e-cig-study-much-smoke-little-light/</link>
		<comments>http://slatestarcodex.com/2014/03/25/e-cig-study-much-smoke-little-light/#comments</comments>
		<pubDate>Tue, 25 Mar 2014 22:43:44 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[medicine]]></category>
		<category><![CDATA[statistics]]></category>
		<category><![CDATA[studies]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=1758</guid>
		<description><![CDATA[New study shows that e-cigarette users are no more likely to quit smoking tobacco after a year than non-e-cigarette users. In fact, the trend is in the opposite direction &#8211; e-cigarette users are less likely to give up their regular &#8230; <a href="http://slatestarcodex.com/2014/03/25/e-cig-study-much-smoke-little-light/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>New study shows that e-cigarette users are <A HREF="http://archinte.jamanetwork.com/article.aspx?articleid=1846627">no more likely</A> to quit smoking tobacco after a year than non-e-cigarette users. In fact, the trend is in the opposite direction &#8211; e-cigarette users are less likely to give up their regular cigarettes. I&#8217;m <A HREF="http://slatestarcodex.com/2013/03/28/thank-you-for-doing-something-ambiguously-between-smoking-and-not-smoking/">skeptical. r/science is <A HREF="http://www.reddit.com/r/science/comments/219bgd/electronic_cigarettes_dont_aid_quitting_study_says/"> skeptical</A>. The experts are <A HREF="http://www.nature.com/news/electronic-cigarettes-don-t-aid-quitting-study-says-1.14918">skeptical</A>. Even the authors of the study sound maybe a <i>little</i> <A HREF="http://newsatjama.jama.com/2014/03/24/author-insights-e-cigarettes-not-associated-with-quitting/">skeptical</A>.</p>
<p>The study surveyed tobacco smokers for various information including whether they smoked e-cigarettes in addition to their tobacco. A year later, they went back and surveyed everyone again and asked them if they were still smoking tobacco. And the people smoking the e-cigarettes were no more likely to have quit than the others.</p>
<p>Let&#8217;s transition from reality to Hypothetical World. In Hypothetical World, there are only two kinds of smokers, Short Smokers and Long Smokers. The moment someone smokes their first cigarette, God flips a coin and assigns them to one of the two groups based on the result. Short Smokers are predestined to smoke for exactly one year before quitting; Long Smokers are predestined to smoke for exactly fifty years before quitting.</p>
<p>A scientist in Hypothetical World wants to discover what percent of first-time smokers become Short Smokers versus Long Smokers (the real proportion is 50-50 since God&#8217;s coin is fair, but she doesn&#8217;t know that). So she uses the same methodology as this study. She hangs around a tobacco shop and accosts the first thousand people who come in to buy cigarettes, getting their names and phone numbers. Then a year and a day later, she calls them all up to ask if they are still smoking &#8211; since anyone who keeps smoking for a year and a day must be a Long Smoker.</p>
<p>So she finds something close to 2% of people are Short Smokers and a whopping 98% are Long Smokers. She incorrectly concludes that God is rolling a d100 and only assigning Short Smoker status to those who come up 99 or 00.</p>
<p>Don&#8217;t see why she would make this mistake? Consider a particular generation of Hypothetical people over their lifetimes. The Short Smokers will only smoke a single year out of their lifetime; the Long Smokers will smoke fifty years. When the scientist does her study in a randomly selected year, she only has a 1/average_lifespan chance of catching any given Short Smoker, but a 50/average_lifespan chance of catching a Long Smoker. So, her original sample will contain fifty times more Long Smokers than Short Smokers, and she will mistakenly conclude that their pattern is fifty times more common.</p>
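<p>This length-biased sampling effect is easy to check with a quick simulation. The sketch below is illustrative, not from the post itself; it just uses the thought experiment&#8217;s own numbers (fair coin, 1-year vs. 50-year smokers):</p>

```python
import random

random.seed(0)

# Hypothetical World: God's coin is fair, so half of all smokers are
# Short Smokers (1 year) and half are Long Smokers (50 years). The
# scientist samples whoever happens to be smoking at one moment in time.
N = 1_000_000
HORIZON = 1000   # years over which people take up smoking
OBSERVE = 500    # the year of the scientist's survey

sampled_durations = []
for _ in range(N):
    start = random.uniform(0, HORIZON)
    duration = 1 if random.random() < 0.5 else 50
    if start <= OBSERVE < start + duration:   # still smoking on survey day
        sampled_durations.append(duration)

short = sum(1 for d in sampled_durations if d == 1)
frac_short = short / len(sampled_durations)
print(f"Short Smokers in cross-section: {frac_short:.1%}")
```

<p>Despite the true 50-50 split, the cross-section comes out near 1/51 &#8211; roughly 2% Short Smokers &#8211; because a Long Smoker is fifty times more likely to be &#8220;caught&#8221; smoking on any given day.</p>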
<p>Now transition back to reality. Suppose there are two types of e-cigarette users &#8211; successful and unsuccessful. The successful e-cigarette users try e-cigarettes, immediately decide they are better than regular cigarettes, and switch to using e-cigarettes exclusively within one month. The unsuccessful e-cigarette users try e-cigarettes but just don&#8217;t get everything they love about tobacco from them. They sort of futz around with e-cigarettes and regular cigarettes and tell themselves that one of these days, they&#8217;re really going to stop the regular ones entirely and transition totally to e-cigarettes. These people continue futzing for let&#8217;s say ten years before they either finally quit tobacco, give up on e-cigarettes, or die.</p>
<p>In that case, any sample of tobacco smokers taken at a particular time will include a hundred twenty times as many unsuccessful e-cigarette users as successful ones. We expect unsuccessful e-cigarette users to continue their past pattern of futzing around, so it&#8217;s not surprising that this sort of sample finds most e-cigarette users not only can&#8217;t easily quit tobacco using e-cigarettes, but actually have a harder time quitting tobacco than normal smokers &#8211; they&#8217;ve already been preselected as The Group That Even E-Cigarettes Can&#8217;t Help; as The Group That Tried Something Billed As An Anti-Smoking Aid But Failed At It. It&#8217;s a pretty general rule of medicine that people who failed treatment once are more likely to fail treatment a second time.</p>
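<p>The &#8220;hundred twenty times&#8221; figure falls straight out of the same arithmetic, under the stipulated (hypothetical) durations of one month for successful switchers and ten years for the futzers:</p>

```python
# Length-biased sampling applied to the e-cigarette case: successful
# switchers are observable as tobacco-plus-e-cig users for ~1 month,
# unsuccessful "futzers" for ~10 years (120 months).
months_successful = 1
months_unsuccessful = 10 * 12

oversampling_ratio = months_unsuccessful / months_successful
print(oversampling_ratio)  # 120.0
```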
<p>This is a very speculative explanation and I haven&#8217;t heard anyone respectable at a major institution advance it yet, but it seems to me like the most likely reason for these findings. All I have to go on with the study right now is a preliminary &#8220;research letter&#8221;, but hopefully we&#8217;ll know more when the real thing comes out.</p>
<p>Lest this post be <i>entirely</i> pro-drug, here&#8217;s a clip of my addiction-medicine teacher and sometime-boss <A HREF="http://www.myfoxdetroit.com/story/25049309/let-it-rip-weekend-going-to-pot-special-edition">lecturing people about marijuana on Fox News</A> last weekend. He is a great doctor and it&#8217;s neat to see him finally getting some of the celebrity he deserves. Even though his politics are terrible.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/03/25/e-cig-study-much-smoke-little-light/feed/</wfw:commentRss>
		<slash:comments>24</slash:comments>
		</item>
		<item>
		<title>Genetic Testing and Self-Fulfilling Prophecies</title>
		<link>http://slatestarcodex.com/2014/01/19/genetic-testing-and-self-fulfilling-prophecies/</link>
		<comments>http://slatestarcodex.com/2014/01/19/genetic-testing-and-self-fulfilling-prophecies/#comments</comments>
		<pubDate>Mon, 20 Jan 2014 04:40:57 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[medicine]]></category>
		<category><![CDATA[psychology]]></category>
		<category><![CDATA[studies]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=1378</guid>
		<description><![CDATA[Lineweaver et al tested 144 elderly adults for the ApoE4 gene, which is known to be a major risk factor for Alzheimers. They told half of them their test results, kept it secret from the other half, then waited. Eight &#8230; <a href="http://slatestarcodex.com/2014/01/19/genetic-testing-and-self-fulfilling-prophecies/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Lineweaver et al tested 144 elderly adults for the ApoE4 gene, which is known to be a major risk factor for Alzheimers. They told half of them their test results, kept it secret from the other half, then waited. Eight months later, they asked people how they thought their memory was doing and gave everyone objective memory tests.</p>
<p>No one in the study population had Alzheimers yet, so everyone did okay on the memory test. But subjects who knew they had ApoE4 did significantly worse than subjects who did have ApoE4 but didn&#8217;t know it. The subjects who knew they <i>didn&#8217;t</i> have ApoE4 didn&#8217;t do any better on the memory test than other subjects who didn&#8217;t have ApoE4 but had not yet heard the good news, but they did give subjectively better ratings of their memory ability.</p>
<p>The medical community concludes from this that letting people know their genetic risks may be dangerous, and although I hate to admit it, they have a point.</p>
<p>Unfortunately, <A HREF="http://slatestarcodex.com/Stuff/alzheimersgene.pdf">the study</A> doesn&#8217;t give the methodological details I need to really understand the implications.</p>
<p>We know that the researchers waited eight months between giving the genetic test results and doing the memory tests. That&#8217;s good.</p>
<p>But was it the same researcher doing both the genes and the memory parts of the study? Was it in the same building? Did they start the memory tests by saying &#8220;Hi! I&#8217;m Dr. Lineweaver! You may remember me from such medical experiments as the one eight months ago in which you were told that you had a high risk of getting Alzheimer&#8217;s disease&#8221;?</p>
<p>Or did they sneakily pretend to be a separate study entirely and try to avoid mentioning the A-word throughout?</p>
<p>It would not surprise me if &#8211; having been primed with a reminder of their Alzheimer testing results &#8211; the subjects then performed worse on a memory test that was given immediately after in an obviously related context.</p>
<p>It would be much more surprising &#8211; though still not totally unbelievable &#8211; if subjects, having been told they had a high risk of Alzheimers, just went around for eight months having slightly worse memory which was reflected on everything they did including the memory test administered by the researchers.</p>
<p>The New England Journal of Medicine <A HREF="http://www.jwatch.org/na32799/2013/11/25/stereotype-threat-and-genotyping-alzheimer-disease">compares the finding</A> to &#8220;stereotype threat&#8221;, the phenomenon in which, for example, people can sometimes make women perform worse on math tests simply by telling them that it is a &#8220;test of their innate mathematical abilities&#8221; &#8211; something that women are stereotypically bad at.</p>
<p>The memory tests the researchers were giving are equivalent to the &#8220;innate mathematical abilities&#8221; condition in the stereotype threat research &#8211; a test clearly intended to measure how good their memory was in a very scientific way. The activities of daily living that require memory &#8211; keeping appointments, paying bills on time, et cetera &#8211; are the equivalent of the condition in stereotype threat experiments where researchers just give women a normal math test without introduction and stereotype threat is not seen.</p>
<p>So I see two ways in which we could get results like the ones in this study without any broader implications of ApoE4 testing harming the elderly in general.</p>
<p>First, being called to the same study in which the ApoE4 results were given could have primed their worries about Alzheimers and made them do especially bad on the study&#8217;s memory test compared to their usual memory.</p>
<p>Second, the study&#8217;s memory test could have been official-looking enough that it activated their stereotype of themselves as having innately poor memory, when, concordant with stereotype threat research, that stereotype doesn&#8217;t harm their everyday memory-requiring activities.</p>
<p>In either of these cases, the study would have some very limited implications, which the authors describe in an appropriately circumscribed way: &#8220;The patient&#8217;s knowledge of his or her genotype and risk of Alzheimer&#8217;s disease should be considered when evaluating cognition in the elderly.&#8221;</p>
<p>But this would not imply that genetic testing elderly people for ApoE4 is risky and can itself cause them to develop forgetfulness and other Alzheimer&#8217;s symptoms.</p>
<p>I worry that the medical community is going to miss this subtlety and start &#8220;raising awareness&#8221; of the possibility that genetic testing can cause harmful side effects, before finishing the hard task of discovering if that&#8217;s actually true.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2014/01/19/genetic-testing-and-self-fulfilling-prophecies/feed/</wfw:commentRss>
		<slash:comments>31</slash:comments>
		</item>
		<item>
		<title>Science &amp; Medicine Links for August</title>
		<link>http://slatestarcodex.com/2013/08/10/science-medicine-links-for-august/</link>
		<comments>http://slatestarcodex.com/2013/08/10/science-medicine-links-for-august/#comments</comments>
		<pubDate>Sat, 10 Aug 2013 12:07:37 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[medicine]]></category>
		<category><![CDATA[science]]></category>
		<category><![CDATA[studies]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=902</guid>
		<description><![CDATA[Case report from the BMJ that would also make a good Twilight Zone episode: Woman hallucinates ghost children. Husband takes pictures of scene to try to prove that there&#8217;s nobody there. Woman sees exact same hallucinations in the photographs. Woman &#8230; <a href="http://slatestarcodex.com/2013/08/10/science-medicine-links-for-august/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Case report from the BMJ that would also make a good Twilight Zone episode: Woman hallucinates ghost children. Husband takes pictures of scene to try to prove that there&#8217;s nobody there. Woman <A HREF="http://mindhacks.com/2013/05/29/photographing-hallucinations/">sees exact same hallucinations in the photographs</A>. Woman takes some psychiatric drugs, mostly stops having hallucinations, but still sees the hallucinatory ghost children in the (empty to everyone else) old photos. Psychiatry is <i>weird</i>, and/or possibly haunted.</p>
<p>A very strange but creative methodology by which to study the notoriously complicated field of diet: scientists find that <A HREF="http://lesswrong.com/r/discussion/lw/hoh/weak_evidence_that_eating_vegetables_makes_you/">a gene that makes vegetables taste better also makes you live longer</A>. Weak evidence suggesting that eating more vegetables makes you live longer? Maybe!</p>
<p><A HREF="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3222234/">A Critical Review of the First Ten Years of Candidate Gene by Environment Interaction Research in Psychiatry</A>. Key phrase: &#8220;Ninety-six percent of novel cG×E studies were significant compared with 27% of replication attempts.&#8221; Note that gene x environment interaction studies are a very particular kind of study that is especially easy to do bad work on and this might not generalize to other types of genetics research &#8211; but that at least to some degree it probably does.</p>
<p>A while ago there was great excitement at the discovery that the drug rapamycin extended lifespan in mice. Although this finding has since been replicated and seems broadly correct, the bad news is that it <A HREF="http://www.jci.org/articles/view/67674">now seems clear</A> that the drug <A HREF="http://www.sciencedaily.com/releases/2013/07/130725141715.htm">just treats some very specific deadly pathologies</A> (like cancer) and does not fight or slow aging. Although if the bad news is that a drug cures cancer, we&#8217;re still doing pretty well.</p>
<p>But if you absolutely must have some miracle substance that might cure aging in lab animals to be excited about, you&#8217;ll be happy to know that <A HREF="http://www.sci-news.com/biology/article01170-rhodiola-extract-lifespan-drosophila.html">rhodiola extends the lifespans of fruit flies 24% and delays age-related loss in physical performance</A>. Also it <A HREF="http://en.wikipedia.org/wiki/Rhodiola_rosea#Scientific_evidence">might be a nootropic or antidepressant or something</A>.</p>
<p>Speaking of miracles, <A HREF="http://psychcentral.com/news/2013/06/19/skin-abnormality-may-prove-biological-basis-for-fibromyalgia/56233.html">Skin Abnormality May Prove Biological Basis For Fibromyalgia</A>. I predict this will probably turn out to be nothing, the same way everyone was super excited a few years ago that we&#8217;d discovered that the <i>real</i> cause of multiple sclerosis was venous outflow obstruction and then it didn&#8217;t replicate, but until then at least fibromyalgia sufferers will get a few good years in of &#8220;SEE! I TOLD YOU IT WAS BIOLOGICAL AND YOU DIDN&#8217;T BELIEVE ME!&#8221;</p>
<p>Not technically a study but a good thing to include here: <A HREF="http://pipeline.corante.com/archives/2013/06/21/eight_toxic_foods_a_little_chemical_education.php">Eight Toxic Foods and a Little Chemical Education</A>. Describes some of the scare claims the media sometimes makes about chemicals and health risks and dissects them carefully and rigorously.</p>
<p>And if that was too basic for you, here&#8217;s the Epic-level version of the same thing: <A HREF="http://thelastpsychiatrist.com/2010/04/deconstructing_a_promotional_s.html">the Last Psychiatrist dissects claims made in a presentation on the drug Geodon</A>. This is old, but I just found it and it terrifies me, in that I thought I knew what to look for and yet this study would have completely passed all the filters I usually have to protect myself from this sort of thing. A good example of how a drug company can run a seemingly rigorous study that stays far away from anything even resembling data falsification or cover-ups &#8211; and yet still get exactly the results they want.</p>
<p>Here&#8217;s Scientific American giving a good exposition of <A HREF="http://www.scribd.com/doc/155870078/Tononi-New-Hypothesis-Explains-Why-We-Sleep-Scientific-American">one of the best current theories about why we sleep</A>. Also, it apparently has evidence behind it now, which it didn&#8217;t the last time I heard about it. Still doesn&#8217;t really explain <A HREF="http://www.overcomingbias.com/2012/10/sleep-is-to-save-energy.html">why some people can go without sleep completely</A>, but maybe that&#8217;s why they brought in the &#8220;local sleep&#8221; points.</p>
<p>Back when people realized it was easy to get positive results from a drug for spurious reasons, they started adding a control group to the experiment. Now that people have realized it&#8217;s easy to get positive results from a controlled trial for spurious reasons, is it time to go one meta-level up and add a control experiment on to the study? One group takes an experiment used to &#8220;prove&#8221; that SSRIs cause gastric bleeding, compares it to dozens of &#8220;control experiments&#8221; run with drugs that don&#8217;t cause gastric bleeding, and finds that, although the real experiment reported positive results, it <A HREF="http://onlinelibrary.wiley.com/doi/10.1002/sim.5925/full">in fact performs no differently from the placebo experiments</A>. This is <i>really</i> clever although probably impractical in most cases.</p>
<p>Psychotherapy over the Internet works at least as well and probably better than face-to-face psychotherapy, <A HREF="http://www.mediadesk.uzh.ch/articles/2013/psychotherapie-via-internet-wirkt-gleich-gut-oder-besser-wie-im-sprechzimmer_en.html">says a study this month</A>, adding to the small mountain of evidence saying the same. A friend of mine uses online psychotherapy and says it&#8217;s easier and more productive because the therapist is less of a Terrifying Authority Figure. Also good for people who want a psychologist who will have severe difficulty calling the cops on them and having them committed. Also good for social phobics who <i>are currently required to leave the house and hang out at a busy medical office if they want to get treatment for their social phobia who the heck came up with this system?</i></p>
<p>&#8220;Adoption study of human obesity&#8221; sounds like something you would get from a Things Scott Is Interested In Mad Libs, along with &#8220;utilitarian behavioral genetics&#8221; or &#8220;double-blind placebo-controlled cuddling of cute girls&#8221;. But it turns out this is a real field that various people have looked into, and the conclusion of <A HREF="http://books.google.com/books?id=Z9eBvuQccfkC&#038;pg=PA50&#038;lpg=PA50&#038;dq=adoption+study+obesity&#038;source=bl&#038;ots=X6OFYd6VKO&#038;sig=7qSVFW304KYwRn38vzFFkcvLsIk&#038;hl=en&#038;sa=X&#038;ei=P8v2Ufn_O8fuyQH8kYGABg&#038;ved=0CDAQ6AEwATgK#v=onepage&#038;q=adoption%20study%20obesity&#038;f=false">most of the studies</A>, including a <A HREF="http://www.ncbi.nlm.nih.gov/m/pubmed/3941707/">very rigorous one in Denmark published in NEJM</A> and a <A HREF="http://ajcn.nutrition.org/content/87/2/398.long">huge UK one by Robert Plomin</A> agree that whether the parents who raise you are obese has zero impact on whether you will become obese, but whether your biological parents whom you may never meet are obese has massive impact on whether you will become obese. This doesn&#8217;t completely disprove the idea that the childhood environment affects obesity &#8211; it could still be that whether or not parents are good at teaching their children not to be obese just has zero correlation with whether the parents themselves are obese &#8211; but it sure casts a lot of doubt on environmental hypotheses and confirms that genetics plays a very big role.</p>
<p>On the other hand, <A HREF="http://www.apa.org/pubs/journals/releases/psp-101-3-579.pdf">here&#8217;s a study from 2011</A> which shows that people with lower Conscientiousness and higher Impulsivity are much more likely to be obese &#8211; &#8220;Participants who scored in the top 10% of impulsivity weighed, on average, 11 kg more than those in the bottom 10%&#8221;. LWers pointed out that this is not itself incompatible with genetics, since most personality traits are themselves somewhat heritable.</p>
<p>On the mutant third hand, if it&#8217;s all just impulsive people who have been poorly trained by their parents, <A HREF="http://www.thedailybeast.com/newsweek/2010/12/10/what-fat-animals-tell-us-about-human-obesity.html">why are wild animals getting fatter</A>?</p>
<p>JAMA Psychiatry: <A HREF="http://jornal.fmrp.usp.br/wp-content/uploads/2013/05/NOSchizophrenia-JAMA-Jaime-1.pdf">Rapid Improvement of Acute Schizophrenia Symptoms After Intravenous Sodium Nitroprusside</A>. Anything that improves schizophrenia symptoms is good news, but this is especially interesting for two reasons. First, the rapid and dramatic effect is easier to replicate and less corruptible than the usual &#8220;take this pill for a month and maybe you will feel better&#8221;, and is reminiscent of the very similar effect of ketamine on depression. Second, sodium nitroprusside is a drug used for high blood pressure without any previously known relevance to psychiatry, opening up a whole new direction in research. The small but interesting field of <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/16005189">nitric oxide in schizophrenia</A> is about to get a lot more scrutiny.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2013/08/10/science-medicine-links-for-august/feed/</wfw:commentRss>
		<slash:comments>11</slash:comments>
		</item>
		<item>
		<title>Hasta La Victorians Siempre</title>
		<link>http://slatestarcodex.com/2013/06/03/hasta-la-victorians-siempre/</link>
		<comments>http://slatestarcodex.com/2013/06/03/hasta-la-victorians-siempre/#comments</comments>
		<pubDate>Tue, 04 Jun 2013 01:21:35 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[medicine]]></category>
		<category><![CDATA[statistics]]></category>
		<category><![CDATA[studies]]></category>
		<category><![CDATA[too many graphs]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=702</guid>
		<description><![CDATA[It seems to be Gush About The Victorians Month in the academic community or something. How The Mid-Victorians Worked, Ate, and Died (h/t Michael Vassar) claims that the mid-Victorian period was a golden age of health during which life expectancy &#8230; <a href="http://slatestarcodex.com/2013/06/03/hasta-la-victorians-siempre/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>It seems to be Gush About The Victorians Month in the academic community or something.</p>
<p><A HREF="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2672390/">How The Mid-Victorians Worked, Ate, and Died</A> (h/t Michael Vassar) claims that the mid-Victorian period was a golden age of health during which life expectancy was as high or higher than today and the diseases we consider a &#8220;normal part of aging&#8221; simply failed to exist. No cancer, no heart disease, just living happily and healthily until you were felled by tuberculosis at age 85. They credit the Victorian diet &#8211; in particular its lack of additives and preservatives and its overload of nutrients, the latter made possible by very high calorie diets that compensated for their increased level of physical activity. The authors admit we probably can&#8217;t safely replicate such a high calorie diet in today&#8217;s sedentary society, but suggest taking lots and lots of vitamins and supplements in order to get the same high nutrient level the Victorians did.</p>
<p>I am <i>really really bad</i> at understanding nutrition (AT LEAST <i>I</i> ADMIT IT!), so I will limit this to attempts to fact-check a few claims, plus some extremely speculative commentary. Let&#8217;s start with the thesis:<br />
<blockquote>We argue in this paper, using a range of historical evidence, which Britain and its world-dominating empire were supported by a workforce, an army and a navy comprised of individuals who were healthier, fitter and stronger than we are today. They were almost entirely free of the degenerative diseases which maim and kill so many of us, and although it is commonly stated that this is because they all died young, the reverse is true; public records reveal that they lived as long – or longer – than we do in the 21st century.</p></blockquote>
<p>And the evidence in support:<br />
<blockquote>The fall in nutritional standards between 1880 and 1900 was so marked that the generations were visibly and progressively shrinking. In 1883 the infantry were forced to lower the minimum height for recruits from 5ft 6 inches to 5ft 3 inches. This was because most new recruits were now coming from an urban background instead of the traditional rural background (the 1881 census showed that over three-quarters of the population now lived in towns and cities). Factors such as a lack of sunlight in urban slums (which led to rickets due to Vitamin D deficiency) had already reduced the height of young male volunteers. Lack of sunlight, however, could not have been the sole critical factor in the next height reduction, a mere 18 years later. By this time, clean air legislation had markedly improved urban sunlight levels; but unfortunately, the supposed ‘improvements’ in dietary intake resulting from imported foods had had time to take effect on the 16–18 year old cohort. It might be expected that the infantry would be able to raise the minimum height requirement back to 5ft. 6 inches. Instead, they were forced to reduce it still further, to a mere 5ft. British officers, who were from the middle and upper classes and not yet exposed to more than the occasional treats of canned produce, were far better fed in terms of their intake of fresh foods and were now on average a full head taller than their malnourished and sickly men.</p></blockquote>
<p>This is very suspicious. If the British Army started recruiting from different populations, or started lowering their standards generally because they needed more men, these data are useless. I tried to find an alternate source of height data, and the best I could come up with was the extremely thorough <A HREF="http://www.nber.org/papers/h0108">Height, Weight, and Body Mass of the British Population Since 1820</A>, whose authors say:</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/victorian_table2.png"></center></p>
<p>This graph seems to do quite well at picking up real decreases in height like the one starting in the birth cohort of 1825 (possibly due to the Corn Laws?), but it has pretty much nothing to say about any great collapse in the last few years of the 19th century. Assuming the average army recruit was 20 years old, the recruits of 1883 would have been in the 1863 birth cohort, and those of 1901 in the 1881 birth cohort. But in fact, <i>the 1881 birth cohort is taller on average than the 1863 birth cohort!</i> The army&#8217;s troubles with finding sufficiently tall infantrymen cannot be due to the Amazing Shrinking Englishman &#8211; it must be a matter of casting a wider recruiting net or something.<br />
<blockquote>The crude average figures often used to depict the brevity of Victorian lives mislead because they include infant mortality, which was tragically high. If we strip out peri-natal mortality, however, and look at the life expectancy of those who survived the first five years, a very different picture emerges. Victorian contemporary sources reveal that life expectancy for adults in the mid-Victorian period was almost exactly what it is today. At 65, men could expect another ten years of life; and women another eight. This compares surprisingly favourably with today’s figures: life expectancy at birth (reflecting our improved standards of neo-natal care) averages 75.9 years (men) and 81.3 years (women); though recent work has suggested that for working class men and women this is lower, at around 72 for men and 76 for women.</p>
<p>Given that modern pharmaceutical, surgical, anaesthetic, scanning and other diagnostic technologies were self-evidently unavailable to the mid-Victorians, their high life expectancy is very striking, and can only have been due to their health-promoting lifestyle. But the implications of this new understanding of the mid-Victorian period are rather more profound. It shows that medical advances allied to the pharmaceutical industry’s output have done little more than change the manner of our dying. The Victorians died rapidly of infection and/or trauma, whereas we die slowly of degenerative disease. It reveals that with the exception of family planning, the vast edifice of twentieth century healthcare has not enabled us to live longer but has in the main merely supplied methods of suppressing the symptoms of degenerative diseases which have emerged due to our failure to maintain mid-Victorian nutritional standards.  Above all, it refutes the Panglossian optimism of the contemporary anti-ageing movement whose protagonists use 1900 – a nadir in health and life expectancy trends &#8211; as their starting point to promote the idea of endlessly increasing life span. These are the equivalent of the get-rich-quick share pushers who insisted, during the dot.com boom, that we had at last escaped the constraints of normal economics. Some believed their own message of eternal growth; others used it to sell junk bonds they knew were worthless. The parallels with today’s vitamin pill market are obvious, but this also echoes the way in which Big Pharma trumpets the arrival of each new miracle drug.</p></blockquote>
<p>I was wondering how long it would take &#8220;Big Pharma&#8221; to make an appearance.</p>
<p>Anyway, here we turn to <A HREF="http://books.google.com/books/about/Health_and_Welfare_during_Industrializat.html?id=cS6fdFE7PS4C">Health and Welfare During Industrialization</A>, which includes the following table:</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/victorian_table1.png"></center></p>
<p>This is confusing, so let me explain.</p>
<p>I&#8217;ve taken the mortality tables from Steckel and deleted everything before age 25. Clayton and Rowbotham admit that infant mortality has improved since the Victorian period, so we&#8217;re not counting that.</p>
<p>I&#8217;ve marked the 1873 to 1877 period &#8211; the last period unambiguously before Clayton and Rowbotham&#8217;s 1880 &#8220;nutritional collapse&#8221; &#8211; in gold. This is our baseline.</p>
<p>We notice that the subsequent period actually has lower mortality in nearly every category. This is fine. Just because they&#8217;ve invented less nutritious food doesn&#8217;t mean that people have started eating it yet, or that it&#8217;s had time to affect their health. Instead, I&#8217;ve looked for the <i>worst</i> post-1880 period and marked that in red as the bottom of the &#8220;nutritional collapse&#8221;. As you can see, it&#8217;s 1888 &#8211; 1892.</p>
<p>Finally, I&#8217;ve taken the period including 1900 &#8211; what they describe as &#8220;a nadir in health and life expectancy trends&#8221;, and marked it in green.</p>
<p>As we can see, the &#8220;nadir&#8221; of 1900 actually shows <i>lower mortality in all age groups</i> than the &#8220;golden age&#8221; of 1873 &#8211; 1877. The &#8220;collapse&#8221;, if it occurred at all, was a tiny statistical blip that was then easily overpowered by the general secular trend of decreasing mortality.</p>
<p>I don&#8217;t know exactly how the authors would like me to consider &#8220;people born in 1900&#8221; vs. &#8220;people dying in 1900&#8221;, but luckily I also happen to have <A HREF="http://www.osfi.gc.ca/app/DocRepository/1/eng/oca/pdf/DEIP_Gallop_e.pdf">life expectancy</A> at age 65 for Britain for the entire period of 1841 to 2001 (God, I <i>love</i> the Internet!):</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/victorian_table1b.png"></center></p>
<p>There&#8217;s nothing on there that could remotely be the kind of collapse they&#8217;re talking about, unless it be that little blip around 1890 which is more than compensated for five years later. Annnnnd here&#8217;s <A HREF="http://books.google.com/books?id=75DZmQtybMwC&#038;pg=PA216&#038;lpg=PA216&#038;dq=adult+life+expectancy+Victorian+England&#038;source=bl&#038;ots=FwEw97C4Ua&#038;sig=6lYXuX0ubHMjODSbDb3Tf4MHj_U&#038;hl=en&#038;sa=X&#038;ei=hACtUfueAcXCyAH5hIDoAw&#038;ved=0CGgQ6AEwBg">some more data</A>, divided by occupation and showing life expectancy at age 20:</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/victorian_table1c.png"></center></p>
<p>This isn&#8217;t especially high resolution, but it&#8217;s interesting in that it suggests adult life expectancy in the Victorian period was around 65 and not the more modern 75. Sure enough, the book I took it from estimates life expectancy at age 20 (ie excluding childhood mortality) to be about 62.5. Unfortunately, I can&#8217;t access Clayton and Rowbotham&#8217;s sources, so I can&#8217;t tell whether they make an even better argument that it really <i>was</i> 75. But this matters for claims like &#8220;there was much less cancer&#8221;, because cancer increases <i>a lot</i> with age and so if you&#8217;re cutting off the end of the age distribution of course you&#8217;re going to get less cancer (imagine you learned people never went bald in caveman times &#8211; this could be either because they all ate nutritious mammoth meat, or because they all died before age 30).</p>
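<p>The bald-caveman point is really just conditional expectation, and it&#8217;s easy to see numerically. Here&#8217;s a toy sketch in Python (all the numbers are invented purely for illustration) of how much &#8220;life expectancy&#8221; changes depending on whether you count the early deaths:</p>

```python
# Toy illustration of why "life expectancy at birth" and "life expectancy
# at age 20" must never be compared directly. Cohort numbers are made up.

def life_expectancy(ages_at_death, from_age=0):
    """Mean age at death among those who survived to `from_age`."""
    survivors = [a for a in ages_at_death if a >= from_age]
    return sum(survivors) / len(survivors)

# A cohort where a third die in infancy and the rest die around 70.
cohort = [1, 1, 70, 70, 72, 74]

print(life_expectancy(cohort))               # 48.0 -- dragged down by infant deaths
print(life_expectancy(cohort, from_age=20))  # 71.5 -- the fair adult-to-adult comparison
```

<p>Same cohort, wildly different numbers &#8211; which is exactly why comparing a Victorian figure conditioned on surviving childhood against a modern figure at birth proves nothing.</p>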
<p>Overall I&#8217;m going to say I wish I could access their sources, but the fact that they don&#8217;t quote any numbers or show any graphs in support of their &#8220;life expectancy decrease from 1880 to 1900&#8221; hypothesis makes me suspect they don&#8217;t have any [EDIT: See bottom of post].</p>
<p>Their next point is that the Victorians had much less cancer and heart disease than people today. This I am totally willing to believe. The authors provide <A HREF="http://jrs.sagepub.com/content/101/9/454.full">ample evidence</A> in another paper of theirs, and it accords well with very similar evidence presented by Gary Taubes in <i>Good Calories, Bad Calories</i>.</p>
<p>The Victorian Age was before the mass-marketing of cigarettes, before most of the carcinogenic chemicals we inhale all the time, and their diet contained ample fruits and vegetables &#8211; foods that we know lower cancer and heart disease risk. Many of them were manual laborers, there were much fewer obesogenic foods, and people were generally normal weight. It is not mysterious or revolutionary to propose they had less cancer and heart disease than we did.</p>
<p>But let&#8217;s look at Clayton and Rowbotham&#8217;s explanation:<br />
<blockquote>Degenerative diseases are not caused by old age (the ‘wear and tear’ hypothesis); but are driven, in the main, by chronic malnutrition. Our low energy lifestyles leave us depleted in anabolic and anti-catabolic co-factors; and this imbalance is compounded by excessive intakes of inflammatory compounds. The current epidemic of degenerative disease is caused by widespread problem of multiple micro- and phyto-nutrient depletion (Type B malnutrition.)<br />
With the exception of family planning and antibiotics, the vast edifice of twentieth century healthcare has generated little more than tools to suppress symptoms of the degenerative diseases which have emerged due to our failure to maintain mid-Victorian nutritional standards. The only way to combat the adverse effects of Type B malnutrition, and to prevent and / or cure degenerative disease, is to enhance the nutrient density of the modern diet.</p></blockquote>
<p>Whoa whoa whoa whoa whoa.</p>
<p>Gary Taubes does a pretty good job recording all the primitive cultures that <i>also</i> have no degenerative diseases. The Eskimos are one such. They basically just eat meat. You can also find cultures pretty much anywhere, with any diet, who also lack these degenerative diseases. In fact, I <i>think</i> people who are <i>actually</i> malnourished &#8211; starving Africans and the like &#8211; still have lower rates of these degenerative diseases as long as they&#8217;re not eating a &#8220;modern&#8221; diet.</p>
<p>To me, this suggests that their &#8220;phytonutrient depletion&#8221; hypothesis needs to contend with another &#8211; that it&#8217;s not that we&#8217;re not getting enough of the right stuff, but rather that we&#8217;re getting <i>too much</i> of the wrong stuff. As far as I know, this is what mainstream nutrition science believes as well as most of the more interesting crackpots, although of course everyone differs as to what the wrong stuff is. It is a promising and venerable theory and nothing the authors of this paper have said thus far casts the slightest doubt upon it. Indeed, they admit that the &#8220;nutritional collapse&#8221; was caused by the introduction of preserved foods, additives, and cheap sugar.<br />
<blockquote>Our levels of physical activity and therefore our food intakes are at an historic low. To make matters worse, when compared to the mid-Victorian diet, the modern diet is rich in processed foods. It has a higher sodium/potassium ratio, and contains far less fruit, vegetables, whole grains and omega 3 fatty acids. It is lower in fibre and phytonutrients, in proportional and absolute terms; and, because of our high intakes of potato products, breakfast cereals, confectionery and refined baked goods, may have a higher glycemic load. Given all this, it follows that we are inevitably more likely to suffer from dysnutrition (multiple micro- and phytonutrient depletion) than our mid-Victorian forebears&#8230;</p>
<p>Since it would be unacceptable and impractical to recreate the mid-Victorian working class 4,000 calorie/day diet, this constitutes a persuasive argument for a more widespread use of food fortification and/or properly designed food supplements (most supplements on the market are incredibly badly designed; they are assembled by companies that do not understand the real nutritional issues that confront us today, and sell us pills containing irrational combinations and doses that can do more harm than good.</p>
<p>To insist, as orthodox nutritionists and dieticians do, that only whole fruit and veg contain the magical, health-promoting ingredients represents little more than the last gasp of the discredited and anti-scientific theory of vitalism (‘Vitalism—the insistence that there is some big, mysterious extra ingredient in all living things—turns out to have been not a deep insight but a failure of imagination’, Daniel Dennett)  Even the stately FSA concedes that fruit juices count towards your five-a-day, as do freeze-dried powdered extracts of fruits and vegetables. As our knowledge of phytochemistry and phytopharmacology increases, it has become perfectly acceptable to use rational combinations of the key plant constituents in pill or capsule form.</p>
<p>These arguments are developed in ‘Pharmageddon’, a medical textbook which illustrates how micro- and phyto-nutrients can be specifically combined in order to prevent and treat the chronic degenerative diseases that characterise and dominate the 20th and 21st centuries; and how they could be integrated into our food chain in order to reduce the contemporary and excessively high risks of the degenerative diseases to the far lower mid-Victorian levels.</p></blockquote>
<p>So first things first. I am almost sure I went to medical school, and the sorts of textbooks I read there all had names like &#8220;Essentials Of Biochemistry, Third Edition&#8221;, and &#8220;Introduction To Gastroenterology&#8221; and almost never names like &#8220;Pharmageddon&#8221;.</p>
<p>Second, the authors cite some sources for their claim that all supplements currently on the market are poorly designed and do more harm than good. These sources show that a bunch of different supplements are either ineffective or cause cancer, and are entirely correct. What they do not cite are any sources that show that &#8220;correctly designed&#8221; supplements do more good than harm. This is because those sources do not exist because no one has ever discovered that.</p>
<p>The reason the medical community isn&#8217;t switching wholesale from evil pharmageddon-causing drugs to all-natural Victorian-approved nutrient supplementation isn&#8217;t because they&#8217;re in the grip of Big Pharma, it&#8217;s because <i>no one can find supplements that are consistently proven to work</i>.</p>
<p>(the medical community actually <i>is</i> saying &#8220;eat right and exercise&#8221;, but NO ONE LISTENS)</p>
<p>And yes, part of the lack of working supplements is the coordination problem of &#8220;there&#8217;s no money in studying supplements&#8221;. I think it would be great if we could figure out some way to coordinate supplement studies more effectively. But it&#8217;s kind of hard to make that case when all the supplement studies that have been done have been total duds and the pro-supplement community has just kept replying with &#8220;But those supplements were poorly designed!&#8221; and then suggesting a perplexing and extremely contradictory trove of other possible designs, all of which themselves are later found not to work.</p>
<p>One point that seriously enlightened me, though, was the authors&#8217; comments on vitalism. It&#8217;s definitely true that eating whole foods has useful properties (like reducing risk of diseases) and also definitely true that taking supplements that supposedly share vitamins with those foods doesn&#8217;t. And it&#8217;s <i>also</i> true that I was previously taking this as an Unfortunate Fact Of Life. Which is <i>dumb</i>. Of <i>course</i> we should be trying to figure out what magical property of whole foods makes them effective and then trying to deliver that magical property in a pill. I am disappointed at myself for not realizing that was important sooner, and that alone makes me profoundly grateful I read this study.</p>
<p>It&#8217;s just that I don&#8217;t think we&#8217;re quite there yet, and until we are, supplementation isn&#8217;t all that useful.</p>
<p>In summary, I can&#8217;t confirm this paper&#8217;s suggestion of dire health costs from a &#8220;nutritional collapse&#8221; in the 1880s. However, the notion of a <i>gradual</i> rise in cancer, heart disease, and other degenerative diseases linked to a modern diet seems correct. Their attribution of this to nutrient deficiency is unsupported and probably only a small part of the problem, and their opinions on supplementation seem to possibly veer into crackpottery. However, they are right to note that we need a better science of supplementation and that once we develop this science, supplements really really ought to work.</p>
<p><b>EDIT</b>: <i>Andrew G points out in comments that the study compared Victorian life expectancy at age 60 to modern life expectancy *at birth* in their attempts to say life expectancy was staying about the same. This is clearly not kosher. Victorian life expectancy at age 20 (a good point to rule out childhood mortality) was about 62.5, and modern life expectancy at age 20 is&#8230;something older than life expectancy at birth which is 77 or so. So their &#8220;life expectancy has stayed the same&#8221; argument is completely wrong.</i></p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2013/06/03/hasta-la-victorians-siempre/feed/</wfw:commentRss>
		<slash:comments>32</slash:comments>
		</item>
		<item>
		<title>Literally Inconceivable: Contraceptives And Abortion Rates</title>
		<link>http://slatestarcodex.com/2013/06/01/literally-inconceivable-contraceptives-and-abortion-rates/</link>
		<comments>http://slatestarcodex.com/2013/06/01/literally-inconceivable-contraceptives-and-abortion-rates/#comments</comments>
		<pubDate>Sun, 02 Jun 2013 00:58:54 +0000</pubDate>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[race/gender/etc]]></category>
		<category><![CDATA[statistics]]></category>
		<category><![CDATA[studies]]></category>

		<guid isPermaLink="false">http://slatestarcodex.com/?p=697</guid>
		<description><![CDATA[I have amazing parents who would never do something sneaky like install a keylogger on my computer to keep tabs on me as I move thousands of miles away from home. But if I&#8217;m wrong and they did do that, &#8230; <a href="http://slatestarcodex.com/2013/06/01/literally-inconceivable-contraceptives-and-abortion-rates/">Continue reading <span class="pjgm-metanav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>I have amazing parents who would never do something sneaky like install a keylogger on my computer to keep tabs on me as I move thousands of miles away from home. But if I&#8217;m wrong and they <i>did</i> do that, they&#8217;re probably sweating pretty hard right now. My search history for the last two days looks a lot like &#8220;efficacy of contraceptives&#8221;, &#8220;contraceptive failures&#8221;, &#8220;pregnancy risk if contraceptive failure&#8221;, &#8220;unintended pregnancy&#8221;, &#8220;unintended pregnancy abortion&#8221;, and &#8220;COME ON GOOGLE WHY WON&#8217;T YOU GIVE ME GOOD INFORMATION ON UNINTENDED PREGNANCIES AAARGH&#8221;</p>
<p>(this last one brings up a berkeley.edu address, which does not surprise me one bit)</p>
<p>My parents can relax &#8211; the searches are because of the comments on <A HREF="http://slatestarcodex.com/2013/05/30/fetal-attraction-abortion-and-the-principle-of-charity/">a recent post of mine</A>. In response to <A HREF="http://www.amptoons.com/blog/2013/05/27/pro-lifers-dont-give-a-damn-about-fetuses-they-only-care-about-coercing-women/">a claim that pro-lifers should be in favor of contraception since it decreases abortions</A>, I argued that moral philosophy doesn&#8217;t always work that way, but let the main point &#8211; that contraception decreases abortion &#8211; stand. Some people, especially <A HREF="http://slatestarcodex.com/2013/05/30/fetal-attraction-abortion-and-the-principle-of-charity/#comment-12979">Joe</A> and <A HREF="http://slatestarcodex.com/2013/05/30/fetal-attraction-abortion-and-the-principle-of-charity/#comment-13008">Gilbert</A>, challenged my assumption, leading to an unpleasant wade through the swamp of contraception-related data.</p>
<p><b>The Anti-Contraceptive Position</b></p>
<p>Let&#8217;s start with the nay-sayers. In what he <i>claims</i> is a long essay (ye call that long? I&#8217;ll be showin&#8217; ye long!) <A HREF="http://www.patheos.com/blogs/badcatholic/2012/11/does-contraception-reduce-the-abortion-rate.html">Bad Catholic evaluates</A> the correlation between abortion and contraception rates in different countries. He finds &#8211; mostly using data from the pro-choice Guttmacher Institute, a huge clearinghouse of abortion data we will be returning to again and again &#8211; that:<br />
<blockquote>Contraception has been shown to decrease abortion rates primarily in [ex-Soviet bloc] countries with already high abortion rates. These represent a minority of countries. Contraception has been shown to increase abortion rates primarily in [non ex-Soviet bloc] countries with already low abortion rates. These represent a majority of countries. Contraception has been shown to slightly reduce abortion rates after its initial increase of abortion rates, but has never been shown to reduce abortion rates back to pre-contraception levels. This is my claim. I have no doubt that there’s a lot more to say, given the incredible amount of studies I haven’t seen. But as far as I can tell, this is a claim far closer to the truth than the oft-repeated, always unexamined “Contraception reduces abortion rates”.</p></blockquote>
<p>This at first sounds bizarre &#8211; how could contraception, a technology that decreases unintended pregnancy, increase abortion, a result of unintended pregnancy? Enter <A HREF="http://en.wikipedia.org/wiki/Peltzman_effect">the Peltzman effect</A>, aka risk compensation.</p>
<p>I have a deep love for the Peltzman effect. Part of this is that it&#8217;s one of the few terms we social scientists have that sounds as nifty as the ones physicists and mathematicians bandy around all the time. Another part is that a girl messaged me on OKCupid once explaining the Peltzman effect to me and asking me on a date (I never claimed my life was normal). But the rest of it is that it&#8217;s just this really elegant and unexpected finding where across a broad set of domains people respond to hard-won advances that make them safer with &#8220;Cool! Now I can behave irresponsibly!&#8221; It&#8217;s been found with <A HREF="http://en.wikipedia.org/wiki/Peltzman_effect">anti-lock brakes</A> (drivers drive closer to the car in front of them), with <A HREF="http://en.wikipedia.org/wiki/Peltzman_effect#Seat_belts">seat belts</A> (people just drive faster), and with <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/17112456">children&#8217;s safety gear</A> (children just behave more recklessly).</p>
<p>The Peltzman effect doesn&#8217;t always hold true; sometimes we expect it and can&#8217;t find it. And it rarely makes things <i>worse</i> &#8211; it usually is cited as keeping things at the same level they were before, and one well-studied exception, the Munich taxi study, only finds a tiny increase in accidents. </p>
<p>But inside view here &#8211; how many people here, if they don&#8217;t want kids, would be willing to have totally unprotected sex? And how many people would be willing to have sex using condoms? But condoms have a <A HREF="http://en.wikipedia.org/wiki/Comparison_of_birth_control_methods#Comparison_table">typical failure rate</A> of 15% &#8211; meaning that if a couple has sex for a year using only condoms for protection, there&#8217;s a 15% chance the woman will get pregnant. With the combined oral contraceptive pill, it&#8217;s 8%. So if these contraceptive methods make people about ten times more willing to have sex when they don&#8217;t want pregnancy &#8211; not at all hard to imagine! &#8211; they could raise the unintended pregnancy rate and therefore the abortion rate.</p>
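<p>The back-of-the-envelope version of that argument is easy to run. Here&#8217;s a toy Python sketch &#8211; the 15% figure is the typical-use condom failure rate quoted above, while the ~85%/year unprotected pregnancy rate and the tenfold willingness multiplier are assumptions I&#8217;m plugging in for illustration, not measured quantities:</p>

```python
# Toy model of risk compensation (the Peltzman effect) in contraception.
# 0.15 is the typical-use annual condom failure rate cited above; the 85%/yr
# unprotected rate and the 10x uptake multiplier are illustrative assumptions.

def expected_pregnancies(couples_having_sex, annual_pregnancy_rate):
    """Expected unintended pregnancies per year across the group."""
    return couples_having_sex * annual_pregnancy_rate

# Baseline: only 100 couples are willing to risk fully unprotected sex.
baseline = expected_pregnancies(100, 0.85)

# With condoms available, ten times as many couples are willing to have sex.
with_condoms = expected_pregnancies(1000, 0.15)

print(baseline)      # 85.0
print(with_condoms)  # 150.0 -- more unintended pregnancies despite the protection
```

<p>Under those (made-up) numbers, introducing a 15%-failure-rate method <i>increases</i> total unintended pregnancies, because the tenfold increase in exposure swamps the per-act risk reduction. Whether the real multipliers look anything like this is exactly what the country data below are fighting over.</p>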
<p>This is the context of Joe&#8217;s study showing that in Spain from 1997 &#8211; 2007, <A HREF="http://realchoice.blogspot.com/2011/01/yet-another-example-of-counter.html">a large rise in contraceptive usage occurred simultaneous with a large rise in abortion</A>.</p>
<p>A few other arguments seem transparently stupid to me &#8211; for example, some people like to point out that US states with high contraception rates also have high abortion rates, but that&#8217;s mostly a feature of those states being very liberal and so allowing abortion clinics to operate there. So let&#8217;s move on to&#8230;</p>
<p><b>The Pro-Contraceptive Position</b></p>
<p>Just in case you thought you were going to escape without any graphs:</p>
<p><center><IMG SRC="http://slatestarcodex.com/blog_images/contraception_pregnancy.jpg" HEIGHT="300" WIDTH="600"></center></p>
<p>Here are teenage birth rates over the last 75 years. Like nearly all social problems, they have been steadily and somewhat mysteriously declining, but we notice an especially sharp decline around 1960, the year the Pill was introduced. Abortion wasn&#8217;t legalized until the 70s and was pretty uncommon before then, so we can leave it out of this analysis and say that it <i>sure looks like</i> the invention of a new form of contraception decreased pregnancies.</p>
<p>Also near an all-time low are <A HREF="http://www.washingtonpost.com/blogs/wonkblog/wp/2012/11/23/surprise-the-abortion-rate-just-hit-an-all-time-low/">abortion rates</A> (I assume they mean &#8220;all-time low since abortion was legalized&#8221;?). This seems to be due to <i>both</i> fewer unintended pregnancies <i>and</i> less willingness to end unintended pregnancies with abortion. Santelli et al 2002 <A HREF="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1716232/">find the decline to correspond nicely to increasing use of contraceptives</A>. Some pro-lifers <A HREF="http://www.lifenews.com/2012/02/17/studies-birth-control-contraception-dont-cut-abortions/">claim</A> (data unavailable) that after 2002 abortions continued to drop even though contraception use stayed steady. Unfortunately, all I can find is <A HREF="http://www.huffingtonpost.com/2013/02/14/birth-control-use-cdc-study_n_2686941.html">the CDC saying contraceptive use continued to rise</A> &#8211; in the absence of their contrary data, I&#8217;m giving this point to the &#8220;increased contraception helped reduce abortion&#8221; people. </p>
<p>(one way that we could reconcile these two results, if we were feeling very generous, is to say that overall contraception use has remained the same, but users have switched to more effective modern forms of contraception like the implant or IUD, with <A HREF="http://en.wikipedia.org/wiki/Comparison_of_birth_control_methods#Comparison_table">a failure rate</A> less than 1/100th that of condoms)</p>
<p>But this is actually consistent with Bad Catholic&#8217;s claim above &#8211; contraception increases the abortion rate when first introduced, then eventually stabilizes and decreases it, but never back to the level before contraception. So let&#8217;s evaluate that one. Bad Catholic writes:<br />
<blockquote>An honest look at the data shows that in virtually every country that increased the use of contraception, there was a simultaneous increase in that country’s abortion rate. In England (Rise in contraceptive use: simultaneous rise in abortions), France (Rise in contraceptive use: simultaneous rise in abortions), Australia, (Rise in contraceptive use: simultaneous rise in abortions), Portugal (Whose abortion rate only began to rise after 1999, after oral contraceptive methods were made widely available), Canada (Whose abortion rate only began to rise after the legalization of oral contraceptives in 1969), and, as the Guttmacher Institute shows, Singapore, Cuba, Denmark, the Netherlands, and South Korea, to name a few.</p></blockquote>
<p>Let&#8217;s investigate the countries in order. The claim seems to be only that abortions and contraceptive use rose &#8220;simultaneously&#8221;, but following his links this turns out to mean &#8220;throughout the 20th century&#8221;. There is no attempt to prove that the particular shape of the contraception curve matched that of the abortion curve or anything like that, just that there was more contraception in 2000 than in 1950, and, whaddya know, more abortions as well. The same methodology could very easily correlate abortion with global temperature. His statistics on England, France, and Australia all seem to be of this type.</p>
<p>He makes a stronger claim about Canada: that &#8220;the abortion rate only began to rise after the legalization of oral contraceptives in 1969&#8221;. You know what else was legalized in Canada in 1969? <A HREF="http://abortionincanada.ca/history/legal-abortion-in-canada/">Abortion</A>. I&#8217;m going with &#8220;probably not a good test case&#8221;.</p>
<p>As for Portugal, the claim that oral contraceptive methods were legalized in 1999 seems wrong; <A HREF="http://www2.hu-berlin.de/sexology/IES/portugal.html#9">his own link</A> says they have been available since 1985 and that only the emergency contraceptive pill was made available in 1999. Further, his claim that &#8220;abortion rates only began to rise after 1999&#8221; also seems wrong &#8211; <A HREF="http://www.johnstonsarchive.net/policy/abortion/ab-portugal.html">his link</A> shows what looks like a pretty linear rise in abortion rates from 1996 to 2006; I don&#8217;t think anyone eyeballing those numbers would be tempted to consider 1999 anything remotely like an inflection point. My own guess for an inflection point would be 2007, and sure enough when I Google it <A HREF="http://en.wikipedia.org/wiki/Portuguese_abortion_referendum,_2007">that was the year they fully legalized abortion</A>.</p>
<p>The Guttmacher Institute doesn&#8217;t link to its sources as diligently as Bad Catholic, so I&#8217;m just going to accept their claim that six countries &#8211; Singapore, Cuba, Denmark, Netherlands, US, and South Korea &#8211; saw simultaneous increases in contraception and abortion &#8211; after all, it goes against the direction of their bias so they have no incentive to lie. They give their results the following explanation:<br />
<blockquote>The reason for the confusion stems from the observation that, within particular populations, contraceptive prevalence and the incidence of induced abortion can and, indeed, often do rise in parallel, contrary to what one would expect. The explanation for these counterintuitive trends is clear.2 In societies that have not yet entered the fertility transition, both actual fertility and desired family sizes are high (or, to put it another way, childbearing is not yet considered to be &#8220;within the calculus of conscious choice&#8221;3). In such societies, couples are at little (or no) risk of unwanted pregnancies. The advent of modern contraception is associated with a destabilization of high (or &#8220;fatalistic&#8221;) fertility preferences. Thus, as contraceptive prevalence rises and fertility starts to fall, an increasing proportion of couples want no more children (or want an appreciable delay before the next child), and exposure to the risk of unintended pregnancy also increases as a result. In the early and middle phases of fertility transition, adoption and sustained use of effective methods of contraception by couples who wish to postpone or limit childbearing is still far from universal. Hence, the growing need for contraception may outstrip use itself;4 thus, the incidence of unintended and unwanted pregnancies rises, fueling increases in unwanted live births and induced abortion. In this scenario, contraceptive use and induced abortion may rise simultaneously.</p>
<p>As fertility decreases toward replacement level (two births per woman), or even lower, the length of potential exposure to unwanted pregnancies increases further. For instance, in a society in which the average woman is sexually active from ages 20 to 45 and wants two children, approximately 20 of those 25 years will be spent trying to avoid pregnancy. Once use of highly effective contraceptive methods rises to 80%, the potential demand for abortion, and its incidence, will fall. Demand for abortion falls to zero only in the &#8220;perfect contraceptive&#8221; population, in which women are protected by absolutely effective contraceptive use at all times, except for the relatively short periods when they want to conceive, are pregnant or are protected by lactational amenorrhea.5 Because such a state of perfect protection is never actually achieved, a residual demand for abortion always exists, although its magnitude varies considerably among low-fertility societies, according to levels of contraceptive use and choice of methods.</p></blockquote>
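The quote&#8217;s &#8220;approximately 20 of those 25 years&#8221; figure is easy to reproduce as a back-of-envelope check. The roughly 2.5 years per child spent conceiving, pregnant, or in lactational amenorrhea is my assumption, chosen to match the quoted figure:

```python
# Back-of-envelope reproduction of the Guttmacher exposure figure above.
active_years = 45 - 20      # sexually active from ages 20 to 45
years_per_child = 2.5       # assumed: trying to conceive + pregnancy + amenorrhea
children_wanted = 2

years_avoiding_pregnancy = active_years - children_wanted * years_per_child
# -> 20.0 of the 25 sexually active years spent trying to avoid pregnancy
```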
<p>This seems <i>incredibly</i> reasonable, and we will come back to it later. Let&#8217;s abandon all of these time series type studies and see if we can find a halfway-decent controlled experiment.</p>
<p>Well, uh&#8230;we can find <i>a</i> controlled experiment. <A HREF="http://www.huffingtonpost.com/2012/10/05/study-free-birth-control-abortion-rate_n_1942621.html">These people in St. Louis</A> gave people free contraceptives and later found that they had a teenage pregnancy rate much lower than the rest of the population. Gilbert <A HREF="http://slatestarcodex.com/2013/05/30/fetal-attraction-abortion-and-the-principle-of-charity/#comment-13042">gives</A> this study exactly the correct criticism &#8211; participants from a very specific population (poor people in St. Louis interested in signing up for a contraceptive study) are being compared to the general population (everyone in the United States). This is inexcusable, especially considering that Real Science has an extremely standard way of avoiding this problem (sign people up for your study, only give the intervention to a randomly selected half, and the other half is an instant control group). Other fatal issues &#8211; the study used IUDs, the most effective form of contraception, but most of the worry that contraception might increase abortion comes from less effective means like condoms and the Pill. Finally, if you&#8217;re really interested in the way that widespread availability of contraceptives makes a culture more libertine, just giving them to a couple of people within that culture isn&#8217;t going to capture that effect. I am maybe a little bit hugely disappointed that most of the media and bloggers reporting on this didn&#8217;t mention these sorts of issues.</p>
<p>But it does show one interesting thing, which is that when people get free contraception, they <A HREF="https://www.stlbeacon.org/#!/content/25459/study_recommends_iuds">start using</A> more effective contraception methods. Would this also cause risk compensation? I don&#8217;t know, but I feel like there has to be some amount of sex beyond which it&#8217;s just <i>no longer fun</i>, and some contraceptive methods are so effective that it would be <i>really hard</i> to have so much sex that they&#8217;re worse than nothing.</p>
<p>Let&#8217;s close this section with a few minor points.</p>
<p>Contraceptive advocates <A HREF="http://www.ncbi.nlm.nih.gov/pubmed/7971545">point to</A> the Netherlands, with one of the lowest abortion rates in the world. Given the stereotypes of the Dutch, they <i>probably</i> didn&#8217;t get that way through careful abstinence, and indeed their government is unusually generous in providing free contraceptives.</p>
<p>It turns out people can just survey women having abortions and ask them if they used contraceptives or not! 54% of abortion patients were using contraception at the time, which pro-life websites get very excited about: &#8220;IT&#8217;S MORE THAN HALF!&#8221; But putting these numbers in context may diminish their enthusiasm: the <i>four-fifths of American women who use contraception</i> account for 54% of abortions; the fifth of women who don&#8217;t use it account for the other 46%. The Guttmacher Institute gets more or less the same numbers, but frames them in a very convincingly pro-contraceptive way:<br />
<blockquote>The two-thirds of U.S. women at risk of unintended pregnancy who use contraception consistently and correctly throughout the course of any given year account for only 5% of all unintended pregnancies. The 19% of women at risk who use contraception but do so inconsistently account for 44% of all unintended pregnancies, while the 16% of women at risk who do not use contraception at all for a month or more during the year account for 52% of all unintended pregnancies.</p></blockquote>
<p>So it seems clear that the more (and better) you use contraception, the less likely you are to have an abortion.</p>
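A quick sketch makes the per-capita gap behind those survey numbers explicit (the shares are the approximate figures quoted above, and the rates are in relative units, not absolute abortion rates):

```python
# Relative per-capita abortion rates implied by the survey numbers above.
share_users, share_nonusers = 0.80, 0.20            # four-fifths of women use contraception
abortion_share_users, abortion_share_nonusers = 0.54, 0.46

rate_users = abortion_share_users / share_users           # ~0.68 (relative units)
rate_nonusers = abortion_share_nonusers / share_nonusers  # ~2.30 (relative units)

relative_risk = rate_nonusers / rate_users
# -> ~3.4: a non-user is about 3.4x as likely to have an abortion as a user
```

So &#8220;more than half of abortion patients were using contraception&#8221; is fully consistent with contraception users having a several-fold lower per-capita abortion rate.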
<p><b>Summary</b></p>
<p>I think we can use these results to build a consistent picture.</p>
<p>Contraceptive and abortion rates often rise simultaneously. This rise is not necessarily causal, and is more likely to be due to both being parts of the same philosophy &#8211; people want to have lots of sex but not have kids. As this philosophy becomes more widespread, as it has nearly everywhere in the 20th century with the Sexual Revolution and Demographic Transition, both contraception and abortion will rise. As it gains ground, both contraception and abortion will become more legal and available, making them rise even further. It is unclear to what degree the availability of contraception itself causes the rise of this philosophy. I&#8217;m intrigued by <A HREF="http://www.popsci.com/science/article/2013-01/did-penicillin-kickstart-sexual-revolution">this claim that penicillin rather than the pill</A> started the Sexual Revolution, but if someone wants to claim that it was all due to contraceptives, I don&#8217;t have enough expertise in the area to prove her wrong.</p>
<p>On the other hand, once a society has undergone this transition and settled on &#8220;lots of sex, few kids&#8221; as being its dominant values, then the local application of more contraception seems to decrease abortion rates. We know this because of the surveys of abortion patients saying they are disproportionately likely not to be contraceptive users. We know this because of the decline in teenage pregnancies with the advent of the Pill. And we also notice the game-changing nature of new, more effective contraceptives with near-zero failure rates replacing older, more fallible ones, and the not-provably-causal but certainly suggestive secular decline in abortion rates that corresponds with that replacement.</p>
<p>Overall my guess would be that a society that legalizes contraceptives would see an increase in abortion rates (which might or might not be causal depending on that society&#8217;s situation), but that in a society like our own, where contraceptives are already legal and the demographic transition is pretty much complete, increasing access to contraceptives is probably going to decrease abortion. And increasing access to extremely effective contraceptives like the implant or <A HREF="http://en.wikipedia.org/wiki/RISUG">RISUG</A>, especially when they replace less effective contraceptives like the condom, is very, very probably going to decrease abortion.</p>
]]></content:encoded>
			<wfw:commentRss>http://slatestarcodex.com/2013/06/01/literally-inconceivable-contraceptives-and-abortion-rates/feed/</wfw:commentRss>
		<slash:comments>51</slash:comments>
		</item>
	</channel>
</rss>
