After a brief spurt of debate over the claim that “97% of relevant published papers support anthropogenic climate change”, I think the picture has mostly settled to an agreement that – although we can contest the methodology of that particular study – there are multiple lines of evidence that the number is somewhere in the nineties.
So if any doubt at all is to remain about climate change, it has to come from the worry that sometimes entire scientific fields can get things near-unanimously wrong, especially for political or conformity-related reasons.
In fact, I’d go so far as to say that if we are not climatologists ourselves, our prior on climate change should be based upon how frequently entire scientific fields get things terribly wrong for political or conformity-related reasons.
Skeptics mock the claim that science was wrong before, but skeptics mock everything. A better plan might be to try to quantify the frequency of scientific failures so we can see how good (or bad) the chances are for any given field.
Before we investigate, we should define our reference class properly. I think a scientific mistake only counts as a reason for doubting climate change (or any other commonly-accepted scientific paradigm) if:
1. It was made sometime in the recent past. Aristotle was wrong about all sorts of things, and so were those doctors who thought everything had to do with black bile, but the scientific community back then was a lot less rigorous than our own. Let’s say it counts if it’s after 1900.
2. It was part of a really important theory, one of the fundamental paradigms of an entire field. I’m sure some tiny group of biologists have been wrong about how many chromosomes a shrew has, but that’s probably an easier mistake to wander into than all of climatology screwing up simultaneously.
3. It was a stubborn resistance to the truth, rather than just a failure to have come up with the correct theory immediately. People were geocentrists before they were heliocentrists, but this wasn’t because the field of astronomy became overly politicized and self-assured, it was because (aside from one ancient Greek guy nobody really read) heliocentrism wasn’t invented until the 1500s, and after that it took people a couple of generations to catch on. In the same way, Newton’s theory of gravity wasn’t quite as good as Einstein’s, but this would not shame physicists in the same way climate change being wrong would shame climatologists. Let’s say that in order to count, the correct theory has to be very well known (the correct theory is allowed to be “this phenomenon doesn’t exist at all and you are wasting your time”) and there is a large group of people mostly outside the mainstream scientific establishment pushing it (for approximately correct reasons) whom scientists just refuse to listen to.
4. We now know that the past scientific establishment was definitely, definitely wrong and everyone agrees about this and it is not seriously in doubt. This criterion isn’t to be fair to the climatologists, this is to be fair to me when I have to read the comments to this post and get a bunch of “Nutritionists have yet to sign on to my pet theory of diet, that proves some scientific fields are hopelessly corrupt!”
Do any such scientific failures exist?
If we want to play this game on Easy Mode, our first target will be Lysenkoism, the completely bonkers theory of agriculture and genetics adopted by the Soviet Union. A low-level agricultural biologist, Lysenko, came up with questionable ways of increasing agricultural output through something kind of like Lamarckian evolution. The Soviet government wanted to inspire people in the middle of a famine, didn’t really like real scientists because they seemed kind of bourgeois, and wanted to discredit genetics because heritability seemed contrary to the idea of New Soviet Man. So they promoted Lysenko enough times that everyone got the message that Lysenkoism was the road to getting good positions. All the careerists switched over to the new paradigm, and the holdouts who continued to believe in genetics were denounced as fascists. According to Wikipedia, “in 1948, genetics was officially declared “a bourgeois pseudoscience”; all geneticists were fired from their jobs (some were also arrested), and all genetic research was discontinued.”
About twenty years later the Soviets quietly came to their senses and covered up the whole thing.
I would argue that Stalinist Russia, where the government was very clearly intervening in science and killing the people it didn’t like, isn’t a fair test case for a theory today. But climate change opponents would probably respond that the liberal world order is unfairly promoting scientists who support climate change and persecuting those who oppose it. And Lysenkoism at least proves that is the sort of thing which can in theory sometimes happen. So let’s grumble a little but give it to them.
Now we turn the dial up to Hard Mode. Are there any cases of failure on a similar level within a scientific community in a country not actively being ruled by Stalin?
I can think of two: Freudian psychoanalysis and behaviorist psychology.
Freudian psychoanalysis needs no introduction. It dominated psychiatry – not at all a small field – from about 1930 to 1980. As far as anyone can tell, the entire gigantic edifice has no redeeming qualities. I mean, it correctly describes the existence of a subconscious, and it may have some insightful things to say on childhood trauma, but as far as a decent model of the brain or of psychological treatment goes, it was a giant mistake.
I got a better idea of just how big a mistake it was while doing some research for the Anti-Reactionary FAQ. I wanted to see how homosexuals were viewed back in the 1950s and ran across two New York Times articles about them (1, 2). It’s really creepy to see them explaining how, instead of holding on to folk beliefs about how homosexuals are normal people just like you or me, people need to start listening to the psychoanalytic experts, who know the real story behind why some people are homosexual. The interviews with the experts in the articles are a little surreal.
Psychoanalysis wasn’t an honest mistake. The field already had a perfectly good alternative – denouncing the whole thing as bunk – and sensible non-psychoanalysts seemed to do exactly that. On the other hand, the more you got “educated” about psychiatry in psychoanalytic institutions, and the more you wanted to become a psychiatrist yourself, the more you got biased into thinking psychoanalysis was obviously correct and dismissing the doubters as science denialists or whatever it was they said back then.
So this seems like a genuine example of a scientific field failing.
Behaviorism in psychology was…well, this part will be controversial. A weak version is “psychologists should not study thoughts or emotions because these are unknowable by scientific methods; instead they should limit themselves to behaviors”. A strong version is “thoughts and emotions don’t exist; they are post hoc explanations invented by people to rationalize their behaviors”. People are going to tell me that real psychologists only believed the weak version, but having read more than a little 1950s psychology, I’m going to tell them they’re wrong. I think a lot of people believed the strong version and that in fact it was the dominant paradigm in the field.
And of course common people said this was stupid, of course we have thoughts and emotions, and the experts just said that kind of drivel was exactly what common people would think. Then came the cognitive revolution and people realized thoughts and emotions were actually kind of easy to study. And then we got MRI machines and are now a good chunk of the way to seeing them.
So this too I will count as a scientific failure.
But – and this seems important – I can’t think of any others.
Suppose there are about fifty scientific fields approximately as important as genetics or psychiatry or psychology. And suppose within the past century, each of them had room for about five paradigms as important as psychoanalysis or behaviorism or Lysenkoism.
That would mean there are about 250 possibilities for science failure, of which three were actually science failures – for a failure rate of 1.2%.
This doesn’t seem much more encouraging for the anti-global-warming cause than the 3% of papers that support them.
I think I’m being pretty fair here – after all, Lysenkoism was limited to one extremely-screwed-up country, and people are going to yell that behaviorism wasn’t as bad as I made it sound. And two of the three failures are in psychology, a social science much fuzzier than climatology where we can expect far more errors. A cynic might say if we include psychology we might as well go all the way and include economics, sociology, and anthropology, raising our error count to over nine thousand.
But if we want to be even fairer, we can admit that there are probably some science failures that haven’t been detected yet. I can think of three that I very strongly suspect are in that category, although I won’t tell you what they are so as to not distract from the meta-level debate. That brings us to 2.4%. Admit that maybe I’ve only caught half of the impending science failures out there, and we get to 3.6%. Still not much of an improvement for the anti-AGW crowd over having 3% of the literature.
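The back-of-the-envelope arithmetic above can be sketched in a few lines. All the counts here are the post’s own ballpark assumptions (fifty fields, five paradigms each, three known failures, three suspected, three more undetected), not measured data:

```python
# Rough failure-rate arithmetic from the post; every count below is the
# author's ballpark assumption, not measured data.
fields = 50              # fields roughly as important as genetics or psychology
paradigms_per_field = 5  # major paradigms per field over the past century
opportunities = fields * paradigms_per_field  # 250 chances for a science failure

known = 3        # Lysenkoism, psychoanalysis, behaviorism
suspected = 3    # the undisclosed suspected failures
undetected = 3   # assume only half the impending failures have been caught

for label, count in [
    ("known only", known),
    ("plus suspected", known + suspected),
    ("plus undetected", known + suspected + undetected),
]:
    print(f"{label}: {count / opportunities:.1%}")
```

Running this prints the three rates in the text: 1.2%, 2.4%, and 3.6%.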
Unless of course I am missing a whole load of well-known science failures which you will remind me about in the comments.
[Edit: Wow, people are really bad at following criteria 3 and 4, even going so far as to post the exact examples I said not to. Don’t let that be you.]
There seem to be cases where almost everyone agrees there were broad, politicized expert failures, even when they disagree about what those were. For an instance near and dear to us commenters, consider predominant scientific views on race. Constructionists pretty much all agree that when realism was dominant it was a false consensus propped up for political reasons, and realists all agree that constructionism is a false doctrine propped up for political reasons. So we should probably count it.
Two of those are from a field you’re particularly familiar with, and a third is extremely well publicized, probably because it was done by an Official Enemy for Official Enemy Reasons.
Fellow commenter Doug points to another failure by Soviet science that has less to do with Official Enemy Reasons and thus may be more applicable; there are some other possible ideas in the thread.
That is an excellent point. I was trying to avoid bringing it up, so that we don’t get another thread with 250 comments on race and 5 having anything to do with the post, but your formulation neatly cuts through the controversy.
It seems like there is good reason to somewhat relax point 4 of the criteria. But not to allow pet conspiracy theories as you mention.
We should count instances if it’s absolutely clear there are or have been competing camps of experts who claim that one side or another has to be wrong – even if we can’t resolve it now it will still turn out one group was wrong in the end.
Not really. After all, if the 97% number is correct, there aren’t really that many experts who think global warming is wrong. It’s mostly amateurs outside the field. Fields where all the experts were wrong and the educated contrarian amateurs correct seem much rarer than fields where one of two camps of experts were wrong.
I’m confused, as this dismissal seems to be on arbitrary grounds. Why would you count the point Oligopsony made about the race realism issue then? There is just about an analogous “97%” number in that field right now, which would mean you should dismiss the dissenters there.
At one point in time the contrarian view in any field would have been held by a small percentage of experts. If we were at a particular point in time in the past a lot of other contrarian views would have been dismissed.
This reasoning seems to be effectively saying we can dismiss something because of the snapshot in time we are in, or dismiss it permanently; even if ten years from now the experts split 50/50 on global warming, that’s a different point in time.
(I do not think the AGW skeptic position is correct, but this isn’t a good argument against it. At one point in time a given, say, quantum mechanics issue had a 97/3 split or worse against the contrarians)
Groups of experts that are discursively entangled with one another are different than ones that aren’t, here – normal, healthy science is full of competing research programs that call each other quacks, and hopefully in the long run truth plays a more than trivial role in determining which live and die. It’s when researchers move on from such contention – in the wrong way – that it becomes concerning.
(Randall Collins’ The Sociology of Philosophies is a super-interesting analysis of institutional and competitive dynamics in the history of philosophy. Philosophy is obviously different than modern science in a lot of ways, but a possibly relevant theme is that intellectually creative periods almost always involve three to six major paradigms.)
Right, but then I don’t get this line of commenting, nor the responses to others on the topic. Not trying to be the one bringing it up or arguing about the merits of the actual topic itself, but:
That doesn’t account for why you or Scott would actually consider the race realism example something that should count.
If the requirement is that large group of amateurs had to push the correct theory on their own, it’s out, and if it’s that the correct answer has not been realized one way or another yet today, it’s also out.
I think the intended consideration is that The Other Hypothesis be something researchers are aware of the existence of – something that was/is true during both the reign of realism and that of constructionism.
The thing about the race issue is that everyone was a realist up until it became necessary to be a constructivist or lose your job, but realism is reasserting itself in the shadowy parts of cyberspace and now in a few popular books.
So at least one of constructivism and realism is wrong, and both were positions that everyone held at one point.
That’s how point 4 gets relaxed.
I don’t think race is the only issue that is currently unresolved and had virtually unanimous views that can’t both be right at different times.
@peppermint:
I have 60% confidence that by “everyone” there, you didn’t intentionally mean to imply “including black people”, you just forgot that they count as “everyone”, which is a highly predictable consequence of racism! Because I’m very sure that black people used to be at least far less “Realist” than the general population.
Do you have evidence for your claim, Multi? My anecdotal impression is that blacks are no less race realist than whites, and I think they were probably more so pre-60s. Also, in my own experience, Africans, who have been less affected by post-1960s Western political correctness, tend to be much more open talking about innate racial differences than white Americans and Europeans.
I haven’t gotten this impression from the autobiography of Malcolm X; he describes various “black shame”/imitating whiteness stuff with contempt, but a suggestion by one of his peers that white people are inherently intellectually superior would’ve certainly provoked an even angrier rant from him. He personally aspired to become a lawyer in high school, and describes how a white teacher laughing and dismissing his ambition was one of the many turning points in his life.
I’m certainly not suggesting that all blacks, either now or in the past, are race realists. (Nor is a black person aspiring to become a lawyer in any tension with race realism, but that’s a separate matter.) Most Western intelligentsia, of any race, are social constructionists. Lower class Westerners, being less beholden to political correctness and (because less intelligent) less good at the self-deception involved in race denialism, are more likely to be race realists; and again I think this holds true of all races. (Lower class Americans, again of any race, also probably tend to have more exposure to the black underclass in this country.) So I’m thinking not so much of Malcolm X or other black political leaders as the African-American man down the street who comes by with his lawnmower asking to mow my lawn. I think he’s roughly as likely to be a race realist as the poor white family next door, which is to say, pretty likely.
Look at this bait and switch!
Me: everyone was a realist before… …but realism is reasserting itself thanks to genetic evidence (Cochran&Harpending, Wade)
Multiheaded: hurr durr statistical terminology for non-statistical ideas derpy derp Malcolm X didn’t think he was inferior thus not everyone was a realist
Multiheaded switched from race realism being the acknowledgement that race exists biologically to race realism meaning Blacks are inferior to Whites. Both are true, but one is the theory called race realism, and the other is just a single proven statistical fact with a hundred-year pedigree under withering criticism.
@Troy: pay attention! I was talking about Malcolm X’s observations on the mindset of the varied black communities that he lived amongst, not about his object-level personal beliefs – except insofar as his black nationalism would’ve made him more likely to attack his peers for openly subscribing to black inferiority. He certainly criticized himself and others for accepting white supremacy in lesser and implicit ways; he wouldn’t have remained silent if someone openly professed it! In fact, another common trend in his writing is how many middle-class black people used to disavow their direct and visible social inequality; this feels like the opposite of acknowledging any inherent inequality.
@Peppermint: good to see how my comments can be blunt, rude and deliberately provocative, and still a reactionary would beat me to the bottom.
@Multi: Is this a fair reconstruction of your argument?
(1) If large numbers of people in the black communities that Malcolm X lived in were race realists, Malcolm X would have mentioned it (in order to denounce it).
(2) Malcolm X did not do this.
(3) Therefore, it is not the case that large numbers of people in the black communities that Malcolm X lived in were race realists.
@Troy
Exactly. More or less.
P.S. I encourage anyone remotely interested to get that book, it’s a hell of a read.
Okay, I just wanted to make sure I understood you correctly before responding.
I’m not persuaded by this argument. It’s an example of an “argument from silence”: if such-and-such were the case, X would have said so, but X didn’t say so, therefore such-and-such isn’t/wasn’t the case. However, we’re often very bad at predicting a priori what authors would write about. Most historians recognize this and consider arguments from silence weak. The Wikipedia article on this, although not outstanding, is adequate: http://en.wikipedia.org/wiki/Arguments_from_silence. Here are two examples from that article where arguments from silence would lead us astray: Marco Polo never mentions the Great Wall of China, and Pliny the Younger never mentions the destruction of Pompeii in his discussion of the eruption of Vesuvius.
I am at a disadvantage in the case at hand because I haven’t read Malcolm X. But from what you’ve said I don’t think that his not mentioning race realism (or belief in racial differences in intelligence/ability, which seems to be more your focus) provides much evidence that few people in his communities held these beliefs. It would be different, of course, if Malcolm X explicitly said that his fellows didn’t hold these beliefs (although even then this would still be anecdotal).
You offer a similar indirect argument from (3):
In fact, another common trend in his writing is how many middle-class black people used to disavow their direct and visible social inequality; this feels like the opposite of acknowledging any inherent inequality.
I don’t find this persuasive either, although partly I’m unclear at what’s meant here. But it seems that one thing that I might mean in disavowing my own “direct and visible social inequality” is that I am treated equally under the law (the “conservative” notion of equality). This is completely orthogonal to the existence of biological differences in socially valuable traits. Another thing I might mean is that I am just as intelligent, capable, wealthy, or what-have-you as others. But this is also independent of the average capability/wealth/what-have-you of others in my group. So I don’t see how any reading of the above shows that Malcolm X’s peers didn’t believe in race realism.
You think black people didn’t divide people into racial and sub-racial groups that they believed had innate differences beyond the cosmetic? I would be mildly interested to see evidence either way, such as transcripts of what tribes said upon first contact with Europeans, or propaganda in tribal conflicts calling the other side devils (or just like us but worthy of killing for other reasons, etc.)
lightskin vs. darkskin is a thing
edit: http://en.wikipedia.org/wiki/Paper_bag_party
I’m amazed anyone can think they can call the shots about race realism without knowing which definition of race is in question. Should anyone be a realist about Latino?
Not sure what you mean Peter. Different people and different people groups will of course categorize races slightly differently, although on the whole there’s substantial inter-group agreement. Genetic research can help with the question of which categorization is “best.” As for your example, Latinos are a heterogeneous population formed by the mixing of three different races. One doesn’t need to be a geneticist to know this. So no, I don’t think people should think of Latino as a race. (The U.S. Census, for example, does not define Latino as a racial category.)
I’m saying that realist versus constructivist isn’t an either/or choice. Since there are de facto categories that don’t make genetic sense, a rational person would have to be constructivist about them, even if they were realist about other categories.
After all, liberalism was founded by playing bait-and-switch with terminology and making brazenly false scientific statements while posing them as statistical.
http://radishmag.wordpress.com/2013/11/08/democracy-and-the-intellectuals/#wise-locke
Locke: There can be no injury, where there is no property. (since injury means violation of a right, and rights are property)
Multiheaded: Blacks are less likely to be race realists, because they don’t see themselves as inferior. (where race realism means the belief that Blacks are inferior; despite its clear and different definition in the thread)
I don’t think a philosophical controversy about the interpretation of a set of theories and terms necessarily counts as a scientific disagreement. I mean, what prediction does one side make and the other deny? With global warming the predictions are obvious. With race realism/constructivism, not so much.
Just throwing ideas out there, to see if anything sticks:
– Plate tectonics.
– Interpretations of quantum mechanics have varied quite a bit.
– The cosmological constant differed from predictions by quite a bit.
– Creationist “science” seems like an analogous case to Lysenkoism.
– Homeopathy or other pseudo-sciences might meet enough of your criteria.
– Catching software bugs later is more expensive, or at least the idea that it’s been conclusively studied http://lesswrong.com/lw/9sv/diseased_disciplines_the_strange_case_of_the/
That’s all I can think of.
Plate tectonics is a definite yes, but homeopathy and ‘creation science’ are so marginal in medicine and biology as to not be worth talking about.
If you really want an evolutionary one though group selectionism seems like a good candidate.
Edit: Oops, dumb mistake alert.
I thought kin selection is mostly true. Are you sure you’re not talking about group selectionism?
Thanks for the catch.
What about plate tectonics has been disproven? My knowledge of geology is rudimentary but I didn’t know it was out of date.
Plate tectonics was dismissed for a (relatively) long time (so the geological theories preceding it, regarding the formation of mountains etc., were wrong).
Makes sense, I was confused by the phrasing of [false theory], [false theory], [true theory].
Interpretations of QM have not varied among the experts in the field (n.b. not the idiots who write articles for public consumption).
Which ends up being very similar to the AGW thing, in that the experts in the field believe one thing, and a portion of the rabble vociferously argues something else. However, the august journalists of science who stand between the experts and the rabble throwing pearls before swine are on the side of the AGW-credulists and on the side of the Copenhagen-denialists, so there is some asymmetry – which is important, because the opinions of the average man on issues that do not affect his own life can only come from propaganda and/or totally non-propagandistic professional journalism.
The interpretation of quantum mechanics is a great example of how history is rewritten. The meaning of “the Copenhagen interpretation” has changed every generation.
In particular, everyone pro or con (except Bohr) who was in Copenhagen in 1925 agrees that Bohr said that consciousness causes collapse. Bohr explicitly denies this position in the 1927 debate with Einstein, long before “idiot popularizers” like von Neumann and Wigner.
A great deal of scientific racism falls into this category pretty clearly:
http://en.wikipedia.org/wiki/Scientific_racism
Okay, speaking as an outspoken race realist here;
Can we all agree not to blow up the comments about this? Please?
Oligopsony’s compromise position is actually pretty elegant and the main question here is much more interesting than our perennial race skirmishes.
No, there isn’t really a middle position here. “Race realist” (ie: scientific racist) claims have an extremely low prior, in fact, an extra low prior, due to having been investigated and found false so many times before.
Absolutely! Us Slavs (especially Polish immigrants), the inhuman Yellow Peril, the Irish…
In particular, Oligopsony once mentioned that Jews used to score lower on IQ tests than US Whites a hundred years ago; anyone has any citations for that?
Edit: googling suggests that Jewish immigrants were simply given English-heavy tests; still relevant. The white-supremacist society has gotten IQ and its significance wrong so many times before, why don’t today’s scientific racists apply the Outside View to their endeavours?
To be clear, I didn’t mean to propose a “compromise” in the sense of substantively staking out a middle ground, but in cutting through object-level arguments about race that we’ve had before and will have again. Like Armstrong says, Scott’s meta-question is more interesting.
Well, at least now no-one can say I didn’t try to prevent this.
As to “being investigated and found false”;
The old ‘scientific racist’ model of human migration patterns and phylogeny, replaced in the 60’s with postmodern anthropology and its “pots not people” approach, has been completely vindicated by population genetics. It’s gotten to the point where race can be predicted on DNA alone with over 95% accuracy, and tests to quantify the exact admixture of different ethnicities are commonplace. We’re even beginning to rediscover some really verboten stuff, like finding a degree of European admixture in higher caste Indians which suggests late 19th / early 20th century theories about the Indo-European invasions might have been right.
Despite Gould slandering it as phrenology, MRI scans have provided incontrovertible proof that the old estimates of brain volume were correct pretty much down to the gram. And not only have 50+ years of effort not been able to close the measured IQ gaps, but the degree of intelligence’s heritability and its correlation with brain volume/structure have increased.
Modern medicine has been a phenomenally effective refutation of the idea that race is “skin deep” by showing profound physiological differences not linked to appearance. Different hormone levels and rates of maturation, different drug reactions, even different progressions of disease. Race-based medicine is saving lives in ways which shouldn’t be possible if we’re all the same under the hood.
The only sense in which it has been “found wrong” is that being too explicit about what you are measuring can lose you your job, which while certainly a compelling argument doesn’t have much evidentiary weight.
Armstrong for President 2020 was saying, “Hey, can we not talk about this because it will likely cause a flame war.” And instead of arguing that it was unlikely to cause a flame war, or that flame wars are sometimes acceptable, you just decided to start a flame war.
Armstrong, can I have a citation for the MRI / brain volume / intelligence thing you’re talking about? And its relation to race?
I would also love a link to that MRI thing.
If you claim that your position is based on empirical evidence, then that’s not a *prior*, that’s a *posterior*.
Scientists in this field will refer to “ancestry” (which can get complex), not “race”.
Specific genetic features which can be more common in certain ancestry groups can – sometimes – affect medication. Check the genotype, not the “race”. Race is an approximate, squishy, changeably-defined word that needs to be abandoned for scientific discussions.
Don’t scientists in the field refer to ancestry instead of race because the slur “ancestryist” hasn’t been coined? (related: http://isteve.blogspot.com/2014/05/the-race-faq.html )
Yeah, “sexism” and even “ageism” are forcing scientists to abandon perfectly normal terms, it’s getting pretty silly. “Gender chromosomes” instead of “sex chromosomes”! People now report “temporal experience” instead of “age”?! Political correctness run amok!
Just joking. They do not do those things. Researchers use the word ancestry because it’s more accurate.
Race = ancestry + social construct. If you’re going to study social constructs, go ahead and say “race”, but otherwise you need to get away from that cruft. See also: http://www.theatlantic.com/national/archive/2013/05/the-social-construction-of-race/275974/
Racist is a more potent smear than sexist or ageist.
I think what you and Coates are calling “social construct” is a mix of “shortcuts for ease of use.” That is, some ancestral features are more evident than others, so when attempting to classify someone without having a full genetic analysis in front of you, some aspects might be overlooked.
Kind of like how different people might disagree on whether a 69F degree day is cool or comfortable or warm, but you’re never going to keep everyone without a thermometer from opining about the weather.
Race has aspects that are shortcuts for ease of use but it is not just a shortcut to refer to ancestry.
From 1923-1956 geneticists thought humans had 24 pairs of chromosomes because Dr Theophilus Painter miscounted them and every geneticist from that point on evidently trusted his reputation over their lying eyes. That was a pretty damn embarrassing one.
Anthropology has only recently started to throw off the postmodern infestation it picked up in the Sixties. That’s a pretty big deal considering ethnographies basically disappeared for three or four decades and physical anthropology was dead until DNA testing emerged as a way to identify ancestry without having to learn racist facts about differences in bone structure. Most of the hardcore PoMos have been forced out into sociology but a few prominent anthropology departments are still captured.
Millikan’s 1913 oil drop experiment mismeasured the charge of the electron by a fairly large degree, and the reluctance of physicists to correct the value by much more than a small amount over subsequent decades (as described by Feynman; I couldn’t find the data on accepted values in different years) isn’t quite as central as either of the above, but is still a decent example of a field taking decades to correct a simple mistake due to the prestige of its originator.
I think the main takeaway is that politics is a secondary issue; this is a more primal issue of fear. I’m not sure it makes such a big difference if it’s fear of being arrested by the MGB, fear of contradicting a distinguished senior researcher, fear of looking like you’re in the Ahnenerbe because you use calipers, or fear of not looking cool to all your Sartre-reading psychiatrist buddies.
None of these seem to satisfy point 3, unless there was some large group of amateurs debating the value of the electron and being dismissed and ignored by the establishment.
But the chromosome story is fascinating and I hadn’t heard it before! Thanks!
Well for the chromosomes one there were pictures in textbooks showing clear images of nuclei with 23 pairs of chromosomes… captioned saying there were 24. I’m not sure “grade school arithmetic” counts as an alternate theory but it really ought to.
For the postmodern anthropology one though there was absolutely an existing theory; all previous and subsequent anthropology. This one is really just not debatable.
You’re right that the oil drop thing probably doesn’t fit, although I think it helps illustrate the point that scientists are fallible mortals often afraid to rock the boat too much.
But would we need to satisfy point 3 to be potentially massively overstating the case for anthropogenic climate change? A Millikan-style decline could still be having a huge effect on our confidence levels, if it’s present.
My favorite example in the same set of mistakes (things that take decades to notice that shouldn’t have) is the Hayflick limit — everyone thought human cells were immortal and would divide forever (and when they died, it must be due to experimenter mistake), until Hayflick kept impeccably good notes and discovered they always died around 50 mitosis cycles.
Since evidence for anthropogenic climate change has accrued progressively for the last few decades, with levels of acceptance among experts generally going up rather than down, we can probably discount a Millikan-style failure.
Millikan was accurate to 1%. He just overstated his precision. He could have done better, but it’s hardly a giant failure.
From what I understand, admittedly little when it comes to physics, his figure was off by 1% but more than five times his standard error.
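For what it’s worth, the arithmetic behind both of these claims is easy to reproduce. A quick sketch (the 1913 figure and its claimed standard error are approximate, taken from standard retellings of the episode; the modern value is the SI-defined one):

```python
# Millikan's 1913 electron-charge result vs. the modern value,
# both in units of 10^-19 coulombs. Historical numbers approximate.
millikan_e, millikan_err = 1.5924, 0.0017  # published value, claimed std. error
modern_e = 1.602176634                     # modern SI-defined value

relative_error = (modern_e - millikan_e) / modern_e         # fractional error
sigma_discrepancy = (modern_e - millikan_e) / millikan_err  # in claimed std. errors

print(f"off by {relative_error:.1%}, i.e. {sigma_discrepancy:.1f} claimed standard errors")
# → off by 0.6%, i.e. 5.8 claimed standard errors
```

So both commenters can be right at once: accurate to within 1%, yet far outside the stated error bars.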
Not to mention that there is evidently some suspicion he manipulated his data to get that result in the first place.
If you manipulate the data to get the right answer, you still win. It just means you should have written down how you manipulated the data, because apparently it’s a valid technique.
David Goodstein (PDF) makes a good case that Millikan is largely innocent of manipulating data.
The issue is that the oil drop equipment was very sensitive to the size of the drops produced and atmospheric conditions, so that it was definitely necessary to discard some of the measurements as flawed. Millikan used his intuition to decide which data was flawed in this manner. By modern standards, he should have had an objective criterion set up in advance, but reanalyzing the full data set (according to Goodstein citing Allan Franklin) doesn’t show a bias towards a particular outcome.
The place where Millikan is usually blamed for dishonesty is that Millikan’s paper contains the sentence “It is to be remarked, too, that this is not a selected group of drops, but represents all the drops experimented upon during 60 consecutive days, during which time the apparatus was taken down several times and set up anew.” Goodstein argues that this sentence in context was meant to say not that no drops were discarded at any point, but that a certain computation involving air resistance had been performed with all drops that had been judged worth including, not a subset (a more notable point when computations are done by hand).
How about the opposite side? E.g. the many contemporary rationalizations for 19th century bourgeois views on masturbation or homosexuality.
Stuff like eating cereal preventing masturbation or the female orgasm not existing? Sure, yeah, I’d put that in the same camp.
As I said, fear is more primal than any political division. Conservatives, liberals, the innumerate; any group can potentially intimidate a field in the right circumstances.
I’m not sure these satisfy the recency or primacy requirements. John Harvey Kellogg, at least, was making those sorts of claims about the “benefits” of cereal within the 20th century, and they weren’t immediately discarded by the public as bullshit, but I’m not aware of them having had traction in the scientific community. As for the female orgasm not existing, I can’t pin down any kind of consensus on what medical professionals generally believed on the subject in the earlier half of the 20th century, although the existence of medical vibrators seems to suggest some kind of tacit understanding.
The shrew chromosome reference immediately put me in mind of the human chromosome thing. Maybe it doesn’t fit the criteria, but it sure seems like an important commentary on the effectiveness of our scientific mechanisms. Here is something that would have been taught to basically everybody learning about that field. The raw materials to replicate the results were also available to basically everyone. No fancy math beyond the most basic counting was involved.
And in spite of having many eyes on the problem, easy replication, and requiring basically no expertise, we still didn’t catch the error for 30 years. Why should we be confident about the hard problems?
That somehow reminded me of the tongue map thing.
Um, that article makes some… strangely-worded claims (the what of all tastes? “qualia”? how are these “qualia” operationalized?), and the papers allegedly containing the research behind the core claims (citations 3, 6, 7) are not available. I’m a bit suspicious.
We need a way to refer to the subjective sensation of salty, or green, or warmth, rather than to the objective, exists-outside-the-brain physical phenomena that cause these sensations. Someone (a philosopher, I think) came up with the term “qualia.” It’s become a pretty standard term.
The context I came across the word in first was colours. Colours are subjective experiences produced in response to certain frequencies of light hitting the retina – thus, a frequency of light is the physical phenomenon, the colour is the quale (though I expect we’ll be fighting a losing battle getting anyone to recognise the singular form of the word).
It’s like that old thing–“how do you know the red I see is the same as the red you see? What if, like, your red was my yellow?” The “red you see” would be the quale (pl. qualia).
Folks, I’m not asking for an explanation of the term “qualia”. I’ve written papers on qualia (the claim that “Colours are subjective experiences”, in particular, is so painful that I am almost tempted to start a whole sub-thread about it, but that would be a bad idea).
My point is that its usage is bizarre in this context. Unless “qualia” is operationalized somehow in tongue-map-related studies, the claim being made about the alleged “qualia” is very strange, and that makes me suspect that the Wikipedian(s) editing that article got rather liberal with their interpretation of the quoted (but strangely unavailable) studies.
Learn to use the internet archive. [3] [6] [7].
Last time I looked into this, my conclusion was that it was a matter of people saying: how dare laymen think they can understand our field.
What is politics but planning to use force and threat of force to get what you want?
All politics is force, but not all force is politics, and not all fear is force.
The discussion of multiple interests in order to negotiate the least-bad way to satisfy them all? I.e., when the US Constitution was being written, small-state and large-state delegates did not whip out pistols and shoot each other until only one side was left; they put both equal and proportional representation into the Constitution. Likewise for slave states and free states, though their three-fifths compromise was mooted by the Thirteenth Amendment.
OTOH, you can’t get anywhere without at least some force and threat of force to enforce the terms of whatever agreement is eventually reached.
On pure methodological grounds, if you’re going to get anything useful out of all the things people are saying and add some things up at the end, you should really look at the confidence interval on your most arbitrary estimate.
The claim of “50 scientific fields, 5 paradigms each” just seems like a gut feeling guess out of the blue.
That has a huge amount of uncertainty to it.
https://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/
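To make the complaint concrete, here’s a rough Monte Carlo sketch. The spread I put on each guess is entirely made up for illustration; the point is only that an uncertain denominator makes the implied failure rate swing enormously:

```python
import random

random.seed(0)

failures = 12  # "a dozen examples" is roughly the thread's tally

# Hypothetical uncertainty around the post's gut-feeling denominator
# ("50 scientific fields, 5 paradigms each"); these ranges are invented.
samples = []
for _ in range(100_000):
    fields = random.randint(20, 100)    # plausible count of fields
    paradigms = random.randint(2, 10)   # plausible paradigms per field
    samples.append(failures / (fields * paradigms))

samples.sort()
low, high = samples[2_500], samples[97_500]  # central 95% of simulated rates
print(f"implied failure rate: {low:.1%} to {high:.1%}")
```

Even with mild uncertainty in the two guessed factors, the implied rate of field-wide failure spans close to an order of magnitude, which is the commenter’s point.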
If we can’t come up with more than a dozen examples of science failure, I think it’s fair to say there were enough opportunities for science failure that these are very rare, much rarer than would be necessary for the anti-AGW case to merit serious attention.
The reference class for AGW being held to suspicion by nonexperts isn’t all scientific theories, but all scientific theories where loud experts or maybe-experts are suspicious.
Offhand, all the examples of that I can think of are politicized. This may be because (1) that’s what’s interesting to me, so I become aware of it, (2) that’s what gets contested, and (3) political factions and passions get recruited by contenders when one side is in command of official institutions and the other is not (or just generally), or some combination thereof.
(An interesting subhypothesis of (2) is that politicization is actually good for scientific questions, because otherwise no one would have the zeal to question the orthodoxy. Like in the chromosome example above, if libertarians were somehow offended by there being 24 chromosomes they would have totally contested it, as opposed to just going along because who really cares.)
FWIW, talking about the “anti-global-warming cause” as being synonymous with “manmade CO2 has no effect on climate” is pretty much the exact same fallacy you rightly criticize feminists for using. Breaking it down:
Obviously Reasonable Anti-Global-Warming Beliefs:
Attempts to fix manmade climate change should not destroy our current civilization, revert humanity to a medieval technology level, kill hundreds of millions of people, or otherwise do anything Really Bad (TM).
Currently Controversial Anti-Global-Warming Beliefs:
The problem of manmade climate change has been greatly exaggerated by scientists and by the media, and all or most political efforts to fix the problem would have costs exceeding their benefits.
Obviously Unreasonable Anti-Global-Warming Beliefs:
Manmade CO2 and other human effects have zero or negligible impact on global climate, both now and for the foreseeable future.
One can dispute exactly how reasonable specific things are, but there is obviously a spectrum here, the main controversy is mostly about stuff somewhere in the middle, and trying to prove one’s case by proving the Obviously Reasonable Beliefs is fallacious, as you yourself pointed out.
I am pretty sure > 50% of people who would self-identify as anti-AGW would believe what you call the Obviously Unreasonable position, which is a pretty big difference between that and the feminism example.
I agree there is a much more defensible position, but I don’t see many people using it.
If I had been trying to undermine anti-AGW rhetoric, I agree this would be a crappy way to do it. But I think quantifying amount of science failure is a really interesting topic for lots of reasons, and the prevailing school of anti-AGW thought makes for a good frame story.
Doing some quick Googling, the polling is just bizarrely inconsistent on this. According to this, 88% of Americans think the US should work to reduce global warming, even if this has economic costs. On the other hand, this says that only 49% of Americans think humans are causing global warming at all. On the other other hand, the main Wikipedia article on the topic says that people asked whether “human activity”, “natural activity” or “both” was causing climate change were excluded from the numbers if they answered “both”, which according to this means they’re excluding some 41% of UK citizens (compared to only 9% who say climate change is entirely naturally caused).
I’d like to believe that about half of people believe humans are mostly responsible, about half believe humans are partly responsible, and almost nobody believes humans have no effect in defiance of basic physics, like the UK poll says. But the polling data I’ve seen is too inconsistent to conclude this with reasonable confidence.
I expect polling data in the US to be different from elsewhere because the anti-AGW position is a conservative tribal signifier in the US in a way that it isn’t even in other anglophone countries.
Many people asked about the AGW in the US probably aren’t really answering the question, but substituting, “which tribe do you belong to?”
What Matthew said.
James Donald has repeatedly said that the lukewarmer position is reasonable but he will brook no compromise with the warmists who are watermelons and want to kill and destroy.
I replied that the lukewarmer position should be adhered to because it is true.
Oh well.
I shouldn’t ask, I shouldn’t ask, I shouldn’t ask…
What’s a watermelon?
Edit: I have the idea of green on the outside, red on the inside, therefore communists, therefore evil?
Yes, basically people using environmental causes to get greater state power.
I’m an AGW skeptic and so far as I can tell, nearly everybody who calls themselves a skeptic and nearly everybody who has been called a “denialist” in the popular press is in that “97%”, not the “3%” (or whatever the relevant number is). The claim that people who doubt the severity of global warming are in the “3%” side is something that is merely asserted propagandistically – there’s no evidence for it.
The original definition of “the consensus” was roughly: (1) CO2 is a greenhouse gas, (2) world temperature increased a bit in the last half of the last century, (3) human activity has been responsible for a “significant” proportion of recent warming.
Whether you take an opinion poll or look at papers you can find a pretty high level of agreement on THAT sort of consensus, but it’s because those points (especially that last one) are vague and innocuous enough that you can believe all of it and still not think it’s *a problem* or think we know the exact *amount* or think this problem is *worth doing anything about*. Disagreeing with any of these out-of-frame contentions doesn’t put one in the “3%” but still does get one labeled a “denialist” by partisans.
I think you’ll be hard-pressed to find many climate skeptics who deny there has been some measured warming between, say, 1950 and 2000, who deny that humans have caused some measurable amount of past/recent warming (merely paving roads and planting crops could accomplish that, even setting aside CO2) or who deny that CO2 is a greenhouse gas. Certainly none of the high-profile ones fit that description. Can you name any?
Maybe we have different data sources, but I come across “lukewarmers” regularly and so far as I know I have yet to meet an actual “denialist”. It’s a strawman.
I can think of a massive field-wide failure in recent western history driven by leftist political bias: Sovietology. As many of you know, academic economists (overwhelmingly liberal) routinely overestimated Soviet GDP and made excessively optimistic forecasts of the Soviet economy that are hilarious in retrospect. See some commentary here: http://econlog.econlib.org/archives/2009/12/why_were_americ.html
Now, about global warming… I think we have reason to be extra suspicious of environmental science in particular. Remember the ozone layer? And how, back in the 90’s, its disappearance was claimed to be the WORST DISASTER IN HUMAN HISTORY and we were all going to be barbecued alive by the sun’s death rays by 2050 or something?
Well, assume all that was true. How come we don’t hear anything about that impending doom any more? Presumably because environmental science found something sexier to research, something more alarming and better at getting grant money and political patronage: global warming (now called “climate change”, since we wouldn’t want to stick our necks out and make a specific prediction, now, would we?). Well, if they’re willing to neglect a known crisis to research something sexier, the institution as a whole is obviously untrustworthy (even if its individual researchers are honest), and we should assume the noise it makes is really just an expression of pure political avarice.
Now, consider the case in which that ozone business was all balderdash. In that case, we have a confirmed case of environmental science manufacturing a crisis for funding and prestige, so we ought to give them little credence in the future.
And if you say that the crisis has passed because some country passed some new environmental regulations or something… while my relatively young brain remembers little of the 90’s, I certainly recall scientists, activists, and politicians saying that it would take a Herculean commitment to reverse the damage, and that the very survival of humanity depended on our efforts. And I got the impression that we were supposed to think there was no way in Hell that all would be well by 2014. I’m also aware that anti-AGW activism predates the 90’s, but I don’t remember it being nearly as popular.
I think we don’t hear about the ozone hole anymore because everyone met in Montreal and signed an agreement banning the offending chemicals, after which the ozone hole started to shrink and is now in the process of disappearing entirely.
Do you mean the treaty that was signed in 1987? http://en.wikipedia.org/wiki/Montreal_Protocol
[Edit:] And it looks like some serious amendments came in the early 90’s. Still…
If this treaty was adequate, why were the 90’s the heyday of ozone alarmism? [Further edits:] And if the treaty was such a success, and catastrophe was really averted, why aren’t the Democrats screaming “LOOK AT US, WE SAVED THE WORLD!”? I mean, they do it for Social Security, Medicaid, Medicare, the Civil Rights Act of 1964, killing Bin Laden… you’d think if the left really saved the world, someone would, you know, claim credit for it.
I’m not sure it’s the best measure, but google ngrams has ozone mentions peaking in 1993, and declining by more than half since then:
https://books.google.com/ngrams/graph?content=ozone&year_start=1970&year_end=2008&corpus=15&smoothing=1&share=&direct_url=t1%3B%2Cozone%3B%2Cc0
This doesn’t seem like all that much of a mystery to me. The problem was largely solved, and with a minor delay began to fade from the public consciousness. What else should have happened?
As for your point about Democrats not claiming credit, I wish they would! Their pessimism bias seems to prevent them from recognizing their own successes, like the fact that rivers in the United States no longer catch on fire.
why aren’t the Democrats screaming “LOOK AT US, WE SAVED THE WORLD!”?
Because Republicans would counter with “Reagan did it!”
http://www.nytimes.com/2012/11/11/opinion/sunday/climate-change-lessons-from-ronald-reagan.html
I get the sense that the American Left — can’t speak for the rest of the world — has historically had something of an uneasy relationship with environmentalism. Oh, sure, the Democrats will get into bed with them when they offer a chance to thumb noses at Big Oil or Big Pharma or some other shibboleth starting with “Big”, but fundamentally it doesn’t play well to the core constituency of trade unions and urban minorities, functioning more as a sop to rich, guilty whites and a plausible way of claiming reality’s on their side. Agenda-setters don’t really give a shit. The Greens, meanwhile, aren’t straight leftists so much as some kind of bizarre Frankensteinian populo-socialist monstrosity that really likes whales.
To Chris and James Miller: Thank you; I’m *somewhat* less confident in my claim than I was an hour ago.
The fact that they aren’t screaming about it doesn’t really tell you anything, since it’s hard to imagine politicians checking what it was that caused an improvement, and then boasting iff it was their interventions.
Odd how resistant people can be to the idea that a problem can be fixed. 14 years ago I got into an argument with someone who was insistent that there had never been a Y2K bug, because no planes had fallen out of the sky. Having been involved in a project in which I found and fixed a Y2K bug, I begged to differ.
How often do academic fields, filled with some pretty smart people, go absolutely nuts?
Rousseauian noble-savage stuff in sociology
Blank-slatism
Marxist Economics
Post-Structuralism
Critical Theory
Most strands of contemporary art criticism
Most strands of contemporary literary criticism
Deconstructionism
Hegelian Studies
That lots of people took Jacques Derrida or Judith Butler seriously should terrify us.
Most of these are the humanities, not the sciences.
Yes. I think the reference class “sciences” is too small to have enough data points. I think we need to use the reference class “academic fields” or “large groups of smart people.”
Uh, question. Have you actually read Derrida and Butler and put a lot of effort into understanding them (I haven’t read Derrida, but Butler is a shit writer and so you actually have to put a lot of work in to figure out what she’s saying)? Can you coherently explain what they believe and why they believe it? How confident are you that you are not strawmanning them? Can you distinguish “they are obviously full of shit” from “the humanities acquired a fad for incoherent writing, because France, but are trying to explain various subtle and complex points that are actually interesting and potentially important”? Or– as I suspect– are you taking “my thirdhand evidence says these fields are obviously dumb, therefore they must be dumb and entirely diseased disciplines” as evidence?
I’m actually fairly sympathetic to the humanities — I get the feeling I’m far more of a lit nerd than the average LW or SSC commenter — but the thing about incoherent writing is that it’s pretty easy to read meaning into it. I haven’t personally read Derrida (though I have read some other continental philosophers), but the commentary on him that I’ve read is varied enough that I’m not at all sure how much is pure eisegesis.
My only exposure to Derrida and his fanboys came through an absolutely terrible Minor Eng Lit Professor in British Redbrick University’s book on “The Uncanny”.
He kept going on about Derrida and how he was a Derridist, and he was so dreadful that I was about ready to burn Derrida at the stake. Then he gave an actual quotation from Derrida, rather than his notion of what Derrida’s ideas were, and the difference was astounding.
You could tell there was genuine thought going on, there was a real idea and meaning being worked out. I didn’t necessarily get what he meant or would have agreed with it, but I had to respect that this was indeed an intellect at work.
The fanboys, though, were utterly hopeless.
As to climate change and science failure, what if it’s not so much a case of failure as of not knowing everything that’s going on? As this very recent news story about Pseudoalteromonas and flame-retardant chemicals demonstrates?
How are flame-retardant chemicals getting into the ocean? Well, it must be down to Evil Industry because they can’t be produced naturally!
Whoops, turns out there is an organism that does produce these chemicals in nature!
I don’t think it can be reasonably held that human acts have not contributed to climate change, but some of the tub-thumping goes too far: the insistence that it is ALL solely down to humans, and that unless we do (insert favoured action of the moment) IMMEDIATELY, in five/ten/twenty years we will all burn/freeze/suffocate to death!
I’m a tiny bit sceptical because I’m old enough to remember the doom-mongering about how, by the far-flung date of 1980 at the most, we would all be shivering in a new ice age due to human meddling with the climate.
If there were any actual content in Derrida or Butler, somebody would have distilled it into a clear, comprehensible format. The fact that this has not happened, in combination with thirdhand evidence, should be sufficient for any rational person to put a very low prior on Derrida being worth their time.
If there were any actual content in “general relativity,” somebody would have distilled it into a clear, comprehensible format. I can’t understand it, and—it happens—I even have an MIT math degree, so obviously it’s complete nonsense.
The most comprehensible thing I’ve found is Wesley Phoa’s Should Computer Scientists Read Derrida?.
This seems as apt a time as any to praise you as someone who is especially good at translating things from LW-type-unfriendly to LW-type friendly idiomata.
You people are going to make me read Derrida, aren’t you. If I can’t find something that doesn’t suck, I’ll read Derrida.
Anyway, the standard resources:
http://www.iep.utm.edu/derrida/
http://plato.stanford.edu/entries/derrida/
Stanford consistently sucks, but it was the standard when I was in college — I don’t think anyone even mentioned the IEP.
Oh, also, a funny and pretty accurate explanation of deconstruction for computer geeks is How To Deconstruct Almost Anything.
When I was learning about this, in the late Ordovician or thereabouts, Jonathan Culler’s book was recommended as an introduction to Derrida. My memory of the Ordovician is a bit hazy, but I think it was pretty readable.
The most important thing to bear in mind is that Derrida is mainly a troll. If you read him as annoying everyone just for the lulz, while making clever insulting jokes that most people miss, you’ve got the gist of it.
Albert Einstein’s Theory of Relativity In Words of Four Letters or Less
I mean, certainly one might have a low prior on Derrida being worth their time, but that is different from the strong claim that this is incontrovertible evidence that entire academic fields are worthless, which I generally expect to be supported by careful investigation.
Also, in the humanities it is totally high-status to write in an abstruse and complex style. The best thing I’ve found for it is to befriend people who like an author and then they tell me what I’m supposed to get out of it; the next best is to take a class on it. I think she is occasionally correct (heterosexuality is a parody of homosexuality), often wrong but subtly so, and sometimes correct if you assume Lacan and Freud are also correct.
I can’t find any way to interpret this. Would you mind giving a very short explanation as to what this means?
I could see the reverse being true, so maybe you meant to type that?
I would wager Ozy means (that Derrida means) that seeing oneself as heterosexual, as a characteristic or identity, is a parody of seeing oneself as homosexual as an identity, in that the default, or perhaps in-power group cannot give a meaningful sense of self in the same way that having a difference from the norm can, for [reasons].
General relativity is fine. Your problem is that math degrees are handed out too easily.
Isn’t that Butler?
When I was in college (at a very liberal New England liberal arts university), I took a class in formal logic taught by the chair of the philosophy department. I distinctly recall him jokingly-but-not-really saying that “the dark side” in philosophy was “things like Derrida.” I haven’t read Derrida either, but if even liberal academic philosophers think he’s risible, that probably counts for something.
Heidegger and Derrida are laughed at and despised by most academic philosophers I know.
QWOOP, it’s then safe to assume that most of the academic philosophers you know are analytic philosophers, who in fact do tend to dominate philosophy departments in the US. ‘Tis not so everywhere in the world.
Given that, by admission, much of the field is characterized by incoherent writing that requires you to “put a lot of effort into understanding” it before you have even gotten to the start blocks wherefrom you can begin to discuss it on the merits, exactly how much effort are you demanding before we’re allowed to write it off as not worth the time?
Many other fields are quite difficult and require a lot of study. But in those fields, there’s a confidence that once you have some mastery, there is something useful there to be a master of.
Or, as per Oakland: The trouble with it is that once you get there, there is no there there.
I don’t want to speak for Ozymandias, but there’s a difference between judging something “absolutely crazy” and judging it “not worth the time.”
Fair enough.
Yes, what Oligopsony said. I think it is quite sensible to decide that the humanities are not worth your time based on third-hand evidence. I think that third-hand evidence is not sufficient to say that they are all absolutely nuts and should be used as examples to prove that academic fields are terrible.
I’m possibly biased here– I was a gender studies major, I think my field was studying real things, and people who get up in arms about Derrida usually think gender studies is made up too.
The Anonymouse, while that is reasonable, this can trap you into never venturing too far out of your comfort zone. This is particularly true of a whole range of areas of knowledge that require significant involvement and dedication until it finally clicks. A rationalist perspective is understandably leery of such things, but I’ve had the experience of finding a “there” there in areas that are commonly scoffed at, and that I might have scoffed at myself, but made sense from the inside and enriched my understanding of the world.
To clarify a bit:
Ozy, I was not intending to write off all of the humanities as not worth the time spent studying, merely the obscurantist intellectual fad that passes under the catch-all name “theory.” I left my English department when I realized my time and energy were better spent elsewhere. YMMV, but I am much happier now and I no longer surround myself with posturers to the profound.
(Except maybe here on SSC. :))
Anodognosic, I don’t think we disagree. There is indeed a limitation that can be hit if you’re unwilling to devote yourself to study without a locked-in reward at the end. If we all abided by that limitation, we’d all be doing incredibly practical things like HVAC repair and agronomy. The trick is knowing just how far afield you’re personally willing to venture. My line will be different from yours, which will be different from Ozy’s.
I do philosophy. I have seen Judith Butler talk about people like the philosopher J. L. Austin and completely mangle what he’s saying. If she’s incompetent about things I know directly, it’s likely this translates into other parts of her idea-space.
But I think there’s a broader argument to be made:
If doing philosophy with simple, neat analytic sentences is hard and sometimes unclear, then trying to do philosophy using abstruse language, while unsubtly oscillating between bold normative and descriptive claims, while not being self-aware about the status and tribal affiliation games you are playing is likely impossible.
You’re asking a lot, since oftentimes postmodern philosophers are “not even wrong”. Often theorists make readers do a lot of work only to arrive at less rigorous versions of already existing arguments. Their conclusions might be right, but their methods are still awful and their contributions to the field negligible.
Fully cognizant that this might open up a whole new can of worms, some worthwhile intellectual work is not simply about arriving at the right conclusions. It’s about reaching a certain level of consciousness–we might call it reaching a meta level–which rational inquiry by itself might not be able to achieve. There is a level of insight, a sense of something of the world opening up, which is not a matter of reaching the right conclusions, but of instilling the right patterns in your mind.
Hegel is a very interesting case. He’s still considered a leading philosopher, yet all the circumstantial, non-textual evidence I’ve heard indicates that he was a crank. He thought he had solved philosophy for all time, and that the entire process of life on earth had reached its apex and perfection in the contemporary Prussian state. The students that admired him admitted that they didn’t understand what he was saying. Hegel admitted that nobody understood what he wrote. Once, asked to explain a passage from one of his own books, he admitted he didn’t know what it meant. He became famous largely because he had a philosophy that claimed that the existing Prussian state was the best possible state, and that justified crushing individual freedom under the heel of the State, and the State liked that. He redefined his terms as he went, in terms of each other, so that it’s possible any attempt to pin them down would lead to endless recursion. He titled one of his most-famous books “The Science of Logic”, without seeming to realize that it had little to do with science or logic. He thought metaphysics was science, and he thought logic meant the dialectic method, which was merely a narrative framework like the Hero’s Journey, not a logic at all.
Derrida, from summaries I’ve read of his work, is pointing out inconsistencies in epistemology without knowing what to replace them with. Derrida and his opponents each have different pieces of the truth, but can’t produce a synthesis, because all of them try to categorize statements as “true” or “false”, and this is a false conceptualization. Statements in natural languages cannot be true or false, if for no other reason than that the words in them can’t be precisely defined. They should properly be regarded as conveying information. Doing this should dissolve the paradoxes, from Hume to Derrida, one runs into when asking whether something is “true” or “known”, and how a category or claim refers to reality.
“Statements in natural languages cannot be true or false” is in natural language, so it cannot be true or false.
And yet it conveys information.
It is a convenient approximation to say a sentence is true or false, but if you read the literature that I was referring to, it is about difficulties that arise when you believe “true” means something like “certain; true under all possible interpretations; probability 1” and “false” means the opposite. Such as Hume’s famous objection that we can’t say that we know the sun will rise tomorrow.
“Elvis is still alive” conveys information.
Concepts of information that are agnostic about truth are only of use in engineering… they cannot be plugged into a useful epistemology.
Yes, you can’t be certain. That is something all philosophers now agree on. But you should respond to that by abandoning certainty claims, not by abandoning truth claims, because “there is no truth” is self-refuting in a way that “there is probably no certainty” is not.
Considering it’s been true every time for 4.54 billion years, the statement “The sun will rise tomorrow with 100% probability” is true to thirty or forty decimal places. (Diffraction can get to fifteen sig figs.) Even if I’m forced to admit that 100% certainty is impossible, we can get close enough. Similarly, Descartes’ Daemon has never stolen the coffee cream from my fridge. I’ve never even been out of cream while simultaneously being sure I had some.
First-order mental objects are 100% certain. What I think they are determines what they are. I can’t change my mind from a mistake to truth, because changing my mind changes the truth value.
peterdjones and Alrenous both mean, I think, by “truth” what I mean by “conveying positive information”.
If a statement corresponds to reality, it conveys positive information. “Elvis is alive” conveys negative information. Being told that Elvis is alive, and taking it seriously, increases your predictive error.
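The “increases your predictive error” framing can be made concrete with a quick log-loss sketch (the probabilities here are entirely hypothetical): a listener who takes a false statement seriously pays a larger surprisal penalty when the truth arrives.

```python
import math

# Toy log-loss reading of "negative information" (probabilities are made up).
# The actual outcome: Elvis is dead. A misled listener assigned p = 0.9 to
# "alive"; an informed listener assigned p = 0.001.
def surprisal_bits(prob_assigned_to_actual_outcome):
    """Log-loss, in bits, once the actual outcome is revealed."""
    return -math.log2(prob_assigned_to_actual_outcome)

misled_loss = surprisal_bits(1 - 0.9)      # about 3.32 bits
informed_loss = surprisal_bits(1 - 0.001)  # about 0.0014 bits
# Taking "Elvis is alive" seriously increased predictive error:
# the statement conveyed negative information.
```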
There are truths, but natural language statements are too imprecise to be analyzed as “true” or “false”. If you take a simple statement like “Elvis is alive”, then it is very nearly false–though someone can say “Elvis is alive!” and mean that the memes and attitudes that truly constitute Elvis still persist, and hence Elvis is alive today, by their interpretations of “Elvis” and “alive”. If you take a statement of the kind people actually argue about, like “Life begins at conception” or “The patriarchy is unjust”, then it is silly for philosophers to analyze them and show that it’s impossible for them to be either 100% true or 100% false. And that’s what philosophers do.
Consider carefully the distinction between information and truth. I’m referring to a large philosophical literature that has a very strict interpretation of “truth”. The difference in certainty between “even odds” and “true to forty decimal places” is only 133 bits. The difference between “true to forty decimal places” and “true” is an infinite number of bits.
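The 133-bit figure can be checked directly: evidence measured as log-odds is 0 bits at even odds, about 133 bits at p = 1 − 10⁻⁴⁰, and undefined (infinite) at p = 1 exactly. A minimal sketch:

```python
import math

def log_odds_bits(p):
    """Evidence for a proposition, in bits: log2 of its odds ratio."""
    return math.log2(p / (1 - p))

even_odds = log_odds_bits(0.5)  # exactly 0.0 bits
# For p = 1 - 1e-40 the odds are about 1e40 to 1. (In double precision,
# 1 - 1e-40 rounds to exactly 1.0, so compute from the odds directly.)
forty_places = math.log2(1e40)  # about 132.9 bits: the "133 bits" above
# log_odds_bits(1.0) raises ZeroDivisionError: certainty sits an infinite
# number of bits away from any finite level of evidence.
```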
Hume can bring up examples like “Elvis is alive” or “The sun will rise tomorrow” because they are the easiest case for the true/false approach, and showing it handles them badly demolishes it. Refuting those arguments would only move the battle forward a bit, to “Cats are predators” or some other slightly-less-true statement.
Sentences cannot refer to perceptions. Sentences can refer to categorizations of perceptions, but those categorizations are statistical, which brings us into the land of information, not of truth.
“Pretending?” Dude, if you don’t want me to talk to you, you can just say so; no need to be rude about it. Apology (below) accepted.
I am aware that there is a large philosophical literature that equates truth and certainty. There is also a subsequent literature arguing that that is an incorrect theory of truth.
Your statement that natural language statements are not true or false seems to take the older, infalliblist notion as a given. I was pointing out its disadvantages.
It’s not a fact that natural language sentences aren’t true or false; it depends on how you are defining true and false. You expect people to get some sort of “positive information” from your statements. That would make them true in the terminology of people to whom “true” means p > 0.9, or some such.
I am absolutely not arguing for infallibilism.
My point was that while you seemed to be arguing for fallibilism, you were doing so in a way that made it sound self-refuting. That was not intended as an argument for infallibilism. Your revised claim in terms of 100% truth does not have that problem.
Your other revised claim, that statements deliver information that may be positive or negative, seems to reinvent the new fallibilist notion of truth using different vocabulary.
Sorry, Alrenous. I’m editing my comment to be more polite.
Sorry for getting riled. I think there’s a distinction: the fallibilist notion seems to hold that sentences have a well-defined meaning that might be true or false, but that we can’t know which. The objections that the structuralists, and Derrida, made have more to do with the claim that sentences have a well-defined meaning. The solution I expect is that sentences can be interpreted as propositions about categories, where those categories are defined as the odds of particular observational tests having particular outcomes. Thus what a sentence provides you, even if it is unambiguous and you believe it completely, is information that changes your probability distribution over observations.
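A minimal sketch of that last idea, with made-up numbers: accepting a sentence shifts a probability distribution over observations via Bayes’ rule.

```python
# Hypothetical example: how accepting the sentence "the grass is wet"
# changes your probability that it rained.
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(hypothesis | evidence) by Bayes' rule, for a binary hypothesis."""
    num = prior * likelihood_if_true
    return num / (num + (1 - prior) * likelihood_if_false)

# Prior P(rain) = 0.3; P(wet | rain) = 0.9; P(wet | no rain) = 0.2.
p_rain_given_wet = posterior(0.3, 0.9, 0.2)  # about 0.66
```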
I suppose a structuralist would still say that “Life begins at conception” conveys no useful information unless you share beliefs about the moral implications of life, and would handle “The patriarchy is unjust” quite differently depending on whether you did or did not believe in the existence of “the patriarchy”.
Apology spurs forth forgiveness. Humble forgiveness, since it’s far from certain I could have done the same.
—
My personal definition of truth is based solely on predictiveness. I don’t see any other material purpose for truth, and therefore I back-define: anything which serves the purpose is truth, to the extent it serves the purpose. (Truth is a terminal value for me. But to know if I’m serving my terminal value I, regardless, need a value-neutral definition. It’s important to me that I’m truly gathering truth, not merely enjoying the illusion.)
Natural language is often imprecise. Then, by my definition, it is imprecisely true.
Must agree. Bitrate mismatch between phenomenon and language.
The practical difference is negligible, though.
To get at this from another direction, information has mass, in which case physics can’t be true or the entire universe would collapse into a black hole. But physics is true of itself by definition… This is one reason why I have a personal definition of truth.
Sentences can imperfectly refer to perceptions. First, it can be good enough for whatever purpose you want, which is a colloquial way of saying the epsilon-delta limit definition applies. For example it is impossible to describe the quale ‘red.’ But if we both have ‘red’ I don’t need to describe it, I just have to construct a pointer, and you can correct for the imprecision upon mental decoding. Describing Elvis as ‘alive’ or ‘dead’ is indeed ridiculously vague, but it can be good enough. When it’s not – if a surgeon needs to operate on Elvis – language has never been found insufficient.
For a second angle, natural language can be cast as a special case of mathematics. All sentences can be converted to Number, which is precise. (You can tell due to the length of numerical descriptions.)
The spectre of self-refutation has not been laid to rest. Defining truth in terms of predicting observations has the difficulty that “truth is what enables you to predict observations” does not enable you to predict observations. Cf L.P..
Third point, it’s a descriptive definition, not a prescriptive one. The set of things that happen to be predictive, I call ‘truth’ and then get on with life. If you disagree, go ahead, but it’s somewhat like arguing that my handle isn’t ‘Alrenous.’
Second point, it works, so it doesn’t matter. If you think you’ve found a contradiction in the design but the car still runs, the car still runs; your logic must be wrong.
First, yeah it does. If you find something predictive, you can predict I’ll call it truth.
The purpose of truth is to be predictive. The purpose of prediction is to achieve goals. Ergo you can discard data about not-truth without harming your ability to achieve goals.
Saves hard drive space, if nothing else. E.g. I can discard knowledge of the difference between 100% certainty and how certain I am that I have cream without losing any ability to whiten my coffee. Carrying the error bound is not worth the effort; certainty is 100% to any reasonable number of sig figs.
If you discard the definition, you can’t know what I’ll call truth. If you discard the distinction between predictive and not-predictive, then you’re completely lost. The distinction needs a name, so…
All ASCII sentences are numbers in base 128. That precision is useful for computing checksums, but not much else.
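For what it’s worth, the conversion is easy to make concrete (a sketch; the function name is mine):

```python
def sentence_to_number(sentence):
    """Interpret an ASCII sentence as a base-128 number, most significant digit first."""
    n = 0
    for ch in sentence:
        code = ord(ch)
        assert code < 128, "ASCII only"
        n = n * 128 + code
    return n
```

Precise, as claimed, but the number tells you nothing about whether the sentence is true.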
So you wish to refute Gödel? Please do; perhaps we can collaborate.
http://www.youtube.com/watch?v=tl08MkPM8es
“Kill the commies, so we can do science!”
How many of those fields take measurements and are constrained by models and tools derived elsewhere in the hard sciences?
Those examples are from the wrong reference class.
An interesting approach! But, I don’t understand your criterion #3. Why require that the correct answer be known before doubting the official theory? One can have good reasons for thinking the official theory is wrong, or for considering that the official theory does not have good evidential support, even when you don’t have a better theory yourself. So, one could argue that “there’s still insufficient evidence for AGW,” and that might be sufficient reason to dismiss it (if indeed there is insufficient evidence).
If we relax #3, it’s easy to find lots of examples—it would be fun to enumerate them—of 20th century scientific consensuses that we now know were absolutely wrong. Aren’t they good enough reason, by your logic, to consider that AGW might be wrong?
(Note that I have no opinion whatsoever about AGW, and do not intend to argue a skeptical position.)
Failure to think up the right theory seems to be relevantly different from failure to recognise a theory as the right one when you are looking at it and can test it. Inventing theories is a very different thing from testing them, isn’t it? So I think criterion 3 makes a lot of sense.
I’m sorry, I don’t understand.
Criterion #3 says that mistaken scientific consensuses in which there was no competing hypothesis don’t count as evidence that scientific consensuses can be wrong. Why does that make sense?
Put a different way, we’re apparently supposed to believe that the AGW hypothesis is much less likely to be wrong because some people think it is wrong. If everyone agreed it was right, the reference class would expand to include all the consensuses where no one thought they were wrong, and the AGW hypothesis would suddenly become substantially more likely to be wrong.
(Or am I confused?)
This was also my reading, and I noticed the same bizarre implication of drawing these reference classes.
I don’t think the reference classes have been defined consistently at all either. I am still confused where Scott is coming from.
As with the above comments, it really looks to me like the race-realism issue doesn’t actually fit the reference class originally specified, and that Scott only said so to placate other commenters. Heading off debate on whether it is correct is obviously necessary for this thread, but not debate about which reference class it belongs to. That’s just one possible example, though; with a total count of one, it doesn’t make an overall difference to the reference class.
In general, what’s actually true of AGW:
There is a nonzero number of experts holding the contrarian view. Small or not (it’s unclear whether we’re even concerned about the size, and exactly how), it is nonzero.
There are a large number of amateurs with the contrarian view who claim political motivation/censorship/etc.
Trying to dismiss those two claims is uncharitable at best to the contrarian AGW view. Likewise there are some historical science examples that don’t fit into a reference class where both of those things must be true.
Edit: In general it’s very hard for there to exist cases where the number of experts with a (correct) contrarian view is zero, except when the contrarian view is a hypothesis that simply didn’t exist yet at that time.
> the AGW hypothesis is much less likely to be wrong because some people think it is wrong. If everyone agreed it was right, the reference class would expand to include all the consensuses where no one thought they were wrong, and the AGW hypothesis would suddenly become substantially more likely to be wrong.
Yes. Because there’s public controversy, and people are forced to think about it. AGW has received a lot more scrutiny than random fact about the squirrel genome.
I think the actual thing we want to measure is that the prevailing theory is wrong and a *particular theory, promoted by educated amateurs* is right (as opposed to some unspecified theory that might be discovered in the future). This should resolve the paradox.
Given that a theory has remained the consensus, we should be more confident in it if it’s survived criticism than if it’s merely reigned unopposed.
Lots of simple errors (chromosome count, electron mass) have persisted unchallenged for a while, through some combination of lack of eyes on the problem, lack of effort to replicate, or undue trust in the experimenter. But these sorts of mistakes are easily resolved when actually critiqued, and cease to be consensus when they actually get investigated more thoroughly.
The opposition to AGW has caused the evidence to be repeatedly and closely scrutinized, and the fact that this hasn’t caused the experts to change their minds means that AGW isn’t this kind of simple but common error.
But in this case, there are only two possibilities at a level of detail that is important and controversial: AGW is happening, or not.
> Criterion #3 says that mistaken scientific consensuses in which there was no competing hypothesis don’t count as evidence that scientific consensuses can be wrong. Why does that make sense?
Think about it this way:
In situations where Science is wrong because the answer isn’t in its hypothesis-space, the correct response is to try to invent better hypotheses and then do Science to them.
In situations when the true hypothesis is already known – and “this phenomenon doesn’t exist” doesn’t require much creativity to come up with, presumably, it’s just the null hypothesis – then we have to deal with a breakdown in the social process of Science itself. This isn’t a problem we can solve by simply continuing with the normal process of scientific truth-finding.
The second situation is a Bigger Deal, and it’s the situation anti-AGW proposes we are in. If we were in the former, there would be no argument; the problem would be self-correcting by virtue of having been noticed.
AGW being wrong includes things like global warming being much, much worse than expected, while we want to distinguish AGW from business as usual.
Because if he doesn’t state that requirement, every crank on the Internet will present the Scientific Establishment’s failure to accept their theories on perpetual motion or spectral vision or orgone medicine as examples of what Scott is claiming.
In other words, the comments to this post will be flooded with garbage. Indeed, I expect them to be, even with that requirement in place.
Interpretations of quantum mechanics is almost this, I’d say. I’m also suspicious of many claims doctors make about nutrition, but I don’t think nutritionists are overconfident themselves.
I agree with some who are pointing out other examples of scientific failures. But I want to raise a different point. Your reference class seems flawed to me. While science might do well in general, this is not necessarily true in the case of climate modelling. Climate is a long term process, so its predictions can’t be tested easily. And the ability to test predictions is what gives science its power. Without that ability, a field is “scientific” only in name. Given its limitations, climatology seems more like a social science to me than a hard science. And it’s practically a brand new one, at that.
The weather is notoriously difficult to predict more than 2 weeks in advance. I know that weather is not the same as climate. But there’s some overlap, because both weather systems and climate systems are chaotic and complex. We rely on computer modelling to predict both of them, and the models are only as good as the assumptions and data we build into them. And the essence of a complex system is that even tiny changes have large consequences, so even small flaws in input are very important.
I’ve yet to see any detailed predictions from climate scientists – while there is a consensus that warming is real, they seem to agree about little else. If their models yield varying timeframes, mechanisms, and consequences, it seems likely that the models are false inventions. The only thing that all models agree on is that warming is real – that indicates systematic conspiracy or bias. Similar to how it makes sense to be skeptical of all religions, if religious believers disagree on all details except the mere existence of the supernatural.
Weren’t there predictions in the 1970s that global cooling would happen? What changes in technology or methodology or data collection have happened which make climatologists more credible now than they were then?
I reluctantly err on the side of believing in warming, because the potential consequences seem vast and I haven’t yet researched this subject in any real detail. I welcome corrections to my thoughts, I almost certainly need them. To be honest, I’m writing this in the hope someone will change my mind and convert me to the majority opinion. But although I’m not confident in warming’s nonexistence, neither am I confident in the opposite. And I think that it’s an important subject, so I should be honest about my doubts.
> Weren’t there predictions in the 1970s that global cooling would happen?
There was one paper in a not-very-prestigious journal claiming that. It was never a mainstream view, much less a consensus.
> Climate is a long term process, so its predictions can’t be tested easily. And the ability to test predictions is what gives science its power. Without that ability, a field is “scientific” only in name.
I don’t think that’s really true. For example, while moderns have some good examples of being able to watch evolution in action, that’s a fairly new thing; for most of its history, evolutionary biology relied purely on historical record for data, not new predictions. But it was still a hard science all the same.
That said, you can get around this somewhat by “predicting the past”. For example, the theory of Milankovitch cycles predicts very-long-term climate trends, and has been verified by taking deep-ocean cores and inferring temperatures. If we obtain deeper cores and are able to obtain results about the further past, the theory tells us roughly what we should expect to see – and this ought to count as a prediction. (As indeed we have, and its predictions have been confirmed.) Something similar happens with the relationship between CO2 and global temperature averages.
> And the essence of a complex system is that even tiny changes have large consequences
You’re conflating “complex” with “chaotic”, here. (Complex: hard to model, involving many factors. Chaotic: very small changes to input produce disproportionately large changes in output.) Weather is both chaotic and complex, agreed, but climate does not appear to be particularly chaotic. Certainly nowhere near as much as weather.
> I’ve yet to see any detailed predictions from climate scientists
Have you… looked? Notable: “Since IPCC’s first report in 1990, assessed projections have suggested global average temperature increases between about 0.15°C and 0.3°C per decade for 1990 to 2005. This can now be compared with observed values of about 0.2°C per decade, strengthening confidence in near-term projections.”
> The only thing that all models agree on is that warming is real
Not really. They all make roughly the same predictions about the amount of warming, and roughly the same predictions about sea level rise, and ocean acidification, and a host of other things.
> that indicates systematic conspiracy or bias
or trends sufficiently powerful to show up even if you vary the weighting and choice of factors in your model. Again, climate is a complex system, not a chaotic one.
> Weren’t there predictions in the 1970s that global cooling would happen?
No, not really. You’ll want pages 5-9 there. Choice quote: “The survey identified only 7 articles indicating cooling compared to 44 indicating warming”.
> What changes in technology or methodology or data collection have happened which make climatologists more credible now than they were then?
Wow, where to start. (And I’m only addressing this for the sake of completeness, in light of the above.) We sent up a ton of weather satellites, which give us data we could only dream of 50 years ago. (This is a development with an impact comparable to that of the invention of genetic sequencing on evolutionary biology.) Our computing power has improved; we can now perform much more detailed simulations. We’ve started taking significantly deeper ice and ocean cores, giving us a much larger historical record to work with. This list could be much longer if you’d like it to be.
> I welcome corrections to my thoughts, I almost certainly need them. To be honest, I’m writing this in the hope someone will change my mind and convert me to the majority opinion.
Hopefully I’ve helped!
Thanks for the corrections on updated data collection methods, the unpopularity of global cooling hypotheses, and my conflation of chaotic and complex systems. You’re clearly right in those cases.
I’m not totally convinced that climate isn’t chaotic, however.
Additionally, I am still skeptical about historical predictions. I don’t have enough familiarity with the field to know whether the majority of people actually made such predictions in advance of the new information, or if they only claimed they did. This is something I need to look into. Unless you’re able to provide another nice summary for me? 🙂
Your example of evolutionary science doesn’t help much. I think there’s far too much speculation in that field. Recently, for example, I’ve learned there are some excellent criticisms of the high cost signalling model, which go almost completely ignored in the textbooks.
Meteorologists are more likely than climatologists to be AGW skeptics because the former use techniques that are good at near-term predictions but lousy at long-term predictions, and vice versa for climatologists:
http://www.nytimes.com/2010/03/30/science/earth/30warming.html
Haha, this might be relevant!
http://www.detectingdesign.com/milankovitch.html
Yep, I remember the 70s flap about the forthcoming New Ice Age and how we’d all be wiped out by the ice sheets stretching down from the North Pole.
One of the advantages of being a dinosaur 🙂
I suppose, though, the flip side to being “the boy who cried wolf” is that when the real wolf comes, we’re likely to get eaten. And how can we tell the wolf is real this time, with global warming (sorry, now we’re calling it climate change)?
A failure rate of 1.2% seems off by orders of magnitude. Many fields of “science” are in reality cargo cults, with very deep flaws in their basic methodological paradigms.
Let’s take the science of “nutrition” as an example of a field with a broken epistemology. Careerist nutritionists keep publishing papers about correlations between variables, implicitly claiming the results are causal. Everyone knows not to take the results seriously, they know that what causes cancer today will be shown to be protective tomorrow. Modern frameworks for causal inference are very clear about why this is happening. The nutritionists are simply not listening – they still get their studies published in “reputable” peer-reviewed journals, and are still taken seriously as “scientists”.
The same phenomenon occurs almost everywhere in applied statistical sciences, including in epidemiology, sociology, applied econometrics, psychology, environmental health etc.
If the update is in reference to this comment, I would like to point out that saying “nutrition is a false science because it uses bad methodology based on evidential decision theory; in reality, we know almost nothing about what foods are good for you” is very different from claiming that “nutrition is a false science because they don’t accept that the paleo diet is good for you.”
Oh yeah, one example that absolutely fits all the criteria would be the historical progress on scurvy, beriberi, and possibly other nutrient deficiency syndromes.
Not only did medical establishment people have the right answers available and pushed wrong ideas, sometimes they had the right answer already implemented in practice and later switched away from it due to pseudoscience or political pressure before the issue was finally resolved.
The only slight qualification is the timeframe as a lot of this didn’t extend past the early 1900s AFAIK.
I think the scurvy and beriberi cases didn’t reach past 1900, but the one with Pellagra (= niacin deficiency among people eating corn as a staple) only got resolved in the 1930s. Native Americans had solved the problem long before with their food preparation methods which were being ignored by Western science (though they’d only solved it in an engineering sense, not a scientific sense, since of course they had never heard of niacin).
I don’t know the beriberi case, but scurvy was a solved problem in the 19th century that was no longer solved in 1900. Maybe it had just regressed to ignorance, rather than a wrong consensus, but a lot of people had very wrong beliefs about scurvy.
This doesn’t really match your desiderata at all, but is pretty interesting: http://en.wikipedia.org/wiki/Samuel_Soal.
Apparently many prominent scientists (including Alan Turing) believed in ESP for a decade or so.
I think an important distinction is to what degree funding is centralized. Victorian gentleman scientists of means didn’t need to grovel for funds, so we shouldn’t expect such a convergent force. As funding for scientific research becomes more centralized, we should expect more and more examples of Lysenkoism.

Also, rather than thinking of global failures of science, think of funding-source-motivated error pairs. For example, the denigration of “Jewish Physics” in Germany, or genetic research in the US vs. China today. Didn’t you recently write about Michelle Obama’s water initiative? Although you might not call educational policy a science, the entire framing seems miles away from what any intelligent observer with Google access would come up with. Also, large swaths of medicine, even things as simple and commonplace as knee surgery not having any benefit over simply making incisions in the knee. People can and do and will continue to make naive Malthusian extrapolations (think oil/energy and food within the last couple decades), but the world keeps on turning.

An idea I hope some plucky economist picks up (are you listening, Levitt?) is making a market index to price the risk of global warming. Include things like the change in price of land that is marginally too cold to currently farm, the price of low-lying coastland, the cost of natural-disaster insurance for crops, etc. I would trust even a rudimentary version of this more than the overwhelming Soviet majority.
What would an intelligent observer with access to Google do? I think educational policy looks pretty hopeless.
I agree, and I doubt you will ever hear a politician say that. Most educational policy is better referred to as the study of “education alchemy”, the process of turning low IQ/low class kids into academic gold.
Aren’t some Pacific island countries buying up land in case the ocean level rises and they sink? Kiribati has already bought land in Fiji, and I think there was another one that bought land in China.
However, those islands can be creative in how they get their money, and I wouldn’t be surprised if it turned out to be some sort of theater to try to get more aid. There are some papers saying that there’s no risk of them sinking even if the ocean level does rise — something about coral debris.
An implication of this is that early modern English science should have been clearly superior to its French equivalent. Is there an historiographic consensus on this? It seems like both were pretty effective but I’m not an expert.
I seem to remember coming across a prediction market for global warming online at some point — maybe via Steve Sailer’s blog? At any rate, IIRC that suggested that people willing to put their money where their mouth was expected continued warming, though not to as substantial a degree as some have predicted.
Just to be clear, whatever number we would agree on for this investigation would only be our prior for a scientific theory being a victim of political capture. Presumably, the odds of climate change in particular being a victim of that would be higher given that it is already being criticized as such.
I suppose the next step would be to ask, of the theories that have been accused of being politically motivated, what percent have turned out to actually be so? And what was the lag time between theory discovery, popular criticism, and eventual replacement?
I don’t think you were claiming otherwise, but just want to be explicit about it.
I am also very surprised at the thrust of this post, given how frequent a topic scientific misconduct is on this blog. Didn’t you write a whole post about parapsychology as the control group for science? And I seem to recall a much higher estimate for that case. Maybe you need to occasionally reestablish your seemingly liberal bona fides so you can continue to have smart people listen to you, in which case I wish you well in the Straussian struggle.
A relevant implicit difference from parapsychology may be that climatologists are taken seriously by people from other or the same disciplines, whereas parapsychologists are not.
I think criterion 2 is tricky because of survivorship bias. You bring up an incorrect theory in biology, a subject we still believe in. A subject like alchemy, which is not generally well regarded today, is not as salient a “scientific failure” because, from our view, it is not scientific. There was some back and forth about whether humanities-related subjects count, because they are not scientific. THAT IS EXACTLY THE POINT. Past failures will not look like scientific failures. Lots of people believed very seriously in scientific socialism, but we don’t view that as a scientific failure because of where we now draw the line between science and non-science. Malthus’ math was correct until it wasn’t.
I am not terribly familiar with Malthus, but AFAIK his math was correct. Population grows geometrically, food production grows arithmetically, therefore unless people stop having babies or figure out how to produce dramatically more food, we will starve. There were two confounding variables Malthusians were not aware of: one, the Green Revolution, and two, the demographic transition; that is, we both stopped having babies and started producing more food. And if you couldn’t predict everyone stopping having so many babies, you probably can’t predict when they’re going to start back up again, so that is hardly the sort of thing one wants to rely on; and it seems a bit optimistic to expect that Norman Borlaug will show up each time, like some kind of biology superhero, to save our butts.
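Malthus’s two growth laws are easy to make concrete; a minimal sketch with toy numbers (mine, not Malthus’s own figures):

```python
# Toy Malthusian model: population doubles each generation (geometric),
# while the food supply gains only a fixed increment (arithmetic).
population = 1.0  # arbitrary units; at t=0 there is food to spare
food = 2.0        # enough food for 2.0 units of population

for generation in range(10):
    population *= 2   # geometric growth
    food += 2.0       # arithmetic growth
    if population > food:
        print(f"Famine by generation {generation}: "
              f"population {population:.0f}, food for only {food:.0f}")
        break
# → Famine by generation 3: population 16, food for only 10
```

However generous you make the starting surplus or the food increment, the doubling series overtakes the linear one a few generations later, which is the whole of Malthus’s point.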
In addition, one may consider it reasonable to believe that alchemy used methods that were… perhaps a little less rigorous than those used at a modern university, and therefore maybe does not belong in the same reference class.
Part of what makes alchemy’s methodology look bad from a modern perspective is that its output was coded in several layers of metaphor and oblique references, so that only the morally and intellectually fit could understand it. This is precisely what everyone accuses postmodernists of doing, except that alchemists were explicit about it. It’s inherently hard for outsiders to evaluate whether this is the case, but it’s at least a priori plausible that weak versions are common.
“Population grows geometrically, food production grows arithmetically…”
There’s a clear, if naive, argument for why population might grow geometrically. What is the analogous argument that Malthus gave for expecting that “food grows arithmetically”?
After all, the amount of food is itself a population, namely the population of whatever constitutes our food stock (cows, wheat stalks, or whatever). So, if population in general grows exponentially, then food in particular should also grow exponentially.
He believed that food production grew mainly by increasing the area under cultivation, and that there were diminishing returns as people used the most fertile lands first, followed by more marginal land.
The modern model is that premodern agricultural productivity gains get used to produce more people, instead of increasing the standard of living.
Eventually you hit rock bottom and can’t harvest any more energy than the Sun gives you.
Extended to the cosmic scale, growth in energy use is constrained by how fast you can expand and colonize new star systems: a sphere expanding at the speed of light. Thus, population growth is constrained to O(t^3).
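To spell out the bound (a sketch in arbitrary units, not a cosmological model): the settled region fits inside a sphere expanding at lightspeed, so its volume, and hence any population of bounded density, grows as (4/3)π(ct)³, while unconstrained reproduction compounds exponentially and must eventually hit that ceiling:

```python
import math

C = 1.0  # expansion speed, arbitrary units

def max_population(t):
    """Cubic ceiling: a population of bounded density fits in a
    light-sphere of volume (4/3) * pi * (C*t)**3."""
    return (4 / 3) * math.pi * (C * t) ** 3

def naive_population(t):
    """Unconstrained geometric growth: doubling per time step."""
    return 2.0 ** t

# Any exponential eventually outruns any polynomial bound.
t = 1
while naive_population(t) <= max_population(t):
    t += 1
print(f"Doubling growth first exceeds the light-sphere bound at t = {t}")
# → Doubling growth first exceeds the light-sphere bound at t = 14
```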
@Samuel Skinner
But couldn’t you replace “cultivation” by “occupation” and get a parallel argument that population will also grow arithmetically?
@ckp
That’s an argument for how population (and food production, for that matter) will in fact grow, but it’s not helping me to see why Malthus thought that food production and population would grow at qualitatively different rates.
Sorry, Anonymous @ July 4, 2014 at 8:41 was me.
Personally, I would expect food production to follow exponential growth curves, like most other technologies.
But then, Malthus clearly didn’t know about that. Which is far from unreasonable, to put it lightly, given the time period.
What is unreasonable is to assert that it is linear. And, for that matter, that population growth is exponential, which is only true if death rates are constant, which they patently are not.
While any particular effect that stops/prevents naive extrapolation may be rare, the class of all things that prevent extrapolation is rather large.
“In addition, one may consider it reasonable to believe that alchemy used methods that were… perhaps a little less rigorous than those used at a modern university, and therefore maybe does not belong in the same reference class.”
I think the point is that every failure changes science, so that past failures will never look like what current science is. If global warming is proven false over the coming decades, it may be that nothing which relies as heavily on computer models will be seen as science; if it is proven true, consensus opinion may be given greater weight. In either case, the currently wrong theory will be said to “not truly be science by our modern standards” by the cyborg science historians of 2150.
Computer models are absolutely not the problem.
If there is a problem with global warming (I think there is), it is that grant applications that have sexy sexy implications like we’re all gonna die are easier to get funded, and that politicians love telling people stories about why they should be given power.
The first problem can be solved when science doesn’t depend on grants as much. The second problem can be solved when democracy is abandoned.
If I were a venal scientist, I would be touting myself to the oil companies, not struggling for government grants.
@peterdjones: You might want money less than you want the ability to think of yourself as fighting the good fight, or being accepted by your peers who all believe in AGW. Or probably several other things. Point being, not that AGW is particularly suspect, but that people can be motivated to compromise their epistemology by all kinds of things, and it’s dangerous to assume that some position is exempt from strong motivation.
If I were a high-IQ individual looking for money, I would go into business, not science. If I were looking for power, I would try for political stuff. If I were looking for the opportunity to work on positions of official truth, a.k.a. priesthood, then I would go into science.
Well … no, they don’t. I think you know that, if you think, you just hadn’t applied the facts to the belief.
There is a sense in which he is mathematically correct – there’s only so much resources in the universe, and expansion has to stop sometime. I think that’s part of where the appeal comes from. But as soon as you examine it, it falls apart.
(The other part is probably that people like contraception because, well, it makes life more fun, and Malthusian catastrophes would make contraception morally obligatory and shut up everyone who criticizes them. Which is nice, admittedly.)
> (The other part is probably that people like contraception because, well, it makes life more fun, and Malthusian catastrophes would make contraception morally obligatory and shut up everyone who criticizes them. Which is nice, admittedly.)
Well, one explicitly Malthusian objection to contraceptives that I’ve heard is that Malthusian catastrophes have a necessary eugenic effect, by enforcing hyper-competition, and that without it we’re resigning ourselves to mediocrity.
http://dienekes.blogspot.com/2009/10/migrationism-strikes-back.html
Old school skull measuring physical anthropologists seem to have understood prehistoric patterns of migration better than their flower power successors and even geneticists c. 2000 (e.g. most present day Europeans are Middle Eastern imposters, not long settled natives). The explanation for the step backward is mostly political: after the 1960’s social scientists took a strong dislike to any theory of prehistorical cultural change that invoked conquest or migration, because they thought that that was the sort of story the Nazis would have liked. They therefore pretended that skull shape was not at all heritable and could not be used to track ancient migrations and essentially chose to forget decades worth of accumulated research on that subject. This sounds crazy, but if 12 years ago you were surfing the web researching the ultimate origins of Europeans you would have received more accurate information from explicitly racist sites keeping the flame of skull n’ bones anthro alive than you would from reading Brian Sykes’ The Seven Daughters of Eve or relevant scientific journals.
yup. And as a result, the good Bayesians of LessWrong are now slightly more receptive to racism, to the degree that being banned for racism made them less receptive to those theories.
Shouldn’t the reference class be limited to scientific issues on which the Blues were strongly on one side and the Greens on the other?
Like tobacco
Good example, although I doubt this was a Republican/Democratic thing given that tobacco was grown in the south which at the time of the major tobacco debates was dominated by Democrats, which is also the pro-regulation party.
Oh, this is fun.
I think of these two examples fairly often when I’m thinking about groups of intelligent, educated, well-meaning, experienced people who miss fundamental problems with the way they do things. I realize that psychiatrists and stock traders in general have more to offer than the below examples might suggest.
1. Thomas Szasz criticized psychiatry for inability to make accurate diagnoses, likening the field to alchemy or astrology. David Rosenhan, roughly in the same school of thought, then put it to the test (along with several others) by getting himself admitted to a mental institution as a healthy person, and then had a hell of a time getting out again.
“The second part of his study involved an offended hospital administration challenging Rosenhan to send pseudopatients to its facility, whom its staff would then detect. Rosenhan agreed and in the following weeks out of 193 new patients the staff identified 41 as potential pseudopatients, with 19 of these receiving suspicion from at least 1 psychiatrist and 1 other staff member. In fact Rosenhan had sent no one to the hospital.” http://en.wikipedia.org/wiki/Rosenhan_experiment
2. Daniel Kahneman writes about the time he studied a stock trading firm. He realized that the firm’s most distinguished employees actually did slightly worse than the market, on average! Implying that despite all of their mathematical and economic and financial education, their customers would have been better off putting their money in an index fund. When he delivered this bombshell to the firm, no one at the meeting said anything. At all. They simply carried on. On the way out, one trader walking with him said defensively that he had worked there for many years, and no one could take that away from him. He thought, well, I just did.
Sorry these aren’t more science-y, but at least they’re medical and math/finance.
I think that it’s very difficult to understand what pathological thinking by smart people looks like until one has a great deal of experience interacting with the financial profession. There are truth-standards other than those of science. In fact, there are truth standards that generally win by being repeatedly and confidently wrong. Once you know that, Tetlock’s results are less surprising.
I actually find the reverse. I’m a day trader, and I can never persuade muggles that it’s legit.
Smart people “know” that everyone in the stock market loses money compared to their mythical index fund (protip, ask which fund and you’ll find out that they’ve got their cash in a checking account, or they are underwater on loans).
I ultimately decided it’s just defensiveness. They hold the view that the market is unbeatable not because of its merits, but because they like the consequences.
If they are wrong, if smart folks can make money by investing, then they are working their 9-5 for no reason, because they could be generating that income by having their savings work for them. That would mean that they are making a mistake. If they are right, and the market is just a disguised lottery, then they aren’t losing out on the gains they could be making with their mattress fund. They want to still be smart, so the market is gambling.
How long have you been day trading, and how much money have you made? (In percentage terms, obviously)
For the record, I think that beating the market is possible – just much harder than the typical active investor thinks it is. But protip, ask a day trader how much they’ve made and it’s always less than they would have sticking the money in the S&P 500.
Day traders aren’t in the financial profession. They almost all get terrible returns for their time compared to actually working, but I don’t think it destroys their minds, nor that markets are anything close to efficient.
I have a non-political example: Chomskian transformational syntax. TL;DR version: Chomsky had a bright idea about how to describe the syntax of English, and the entire field of linguistics got derailed for about forty years trying to extend and refine his idea, except that now most linguists are coming around to the fact that while the Chomskian approach is appealing, it’s fundamentally flawed and doesn’t actually explain anything. The full explanation will get kind of long.
So in the late 60’s Chomsky (then a very young dude) published a book which showed how you could explain most of the syntactic features of English with a set of “transformational rules” which took an underlying utterance and rearranged or replaced its parts. This was closely tied to the notion of Universal Grammar, the thought being that if you peeled away all of the transformations there was a single, consistent, and simple grammar that was directly encoded in the human brain and shared by all languages, and that the differences between languages were merely differences in lexicon and which transformations the languages applied. And the whole linguistics field was like whoah, and spent a long time cataloguing more transformations, in more languages, and hundreds of papers were written showing how seemingly disparate constructions in different languages could be derived from a common underlying syntax with the judicious application of transformations.
Two problems became apparent: one was that Chomsky’s original catalog of transformations was incomplete, as there were places where the published transformations would predict the wrong form, or prohibited an attested form. But you could always fix that up by adding some more transformations or tweaking the conditions. This, however, pointed to the second problem: Chomsky’s transformational notation was too strong. It could literally turn anything into anything else, limited only by the ingenuity of the linguist writing the paper, which in turn meant that you couldn’t actually infer anything about Universal Grammar from them.
What followed were a series of revisions of the theory meant to constrain the kinds of transformations that were available, or to change the way in which underlying structures and transformations were conceived. I recall learning about at least three of these: Principles and Parameters, the Minimalist Programme, and X-Bar Theory (the last of which is the current reigning model among people who still hold to Chomskian syntax). All of these were ingenious in some ways, but they all had the same two problems. They would over-generate some things, allowing for kinds of languages and syntactic transformations which are never observed, and they would under-generate other things, prohibiting syntactic features which are actually found. And no one ever got any closer to an actual description of Universal Grammar.
Eventually people started looking for other ideas (and the handful of anti-Chomskian holdouts started to get some cred). There are still Chomskians around, so this example might not fulfill your condition #4, but most of the new interesting work on syntax these days does not use the Chomskian model at all. No one believes in Universal Grammar any more. The most interesting new syntactic models are stochastic (meaning they work with a probabilistic model of grammaticality and generativity) and/or don’t have deep structures at all, both of which are anathema to the Chomskian approach.
(This is based on my memories of linguistics undergrad courses which are now a decade in the past, plus my reading of linguistics papers since then, so I could be wrong about all of this, and have probably screwed up some of the details. But I’m pretty confident in my overall conclusion that Chomsky was fundamentally wrong about syntax.)
Hooray! Really glad you wrote this one up. I was going to give it as an example after Scott clarified point #3, but I think you’ve done a better job.
I was railing against Chomskianism in the 80s, which made me borderline crackpot at the time. Glad to be vindicated by history 🙂
I’m surprised you didn’t pounce on symbolic AI.
That would have been boasting.
I’d also mention Chomsky’s oft-quoted claim that there must be a universal grammar because the sentences a child hears don’t contain enough examples to learn a grammar. This is testable; we can use information theory to measure the information in a grammar and the information in a corpus of text. As far as I know, linguists still cite this claim today, yet a couple of minutes of analysis proves it’s false, and false by orders of magnitude if you adjust for the fact that many different grammars may suffice.
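For what it’s worth, the back-of-envelope version of that comparison goes something like this (every number below is an illustrative guess, not a measurement):

```python
# Rough information-theoretic comparison: does a child's linguistic
# input carry enough bits to pin down a grammar?  Order-of-magnitude
# toy figures only.

# Assumed corpus: ~10 million words heard in early childhood,
# ~5 characters per word, ~1 bit of entropy per character
# (Shannon's estimates for English were roughly 0.6-1.3 bits/char).
corpus_bits = 10_000_000 * 5 * 1.0

# Assumed grammar: ~1,000 rules, each specifiable in ~100 bits.
grammar_bits = 1_000 * 100

print(f"corpus:  ~{corpus_bits:,.0f} bits")
print(f"grammar: ~{grammar_bits:,.0f} bits")
print(f"ratio:   ~{corpus_bits / grammar_bits:.0f}x")
# At these (debatable) figures the input exceeds the grammar's
# description length several-hundred-fold, and if many different
# grammars suffice, the required number of bits shrinks further.
```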
I am one who is uncertain about the existence of Global Warming. I’ve read some of the most famous reports, heard some very clever rebuttals to them, looked around for rebuttals to those rebuttals, and mainly just ended up confused.
For what it’s worth, a link to a site by a SF author, career statistician and (warning ahead for those of you who need it) Roman Catholic, Michael Flynn.
A link to a list of posts he has about global warming/climate change, and his critiques of the mathematics/statistical analyses involved.
My mathematical abilities stop at the twelve times table, so I have no idea if his criticism is good, bad or indifferent, but he has a professional interest in how the data is interpreted.
Well, if you’re okay with being confused, you should probably just assume the experts are probably right.
If you are soliciting links, I’d recommend trying to check over the basic science; it’s surprisingly doable. This page has some history and good links, including these two.
There’s the controversy over the existence of macromolecules. I’m not sure how strongly it meets criterion 3, though. There was considerable resistance to the idea when Staudinger first proposed it, and in 20/20 hindsight it looks like the resistance lasted longer and was more vehement than was justified, but it did come into broad acceptance over a decade or so.
I wrote a relevant post. It mentions an example you didn’t give. It’s a great example because it happened in parallel, so you can’t say that people weren’t ready for it.
I haven’t read the comments yet so sorry if someone already said this but I think part of the problem is lumping in the error of “this is bunk” with “This is reasonable but your conclusions based on it are unreasonable.”
In my opinion, a lot of pop climate science is the equivalent of someone learning about evolution and responding with “Clearly if we follow the recent trend of human development, evolution will result in stick-figures with gigantic brains, since more evolution leads to more intelligence.” Sure that COULD happen but it’s not actually specified in the far less predictive information we have about evolution. I don’t think denying that theory would be denying evolution.
Content warning: reference class gerrymandering.
You say fifty fields of study with five paradigms apiece. That might be fair to represent the whole of science, but it’s not the case that the only thing we know about greenhouse-gas tipping-point climate change is that it’s a paradigm of a field of study. Rather, we know that it is a socially-controversial paradigm that’s used to justify political action, and I’m guessing the numbers for those look quite a bit worse.
If you wanted to be really cruel, you could say that the appropriate reference class is paradigms that keep making wrong predictions but are held to anyway, and so the cynic was right and we should be comparing it to economics.
The question of how many “fields” there are does seem important.
One way to count would be by heads. For example, is “cardiology” a field? It’s a subfield of medicine, but I’d guess it has way more heads than climatology. [I have no data for that, just a WAG.] Climatology could be listed as subfield of “Earth Science,” too.
The most important fact in cardiology for a couple of decades was that statins prevent heart disease by lowering your cholesterol. That was a very well-established fact that nearly everyone agreed on. There were some wackos all along who said this was an artificial political consensus created by pharma companies, but they were ignored. Tons of statistics showed that statins lowered cholesterol and heart attacks.
But that turned out to be false. Lots of other drugs lower cholesterol and don’t prevent heart attacks. So if statins are cardioprotective, it’s for some other reason. And the tons of statistics are looking slightly shaky, too. In fact, it’s possible that they were mostly generated by pharma companies and a wee bit of bias might have crept in.
If you’re going to count heads, there’s the issue of what’s a “scientist”. Cardiologists, and most physicians, are not scientists. They’re engineers. Except that some physicians, just like some engineers, actually are doing science, at least some of the time.
Luckily it need not be based solely on that. But incidentally: string theory, macroeconomics, nutrition.
It’s also important to distinguish between climate change and anthropogenic climate change, which I sometimes call carbonic warming. That 2000 is noticeably warmer than 1900 is not reasonably disputable. The question is whether it is carbonic warming or noise. If CW, it is likely to continue and more or less track CO2 concentration. If not, not.
Prior to the recent utter debunking of every climate model, we already knew climate modelling was 99% impossible because it’s a chaotic system. You cannot successfully fit past climate data without overfitting. Overfit models reliably diverge wildly as soon as they’re out of the training range, which is what we’ve in fact observed. Usually overfit models will diverge wildly in both directions, but predictably, published climate models only diverge upward.
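For what it’s worth, the general phenomenon being invoked here, that an overfit model matches its training data exactly and then blows up just outside it, can be shown in isolation (toy data with hand-picked “noise”, nothing to do with any actual climate model):

```python
# Fit a degree-9 polynomial exactly through 10 noisy points on a
# gentle linear trend (zero training error = maximal overfitting),
# then evaluate it just outside the training range.

xs = list(range(10))
noise = [0.1, -0.15, 0.05, 0.2, -0.1, 0.15, -0.05, 0.1, -0.2, 0.05]
ys = [0.1 * x + e for x, e in zip(xs, noise)]  # true trend: +0.1/step

def overfit(x):
    """Lagrange interpolation: the unique degree-9 polynomial
    passing through all 10 training points."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Perfect on the training data...
print(f"training error at x=5: {abs(overfit(5) - ys[5]):.2e}")
# ...wildly wrong just beyond it: the true trend at x=12 is 1.2,
# but the overfit polynomial predicts a value in the thousands.
print(f"prediction at x=12: {overfit(12):+.0f} (trend: +1.2)")
```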
But back to chaotic systems. To accurately model Earth’s atmosphere requires a truly staggering amount of data, because you need it all. That’s what it means to be a chaotic system. This is comically beyond our capabilities.
Secondly, economics. From Austrian/Ancap principles, that CW would become corrupt is downright pedestrian. If you couple scientific conclusions to coercive power, then they will conclude that you should coerce them more money. Every time. Notably these are the same people that knew 80 years in advance that the USSR was going to collapse from internal hemorrhaging.
These issues are not arcane. They would have boiled off anyone with integrity or curiosity. The worthwhile grad students would have quietly switched advisers, and as a result there is no such thing as a climate scientist. Climate Audit’s repeated embarrassing of the so-called climate scientists was entirely predictable. Particularly memorable: finding they were suppressing the Medieval Climate Optimum, the peak of which is still significantly higher than modern times. And then climategate confirmed that they were committing outright fraud.
One particularly relevant scientific failure – global cooling. Remember global cooling, which was also predicted to be a disaster?
Remember the ozone layer? If banning freons or whatever fixed it, when was the victory party?
Remember acid rain? At least acid rain was actually measurable.
—
Environmental science has been reliably seized by political advocates. Climate science in particular is actually impossible. Climate science has shown signs of fraud. Climate science has had zero successes.
Presumption of innocence is for murder trials.* There is no evidence that climate science is worth a single dollar of anyone’s money.
*(More generally, the epistemic virtue is the presumption of inaction. Don’t throw maybe-possibly-a-criminal in jail. Don’t throw money at maybe-I-guess scientists.)
> the recent utter debunking of every climate model
[citation needed]
> And then climategate confirmed that they were committing outright fraud.
All the most dramatic claims of that kind that I heard of in the aftermath of the CRU hack turned out to be false. Could you be more specific about an example of “outright fraud”?
> Remember the ozone layer?
Yup. And yes, it does appear that cutting down CFC usage is what fixed the problem. Are you seriously suggesting that we determine whether some bit of science got the right answer by seeing whether there’s a “victory party”?
> Remember acid rain?
Yup. It looks to me like something that the scientists got right, the politicians and the market found a way of improving, and that’s now reasonably well under control. Is there supposed to be a problem there?
>[citation needed]
Looks like they’re all wrong. How wrong?
Roughly speaking, that means modelling per se has been falsified at 98% confidence.
>Could you be more specific about an example of “outright fraud”?
Already cited the Medieval Climate Optimum. The real graphs are on Climate Audit. If you want to see one that’s a straight up lie, go to Wikipedia.
>Acid Rain, Ozone
Good to see I can eat half a response with a couple sentences. This is why I brought it up: the emotional/political/propaganda value of these things vastly outweighs the scientific value of them. Looks like you jumped on these to make it look like you had something to jump on.
“The most recent climate model simulations used in the AR5 indicate that the warming stagnation since 1998 is no longer consistent with model projections even at the 2% confidence level.”
Since there hasn’t actually been a warming stagnation since 1998, I feel some doubts on that position. http://www.skepticalscience.com/global-warming-stopped-in-1998.htm
Anyway, anyone who uses 1998 as the beginning of a period is essentially a fraud. The year was exceptionally warm, but the years around it weren’t. Nobody claims “there has been no warming since 1999 (or 1997)”, because that’s clearly ridiculous. But then fixating on one year (1998) is equally ridiculous.
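The cherry-picking effect is easy to demonstrate on synthetic data; a sketch with a made-up trend and a single made-up spike year (not real temperature data):

```python
# A steady +0.02/yr warming trend, with one anomalously warm year
# (1998).  Fit an ordinary-least-squares trend line starting the
# fit at different years and compare the measured slopes.

def ols_slope(xs, ys):
    """Slope of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

years = list(range(1990, 2014))
temps = [0.02 * (y - 1990) + (0.3 if y == 1998 else 0.0) for y in years]

for start in (1997, 1998, 1999):
    i = years.index(start)
    print(f"fitted trend from {start}: {ols_slope(years[i:], temps[i:]):+.4f}/yr")
# Starting the fit at the spike year drags the measured trend well
# below the true +0.02/yr, even though the underlying warming rate
# is identical throughout the series.
```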
But anyway, this conversation is pointless. You’ll come up with some point stolen from http://wattsupwiththat.com/ or similar, I’ll respond with something from http://www.skepticalscience.com/ or similar – we could just let the websites talk to each other, it’ll be more profitable all round.
The main salient point is that you are not a climate scientist, and you are unbelievably overconfident in your ability to know more than them. There is no reason your opinions (or mine, for that matter) should carry the slightest weight here.
I didn’t claim warming had stagnated. I claimed that climate models, all of them, failed to predict what actually happened. Prediction is the gold standard of science, and ergo it is safe to conclude there is no such thing as climate science.
I noticed you did not attack the point that climate modelling has failed.
Frankly that link looks like fraud to me. The chart at the Watts link is from satellite data, which I can dredge up from a ‘reputable’ source if that’s such an issue, and it clearly contradicts the data at skeptical science.
>There is no reason your opinions (or mine, for that matter) should carry the slightest weight here.
>Since there hasn’t actually been a warming stagnation since 1998, I feel some doubts on that position.
I notice you frontload the propaganda and then backload the disclaimer. These positions are in contradiction; you’re not allowed both at once. Either the answer is “warming hasn’t stagnated” or “I don’t know.”
If Authority is an issue, is the American Physical Society Authoritative enough? Because I’m simply parroting their claim in more blunt language.
>because it’s a chaotic system
The climate (versus the weather) isn’t strongly chaotic on the timescales we care about. If anyone truly believed that climate was chaotic, they’d be terrified and desperately begging for the colonisation of space, as they couldn’t know whether the next year or decade would be too hot or cold for life to exist.
>Remember the ozone layer?
Signed in 1989 and revised during the 90s: “The Montreal Protocol on Substances that Deplete the Ozone Layer (a protocol to the Vienna Convention for the Protection of the Ozone Layer) is an international treaty designed to protect the ozone layer by phasing out the production of numerous substances that are responsible for ozone depletion”
Resulting in things like this: “Evidence for the effectiveness of the Montreal Protocol to protect the ozone layer” http://www.atmos-chem-phys.org/10/12161/2010/acp-10-12161-2010.html
>Remember acid rain?
“The Acid Rain Program is a market-based initiative taken by the United States Environmental Protection Agency in an effort to reduce overall atmospheric levels of sulfur dioxide and nitrogen oxides, which cause acid rain.” It was set up in 1995 and “Since the 1990s, SO2 emissions have dropped 40%, and according to the Pacific Research Institute, acid rain levels have dropped 65% since 1976.” (see eg http://en.wikipedia.org/wiki/Acid_Rain_Program).
>when was the victory party?
Somehow “that thing we did five years ago about that thing we were worried about six years ago? well, turned out it worked” attracts less attention than “new disaster/sob story now NOW NOWWWWW!!!”
But if you want to organise a victory party, I’m up for it. What’s the dress code?
>If anyone truly believed that climate was chaotic, they’d be terrified and desperately begging for the colonisation of space, as they’d couldn’t know whether the next year or decade would be too hot or cold for life to exist.
Chaotic systems have attractors. We have ice ages and not-ice-ages. Indeed I’ve seen several discussions about how the problem isn’t warming per se, but about whether we’ll suddenly discover a new catastrophic attractor.
>Acid rain, Ozone
Wow, ate half of one comment and 4/5ths of another with a couple sentences.
As above, you missed the point. It looks like you can’t jump on the other 94% of my comment, so you jump on this to try to make me look bad.
If I thought it was particularly important, I would have spent more than a couple seconds on it. I’m not going to spend any more on it now. But for the record, I was looking for some sort of argument that CO2 sensitivity can be measured like rainwater pH.
> It looks like you can’t jump on the other 94% of my comment
Any anti-AGW drivel has a response on an opposing website; eg Medieval Climate Optimum:
http://www.skepticalscience.com/Sargasso-Sea-Not-Representative-of-Global-Temperature.html
Quoting opposite websites at each other is of no particular use.
Sure it’s useful. For one, I can’t call your stuff drivel if I don’t know what it is.
I’m trying to convince disinterested observers, not you. In this case, I can happily agree that the stuff debunked in your link is, in fact, bunk.
Refer to “IPCC 1990 Figure 7.1 bottom panel” in the Climate Audit link. Yes, that IPCC. Apparently it’s adapted from Eddy and Bradley, who were building on somebody called Lamb. I can dredge all this up if you want to get into it.
In case I need to repeat it: my statement is that the MWP and HCO actually happened but have been suppressed or deprecated in post-1995 climatology, for the obvious propaganda reasons.
These points were addressed here and here.
Skip pages 5-9, the point is narrative vs. narrative comparison.
High status narrative vs. high status narrative. Why is it high status? Because it allows the scientist to conclude you should coerce them more money.
The fact it wasn’t scientifically supported is a point in my favour. It shows a record of politically supporting bunk.
—
The statement “climate is not chaotic” is absurd. The atmosphere gave us the Lorenz system, which is literally the canonical example of a chaotic system.
I’m glad they know enough that they have to deny it, I guess. I’ve seen worse.
As usual, Alrenous lies about his links.
Douglas, your irrational hatred of me is showing. You going to substantiate that serious claim?
Though to me it looks like you’re conceding. I have privileged information: I know I’m not lying. That you have to say I am to not be convinced tells me I’m sharing the correct, relevant information.
Secondly, economics. From Austrian/Ancap principles, that CW would become corrupt is downright pedestrian. If you couple scientific conclusions to coercive power, then they will conclude that you should coerce them more money.
It’s corruptible from both directions. If scientific findings might endanger some industry’s profits, then research funded by that industry will conclude that the endangering findings are bunk. Example: tobacco-funded research on lung cancer.
However a profitable industry has plenty of money to begin funding research that will support its product — well before significant public money will be available to the other side.
Indeed. See also: vaccines. While I still think vaccines are great, we don’t actually know if e.g. thimerosal is dangerous to infants. (Vaccines are basically a free lunch. Free lunch – government malfeasance = still pretty awesome.)
Or: it seems voters get it when industries publish self-aggrandizing research. They don’t notice when government does it, though.
Some people notice some of the funding sources for a while. But the results of the studies stay on record indefinitely.
“Notably these are the same people that knew 80 years in advance that the USSR was going to collapse from internal hemorrhaging. ”
That’s astonishing, since the USSR didn’t even exist for 80 years.
“heliocentrism wasn’t invented until the 1500s, and after that it took people a couple of generations to catch on.”
Actually, what happened was that it took a couple of generations for anyone to come up with any evidence for it. Nothing that Galileo offered is now considered evidence.
There was plenty of evidence for thousands of years. Aristarchus showed that the sun was much bigger than the Earth. That should convince anyone.
Galileo observed the phases of Venus. That’s definitive proof that Venus goes around the sun. Though I don’t understand why anyone ever doubted that Mercury and Venus go around the sun. Many people over the millennia have gotten that right.
Also, Galileo provided theory. His theory (relativity) was a rebuttal to most of the arguments against heliocentrism.
Pre-Newton, is it actually obvious that small things orbit big things?
Being post-Newton, that’s hard for me to judge. I don’t think the early moderns talked about that particular issue much, which is definitely a strike against it.
Before Newton, it’s not clear that large things should cause small things to move, but it was clear that large things are harder to move than small things.
Seleucus used the tides to prove heliocentrism, whatever that means. Presumably he found that the sun exerts a force on the Earth, almost as large as the force exerted by the much nearer moon. But Copernicus didn’t know this, and modernity was way behind, not even realizing that the tides had to do with the moon until shockingly late. I don’t mean how the moon causes the tides, but merely the fact that the tides run on the same roughly 25-hour (lunar-day) cycle as the moon.
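For the record, the cycle the tides actually track is the lunar day – the roughly 24 hours and 50 minutes between successive passes of the moon overhead. A quick sketch of the arithmetic (the synodic-month figure is a standard rounded value, not from this thread):

```python
# The Moon advances along its orbit while the Earth rotates, so the Earth
# needs slightly more than one full rotation to face the Moon again.
solar_day_h = 24.0
synodic_month_d = 29.53  # days from one new moon to the next (rounded)

# Lunar day = solar day / (1 - one day's worth of the Moon's orbital motion)
lunar_day_h = solar_day_h / (1 - 1 / synodic_month_d)
print(round(lunar_day_h, 2))  # ≈ 24.84 hours, i.e. about 24 h 50 min
```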
How do the phases of Venus prove that it goes around the sun?
And no, his “theory” doesn’t rebut most of the arguments against heliocentrism. Crucially, it doesn’t rebut the two big ones: if we were hurtling through space like that, why didn’t we feel it? And if we were that much closer to the stars at some parts of the year than at others (precisely which parts depending on the stars in question, of course), why did they always look identical?
Suppose that Venus and the Sun both went around us in circular orbits.
It is observed that the Earth never gets between the sun and Venus. That is, these bodies never occupy antipodal points in the sky. (Which is why Venus is the Morning Star and the Evening Star, but never the Midnight Star.)
Thus, as Venus and the sun orbit us, they are always roughly (very roughly) in the same direction relative to us.
If Venus went around us but outside the sun’s orbit, then the portion of Venus’s surface facing us would also mostly be facing the sun. Hence, the visible surface of Venus would always be mostly illuminated.
If Venus went around us but inside the sun’s orbit, then the portion of Venus’s surface facing us would mostly be facing away from the sun. Hence, the visible surface of Venus would always be mostly in shadow.
Now, in fact, we observe that Venus’s surface is sometimes mostly illuminated and sometimes mostly in shadow. This would not be possible if Venus and the Sun both orbited us, but it is exactly what we would see if Venus orbited the sun. (However, the possibility that the Venus/sun system orbits us isn’t ruled out by these observations.)
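That geometry is easy to check numerically. Here’s a minimal sketch (circular, coplanar orbits; the radii and periods are rounded textbook values, my own assumptions rather than anything from the comment above), showing that under heliocentrism the illuminated fraction of Venus sweeps from nearly full to nearly dark over one synodic cycle – just as Galileo observed:

```python
import math

def venus_phase_fraction(t):
    """Illuminated fraction of Venus's visible disc as seen from Earth at
    time t (in years), in a toy heliocentric model with circular orbits."""
    r_e, r_v = 1.00, 0.72    # orbital radii in AU (rounded)
    T_e, T_v = 1.00, 0.615   # orbital periods in years (rounded)
    earth = (r_e * math.cos(2 * math.pi * t / T_e),
             r_e * math.sin(2 * math.pi * t / T_e))
    venus = (r_v * math.cos(2 * math.pi * t / T_v),
             r_v * math.sin(2 * math.pi * t / T_v))
    # Vectors from Venus to the Sun (at the origin) and from Venus to Earth
    to_sun = (-venus[0], -venus[1])
    to_earth = (earth[0] - venus[0], earth[1] - venus[1])
    dot = to_sun[0] * to_earth[0] + to_sun[1] * to_earth[1]
    cos_alpha = dot / (math.hypot(*to_sun) * math.hypot(*to_earth))
    return (1 + cos_alpha) / 2  # phase-angle formula for the lit fraction

# Sample about one synodic period (~1.6 years) of configurations
fractions = [venus_phase_fraction(t / 1000) for t in range(1600)]
print(min(fractions), max(fractions))  # ranges from nearly 0 to nearly 1
```

If Venus instead always sat roughly between us and the sun, as in the Ptolemaic arrangement, the phase angle would stay large and the lit fraction would never approach full.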
Indeed, heliocentrism actually seems like a very important data point, even if it is right at the dawn of what we’d now call “science.” Copernicus’ theory was widely circulated and well-known among the educated for roughly 50 years after he published it, without making serious dents in the consensus. And they didn’t accept it for a very good reason: the bulk of the evidence pointed to geocentrism instead.
What we have here is worse than just a case of expert consensus failing, because the failure wasn’t even caused by political factors. This was a failure of empiricism itself, which is science’s primary truth-finding mechanism.
Uh… normally I’d defer to you on the history of psychiatry, you being a psychiatrist, but based on what little I know of the field, this sure sounds like an exaggeration. Even if you count Alfred Adler and Carl Jung as basically Freudian, you’ve still got alternatives like Carl Rogers and Abraham Maslow (who Wikipedia tells me got their start in the 1940s and 1950s), and drug treatments and electroconvulsive therapy becoming a big part of psychiatry some time in the 60s (maybe even the 50s, going by Wikipedia).
Exactly. Scott’s own examples fail to satisfy condition 3. Note also that Behaviorism and Freudianism were (are, to the extent both persist) largely contemporaneous. The primary competing “theory” for both, the common sense view of the times, was that conscious thoughts and emotions explained behavior. That’s about equally wrong, so I think what we have here is mere “failure to come up with the correct theory right off the bat.”
Good point about the psychoanalysis and behaviorism being contemporaneous. I was considering making a similar point, but maybe Scott’s idea is that the psychiatrists were all Freudians while the experimental psychologists were all behaviorists? Still seems implausible.
Neither Rogers nor Maslow were psychiatrists, and their work has been almost completely ignored by psychiatry. Psychology ≠ psychiatry.
Early drug and electroconvulsive treatment were not considered a disproof of Freudian therapy. The common belief among psychoanalysts was that drugs were all well and good for symptomatic relief but that only analysis could provide a real cure. Many analysts incorporated medication, Freud himself prescribed various psychoactive medications (including cocaine), and his daughter Anna Freud prescribed several of the early antidepressants. I don’t know to what degree they were viewed as contradictory paradigms. It seems to me that it would be perfectly consistent to understand that the brain had a biological substrate while believing Freud’s theories described its higher-level operations (as indeed many people do today).
Could you say what you think the essential differences are? (Of course I already know that a psychiatrist has a medical degree and can prescribe drugs.)
(I thought about this a bit more.) I guess psychiatry is interested in helping people deal with mental problems and psychology is interested in the human mind in general? And I guess the thing most relevant to your response to Chris Hallquist is simply that they are separate fields that don’t necessarily talk to each other a lot (or at least didn’t in the days of Rogers and Maslow).
Yes, that is the distinction. Ernst Weber is an example of a psychologist whose work on perception has no application to psychiatry. So it’s not a counterexample to psychiatry being dominated by Freud. And if Weber’s work had displaced Freud, that would have been the worst failure I’ve ever heard of.
But Chris didn’t give Weber as an example; he gave Rogers and Maslow. Wikipedia agrees with Scott that they are psychologists. Maybe Maslow should count as a psychologist whose theories ought to influence psychiatrists. But Rogers simply was a psychotherapist. His whole career was devoted to designing psychotherapies.
He was one of the four great astronomers of antiquity: Eratosthenes, who measured the size of the Earth (300BC); Aristarchus, who measured the size of the moon and the sun (250); Apollonius, who studied ellipses (200); and Hipparchus, who gathered data (150).
His argument for heliocentrism was not preserved, but everyone knew that it existed. For a century after Copernicus’s death, people claimed to be arguing for and against Aristarchus, not Copernicus. It is only recently that he has been erased from history.
━━━━━━━━━
That’s modernity. What about antiquity? What did they think about Aristarchus?
Well, aside from a heresy prosecution, we don’t really know. Lucio Russo argues that heliocentrism was the Hellenistic standard. Hellenistic astronomy doesn’t survive, but Roman pop science does, and its explanations for the weird movements of the planets include an illusion and (in two authors) the sun shooting triangles at them, both of which sound like heliocentrism. But the same authors are explicitly geocentric. I am convinced that it was a major school of thought, if not the dominant one.
What about the germ theory of disease? We have Ignaz Semmelweis, the fellow who cut the incidence of childbed fever by a factor of 9 by making his doctors wash their hands, and also John Snow, the cholera-epidemic pump-handle-remover. They were regarded as crackpots for decades, which I think is long enough to qualify.
Insufficiently recent to count, I think. (Although this is some more reference class gerrymandering. To what extent have we gotten better at recognizing when crackpots are right in the 20th century, versus to what extent have we simply become too cowardly to be crackpots? If you look at how climate crackpots are treated in the scientific community, you’d be scared too.)
I think an issue with the “since the 1900s” reference class is that it takes TIME to recognize large scientific mistakes or incompletenesses. It took a couple hundred years after Newton for Einstein to come along.
Also, my intuition is that “new” sciences go completely off course more easily than established sciences, and so you want to go back to the pre-formation of new science to look at major mistakes.
I would probably put the reference class at “since the Enlightenment”.
Not exactly sciences, but…
Economics: minimum wage monotonically causing unemployment
Archaeology: “Native” Americans arriving in one clump 10kya via Bering land bridge and ice-free corridor.
Archaeology: Viking settlements in North America
Medicine: Smoking is good for you (I can’t swear this was for real)
I don’t know about “good for you”, but R.A. Fisher, one of the most influential statisticians of the 20th century, was known to be a strong advocate of the position that smoking was completely harmless, and that the reason people with lung cancer tend to be smokers is because smoking makes lung cancer hurt less.
I own a glorious medical textbook from the ’20s, in which one is told that smoking is comparatively harmless but if taken to excess may damage one’s eyesight.
I don’t think that it was ever consensus in economics that minimum wages above market-clearing rates always cause unemployment (e.g. see http://www.igmchicago.org/igm-economic-experts-panel/poll-results?SurveyID=SV_br0IEq5a9E77NMV).
But the tenor of the hundreds of papers published on the subject (ably reviewed by Neumark/Wascher (2006) http://www.aei.org/files/2006/12/04/20061201_NeumarkWascherPaper.pdf) is that they do.
And in the cases they don’t, it’s likely that (i.e. we have suggestive evidence that) they depress future job growth, or reduce non-wage benefits of jobs.
Right, I’m still convinced that minimum wage above the market-clearing level does (all else being equal) tend to reduce the quality and quantity of jobs. But we don’t have to *decide* this one for it to work as an example – there was a HUGE consensus one direction until quite recently and since Card/Krueger there’s been a pretty large push in the other direction…and the two views can’t BOTH be right so at least one is wrong. Either view by itself fails on point 4, but the two views taken together probably work. (If you’re on Krugman’s side you think most of the field was wrong until 1992 and if you’re not, you think a large chunk of the field is wrong today.)
Wait, there weren’t?
Yes, I’m interested in this one: I thought this was still the consensus.
I think many of the examples here (including some of your original examples) are missing the point of science. Science at its core is *not* about finding truth; rather, it is about finding *frameworks* that are useful for uncovering truths.
Take the example of behaviorism. The core tenets were bullshit, but it was *way* better than the thing it replaced, which was basically armchair philosophy. Behaviorism was a reaction to a field full of armchair philosophers who speculated about what was in people’s heads without bothering to collect any data. The (admittedly extreme) reaction to this was to ban everything *but* observable data, which was quite effective at getting the overall endeavour back on track but had some side effects down the line.
Or take Freudian psychoanalysis. I’m less sure about this example, but my impression is that before Freud people didn’t really think that *listening to what the patient said* was a particularly useful thing to do. Freud bothered to do that, and yes, also came up with a bunch of other crazy ideas that were not terribly useful, but that fundamental insight was *so* important that even with all the additional baggage it was still better than what it replaced.
Note that in linguistics, behaviorism was later mostly replaced by Chomskyan nativism, which again goes to the other extreme and claims that language is almost entirely hard-coded into humans, with a small number of parameters to be learned. This seems mostly wrong given current knowledge but was a huge improvement upon behaviorism (why? because there were important identifiable aspects of language that behaviorism was unable to say anything meaningful about, but which Chomsky was able to). And now we’re emerging into a more nuanced, statistical understanding of language, which is succeeding because *it* can explain properties that *nativism* can’t.
OK. Current climate modeling is a framework for uncovering truths, and the fact that it has not uncovered any to date should not be held against it as a noble act of science. But as a policy guide to whether or not we should cripple all spheres of human activity by taking away their cheap energy in order to not destroy the world, maybe we should wait until it actually uncovers some truths, yes?
The models predicting climate change makes use of statistical paradigms that have had great success at uncovering truths in other areas, as well as within climate science itself (where there is a large amount of data collected and many diverse phenomena that have been successfully explained, so no shortage of problems to solve to test the framework). It seems like a perfectly reasonable framework, and unlike in linguistics and psychology it didn’t replace anything truly horrible (as far as I know, at least), so we shouldn’t expect there to be too much accompanying baggage.
Behaviorism was a reaction to a field full of armchair philosophers who speculated about what was in people’s heads without bothering to collect any data.
No, I’m sorry, but this is simply not true. Consider:
Hermann Ebbinghaus
Wilhelm Wundt
Gustav Fechner
Ernst Weber
These are some of the great experimental psychologists, who championed and demonstrated that the mind could be studied empirically, and discovered facts of human psychology which are held true to this day.
Some of these people died before behaviorism was even invented. The idea that before behaviorism, psychology was just armchair theorizing, is nonsense. You can claim that it was a reaction to structuralism, and to the idea (from Wundt, Titchener, etc.) that one must be trained in introspection to be an experimental participant, and to other such things, but that’s a rather different matter.
Thank you, my understanding of the history is shaky at best and this helps a lot to fill it in.
Scott, I don’t think you can use scientific method to study scientific method. 🙂
Damned if you do, damned if you don’t? People have criticised epistemologies that *don’t* apply to themselves. (https://en.wikipedia.org/wiki/Self-refuting_idea#Verification-_and_falsification-principles)
I think you do want to be able to apply an epistemology to itself. E.g. “science”, Bayesianism etc.
Continental drift was for a long time rejected by the mainstream despite the evidence for it. This may not count as it was a positive claim, and the objections were based in part on the lack of a causal mechanism, which is semi legitimate. Possibly we can count the alternative at the time of massive intercontinental land bridges (to explain similar fauna) as a claim that was obviously silly and there was a good alternative to. http://en.wikipedia.org/wiki/Continental_drift#Rejection_of_Wegener.27s_theory http://www.ucmp.berkeley.edu/history/wegener.html
There was a lot of good evidence against it, too; for one, the mechanism originally proposed for moving the continents around was sheer nonsense.
Speaking as a geologist-in-training, we’ve only /barely/ gotten past the point where there are living geologists that reject continental drift. (The development is surprisingly recent, at least as a complete synthesis- we’re talking about the 1950s, not the Victorian era or something.)
But I wouldn’t classify a rejection of plate tectonics as an error of the sort that is germane to this discussion. Science is an ongoing process, and ‘not being persuaded by a useful theory’ is worlds away from ‘being convinced by a false theory’. In the case of plate tectonics, I know there was a generational component; everyone who went through graduate school after the introduction of plate tectonics was persuaded, a large fraction of the older ones were persuaded, and eventually the rest died of old age.
I had books as a child, in the 1970s, which treated continental drift as not entirely established.
Ludwig Boltzmann had a devil of a time getting physicists to take his work on the kinetic theory of gases seriously; it wasn’t until Einstein’s paper in 1905 on Brownian motion that physicists couldn’t justify disbelieving in atoms any more. (Chemists had been taking atomic theory literally for quite a long time before that.)
A huge failure in the field of physics was the discovery of N-rays, which were analogous to X-rays: 120 scientists, 300 published papers, and almost 30 years of research into something that did not in fact exist.
http://en.wikipedia.org/wiki/N_ray
And again the discovery of Polywater. Ten years of research into something that should have been debunked in one day. This is just boiling/freezing water.
http://en.wikipedia.org/wiki/Polywater
And to be honest climate change is much harder to debunk (assuming it is really wrong).
The wiki article there suggests that it was not 30 years but rather 2 years that it spent between prominence and debunking. Embarrassing, but not on the level of climatology.
You are right, must have mixed up the numbers.
Another very interesting failure of science, at least for a long time, was the question of the origin of the moon. Since the 19th century, around six quite promising theories had been advanced to explain the origin and formation of the moon.
It wasn’t until Apollo landed on the moon and the geochemical composition of its samples was analyzed that all of them were debunked.
I think this example, maybe combined with Newton’s aether, which was also around for a long time, is better suited for comparison to other new scientific theories. Dark matter and dark energy come to mind, where no direct evidence has been observed so far and which might turn out to be wrong.
This has nothing to do with climate science in particular, but I find it a bit too easy that Scott only mentions the three examples he can think of, all of them in the “soft” sciences.
Could you tell us about your suspected current failures of science in a month or so? They sound quite important, and by that time they wouldn’t risk distracting from present discussion.
Seconded, though a month seems overmuch. Discussions in this blog tend to die in a matter of days (usually as soon as the next blog entry is posted).
How about:
The belief that smoking was beneficial to health (cigarettes handed out in doctors’ offices).
The belief that a low-fat high-carb diet was the best way to lose weight.
Also:
” I can think of three that I very strongly suspect are in that category, although I won’t tell you what they are so as to not distract from the meta-level debate.”
Can you make a separate post about them?
The less the science has to do with human affairs, the gentler the arguments. When astronomers demoted Pluto from planet to dwarf planet, some people were upset. Neil deGrasse Tyson claims, jokingly, to have received hate mail because of his stance on Pluto. But it’s unlikely modern astronomy will ever generate the seething bitterness of the Michael Mann vs Mark Steyn court case.
The more the science has to do with human affairs (e.g. social psychology), the more we need to be vigilant. Diederik Stapel published fraudulent studies over many years before he was finally caught out. I don’t imagine a physicist working at CERN could even attempt anything as brazen.
An interesting case to consider is the Mad Cow Disease outbreak in the UK. To begin with, Richard Lacey was a lone voice warning of the danger of infected beef. His treatment was disgraceful
http://www.theguardian.com/uk/2001/mar/05/footandmouth.simonhattenstone
But once the problem of Mad Cow Disease potentially causing nvCJD was recognised, things swerved to the opposite side. I seem to remember forecasts of six-figure death tolls across the UK. In the event 176 people died from nvCJD. There were also some farmer suicides.
Not sure if these are good enough:
Plate tectonics.
N-rays.
HBD.
Homosexuality as a medical condition (separate from it being a fitness-reducing disease).
The Highlands controversy (which Wikipedia hardly mentions)
Two links to the same article:
http://archive.lewrockwell.com/rozeff/rozeff152.html
http://www.lewrockwell.com/2007/05/michael-s-rozeff/how-the-state-corrupts-science/
The number of human chromosomes has already been mentioned. It’s sort of a real-life example of Asch’s conformity experiment.
Of interest:
https://en.wikipedia.org/wiki/Suppressed_research_in_the_Soviet_Union
Freud has his defenders…
http://www.theguardian.com/society/2003/mar/22/health.healthandwellbeing
I’m no expert, and he might have credible defenders, but Oliver James is not one of them.
So has Marx. Some people just want to believe.
Boltzmann’s entropy formula and the whole mechanism of what’s now known as statistical mechanics in physics were so ignored and ridiculed that Boltzmann took his own life before seeing his theory become mainstream. It is now the basis of statistical mechanics.
http://en.wikipedia.org/wiki/Boltzmann's_entropy_formula
(Might be slightly too old for you to count it).
The brief time that memory RNA was considered proved:
http://en.wikipedia.org/wiki/Memory_RNA
then it wasn’t.
The belief in the luminiferous aether up until the late 19th century — that was mainstream physics but just wrong — this was the basis for all physics related to light at that time:
http://en.wikipedia.org/wiki/Luminiferous_aether
Stomach ulcers from bacteria — depending on how narrowly you want to define field
http://en.wikipedia.org/wiki/Helicobacter_pylori
Ulcers were long thought to be caused by diet and lifestyle. The bacterium was first discovered in 1875, rediscovered several times up until the 1980s/1990s, and is now scientific mainstream.
But considering you’re defining climate change as a “field” I think it’s sort of OK. Climate change wouldn’t be a field if there wasn’t climate change — it would just be a boring series of observations of temperature fluctuations.
Has anyone mined nutrition yet? There’s loads of examples there. Speaking of nutrition, here’s Kellogg.
1. Probiotics. Role of gut flora. I don’t know if Kellogg figured out the causal mechanism, but his recommendation to eat a lot of yogurt (for both physical and mental health reasons) falls pretty well in line with this — and only now is it going mainstream.
2. The effects of masturbation: net positive or net negative? Total reversal of position means there’s a mistake *somewhere*, even if we don’t know what it is yet.
3. How well-known was it back then that caffeine causes bowel irritation, or that fiber is important?
There are probably more — I haven’t read much of Kellogg, and I don’t know the history very well.
Ptolemy’s Almagest explicitly discusses heliocentrism, and rejects it mainly on the grounds that there is no plausible physical mechanism, not that Ptolemy can’t make it work mathematically.
Do you have a precise citation to where Ptolemy says this?
Wikipedia says that he rejected the rotation of the Earth on the grounds of winds, which sounds like a rather different complaint.
A few people actually got heliocentrism right before Copernicus, according to
http://en.wikipedia.org/wiki/Heliocentrism
Also, wasn’t science *obviously* politicized in the era of Copernicus, Bruno and Galileo?
Does Freud say wrong things about the brain, or just ignore it?
Is it fair to say that psychoanalysis is no more effective than placebo as a treatment, when it was the original type of talk therapy? Talk therapy is very effective, even if the type of therapy doesn’t matter, and represented major progress in terms of treatment relative to the ‘straitjacket and padded room’ standard of care, or even the ‘lobotomy’ that briefly succeeded them. Also, a huge amount of classic psychology did take place during the period dominated by psychoanalysis and behaviorism.
Fifty fields as important as genetics, psychology and psychiatry, with internal consensus as strong as behaviorism, psychoanalysis and Lysenkoism? I don’t believe that. It’s terrible procedure to make up a number on one side when you can easily just try to count it out, and I think you’d get stuck before 20. Also, 5 paradigms per century? Maybe 2-3. If we assume 20 and 2.5, we’re looking at a failure rate of 3:50. If we follow your rules and say that current consensus doesn’t count, only overthrown consensus, then there are 1-2 instances per field of overthrown consensus per century, for a failure rate of about 3:30, or 10%.
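Spelling that arithmetic out (all the counts here are guesses, of course, not established figures):

```python
# Back-of-envelope failure rates for "an entire field goes wrong".
fields = 20                  # fields as important as genetics/psychology/psychiatry
paradigms_per_century = 2.5  # major paradigms per field per century
failures = 3                 # behaviorism, psychoanalysis, Lysenkoism

# Counting all paradigms:
rate_all = failures / (fields * paradigms_per_century)  # 3/50 = 6%

# Counting only overthrown consensuses (1-2 per field per century; take 1.5):
rate_overthrown = failures / (fields * 1.5)             # 3/30 = 10%

print(f"{rate_all:.0%}, {rate_overthrown:.0%}")  # prints 6%, 10%
```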
What about fields like hypnosis, where there’s been a consensus for centuries that it’s real but that you shouldn’t use it or investigate it?
What about Traditional Chinese Medicine? Or massive differences between Western Countries WRT basic medical facts, as described here http://www.amazon.com/Medicine-Culture-Revised-Lynn-Payer/dp/0805048030
If I remember correctly, Stalin banned lots of sciences, BTW, including computer science!
WRT global warming though, I’d say that the conversation is politicized to the point of incoherence, not just controversy. Is ‘supporting anthropogenic climate change’ like ‘supporting gay marriage’, e.g. demanding that we (whoever ‘we’ is) do something? Or is it simply not denying high-school chemistry relating to the emissions spectrum of CO2, which I’d say is a great deal more certain than 97%? My impression is that the consensus on AGW is that
a) the world is getting warmer, largely or entirely due to CO2 emissions.
b) we can make models to outline a range of effect sizes
c) the models are known to be highly unreliable, have huge error bars, and even the high-end estimates give effect sizes small enough that given normal rates of time discounting it wouldn’t be worth paying much now to correct the problem, but given how unreliable the models are, their high-end estimates aren’t the real high-end anyway
d) empirically, the glaciers seem to be melting faster than the high-end estimates from the models predict, but no-one has a good explanation for what’s happening there
e) there’s little philosophical consensus in favor of time discounting
f) there are lots of non-AGW reasons for doing many of the things that are proposed as solutions to AGW by mainstream authorities. Meanwhile, technocratic engineering types receive little support when they promote quick fixes via geo-engineering.
All in all, it seems clear that climate change is not about changes to the Earth’s climate.
You blatantly assert that hypnosis is real, and that this has been known for centuries. Can we have some links? Lots of links?
And what, exactly, are you saying is up with traditional Chinese medicine?
The Oxford Handbook of Hypnosis is a good place to start. You can read bits of the book on the first link and chapter abstracts on the second. http://books.google.com/books/about/The_Oxford_Handbook_of_Hypnosis.html?id=Nz_dnQEACAAJ
http://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780198570097.001.0001/oxfordhb-9780198570097
If you want “lots”, google scholar will generate them for you. I suggest starting with the queries: “hypnosis real simulator”, “hypnosis fMRI”, “hypnosis pain”
I’ve always been curious about this with regards to hypnosis discussions: how would you define “real”?
A real “trance state” that is significantly different from “normal” thought? Real effects? Particularly impressive real effects?
What other fields are like hypnosis in this way?
Just for fun, I’m going to throw in another one – from my totally biased personal advocacy, of course. I think the current situation with genome identifiability and human biological materials is sort of in this place.
Pt 2. This is really important: human tissues and cell lines are a foundational and pervasive tool in human biology/health research. We have been sharing these materials with very few barriers – to the benefit of science. We legally *ruled* individuals could not assert personal property rights on materials derived from them: see Moore v. Regents of the University of California.
We thought we couldn’t know who they came from. We were wrong. Not only can we re-identify, we can look at the individual’s genome and predict all sorts of sensitive stuff from it. It’s huge.
Pt 3. There is a stubborn resistance to truth. As you saw with the NIH’s draft genome data sharing policy last fall, the NIH persists in describing genetic data as “de-identified” when it is now a published fact that “de-identified” samples are identifiable (Gymrek et al, Science 2013). People have been warning about this for years (I work with them). There is a ton of fear and frustration that decades of materials and data are going to be rendered unusable (I have talked to leaders who voice this).
Pt 4. Does everyone agree? Essentially, yes. Everyone now agrees the genome is identifiable, that the biological materials are identifiable. Resistance went from denial to “not talking about it” to “but it’s unlikely and hard to do” – and that’s an increasingly weak argument. We’re currently (FINALLY) in a transition between positions 3 and 4. People are starting to look at solutions involving data security, legal restrictions, and restructuring how we pursue data and biological sample sharing and consent.
What does this say about science and wrongness? Perhaps that sunk costs sometimes play a large role in point #3. Beyond the sunk costs of personal reputations, anthropogenic climate change would seem to be in the reversed situation (the scientific consensus is in spite of societal sunk costs).
Anyone want to bet anything that there will be scandals re: corporations snooping around for personal records, or private individuals (I will refrain from speculation on the particular groups likely to be tempted) attempting to practice eugenics? On the bright side, maybe this will give credence to the currently-unfashionable warnings about how fucking damaging such stuff could be.
I think your work might be of enormous importance to public security! Would you be willing to share a summary of the efforts in this direction?
You may be disappointed, amused, and/or interested to hear that our solution is not about security: Some people want to share their samples and genomes publicly, despite the risks. Work with these people.
(That is to say, our work is in “restructuring how we pursue data/sample sharing and consent”. I don’t have a solution for the sunk costs beyond “stop sinking more costs!”)
You can follow our work at blog.personalgenomes.org and also our new project blog, blog.openhumans.org.
American Anthropology, 1930-1980. Franz Boas opposed evolution and was convinced that human behavior was entirely a product of culture, and his doctoral student Margaret Mead knew that she’d have to produce results in line with this to get her degree. She did some sloppy fieldwork and published “Coming of Age in Samoa”, which reported as fact things her informants had made up as jokes about Samoan sexuality, and was a primary reference used for decades to argue that sex roles were purely cultural artifacts.
The alternate theory was evolutionary anthropology. At the time, it was interpreted in a very conservative, racist, sexist, and teleological way, so it’s hard to blame Boas, or even to be sure that his mistake wasn’t beneficial to us.
Nitpick: Boas rejected orthogenetic evolution, not Darwinian evolution.
Wikipedia:
This was during the period that Huxley described as the “eclipse of Darwinism”, which may well be an example of what Scott is looking for here — the period where evolution of species was generally scientifically accepted, but natural selection as a means was not. Biologists (and anthropologists, etc.) looked for other driving forces behind the change of species: theistic evolution, orthogenesis, neo-Lamarckism. This lasted until the merger of genetics with evolutionary theory in the modern synthesis.
Thankfully, determining whether AGW is correct based on whether SSC “can think of” other examples of scientific failures is laughably ridiculous. AGW, like everything else, ought to be judged on its merit based on the available evidence.
I haven’t had the time to read the 100+ comments preceding this one, but I’m sure somebody mentioned continental drift, the bacterial origin of ulcers, and the push for hydrogenated fats as major scientific failures.
But that’s beside the point. Science, even complex science, is understandable to educated science enthusiasts such as myself, provided the evidence is convincing. The evidence of the fossil record and genetics makes an overwhelming case for evolution. The double-slit experiment shows quite clearly that the major claims of quantum mechanics are true. Time dilation has been measured and confirms Einstein’s theory of relativity.
Where is this evidence for AGW? It’s warm outside? Seriously, it’s pathetic. Computer models? Right. AGW predicts it should be about 1 degree C or so warmer on earth right now, so that prediction fails. AGW predicts a hotspot in the upper troposphere (the source of the water vapor feedback) – it’s not there so that prediction fails. AGW predicts that historical records should show CO2 driving temperature, but the Vostok Ice cores show clearly temperatures driving CO2 levels by outgassing of the oceans, so that fails. AGW predicts that we should be warming at an unprecedented rate, but paleoclimatology shows that this is false too – the current warming rate is not even close to being unprecedented and neither are the current temperature levels.
On the other hand we can see clearly that there is major pressure for scientists to follow the AGW paradigm, on pains of being excommunicated and branded an Exxon-funded denier. Governments are clearly giving more grants and prestige to scientists who ring the alarm bells than to those who say everything’s ok. And politicians love the notion that they can “save the planet” by taxing and regulating us to death.
Well, you pass a Turing test of a sort because reading the first two paragraphs it seemed very likely you would continue on to say the deniers are completely delusional.
Scott hasn’t read or responded to things that actually fit his criteria and people keep posting terrible examples. Like, the 1900 cutoff is obviously reasonable and the least objectionable restriction but people ignore it.
The result of this method and discussion has been roughly an 8% failure rate for major controversial claims over 50–100 years. AGW does share many features (difficult to observe or experiment with, and requiring long time scales for scientists to do work in the field) that are favorable to the critics in subdividing reference classes, compared to disciplines like physics and medicine.
It isn’t clear to me that recovered-memory therapy was ever mainstream enough scientifically to count here; but it was mainstream enough in society to cause quite a bit of upset in the 1980s. The “Satanic ritual abuse” moral panic thrived on the conjecture of repressed memories of abuse that were “recovered” through therapy, and which in retrospect it’s pretty clear were invented through what amounts to guided storytelling.
If we can’t come up with more than a dozen examples of science failure, I think it’s fair to say there were enough opportunities for science failure that these are very rare, much rarer than would be necessary for the anti-AGW case to merit serious attention.
This coming from a self-described “rationalist” is pathetic. Essentially you’re saying science is rarely wrong therefore shut up.
No. What is the evidence which makes the case for AGW? It’s not there. I ask this all the time and I get the usual “scientists agree” and “it’s a whole body of complicated evidence”. THAT’S NOT SCIENCE!!!!
I understand enough complicated scientific theories that if the evidence for a theory can only be presented under such pathetic pretexts, I don’t buy the theory.
And what evidence does the IPCC use to make the case for CO2 being the driver of climate:
The observed patterns of warming, including greater warming over land than over the ocean, and their changes over time, are simulated only by models that include anthropogenic forcing.
Wow, so their computer models can’t replicate the past correctly until they tweak them with CO2 and cooling aerosols.
Computer models are not evidence – they are a hypothesis. And they have been falsified by the current temperature trend.
Despite being skeptical of AGW, and especially of the proposed “solutions”, I think you’re being somewhat unfair to Scott when you say:
This coming from a self-described “rationalist” is pathetic. Essentially you’re saying science is rarely wrong therefore shut up.
The issue of whether scientists are wrong, as a group within a field, is a social-science question; one where finding out how often the phenomenon occurs is a reasonable start at investigating the question.
The main question Scott is asking is a fair and interesting one, namely how often is science wrong.
But to use this results to judge whether a particular theory is wrong or not is doing things backwards.
We in the Bayesian Conspiracy call that “establishing a base rate”. You might look into it sometime.
I can understand using a Bayesian analysis for some questions, but when something is directly knowable it seems ridiculous.
Should we use that in criminal trials too?
“Your honor, 95% of black males prosecuted by this District Attorney have been found guilty as charged. In light of this I propose we dispense with the trial and declare him guilty right now.”
Different heuristics are appropriate to different goals. Prior to doing the research your estimate should be the base rate. If you find the topic interesting and worth your time, then dive into the substantive disputes.
e:lol maybe I shouldn’t use “questions” to mean three different things in three sentences
“The issue of whether scientists are wrong, as a group within a field, is a social-science question”
No, the issue of why they are wrong is a social science question. The issue of whether they are wrong is entirely a climatology question, although the Bayesian analysis of how readily we should trust them can and should draw on observations from many fields.
If 95% of blacks pulled into court were convicted, betting on their conviction for offered odds better than 1 to 20 would be an excellent idea. If you think that they shouldn’t be convicted, you either think that the court is racist or that the costs are so high that 95% certainty isn’t enough. Your analogy proves nothing. Bayes always wins. Come to the darkness.
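The base-rate reasoning being argued over here can be made concrete with a toy Bayes’ rule calculation. All the numbers below are made up purely for illustration (the 8% prior echoes a figure floated elsewhere in the thread; the two likelihoods are invented), so this is a sketch of the method, not a claim about climatology:

```python
# Toy base-rate sketch: how much should a vocal outside opposition movement
# shift our belief that an entire field is wrong? All inputs are hypothetical.

# Prior: fraction of fields that get a core claim badly wrong (made-up ~8%).
prior_field_wrong = 0.08

# Likelihoods (also made up): chance of seeing a large opposition movement
# when the field really is wrong vs. when it is actually right.
p_opposition_given_wrong = 0.9
p_opposition_given_right = 0.3

# Bayes' rule: P(wrong | opposition observed)
numerator = p_opposition_given_wrong * prior_field_wrong
denominator = numerator + p_opposition_given_right * (1 - prior_field_wrong)
posterior = numerator / denominator
print(round(posterior, 3))  # ~0.207 with these made-up inputs
```

Even with a likelihood ratio of 3:1 in favor of "field is wrong", the low base rate keeps the posterior near 20% rather than near certainty, which is the whole point of the outside-view exercise: establish the base rate first, then dive into the substantive disputes if the stakes warrant it.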
I think the difference here is that I’m trying to use outside view.
I’m banning you for tone and overconfidence. It’s not really your fault, there are just too many annoying people around right now and I’m on a hair-trigger and you were the first person to trip it.
This is my favorite reason-for-banning that I’ve heard in a long while, due to its clarity, bluntness, and amusingness.
I am pretty sure > 50% of people who would self-identify as anti-AGW would believe what you call the Obviously Unreasonable position, which is a pretty big difference between that and the feminism example.
Absolutely not. You are just demonstrating your bias here.
Does it count as a failure of science if the problems with the theory are well known and clearly demonstrated, but in a different field?
Artificial intelligence from 1955 through 1985 was ruled by symbolic AI. The problem areas for symbolic AI, which have to do with the inaccuracy of categories, and the importance of subconscious connotations and context, were worked out by Wittgenstein and analytical philosophers in the 1930s, and very thoroughly experimentally delineated in linguistics and psychology in the 1960s through the 1980s (Rosch, Medin, etc., work on color categories, basic-level terms, prototype vs. exemplar theory, semantic priming, asymmetries in similarity judgements), yet AI researchers (with the exception of John Anderson) remained oblivious to this literature, and had to discover the same objections themselves through different angles. David Chapman, who commented above, was a key player in this.
What happened next was even odder: There was a counter-movement of “reactive behavior” and “embodied AI” that recapitulated hard Skinnerian behaviorism and anti-cognitivism within AI. I think “embodied AI” was David’s term, and I don’t mean to accuse him of this, but I do accuse Rodney Brooks of this. Brooks said that “The world is its own best representation”, and said intelligence does not require knowledge (see “Elephants Don’t Play Chess”). This is all very well when you’re building robots to walk across the room without running into things, much as it works very well when you are testing the reactions of mice to blasts of air. But every objection Chomsky made to behaviorism in his famous 1959 review applied equally well to Rodney Brooks’ vision of behavioristic intelligence, which is very similar to that embodied in Skinner’s “Verbal Behavior”.
Summarizing information about the world requires constructing categories and making approximations. Some people see the advantage of logic and abstraction; others see the advantage of context, flexibility, and speed. This same opposition manifested in philosophy as the positivists and analytics versus Quine and the structuralists and postmodernists, in psychology as cognitivists versus behaviorists, and in AI as symbolic AI versus reactive behaviorists.
I think all six of these dogma groups should be excluded from our list of “failures”, since each of them is half-right. Logical positivism and behaviorism are considered “wrong” today, but that’s, well, wrong.
There is a major meta-science failure here, which is the failure to recognize that philosophy, linguistics, psychology, and artificial intelligence are all the same field–at least, to the extent that to work in philosophy or AI, one should have training in all four fields. And you might have to throw neuroscience in there, too.
Not relevant to Scott’s question, but an interesting perspective on AI: The first conscious machines will probably be on Wall Street.
Whoah, this is the most interesting paragraph in this thread. I’ll have to dwell on it, thanks.
I think the real failure of AI was not incorporating the field of statistics (until as late as the 80s-90s).
String theory, most likely.
Keynesian economics. As I understand it, the theoretical framework used by neo-Keynesians has been adjusted so much to account for what’s happened since World War 2 that it bears little resemblance to the original theory. A subset of this is the fate of the Phillips curve which economists were trying to hold onto into the 1980s.
Dinosaurs were all cold-blooded.
Economists are still trying to hold on to the Phillips curve today. In the UK it is still taught in high-school economics, and I believe it is also taught at Cambridge. I recall it being used in a paper by one of the branches of the US Federal Reserve, but I don’t have the reference to hand.
I would draw a slightly different lesson from the many adjustments to original Keynesianism. It may be that New Keynesians and the Post Keynesians have some merit, but the original thing is so terrible that it’s like chemists calling themselves New Alchemists or Post Alchemists.
I don’t have much interest in reading, say, Plato or Adam Smith except for historical interest.
Adam Smith is still surprisingly relevant (and has nothing to do with Keynes). Modern-day protectionists often make the same sort of arguments that the mercantilists he was arguing against made back then. Many of his insights into how the market works haven’t really been improved on since. And Smith was an honest debater so even in the places where he was wrong you can still learn something from following his reasoning process.
(I’m told David Ricardo is even better in this regard but haven’t yet gotten around to reading him.)
All this talk about consensus re: global warming seems silly. Isn’t the whole point of science that it makes predictions we can verify? Why are we on this side-show instead of dissecting the data?
The most compelling anti-GW argument I’ve seen is this chart show predictions against actual measurements (spoilers: they don’t match at all). Maybe the chart is wrong, or lying, or something. I assume there is some response, and I would like to hear it.
(The other complication: I assume Nat Geo isn’t lying when they say that global warming seems to have paused for the past 15 years. If I were to say “there hasn’t been GW for a decade” is the 97% with or against me?)
The chart is, in fact, absolutely correct. Our best measurements indicate that the 21st century has seen remarkably little warming. Geoscientists are broadly aware of this, duly confused by it, and discussing it in a number of public and semi-public venues.
Naturally, there are some calls for better instruments or methodology, basically people not trusting their eyes. There’s also discussion of ways to improve the models based on new data- i.e. some good thoughts about feedback cycles involving increased evaporation rates.
The basic mechanism of ‘CO2 converts photons to heat’ isn’t under dispute, but temperatures on the decade scale are clearly doing things we did not expect them to. This may well revise downward our estimation for civilization-level impacts of carbon emissions.
Roy Spencer has his critics: http://rationalwiki.org/wiki/Roy_Spencer
http://www.realclimate.org/index.php/archives/2011/04/review-of-spencers-great-global-warming-blunder/
Just glancing at the chart briefly, I see they cherry-picked the years. Their claims are true only if you start in 1983, and stop in 2011, 2012, or 2013. Climate change has been slower than expected for the past 15 years. If you have a model to predict climate change over the next century, you should expect it will be slower than predicted in some decades, and faster in others.
“The last 20 years” doesn’t really seem like cherry-picking to me. Triply so for any of those models which are younger than 20 years old, and therefore can’t be tested for predictiveness against earlier dates.
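The disagreement over start years is easy to demonstrate numerically. The sketch below uses entirely synthetic data (a fixed linear trend plus a sinusoidal wobble, standing in for natural variability); it is not real temperature data and makes no claim about which side of the chart dispute is right. It only shows that a least-squares trend fit over a short recent window can look very different from the trend over the full record, even when the underlying trend never changes:

```python
import math

# SYNTHETIC series: fixed 0.015/yr trend plus a wobble with ~19-year period.
years = list(range(1980, 2014))
temps = [0.015 * (y - 1980) + 0.1 * math.sin((y - 1980) / 3.0) for y in years]

def trend_per_decade(ys, ts):
    """Ordinary least-squares slope, scaled to units per decade."""
    n = len(ys)
    my, mt = sum(ys) / n, sum(ts) / n
    slope = sum((y - my) * (t - mt) for y, t in zip(ys, ts)) / \
            sum((y - my) ** 2 for y in ys)
    return 10 * slope

# Fit over the whole record vs. only the last 15 years: the short window
# lands on the downswing of the wobble and shows a much weaker trend.
full = trend_per_decade(years, temps)
recent = trend_per_decade(years[-15:], temps[-15:])
print(full, recent)
```

With these particular synthetic numbers the 15-year window reports a trend several times smaller than the full-record one, which is why "slower than expected for the past 15 years" and "the models are falsified" are not the same claim: the window length matters as much as the data.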
The Steady State/Continuous Creation theory of the universe (now replaced by the Big Bang).
I suppose you could argue that this doesn’t satisfy 3, because the Big Bang hadn’t been proposed, but there were certainly competing ideas about the creation of the universe (in this case, the idea that the universe was created at all, rather than being eternal).
Steady-State/Continuous-Creation was accepted by some for a while after Big Bang had been proposed, but once the microwave background was discovered, pretty much only Fred Hoyle still believed SS/CC. (I may be oversimplifying this.)
Sure, it was eventually defeated, which is why we now “know” that it was wrong. But why was it even the dominant theory in the first place? Geocentrism could at least claim to better fit the evidence back in its heyday.
Because something had to explain redshift, and a big bang leads to theological questions.
One of the great debates in earth science involves J Harlen Bretz’s theory on the formation of the Scablands. Geologists at the time were predisposed to gradualism and vehemently opposed Bretz’s ideas which involved rapid changes caused by giant floods.
http://en.wikipedia.org/wiki/Channeled_Scablands
Not really following rule 3 and 4, but a post by you on the pros and cons of blood letting, as in the 18th century medical practice, would be really interesting.
Plate tectonics?
Plate tectonics seems to fail “very well known”.
By the way, the whole id-ego-superego thing that people think Freud invented was actually the old model of how humans work dressed up to look sciency. It was called appetites, passions, and reason. Animals had appetites, angels had reason, but only in man were appetites and reason combined to form passions.
I love the approach you are taking, and you are probably right about the object-level conclusion, but I think the actual reasoning you present is wrong.
There are two kinds of fields: those which are legitimate science and those which aren’t. You acknowledge this when you write, “a cynic might say if we include psychology we might as well go all the way and include economics, sociology, and anthropology, raising our error count to over nine thousand”, but despite that you seem to take for granted that both climate science and psychology belong in the same cluster with physics and chemistry and not with economics and sociology. Which is interesting, because I’ve always taken for granted that psychology is in the latter group. (I think Richard Feynman gives psychology as an example of bogus science in The Feynman Lectures on Physics.) Based on nothing in particular I personally feel that climate science seems much more like real science than psychology does, but, it also seems to me that the real point of contention between the pro- and anti-AGW crowds is precisely which group climate science belongs to.
In other words, there are the following possibilities, ordered in decreasing probability of occurring for a randomly chosen pair of (field, claim that field makes).
1. Non-science is wrong.
2. Non-science is right. Even a blind squirrel finds a nut once in a while.
3. Science is right.
4. Science is wrong because the scientists were negligent in following their duty, perhaps because of motivated cognition, e.g., the already quoted N rays.
5. Science is wrong despite the fact that the scientists did everything right. For example, most physicists expected supersymmetry to be true, before the LHC failed to find the supersymmetric particles. I’m not exactly sure what the current status is, but let’s say we build an even bigger collider and there are still no superpartners; then the physicists who believed in supersymmetry, i.e. almost all of them, were wrong for no fault of theirs.
Your article is written as if the debate was between 3. and 4. I think in reality the disagreement is between 1. and 3.
There are two kinds of fields: those which are legitimate science and those which aren’t.
I don’t think this is a helpful dichotomy (at least, not for the purposes of this conversation). There is good reasoning and there is bad reasoning, and there are methods of inquiry which are truth-conducive and others which are not. Some academic fields contain more of the good and less of the bad, and some contain more of the bad and less of the good. But there’s no clear cut-off from “science” to “non-science.” There are certain family resemblances among fields commonly called scientific that are not shared with other legitimate academic fields — e.g., repeatable controlled experiments are present in many “sciences” but absent in, say, philosophy and history (both legitimate academic fields). But even these are not universal (for example, we can’t run a controlled experiment for historical evolutionary theories, but these theories are still considered scientific).
Although this is just armchair reflection, I suspect that what we find when we look at matters this way is that various factors correlate with a field being more prone to failures of the kind Scott discusses: e.g., lack of experimental controls, political incentives favoring certain theories, dominance of the field by powerful personalities, etc. Very roughly, the “hard sciences” will tend to do better than “soft sciences,” but I think this is oversimplifying in some cases: e.g., the dominance of the Copenhagen interpretation of QM seems to me to satisfy Scott’s (1)-(3) above (though of course not (4) — it’s too recent), but physics is as “hard science” as it gets.
It does seem to me fairly common in academia in general that academicians accept various dubious theories as fact because that is the current “paradigm” of their field, even if the original arguments/evidence in support of those theories are in fact quite poor.
What all this says about global warming I don’t know. I’ve no doubt that a lot of contemporary research on both sides of the issue is politically motivated, but it’s not as clear to me why climate scientists would have converged on this particular paradigm if there were not initial evidence — that the earth is warming is not the kind of claim that a priori fits better into the progressive/politically correct worldview than the conservative worldview, even if (in the United States, at least), it’s associated with progressivism. In other words, I can understand how the paradigm could be maintained by political motives, but not how it would have originated from political motives. Of course, it might have originated for non-political but non-truth-related reasons too. Any skeptics care to enlighten me?
I’m also inclined away from global warming skepticism because I know of a couple high-profile scientists who I respect who were initially skeptical and who then came to more or less embrace climate orthodoxy, and their shifts did not seem to me to be the result of political pressure.
You need to be more clear about what you’re calling “psychology”. I suspect you’re talking about psychiatric therapy.
“In other words, there are the following possibilities, ordered in decreasing probability of occurring for a randomly chosen pair of (field, claim that field makes).
1. Non-science is wrong.
2. Non-science is right. Even a blind squirrel finds a nut once in a while.
3. Science is right.”
I think there might be a mistake in the order there. Science is right less often than non-science?
This was intentional, since there is so much more non-science than science. Kind of like, a professor of history is more likely to know the name of every US president than a member of the general population, but if you hear a random person saying that the fifteenth president was James Buchanan, it’s still probably not a professor of history.
Possibly right, but then the $64,000 question becomes whether we can determine if a field is real science or fake science beforehand. Like, psychoanalysis probably wasn’t, but it sure looked like it at the time.
I agree it is a question of paramount importance. Unfortunately also a hard one, since non-sciences like masquerading as science. Various criteria have been proposed but none seems to work as well as one would like in all cases.
The two most common criteria are falsifiable predictions and reproducibility. If you have both, then you’re probably on solid ground. But if you don’t, that doesn’t necessarily mean that you’re a crook.
Psychoanalysis was lacking either, and I actually think it was possible to call. Feynman did. I’m now looking at the text which I wasn’t when I wrote the previous comment, and he was talking about psychoanalysis specifically, not all of psychology: “Next, we consider the science of psychology. Incidentally, psychoanalysis is not a science: it is at best a medical process, and perhaps even more like witch-doctoring. It has a theory as to what causes disease—lots of different “spirits,” etc. The witch doctor has a theory that a disease like malaria is caused by a spirit which comes into the air; it is not cured by shaking a snake over it, but quinine does help malaria. So, if you are sick, I would advise that you go to the witch doctor because he is the man in the tribe who knows the most about the disease; on the other hand, his knowledge is not science. Psychoanalysis has not been checked carefully by experiment, and there is no way to find a list of the number of cases in which it works, the number of cases in which it does not work, etc.”
Freud was the motivating example for Popper (1934).
>Freud was the motivating example for Popper (1934).
Yes, one of them. And while it isn’t science, I will one day write an account of why the wholesale rejection and ridicule of psychoanalysis is a mistake that gets at a root weakness of the rationalist project.
Wikipedia has a list of superseded scientific theories. Obviously most of these will not fit into your reference class, and only one of the examples you gave is actually listed there so the overlap seems poor, but it still might be worth skimming through it to see if any fit.
http://en.wikipedia.org/wiki/Superseded_scientific_theories
There are also lists of discredited substances and topics characterized as pseudoscience.
http://en.wikipedia.org/wiki/List_of_discredited_substances
http://en.wikipedia.org/wiki/List_of_topics_characterized_as_pseudoscience
There’s far more politics in climate science than there is in most science questions. As others have pointed out, your reference class ought to take that into account.
Believing that AGW is happening at a non-zero level is not the same thing as trusting in climate modeling predictions, much less the full slate of climate change activism. Maybe this doesn’t even need to be pointed out around here, but it certainly isn’t a distinction recognized very well on the rest of the internet.
Another important point that often gets glossed over is that global warming is not the only (big) reason to be concerned about the environment. Even if global warming claims are exaggerated, it’s still incontrovertible that humans are damaging the environment in very significant ways, and in my opinion these provide just as much reason to be environmentalist as global warming does. Of course this doesn’t address the issue of how to best deal with pollution, overfishing, deforestation, etc. — and one can think these are problems without thinking that government regulations are the best ways to fix them — but the failure to draw a distinction between the existence of a problem and the existence of a particular solution is common to many political issues, not just global warming.
I think eugenics fits the bill. It was taken seriously for twenty some years (and longer in some places) and was the basis for various policies before becoming discredited.
That’s like saying the theory that there are two sexes has been discredited because sexism is bad. Eugenics can’t be discredited as a scientific theory. Plant and animal breeding is eugenics.
I would say that eugenics has been discredited, and this reflects poorly on the ability of science to resist political pressure.
Eugenics isn’t a scientific theory; it’s a political program. It’s in the same class as cap-and-trade, not AGW.
Perhaps “science” should’ve thought of this back when the wind blew the other way, and secured a better reputation! Had Western society had more advanced bioethics than it deserved and ended up with, perhaps eugenics would’ve been associated with relatively innocent things like Galton’s proposal to pay individuals from distinguished families for intermarrying.
Personally, I would lean very slightly towards biodeterminism, but I welcome today’s climate of outrage about “eugenics” as a whole, as a fitting and humbling reminder to entitled upper-class technocrats of how horrible it could be to end up on the wrong end of their ministrations. (It is particularly relevant that here and now, on the internet, many reactionaries are so insanely and openly sadistic against single mothers in the name of breeding them out.)
P.S. The whole gleefully sadistic and entitled character of such discourse routinely manages to corrupt even the discussion of biodeterminism on its own, doing away with the complex and confounded realities – such as, for example, the rationality of poor people in unpredictable environments acquiring a higher “time preference”. Some things make perfect sense from the inside, but not to the ostensibly dispassionate eye of the technocratic observer.
I agree with everyone else who is saying that this comment is probably confusing a change in morals with a scientific disproof.
However, it’s worth noting that eugenics as previously practiced is a terrible idea even if you agree with its moral goals. There was a very interesting paper on the effect of Nazi eugenics – the Nazis killed almost all the German schizophrenics in the Holocaust. The result: Germany has exactly the same level of schizophrenia as everywhere else today – which makes a good deal of sense as most schizophrenics are not themselves the children of schizophrenics. Several other Nazi eugenics programs had approximately the same lack of effect.
This makes me think that any early 20th-century eugenicists who had gotten the opportunity to put their ideas into practice on a large scale would have been unsuccessful.
One can argue this doesn’t make it a pseudoscience or failed field any more than the fact that we don’t have fusion yet makes fusion research a pseudoscience or failed field, but it seems to me like the people back then were pretty confident they had an engineering solution and weren’t going into it in a spirit of investigating what would work.
A possible failure: opposition to cryonics. It is certainly recent and important; people and researchers have stubbornly ignored it for decades because it’s weird, even though cryoprotectants have been shown to maybe preserve brain information. Yet because current politics is anti-transhumanist, cryonics remains niche.
Other obvious transhumanist goals, like developing heart transplants quick and effective enough to save everyone who has a heart attack, fall into engineering and border on general Society Being Irrational plus funding problems.
Cryonics is not merely ignored, but banned from science.
Assuming you mean artificial hearts as opposed to improving allocation of the hearts of dead people, there is this: http://en.wikipedia.org/wiki/Ventricular_assist_device
I submit the dietary theory that ‘too much’ salt causes hypertension and related ailments, where ‘too much’ includes what many people in first world societies consume.
One could reasonably argue it doesn’t meet criterion [2], but then I’m not so sure AGW does either. I wouldn’t consider either to be “one of the fundamental paradigms of an entire field”, but both are or were incredibly prominent, and largely unassailable (for decades).
I know I’m late to the game here, but I feel it’s worth pointing out that there is, today, a reasonably large, fairly vocal group of modern behaviorists. There are at least two blogs written by proponents of what I’m thinking of here (Gibsonian ecological psychology), though the writers at one might not think of themselves as behaviorists, per se.
Here is a post from the maybe-not-explicitly-behaviorist blog, and here is an explicitly behaviorist (and recent) post from the other blog (see here also).
>A cynic might say if we include psychology we might as well go all the way and include economics, sociology, and anthropology, raising our error count to over nine thousand.
Yeah, but that’s the whole point. Economics, sociology, and anthropology all have one thing in common: they deal with multivariate systems that are hard to observe and experiment on, and where events can span years or decades.
Science is always right, by definition, but scientists are people, and people aren’t right by definition. And like all people they respond to incentives, which include both “not looking like a clown to my colleagues” and “getting a research grant”. And really, what research grant could you possibly get by saying “there is not enough reliable data to verify hypotheses that look very good on paper, let’s revisit the issue in 50 years”? None, because you would be arguing for dismantling your own field of study.
Political reasons as well as financial reasons force some physicists and engineers to proclaim unreasonable postulates. Note that I do not use the term THEORY – we understand the difference between theory and postulate. Those in the know keep entropy and relativity and dynamics and philosophy in their writings, especially those in peer-reviewed papers. I rarely see anything in Physics Today (a popularization) that postulates any crazy shit. Maybe in this non-peer-reviewed toss magazine they don’t want to get a bad name. I hate to quote a recent paper: most carbon dioxide produced on the planet Earth comes from the soil (a definite political faux pas, yet Physics Today still published it). Did PT mean to debunk it, or what? Are they being cautious? Or do they just like to publish a good read?
Sorry for my spelling; I don’t handle English too well.
Another example: For a long time biologists believed that animal cells can replicate indefinitely. Leonard Hayflick eventually proved that normal cells don’t replicate indefinitely. (The cell lines in early studies were probably cancerous.)
My contribution to falsified ‘settled science’ is Sean Carroll’s endorsement of no interbreeding between Neanderthals and H. sapiens as “demonstrated conclusively…in one of the really great contributions of genetics” http://books.google.com/books?id=-SqwP8CLdIsC&pg=PA261&lpg=PA261&dq=endless+forms+most+beautiful+sean+carroll+neanderthal&source=bl&ots=za2Kt2GNLA&sig=DZ7m_CSmVRDYrl5I0JL0mr5xTo8&hl=en&sa=X&ei=t_r_U5qYNoWUgwSZmoC4Dg&ved=0CB4Q6AEwAA#v=onepage&q=endless%20forms%20most%20beautiful%20sean%20carroll%20neanderthal&f=false
Of course, the book on human genetics was published in 2006, and I laughed going into it: how many claims in here will be known, even to an amateur, to be false by 2014? All that aside, let’s get to AGW – another area where many will be eating crow in ten years’ time as observations emerge (although I can’t tell you who).
Unlike any(?) example here, AGW is mainly a quantitative forecast, not the discovery of a quality of the way things came to be (the human gene pool) or the way things are (the Earth goes around the sun). This particular forecast is made in terms of decades to centuries into the future. It concerns a number: will global temperature rise 0.5°C, 1°C, or 5°C as we double CO2 by 2050? And, as others have noted, the system under study is quite complicated – some say chaotic. And it involves extrapolation: how does the climate behave when it is far outside any conditions we have ever observed?
To be as brief as possible: we leave the world of simple physics behind after the well-agreed-upon observation that doubling CO2 gives roughly 4 W/m² of forcing at the top of the atmosphere, which, multiplied by the Planck response of about 0.25°C per W/m², yields about +1.0°C of warming. But most alarmists propose we should expect far greater warming than 1°C, and that comes from internal positive feedback in Earth’s climate: instead of 1°C, many alarmists see it being multiplied by a factor of 2, 3, or 5. The provenance of that multiplier is solely the domain of computer models trying to simulate climate.
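The arithmetic in that paragraph can be laid out explicitly. A minimal sketch, using the commenter’s round figures (4 W/m² and 0.25°C per W/m² are the values as stated above, not authoritative numbers), where a net feedback fraction f amplifies the no-feedback warming by 1/(1 − f):

```python
# The comment's no-feedback arithmetic, plus the feedback multiplier
# behind the higher estimates. Numbers are the comment's round figures.
forcing = 4.0   # W/m^2 from doubling CO2 (as stated in the comment)
planck = 0.25   # degrees C per W/m^2, Planck-only response
base_warming = planck * forcing  # no-feedback warming, degrees C

# With net feedback fraction f, total warming = base / (1 - f).
# f = 0.5, 2/3, 0.8 reproduce the 2x, 3x, 5x multipliers mentioned.
for f in (0.0, 0.5, 2 / 3, 0.8):
    print(f"feedback f={f:.2f}: warming = {base_warming / (1 - f):.1f} C")
```

This makes the commenter’s point concrete: the contested quantity is not the 1°C baseline but the feedback fraction f, which is estimated from model output rather than derived from first principles.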
The point is: severe (high-positive-feedback – the kind that keeps us up at night) AGW is not like any other science I’ve seen mentioned in the post or comments. It is not experimentally driven and not derived from any formula; it is an output parameter of a computer simulation. The question to ask is not “When has science been wrong?” but “When has a simulation (100 years into the future) ever been right?”