The General Factor Of Correctness

People on Tumblr are discussing Eliezer Yudkowsky’s old essay The Correct Contrarian Cluster, and my interpretation was different enough that I thought it might be worth spelling out. So here it is: is there a General Factor of Correctness?

Remember, IQ is supposed to come from a General Factor Of Intelligence. If you make people take a lot of different tests of a lot of different types, people who do well on one type will do well on other types more often than chance. You can do this with other things too, like make a General Factor Of Social Development. If you’re really cool, you can even correlate the General Factor of Intelligence and the General Factor of Social Development together.

A General Factor Of Correctness would mean that if you asked people’s opinions on a bunch of controversial questions, like “Would increasing the minimum wage to $15 worsen unemployment?” or “Which interpretation of quantum mechanics is correct?” or “Are artificial sweeteners safe?” and then somehow discovered the answers to these questions, people who did well on one such question would do well on the others more often than chance.
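Here is a minimal sketch of what that claim looks like statistically (simulated data, purely illustrative; the latent trait and the numbers are assumptions, not findings). If one underlying factor drives answers to many questions, the questions correlate with one another and most of the variance loads onto a single component:

```python
# Toy sketch only: rows are people, columns are controversial questions scored
# 1 (right) / 0 (wrong).  The data are simulated under the assumption that a single
# latent "correctness" trait drives every answer; with real data you would load
# survey scores here instead.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_questions = 1000, 8
correctness = rng.normal(size=n_people)          # hypothetical latent trait
difficulty = rng.normal(size=n_questions)        # per-question difficulty offsets
p_right = 1 / (1 + np.exp(-(correctness[:, None] - difficulty[None, :])))
scores = (rng.random((n_people, n_questions)) < p_right).astype(float)

corr = np.corrcoef(scores, rowvar=False)         # question-by-question correlations
eigvals = np.linalg.eigvalsh(corr)[::-1]
off_diag = (corr.sum() - n_questions) / (n_questions * (n_questions - 1))
print(f"mean inter-question correlation: {off_diag:.2f}")
print(f"variance on the first component: {eigvals[0] / eigvals.sum():.2f}")
```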

This is a surprisingly deep and controversial issue, but one with potentially big payoffs. Suppose you want to know whose economic theories are right, but you don’t want to take the time to learn economics. Take some position that was once considered fringe and bizarre, but is now known to be likely true – for example, pre-Clovis settlement of the New World. Find the economists who believed in pre-Clovis settlement of the New World back when doing so was unpopular. Those economists have a proven track record of winnowing out correct ideas amidst a sea of uncertainty. Invest in whatever company they tell you to invest in and make a killing.

I’m sort of joking, but also sort of serious – shouldn’t something like this work? If there’s such a thing as reasoning ability, people who are good at sifting through a mess of competing claims about pre-Columbian anthropology and turning up the truth should be able to apply that same skill to sifting through a mess of competing claims about economic data. Right?

If this is true, we can gain new insight into all of our conundra just by seeing who believes what about New World migration. That sounds useful. The problem is, to identify it we have to separate it out from a lot of closely related concepts.

The first problem: if you just mark who’s right and wrong about each controversial issue, the General Factor Of Correctness will end up looking a lot like a General Factor of Agreeing With Expert Consensus. The current best-known heuristic is “always agree with expert consensus on everything”; people who follow this heuristic all the time are most likely to do well, but we learn nothing whatsoever from their success. If I can get brilliant-economist-points for saying things like “black holes exist” or “9-11 was not a government conspiracy”, then that just makes a mockery of the whole system. Indeed, our whole point in this exercise is to see if we can improve on the “agree with experts” heuristic.

We could get more interesting results by analyzing only people’s deviations from expert consensus. If you agree with the consensus about everything, you don’t get to play. If you disagree with the consensus about some things, then you get positive points when you’re right and negative points when you’re wrong. If someone consistently ends up with a positive score beyond what we would expect by chance, then they’re the equivalent of the economist who was surprisingly prescient about pre-Clovis migration – a person who’s demonstrating a special ability that allows them to outperform experts. This is why Eliezer very reasonably talks about a correct contrarian cluster instead of a correct cluster in general. We already know who the correct cluster is, and all of you saying “I have no idea what Clovis is, but whatever leading anthropologists think, I think that too” are in it. So what? So nothing.
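As a toy illustration of that scoring rule (the question names and the consensus/truth values are invented for the example), consensus answers are skipped entirely, and only contrarian calls earn or lose points:

```python
# Illustrative only: score "correct contrarianism" so that pure consensus-following
# earns nothing.  Question names and the consensus/truth values are hypothetical.
def contrarian_score(answers, consensus, truth):
    """+1 for each contrarian call that turned out right, -1 for each that turned
    out wrong; questions where the person simply echoed consensus are skipped."""
    score, n_contrarian = 0, 0
    for q in truth:
        if answers[q] == consensus[q]:
            continue                      # agreeing with the experts earns no credit
        n_contrarian += 1
        score += 1 if answers[q] == truth[q] else -1
    return score, n_contrarian

answers   = {"pre_clovis": True,  "cold_fusion": True,  "black_holes": True}
consensus = {"pre_clovis": False, "cold_fusion": False, "black_holes": True}
truth     = {"pre_clovis": True,  "cold_fusion": False, "black_holes": True}
print(contrarian_score(answers, consensus, truth))   # (0, 2): one good call, one bad one
```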

The second problem: are you just going to rediscover some factor we already know about, like IQ or general-well-educatedness? I’m not sure. When I brought this up on Tumblr, people were quick to point out examples of very intelligent, very well-educated people believing stupid things – for example, Newton’s obsession with alchemy and Biblical prophecy, or Linus Pauling’s belief that you could solve health just by making everyone take crazy amounts of Vitamin C. These points are well-taken, but I can’t help wondering if there’s selection bias in bringing them up. Yes, some smart people believe stupid things, but maybe even more stupid people do? By analogy, many people who are brilliant at math are terrible at language, and we can all think of salient examples, but psychometrics has shown again and again that in general math and language skills are correlated.

If we look for more general data, we get inconsistent results. Neither IQ nor educational attainment seems to affect whether you believe in climate change very much, though you can get slightly different results depending on how you ask and what you adjust for. There seems to be a stronger effect of intelligence increasing comfort with nuclear power. Other polls show IQ may increase atheism, non-racism, and a complicated cluster of political views possibly corresponding to libertarianism but also showing up as “liberalism” or “conservatism” depending on how you define your constructs and which aspects of politics you focus on. I am very suspicious about any of this reflecting real improved decision-making capacity as opposed to just attempts to signal intelligence in various ways.

The third problem: can we differentiate positive from negative selection? There are lots of people who believe in Bigfoot and ESP and astrology. I suspect these people will be worse at other things, including predicting economic trends, predicting world events, and being on the right side of difficult scientific controversies, probably in a way independent of IQ or education. I’m not sure of this. But I suspect it. If I’m right, then the data will show a General Factor of Correctness, but it won’t necessarily be a very interesting one. To give a reductio ad absurdum, if you have some mental disorder that causes you to live in a completely delusional fantasy world, you will have incorrect opinions about everything at once, which looks highly correlated, but this doesn’t necessarily prove that there are correlations among the people who are more correct than average.

The fourth problem: is there a difference between correctness and probability calibration? Suppose that Alice says that there’s a 90% chance the Greek economy will implode, and Bob has the same information but says there’s only an 80% chance. Here it might be tempting to say that one of either Alice or Bob is miscalibrated – either Alice is overconfident or Bob is underconfident. But suppose Alice says that there’s a 90% chance the Greek economy will implode, and Bob has the same information but says there’s only a 10% chance that it will. Now we’re more likely to interpret this in terms of them just disagreeing. But I don’t know enough about probability theory to put my finger on whether there’s a true qualitative difference.

This is important because we know calibration is a real thing and some people are good at it and other people aren’t but can improve with practice. If all we’re showing is that people who are good with probabilities are good with probabilities, then whatever.
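Forecasting research offers one standard way to pull these apart (this is a gloss added for clarity, not an argument from the original post): Murphy’s decomposition of the Brier score. Write $f_i$ for each stated probability and $o_i \in \{0,1\}$ for the outcome, group the forecasts into bins $k$ of size $n_k$, and let $\bar{o}_k$ be the observed frequency inside bin $k$ and $\bar{o}$ the overall base rate. Then

$$
\frac{1}{N}\sum_{i=1}^{N}(f_i-o_i)^2
=\underbrace{\frac{1}{N}\sum_{k} n_k\,(f_k-\bar{o}_k)^2}_{\text{reliability (calibration)}}
-\underbrace{\frac{1}{N}\sum_{k} n_k\,(\bar{o}_k-\bar{o})^2}_{\text{resolution}}
+\underbrace{\bar{o}\,(1-\bar{o})}_{\text{uncertainty}}.
$$

Calibration proper is only the reliability term; Alice and Bob can both drive it to zero and still differ a lot in resolution, which is the part that looks more like actually knowing which way events will go.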

But there are tantalizing signs that there might be something more here. I was involved in an unpublished study which I can’t upload because I don’t have the other authors’ permission, but which showed conclusively that people with poor calibration are more likely to believe in the paranormal (p < 0.001), even when belief in the paranormal was not assessed as a calibration question. So I went through the Less Wrong Survey data, made up a very ad hoc measure of total calibration skill, and checked to see what it did and didn’t predict. Calibration was correlated with IQ (0.14, p = 0.01). But it was also correlated with higher belief in global warming (0.13, p = 0.01), with lower belief in near-term global catastrophic risk (-0.08, p = 0.01), with increased support for immigration (0.06, p = 0.048), and with opposition to the human biodiversity movement (0.1, p = 0.002). These were all independent of the IQ correlation. Notably, although warming and GCR were asked in the form of probabilities, immigration and HBD weren’t, suggesting that calibration can be (weakly) correlated with opinions on a non-calibration task.

Maybe the most intriguing evidence for a full-fledged General Factor of Correctness comes from Philip Tetlock and IARPA’s Good Judgment Project, which recruited a few thousand ordinary people and asked them to predict the probability of important international events like “North Korea launches a new kind of missile.” They found that the same small group of people consistently outperformed everyone else in a way incompatible with chance. These people were not necessarily very well-educated and didn’t have much domain-specific knowledge in international relations – the one profiled on NPR was a pharmacist who said she “didn’t know a lot about international affairs [and] hadn’t taken much math in school” – but they were reportedly able to outperform professional CIA analysts armed with extra classified information by as much as 30%.

These people aren’t succeeding because they parrot the experts, they’re not succeeding because they have more IQ or education, and they’re not succeeding in some kind of trivial way like rejecting things that will never happen. Although the article doesn’t specify, I think they’re doing something more than just being well-calibrated. They seem to be succeeding through some mysterious quality totally separate from all of these things.

But only on questions about international affairs. What I’d love to see next is what happens when you ask these same people to predict sports games, industry trends, the mean global temperature in 2030, or what the next space probe will find. If they can beat the experts in those fields, then I start really wondering what their position on the tax rate is and who they’re going to vote for for President.
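One crude way to check for that kind of consistency with forecast data (a sketch with simulated numbers, not the actual Good Judgment Project data) is to score each forecaster on two disjoint halves of the questions and see whether the two scores correlate; persistent skill shows up as a positive split-half correlation, while luck alone gives roughly zero. The same trick would work across domains: score the international-affairs questions and the sports questions separately and correlate.

```python
# Sketch with simulated data (not the GJP dataset): do the same people keep winning?
# Score each forecaster's Brier score on two disjoint halves of the questions and
# correlate.  Skill that persists across halves (or across domains) shows up as a
# positive correlation; pure luck shows up as roughly zero.
import numpy as np

rng = np.random.default_rng(1)
n_forecasters, n_questions = 200, 100
skill = rng.uniform(0.0, 0.2, size=n_forecasters)           # hypothetical latent skill
outcomes = rng.integers(0, 2, size=n_questions)
noise = 0.3 * (rng.random((n_forecasters, n_questions)) - 0.5)
# better forecasters lean further toward the true outcome; everyone gets some noise
forecasts = np.clip(np.where(outcomes[None, :] == 1,
                             0.5 + skill[:, None],
                             0.5 - skill[:, None]) + noise, 0.01, 0.99)
brier = (forecasts - outcomes[None, :]) ** 2                 # lower is better
half_a = brier[:, : n_questions // 2].mean(axis=1)
half_b = brier[:, n_questions // 2 :].mean(axis=1)
print(f"split-half correlation of forecaster scores: {np.corrcoef(half_a, half_b)[0, 1]:.2f}")
```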

Why am I going so into depth about an LW post from five years ago? I think in a sense this is the center of the entire rationalist project. If ability to evaluate evidence and come to accurate conclusions across a broad range of fields relies on some skill other than brute-forcing it with domain knowledge and IQ, some skill that looks like “rationality” broadly defined, then cultivating that skill starts to look like a pretty good idea.

Enrico Fermi said he was fascinated by the question of extraterrestrial life because whether it existed or it didn’t, either way was astounding. Maybe a paradox, but the same paradox seems true of the General Factor of Correctness.

Outside the Laboratory is a post about why the negative proposition – no such General Factor – should be astounding:

“Outside the laboratory, scientists are no wiser than anyone else.” Sometimes this proverb is spoken by scientists, humbly, sadly, to remind themselves of their own fallibility. Sometimes this proverb is said for rather less praiseworthy reasons, to devalue unwanted expert advice. Is the proverb true? Probably not in an absolute sense. It seems much too pessimistic to say that scientists are literally no wiser than average, that there is literally zero correlation.

But the proverb does appear true to some degree, and I propose that we should be very disturbed by this fact. We should not sigh, and shake our heads sadly. Rather we should sit bolt upright in alarm. Why? Well, suppose that an apprentice shepherd is laboriously trained to count sheep, as they pass in and out of a fold. Thus the shepherd knows when all the sheep have left, and when all the sheep have returned. Then you give the shepherd a few apples, and say: “How many apples?” But the shepherd stares at you blankly, because they weren’t trained to count apples – just sheep. You would probably suspect that the shepherd didn’t understand counting very well.

If, outside of their specialist field, some particular scientist is just as susceptible as anyone else to wacky ideas, then they probably never did understand why the scientific rules work. Maybe they can parrot back a bit of Popperian falsificationism; but they don’t understand on a deep level, the algebraic level of probability theory, the causal level of cognition-as-machinery. They’ve been trained to behave a certain way in the laboratory, but they don’t like to be constrained by evidence; when they go home, they take off the lab coat and relax with some comfortable nonsense. And yes, that does make me wonder if I can trust that scientist’s opinions even in their own field – especially when it comes to any controversial issue, any open question, anything that isn’t already nailed down by massive evidence and social convention.

Maybe we can beat the proverb – be rational in our personal lives, not just our professional lives.

And Correct Contrarian Cluster is about why the positive proposition should be equally astounding. If it’s true, you can gain a small but nonzero amount of information about the best economic theories by seeing what their originators predicted about migration patterns in pre-Columbian America. And you can try grinding your Correctness stat to improve your ability to make decisions in every domain of knowledge simultaneously.

I find research into intelligence more interesting than research into other things because improvements in intelligence can be leveraged to produce improvements in everything else. Research into correctness is one of the rare other fields that shares this quality, and I’m glad there are people like Tetlock working on it.

Discussion questions (adapted from Tumblr):

1. Five Thirty Eight is down the night before an election, so you search for some other good sites that interpret the polls. You find two. Both seem to be by amateurs, but both are well-designed and professional-looking and talk intelligently about things like sampling bias and such. The first site says the Blue Party will win by 5%; the second site says the Green Party will win by 5%. You look up the authors of the two sites, and find that the guy who wrote the first is a Young Earth Creationist. Do you have any opinion on who is going to win the election?

2. On the bus one day, you sit next to a strange man who mumbles about how Bigfoot caused 9-11 and the Ark of the Covenant is buried underneath EPCOT Center. You dismiss him and never see him again. A year later, you see on TV that new evidence confirms Bigfoot caused 9-11. Should you head to Florida and start digging?

3. Schmoeism and Anti-Schmoeism are two complicated and mutually exclusive economic theories that you don’t understand at all, but you know the economics profession is split about 50-50 between them. In 2005, a survey finds that 66% of Schmoeist economists and 33% of anti-Schmoeist economists believe in pre-Clovis settlement of the New World (p = 0.01). In 2015, new archaeological finds convincingly establish that such settlement existed. How strongly (if at all) do you now favor one theory over the other?

4. As with 3, but instead of asking only about pre-Clovis settlement of America, the survey asked about ten controversial questions in archaeology, anthropology, and historical scholarship, and the Schmoeists did significantly better than the anti-Schmoeists on 9 of them.


391 Responses to The General Factor Of Correctness

  1. Lightman says:

    One problem that occurs to me:

    Sometimes we are more justified in false beliefs than we are in true beliefs. It might have been the case, given the state of the evidence in say 2010, that it was more rational to deny the existence of pre-Clovis settlement of the Americas. Belief in pre-Clovis settlement of the Americas circa 2010 would then just represent a lucky guess – such people might in fact be worse at making predictions, in that they rejected the view that was better supported by the evidence.

    • Jiro says:

      If you know nothing about why they hold the belief, just the fact that it turns out to be true increases the probability that they got to it by evidence. They could also have arrived at it by luck, but generally, true beliefs are more likely to have been arrived at by evidence and less likely to have been arrived at by luck than false beliefs.

      You need to be a little subtler in making the luck objection. For instance, luck combined with a common source of belief can increase the variance – if each group is 50% right, you’d ignore them, but if their decisions have a common cause beyond just better reasoning ability, there’s a 50% chance that *all* of a group are right and a 50% chance that *none* of them are.

      • Bugmaster says:

        I don’t think it’s just “luck”, necessarily, but rather selection bias. There’s a common scam known as the “reverse pyramid” that utilizes the same principle.

        You call 1000 people, and tell them that you can infallibly predict whether a stock will rise or fall over some short term. You then give them a prediction, for free; but you tell half of them that the stock price will rise, and then tell the other half that it will fall. Then you talk to the 500 people whose prediction ended up being correct by pure chance, and make another prediction, totally free of charge. Then you call up 250 people… then 125… then 63… And before you know it, you’ve got 4 or so people who are totally convinced that you are an infallible stock market oracle who is never wrong. And then, and only then, do you ask them for money; as much money as they can spare.
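        A back-of-the-envelope version of that arithmetic (a toy sketch, not a real mailing list): no prediction skill at all, just halving the pool each round by keeping whoever happened to receive the "prediction" that came true.

        ```python
        # Toy sketch of the "reverse pyramid": each round, half the remaining marks
        # were sent the prediction that happened to come true, purely by construction.
        marks = 1000
        for round_number in range(1, 9):
            marks //= 2
            print(f"after round {round_number}: {marks} people have seen a perfect record")
        # after 8 rounds roughly 3 people have watched you call the market 8 times in a row
        ```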

    • AJD says:

      My favorite example of this: Aristotle proposed that there might be a continent of some kind surrounding the South Pole, and the idea caught on well enough that it remained pretty popular into the early modern era. Eventually, once south-lying lands like Tierra Del Fuego, Australia, and New Zealand had been circumnavigated and proven not to extend to the South Pole, the idea declined, and in 1814, Matthew Flinders finally dismissed the idea of an antarctic continent as having “no probability”. Flinders’s false belief was definitely better justified than Aristotle’s true belief.

      • Tom Richards says:

        Not sure about that. P=0 is a very, very strong claim indeed, and certainly not one that could be considered even nearly justified to my mind as regards “no Antarctic continent” in 1814. Even P(Bigfoot caused 9/11) ≠ 0 (though there certainly are an awful lot of 0s after the decimal point).

        And in any case, aren’t you begging the question? Isn’t the subject under consideration precisely whether, for reasons we don’t fully understand, some people are consistently better at making judgments in cases where seeming best analysis of the evidence does not support them?

        • Fnord says:

          Like, forget the thing about 0 not being a probability, that’s probably excusable as rhetorical excess. The problem is that it was just a big chunk of unexplored territory, and high confidence that there was no landmass there was no more justified than high confidence that there was a large landmass there.

          • AJD says:

            I still think that Flinders was more justified in believing that there was no landmass there, on the basis of many years of exploration at increasingly southern latitudes which had shown no evidence of a landmass, than Aristotle was justified in believing that there was one, on the basis of aesthetics and false assumptions about geology (or whatever).

          • Austin says:

            Throw away the high confidence thing, and in general the claim “There is no large land mass around the South Pole” was, in 1814, a more rational belief than “There is a large land mass around the South Pole.”

            Baseline probability should have been about 70-30 based on the simple fact that the surface area of the earth is roughly 70% water.* Experiments had been conducted to determine whether or not there was a landmass there (by traveling increasingly close to the South Pole) and all of them had turned up negative. Bayesian inference says that each of those experiments should have increased people’s confidence that there was no land mass at the South Pole. So any well-calibrated person should have had greater than 70% confidence that Antarctica would not turn out to exist.

            As it turns out, 20% of the things that a perfectly calibrated person believes are 80% likely to be true, are actually false.

            * This number is slightly inaccurate since the world hadn’t been fully explored. But given that the fifth largest continent had yet to be discovered and the approximate extent of all the world’s oceans was known, it shouldn’t have been any less biased towards water than that in 1814.
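            As a rough sketch of that updating with invented numbers (the 1.5 likelihood ratio per fruitless voyage is purely an assumption), starting from the 70/30 water prior:

            ```python
            # Hypothetical numbers only: start near the 70/30 water prior and update on
            # each failed southern voyage, treating "no land sighted" as 1.5x as likely
            # if there is no polar landmass as if there is one.
            p_no_landmass = 0.7
            likelihood_ratio = 1.5
            for voyage in range(1, 6):
                odds = p_no_landmass / (1 - p_no_landmass) * likelihood_ratio
                p_no_landmass = odds / (1 + odds)
                print(f"after voyage {voyage}: P(no Antarctic landmass) = {p_no_landmass:.2f}")
            # climbs from 0.70 to roughly 0.95 after five fruitless voyages
            ```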

        • Ineptech says:

          I think you’re being awfully cavalier in assuming a weak case for Bigfoot causing 9/11. Consider the evidence:

          * Bigfoot is, by definition, a very tall, very hairy humanoid… just like Osama bin Laden.
          * Our primary exposure to Bigfoot is mysterious, grainy, sporadically released videos… just like Osama bin Laden.
          * Only crackpots believe that Bigfoot lives in the mountains of rural Kentucky… just like Osama bin Laden.

          • Nornagest says:

            TIL: Osama bin Laden is an undescribed species of hominid.

          • Airgap says:

            If we allow for rhetorical excess and interpret as “Yeti caused 9/11,” I think we’re halfway home. A Yeti sufficiently motivated to smash the corrupt, decadent West and establish worldwide Islamic government could easily make it down to Pakistan from the Himalayas now and then.

        • The Original CC says:

          Tom Richards: “And in any case, aren’t you begging the question? ”

          I must be reading an intelligent comment if the guy used “begging the question” correctly. I bet Tom Richards is right about everything else in his comment.

          Isn’t that the point of this whole post? 🙂

      • Izaak Weiss says:

        Or, Aristotle had some general factor of correctness that allowed him to make this prediction accurately without any knowledge, and Flinders had a factor of incorrectness that condemned him to make this prediction wrongly even with lots of data.

      • scav says:

        Except it wasn’t justified by, for example, going and looking to see if it was true.

    • Autolykos says:

      Depends on how consistently someone has unjustified true beliefs. If it’s just once or twice, I’d chalk it up to them being lucky. But if they reliably hold beliefs I (or the general consensus) find unjustifiable, but later turn out to be correct most of the time, I expect them to know something I don’t.
      Intuition can sometimes integrate masses of seemingly unconnected data that you can’t possibly hope to reason about (or even write a computer program to feed into Bayes’ equation). And it may well be possible that some people just have a knack for finding and combining the right information at the right time and in the correct way. It might even involve feedback loops looking for “correct contrarians” who are also experts in their respective fields and doing original research.

    • Deiseach says:

      The problem is:

      I am a world-renowned economist. As economists go, I am a rock star god amongst my peers. Even non-economists know my name. I have economics all sewed up. Anyone wants an opinion on anything to do with economics, I’m the first name on their speed dial.

      That does not mean I know how to prevent blackspot in roses. So putting me on “Gardener’s Question Time” is not a good idea.

      If I want to know about roses, or pre-Clovis settlements, or how to paint a shed, I’ll have to rely on the opinions of experts in those fields. And as Scott points out, that’s mainly “What’s the consensus opinion? Okay, I believe that”.

      Unless I’m a supergenius polymath with the disparate talents and time to be able to investigate and master all the topics under the sun (just call me Ildánach), that’s what I have to do, so my opinion outside of economics has no greater or lesser weight than that of the cleaning lady who vacuums my office after I’ve gone home at night (indeed, the cleaning lady may be a keen amateur gardener who knows way more about roses than I do).

      So it would be perfectly possible for me to be sound and reliable when talking about economics, but completely out of my tree when talking about “is Pluto a planet”, Bigfoot or the best way to get wine stains out of a white silk slip.

      A contrarian cluster range of my opinions may let you know how good or bad I am at judging is the consensus opinion in various fields good or not, but nothing more. If the consensus today is “Bigfoot does not exist”, then I look good by saying Bigfoot does not exist and I look off the wall by insisting it does; if in fifty years time we find a real live Bigfoot, then I look good for believing in it and otherwise I’m one of the examples trotted out to be laughed at like The Man Who Didn’t Sign The Beatles and Lord Kelvin (yes, that Kelvin) saying X-rays would prove to be a hoax.

      Simply holding a crackpot opinion (by the standards of the day) does not tell us anything until we have definite evidence for the crackpottery one way or the other, and something like cryonics or Many Worlds is not something we can know right now is right or wrong (until the people who signed up for cryonics get/do not get successfully thawed out in fifty – two hundred years’ time).

      • Peter says:

        So, supposing there’s a five-person gardening panel on GQT: four gardeners and the rock star economist. A big argument breaks out about how to treat blackspot on roses, there are two leading methods, and the gardeners split 2:2 on the question. Arguments get traded back and forth, various anecdotes and studies and bits of evidence get mentioned, but none of the gardeners budge from their initial position. Finally the economist pipes up: “I’m no expert on this, but it sounds like Alice and Bob are more convincing on this issue than Clare and Dave.”

        So if I had been half-listening to the programme and didn’t really follow the argument very well, then in principle the economist’s input might swing me one way or another. On the other hand, the guy’s an economist, and I have specific problems with contemporary economists, so in practice, probably not. Now if the guy was a top historian or even a rockstar astrophysicist (not in the Dr. May sense), then I might say, yes, their general intelligence, skills at assessing evidence, etc. make them useful as a tiebreak, let’s go with the AB method.

        • Nita says:

          A big argument breaks out about how to treat blackspot on roses, there are two leading methods, and the gardeners split 2:2 on the question.

          Remember that the idea is to find the Correct Contrarian Cluster — i.e., the gardeners are split 9:1, and the economist says, “Obviously Jackie is right!”

        • Deiseach says:

          Let’s take Scott’s Alice and Bob. First case: Alice is 90% confident Greece will implode, Bob is 80% confident. What that means is both of them do believe Greece will implode, Alice just thinks it is going to happen faster/harder than Bob thinks it does. So there’s not really a disagreement about what is going to happen here, simply when it is going to happen.

          Second case: Alice is 90% confident Greece will implode, Bob is 10% confident. Now we have real disagreement. If Bob turns out to be right, in the face of Carol and Dave and Evelyn and Frank backing up Alice that “No, Greece is definitely going to implode”, then it begins to look interesting. IF Bob continues to have his predictions validated by What Happens Next, and his predictions continue to be in the face of the prevailing wisdom, we can start to say “That Bob, he’s onto something!”

          How does Bob do it? Superior rationality? High intelligence? Suppose Bob tells us that the pixies at the bottom of the garden whisper it into his ear under a full moon? Do we believe Bob (and that pixies have now been proven to exist)?

          We can test the economist’s blackspot remedy and see if it works. Cryonics – we’ll have to wait a good while to find out one way or the other. Many Worlds – there’s a lot more heavy lifting in the theoretical work to be done. Sure, perhaps in fifty years time, all the current physicists who reject it will be fodder to point and laugh at, along the lines of People What Believed In Phlogiston.

          Or maybe in fifty years time we’ll have proof positive that believing in the Many Worlds Hypothesis goes with wearing your underwear on your head and talking about your friend Harvey, the six foot tall rabbit pooka.

          Either way, the only proof of the pudding is in the eating: does the economist’s treatment kill my roses or let them flourish? Does Bob always back the right horse?

          • stillnotking says:

            I think everyone would agree that hypothesis-testing by observation is the canonical way to obtain truth; the arguments here are about hypotheses that can’t be tested, or can’t be tested yet, or for which the evidence is ambiguous, etc. “Will Greece’s economy implode?” is a question that will certainly be answered in time, but we want to know now. (The rose example falls into the category of hypotheses that can’t be non-destructively tested. Choose the wrong remedy and your roses all die.)

            My main problem with Scott’s GFC-by-CCC metric is that it seems like a fairly small effect. This makes it unhelpful for influencing behavior in situations in which people have vested interests, which is nearly all of them. Suppose that old chestnut the Redskins Rule turned out to be non-randomly predictive after all, for some small p. Fans of the predicted loser would still be unlikely to concede its validity in any particular case.

          • Deiseach says:

            (T)he arguments here are about hypotheses that can’t be tested, or can’t be tested yet, or for which the evidence is ambiguous, etc.

            But there has to be something for us to start making a decision about “Bob is probably likely to be right about the pixies living at the bottom of the garden”.

            If we try the remedy and it kills the roses, yes that’s destructive and we’re likely to curse and say “I’m never listening to that chancer again!” but it’s a small destruction that doesn’t cause much harm. So based on whether or not the roses die, we can then have something to go on for “Did Bigfoot cause 9/11? Is the Ark of the Covenant under the Epcot Centre? Will cryonics work?”

            Otherwise, if all we have is “Bob holds weird opinions nobody else holds, or at least only a very few other oddballs hold them”, then all we have is your standard nutter on the bus, and I don’t think anyone here is going to argue that the person who spends the entire journey twitching and muttering to invisible entities is exhibiting their possession of some mysterious General Factor of Correctness.

          • stillnotking says:

            Scott’s point here is exactly that some people may have access to a… crystal ball, if you will, that makes their apparently uninformed opinions better than chance, or even better than ~half of informed opinions — the human equivalents of the Redskins Rule. By your reaction, I’d guess you are dismissing that possibility out of hand. I’m prepared to consider that it might be true. I’m just curious what we’re supposed to do with it. If Bob Smith, who has the highest measured GCF of any human on the planet, says anthropogenic global warming is real and we should take steps X, Y, and Z to fix it, that still isn’t going to be persuasive to those who oppose X, Y, and Z, because even Bob is wrong sometimes. (On a personal level, of course, I might turn to Bob for investment advice, but given what he’d likely charge, it might not be worth it.)

          • Deiseach says:

            We’re invoking crystal balls, now? 😀

            Okay, if we’re trying to decide “I am the prime minister of a small island nation. I know damn-all about economics. I have two advisers. Bob is recommending a policy Bill says will ruin the country. Bill is saying I should institute reforms Bob says will wreck the nation. How, oh how can I tell who is right?”, what we’re being asked to decide it on is “I know! I’ll ask Bob and Bill who their favourite football teams are!”

            We need something to go on rather than “Bob has a lucky rabbit’s foot – in his brain”. If you’re asking me to believe in Bob’s crystal ball, well fine, but how do I know Bob knows any better about which team is the best, how to paint a masterpiece of contemporary art (hint: painting? sound art is where it’s at now, baby!) or what is the best way to skin a cat, then I really think I need something more to go on than “Well, 9 out of 10 of his peers think Bob is a nutter because he believes in Atlantis, Mu and Lemuria”.

            Bob may well be really that smart, that well-informed, and that better than the Relevant Experts In the Field, and it may not be down to sheer dumb luck, but how can I tell if I don’t have some means of checking his predictions? I’m not dismissing the possibility that something along the lines of a General Factor of Correctness exists, but I do think we need some way of gauging who is the possessor of uncanny insight and who’s been hitting the magic mushrooms when it comes to “wacky, counter-intuitive predictions and the one weird trick that will fix the economy”.

            My second, related objection is this: the assumption here seems to be that Really Smart Guy will be equally really smart about everything and I don’t necessarily believe that’s so. At least not on a “So I did some quick cramming and now I can contradict people who’ve built academic lives around in-depth study of the topic” level.

            Again, yes, polymath geniuses exist. But as a rule of thumb, I think that an expert economist is an expert economist, and once they go outside their field, their opinion is no better than “reasonably intelligent amateur who takes an informed interest”. So trying to decide “Who is the best economist” based on “Are they right about maxi dresses: next big thing or dreadful rehash of the 70s?” is not much better than putting all the names into a hat and having a lucky dip.

            EDIT: Isn’t this really a form of the appeal to authority? General Factor of Correctness sounds a lot like the mediaeval attitude that “Well, Master Aristotle is a really great philosopher so we will also believe him on everything from embryology to geography”. People like to turn up their noses at those backwards Middle Ages when they solemnly quoted old books about things they had no experience of themselves in parrot-fashion, but aren’t we creating a modern version of the same? “Master Bob is a really smart economist, so I’ll take his advice about whether I should have that liver transplant or not and to hell with what the doctors at the hospital tell me!”

          • Mary says:

            “Second case: Alice is 90% confident Greece will implode, Bob is 10% confident. Now we have real disagreement. If Bob turns out to be right,”

            How could Bob turn out to be right? Either it will implode, or not. Both of them thought that possible.

          • RCF says:

            “What that means is both of them do believe Greece will implode, Alice just thinks it is going to happen faster/harder than Bob thinks it does. So there’s not really a disagreement about what is going to happen here, simply when it is going to happen.”

            No, it has nothing to do with how fast or hard the implosion will be. Bob is assigning ten fewer percentage points to the probability. They do disagree about what’s going to happen; Alice believes it will happen in 90% of worlds, while Bob thinks it will happen in 80%.

          • Deiseach says:

            If Alice is assigning 90% confidence to “Greece will implode”, that means she considers it much more likely it will implode than that it will not. Certainly by leaving that 10% gap, she is also entertaining the idea it will not implode, but it’s not (for Alice) very likely this will happen.

            If Bob assigns 80% confidence to “Greece will implode”, then he is not as confident as Alice, but he is still pretty confident it is more likely to implode than that it will not. 20% is more weighty that Greece will not implode than 10%, but it is still not a high level of “There’s a good chance it won’t happen”.

            So Alice and Bob are both agreeing that it is more likely than not that Greece will implode; Alice thinks that it is more likely than Bob does because she has a greater confidence level than he has, but they both have higher confidence in “implosion” than “non-implosion”.

            If Bob assigns 10% confidence, then he is not very confident at all it will implode, and so he must be more confident that it probably will not implode. Here there is a much greater gap between Alice and Bob; Alice is much more confident than not that Greece will implode, Bob is much less confident than not that Greece will implode.

            At least that’s what I mean by “Bob and Alice are disagreeing when Alice says she is very nearly absolutely certain Greece will implode, so she’s 90% certain it will, and Bob says he is very nearly absolutely certain it won’t, so he’s only willing to assign 10% that it will happen”.

          • Airgap says:

            Suppose Bob tells us that the pixies at the bottom of the garden whisper it into his ear under a full moon?

            This means that Bob is fucking with you. Get a sense of humor for chrissake.

            I really don’t think we’ve tried hard enough to rule this out in the case of Ramanujan, either. Is it really so unlikely that an unworldly sperg like Hardy just didn’t get Indian humor?

        • scav says:

          Interesting. Maybe one contribution to correctness outside your own domain of study is a general ability to weigh the arguments of opposing experts, and without knowing the facts in detail, intuit which of them are leaning on wishful thinking or signalling rather than the evidence?

          • J Witt says:

            I think this is an extremely important factor. Recognizing who is making good arguments, who is trustworthy, what types of bias experts have, what data being presented to you can be trusted and what needs to be verified. I think evaluating arguments is overrated relative to evaluating the speaker, because all arguments are filled with various forms of “trust me.”

      • Ith says:

        Yeah, I think this is an obvious but important factor. The article dismisses the strategy “always agree with expert consensus on everything” as trivial, but that’s not really right, is it? Most people would rather reason from and confirm their priors instead of making the effort of learning what the expert consensus in a given field is and then updating their beliefs. So while the strategy may be obvious it is also often not used.

        I’ve written a bit more on the findings of the Good Judgment Project below, but a quick summary is that in order to be right about something, you need both the intellectual abilities to be right in general and the desire to be right about that specific subject. In that light, the explanation for experts in a field not necessarily being right about other fields seems simple: while they may have the intellectual ability to be right about many things, they don’t desire being right about subjects outside their field.

        Probably the best candidate for a General Factor Of Correctness is desiring to have reality-corresponding beliefs about as many things as possible, and having that desire be stronger than your desire to confirm your existing beliefs. If you have such a desire you then need the ability and opportunity to pursue it, but I suspect you won’t often find people who want to be right while lacking the ability to act on that desire.

        • Jiro says:

          The article dismisses the strategy “always agree with expert consensus on everything” as trivial, but that’s not really right, is it?

          The context here is an old article by Eliezer where he also suggests that to know the answer to a question about the economy, he could ask economists whether they believe in many worlds and accept their economics answers more than those of economists who don’t believe in many worlds. I’m pretty sure that the expert consensus is not in favor of many worlds (it’s neutral at best).

          • Ith says:

            Well, my argument is mostly that ‘agree with the experts’ is for many people too high a bar to clear, even for experts outside their own field.

            Yudkowsky’s approach seems pretty useless to me. If you look for economists who have what he considers to be the correct opinion on the many worlds theory, you’re most likely just going to find the (probably pretty small) set of economists who are also interested enough in quantum physics to have an opinion on many worlds who also agree with you.

            More generally, the approach seems to fail because it
            1) Requires you to have an informed opinion about a set of reasonably esoteric fields where you are certain a contrarian opinion is correct
            2) Requires that your certainty is warranted
            3) Requires enough other people in the field you’re actually interested in (e.g. economics) to have informed opinions, in public, about enough of the same esoteric fields you have opinions about for you to pick out a group of justified contrarians with some sort of certainty.

            To my mind, this is unlikely to happen.

          • Ano says:

            The problem is if these economists get their answers about physics the exact same way that Eliezer gets his answers about economics; by asking physicists if they believe in their favorite economic theory.

      • “If I want to know about roses, or pre-Clovis settlements, or how to paint a shed, I’ll have to rely on the opinions of experts in those fields. ”

        That assumes that “the opinions of experts in those fields” are readily determined. Figuring out who the real experts are isn’t always easy. To take your example of economics, Galbraith was a very prominent public figure who a non-economist might reasonably view as one of the experts—but had very nearly no reputation within the profession. Krugman somewhere comments that he discovered that Stephen Jay Gould was the equivalent in evolutionary biology. The same abilities that make you able to figure out your field well enough to do original work in it may make you more able than most to make sense of what is really happening in someone else’s field, who is a reliable expert and who a good writer skilled at pushing his public reputation—perhaps with political or ideological axes to grind.

        Carrying the point further, the superstar academic is better qualified to choose among the variant opinions of professionals in other fields, because he is better able to distinguish good arguments from bad, evaluate people by looking at an overlap between their areas of expertise and his, and similar tactics. In the climate controversies, I have a better opinion of Hansen than of Mann, in part because when Hansen talks about the economics of dealing with AGW he gets it right, in part because Mann’s pretense to be a Nobel Prize winner is evidence that he’s a flake.

        The expert in one field may also be better able to distinguish a consensus in another field that is based mostly on pressures for conformity from one based on good evidence, having observed similar patterns in his own field.

      • Adam says:

        An observation.

        You’re an expert in economics. The majority of people aren’t experts in anything more than being themselves. I would think that by virtue of you being an expert in something, you have practiced thinking skills that the majority of people haven’t practiced and are better equipped to evaluate information in other fields than the average person.

        Maybe a simpler way is saying that being an expert in complexity better prepares you for other complexity.

        • Deiseach says:

          My rockstar superman economics skills may indeed enable me to decide Professor Smith is a pushy self-publicist and Professor Jones is sound in his approach.

          That still only means I now decide I can trust Jones and Brown and Robinson when they say “St Brendan the Navigator was the first European to visit the New World” and think that Smith is talking out of his hat when he says “Nonsense, it was Leif Erickson!”

          It does not mean I am now an expert on history or navigation or mediaeval sea voyaging, it simply means I have a better chance than the man in the street at identifying who’s a chancer and who’s dull but informed and who’s keeping up with advances in the field. It still doesn’t let me say “I independently came to the same conclusion as Jones, Brown and Robinson* and so they are correct and so you can trust me when I advise you on spraying your roses, the best colour to paint your shed, and how to bake a really light sponge cake.”

          *Unless the corollary to that is “Because of my expertise on skin hide boats and how feasible it is that they could make a long sea journey”, which is not necessarily the same thing as “The expertise I obtained when I did a five-hour cram session on the topic is just as or even more trustworthy than the experience of those who have made a study of the subject for years”.

    • bbartlog says:

      This isn’t a good objection. At least, your example is not a good one to illustrate it. Maybe if someone has an astrological system for winning the lottery, and they win the lottery, I could see raising this point (but even then, we’re making certain *assumptions* about logical positivism and a consistent universe in order to undergird our dismissal of the astrologer).

      But in the case of Clovis, the whole point is that they reached a conclusion (not ‘made a guess’) based on limited evidence. Why and how did they reach this conclusion? That’s the whole mystery…

    • RCF says:

      You are responding to a hypothetical correlation by noting that there will be select examples of deviations from that correlation. That’s not much of an objection.

      The proposition under consideration is the rule “If someone has been right on something you know the answer for, then that gives you non-zero information on a question whose answer you don’t know”. Information is a stochastic property. What happens with a special case doesn’t rebut an assertion of information.

  2. Jeremy says:

    I think the discussion questions are relatively weak evidence because they rely on only one point of data. If the man on the bus /also/ successfully predicted a mechanism of time travel, /then/ I would start digging.

    • Tom Richards says:

      I disagree. The correct prediction involved an astronomically unlikely combination of two hugely unlikely claims. The very fact of Bigfoot being responsible for 9/11 would be sufficient reason to have much lower confidence in our (or at any rate my) general worldview, and as such assign much higher (though still low) prior probabilities to other unlikely claims. The tramp’s hit on that still wouldn’t make me think his Arc theory was probable, but it would make me think it was sufficiently probable for a cost-benefit analysis to recommend digging.

      • Jeremy says:

        I was taking into account the possibility that there was something wrong with my mind, which I think is more likely than any of these scenarios. I suppose if I was mildly sure it wasn’t a deja-vu-like effect, then I would start digging.

      • RCF says:

        “and as such assign much higher (though still low) prior probabilities to other unlikely claims.”

        That, by itself, does not give rise to Ark (note spelling) theory being any more likely. You are making the error of affirming the consequent: the Ark theory is unlikely, and clearly something unlikely is happening, so maybe the Ark theory is true. But you haven’t provided any reason to prefer “Both of the tramp’s claims are true” to “The Bigfoot claim is true, but the Ark claim is not”; you haven’t given any reason why the probability mass that previously was assigned to “Neither claim is true” should be preferentially redistributed to “Both are true” rather than “Just the Bigfoot one is true”. You’ve simply noted that there is probability mass to be redistributed, and taken for granted that it should be redistributed to “Both are true”.

    • Desertopa says:

      I’d definitely start to consider his assertion that the Ark of the Covenant is buried under the Epcot dome worthy of investigation in the abstract sense. But for me to personally go there to dig for it seems absurd. Even given a 100% chance that the Ark of the Covenant actually is buried under the Epcot dome, what are my chances of actually finding it given the resources available to me should I personally attempt to dig for it? The Epcot dome is big, I’m in no position to bring in a bunch of excavators, and I’d get kicked out long before I got anywhere if I came in with a pickaxe and shovel. Believing that the Arc of the Covenant might plausibly be there doesn’t offer me a lot of opportunity for personal action.

      • Jaskologist says:

        Plus, what do you do once you find it? Try very hard not to touch or look at it? I can do that much better while it’s still buried.

        • CJB says:

          So in another place, someone I know posited a question that was, essentially, “You meet a being that is friendly, not particularly helpful in a direct fashion, but has godlike powers and knowledge: what do?”

          I pointed out that of all the hypothetical situations you can put a human in, this is the one we’ve spent the most time thinking about.

          Which is to say:

          What do you do with the Ark of the Covenant? EXACTLY WHAT THE INSTRUCTIONS SAY. The only reason the Nazis got facemelted was they did literally the one and only thing the book tells you never to do or you die horribly. I mean- you can steal it and put it in front of an idol and all you get is mice and hemorrhoids.

          As for uses- I imagine the Israelis would be quite happy to have it back and probably pay quite well for it. And the good news is they can’t even attack you, because the Ark gives you victory over your enemies if you follow the proper rituals.

        • Albatross says:

          Opaque helmet. Flat bed truck. Robotic arm pre-programmed to open and close.

          I’d drive around showing it to a few people: Kim Young Un, Pootin, Assad, ISIS, the Westboro Baptist Church… open it up wait a bit. Close it.

        • Airgap says:

          The first step is probably to do something about the inevitable French guy Mike Anissimov will have hired to steal it from you. Hope you can ride a horse.

    • HeelBearCub says:

      In particular, it seems to ignore the law of large numbers. Every day, people hear mentally ill people make bizarre statements and predictions. As time goes on, some of these will “come true”. Given that one of these has come true, the most likely conclusion is that you were one of the people who happened to have heard a random prediction that then came true, not that the untreated schizophrenic is an oracle.

      • Furrfu says:

        Every untreated schizophrenic (and quite a few of the treated ones) is an oracle. Have you ever spent time conversing with someone experiencing psychosis? The conversation is a lot like a dream. (If dreams aren’t oracular, I don’t know what is.)

        Which is not to say that you can win the lottery by asking schizophrenic people to pick the numbers for you, of course. But psychotic ravings are at least as good as any historical oracle at giving you new, occasionally productive, ways of looking at things.

  3. Yildo says:

    The probability of the next coin flip coming up heads is still 50:50 no matter how many of the previous coin flips came up heads.

    • Scott Alexander says:

      False in the relevant sense. If the coin came up heads each of the past twenty times, consider that you have a biased coin that is near-certain to come up heads again.

      If some people are better at problem-solving than others, that’s the equivalent of their minds being biased coins.

      (consider by analogy the claim that there’s no point in trying to get all-star players on your baseball team; sure, they’ve gotten more hits and home runs in the past, but the chance of a coin coming up heads is always 50% regardless of past behavior!)
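      As a quick sketch of the arithmetic behind that (the uniform prior over the coin’s bias is an assumption made for illustration): twenty straight heads is about one in a million under a fair coin, and the posterior expectation for the next flip is already near 95%.

      ```python
      # Sketch of the biased-coin arithmetic; the uniform Beta(1,1) prior is an assumption.
      fair_coin_prob = 0.5 ** 20                  # chance a fair coin gives 20 straight heads
      posterior_next_head = (20 + 1) / (20 + 2)   # mean of the Beta(21, 1) posterior
      print(f"P(20 heads | fair coin)       = {fair_coin_prob:.1e}")        # ~9.5e-07
      print(f"P(heads next | uniform prior) = {posterior_next_head:.3f}")   # ~0.955
      ```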

      • Murphy says:

        This reminds me of a section in one of the Culture novels where the Minds are studying a small number of humans with an uncanny ability to come to correct conclusions about situations given insufficient information.

      • At a considerable tangent, this is why prediction before the fact is a better test of a scientific theory than explanation after the fact. For details see:

        http://daviddfriedman.blogspot.com/2010/03/prediction-vs-explanation.html

      • Mary says:

        Yeah. It’s like that essay they always mention when discussing fundamental attribution error — how people tended to think that an essay for or against Fidel Castro indicated the writer’s views even when they were told that it had been assigned.

        Somehow, I think if they had gotten essays that told them that Castro was evil because by providing a shining example of a wholesome and well-governed society, he endangered our culture by making it look putrid by contrast, none of them would have made the mistake of saying the writer opposed Castro.

        If, instead, they used well-reasoned and written essays, supporting the view they wrote about would be the way to bet. Maybe not too heavily, but the way.

        • hamnox says:

          People who can pass the ideological Turing test for a random subject are depressingly rare yeah, so if an assigned topic is strongly written and well-supported (or appears as well supported as you think the topic can be, since you might not pass the ITT either) there’s a good chance the author believes what they’re writing.

    • bbartlog says:

      At some point, your assumptions about what is going on must change. Someone who dismisses eight or ten heads in a row as a fluke is probably on safe ground. Someone who dismisses forty is exercising a fanatical attachment to some assumption that they should really be re-examining.

      • Mary says:

        I observe that casinos do not test coins by throwing them a lot; they take ’em in labs and carefully measure for any imbalance.

  4. Cerebral Paul Z. says:

    One not-yet-mentioned problem with the “pre-Clovis” approach: it’s probably not sufficient for people who’ve succeeded in sifting through a mess of competing claims about pre-Columbian anthropology to have just the ability to apply that same skill to sifting through a mess of competing claims about economic data. For the skill set to transfer they’d likely have to go ahead and actually do the economic sifting– and a lot of the time they’d need for this has already been spent studying pre-Columbian anthropology. Your average pre-Clovis whiz is probably getting his economic ideas from reading the occasional op-ed or magazine piece, and our confidence in the result should be marked down accordingly.

    • pterrorgrine says:

      It sounds like Scott is saying that someone with a high correctness factor would guess right about both pre-Clovis anthropology and Schmoeism based only on flipping through op-eds in both fields, but the limited nature of that information seems to make objections like Lightman’s more important.

    • DanielLC says:

      Perhaps they’re good at identifying others’ correctness factors, and figure out which experts to listen to. Or perhaps they understand the sort of biases that experts are likely to have, and can find the truth more accurately (but no more precisely) by correcting for this.

      • Cerebral Paul Z. says:

        No doubt that happens sometimes. More often, when I read an expert in one field sounding off in another, it reminds me of Tom Wolfe’s test pilots crashing their cars in late-night drunken rat-racing, in an attempt to prove that the Right Stuff applies in all endeavors.

        The important point is that when an anthropology maven decides which economic ideas to believe, he’s usually employing a different skill than he used to get his anthropological reputation; if he turns out to be right about economics as well, there’s likely to be some luck involved.

    • Smoke says:

      Right, the fact that it takes time to acquire expertise and people have finite time to spend weakly suggests that expertise in different areas should be anticorrelated.

      To take a concrete example, let’s say I’m a professor at a university where every grad student we admit is someone whose IQ is exactly 125 and spends exactly 40 hours a week on classwork. If I am talking to a grad student and they demonstrate comprehensive knowledge of graduate-level computer science, it seems reasonable for me to make a Bayesian update against them also demonstrating comprehensive knowledge of graduate-level biology.

      (This suggests that a general rationality/correctness factor may exist even if we can’t see it in the data.)
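
      A toy simulation of that update (all numbers invented): in the population at large the shared ability factor makes the two subjects correlate, but among students screened to the same IQ and the same fixed study time, only the time trade-off remains and the correlation flips negative.

      ```python
      import numpy as np
      rng = np.random.default_rng(0)

      n = 200_000
      iq = rng.normal(100, 15, n)           # the would-be general factor
      cs_hours = rng.uniform(0, 40, n)      # hours/week spent on computer science...
      bio_hours = 40 - cs_hours             # ...the rest goes to biology

      cs_knowledge = iq + cs_hours + rng.normal(0, 5, n)
      bio_knowledge = iq + bio_hours + rng.normal(0, 5, n)

      # Whole population: the shared IQ term makes the two subjects correlate positively.
      print(round(np.corrcoef(cs_knowledge, bio_knowledge)[0, 1], 2))

      # Admitted grad students (IQ pinned near 125): only the fixed-time trade-off is
      # visible, so within this group the correlation comes out strongly negative.
      grad = abs(iq - 125) < 1
      print(round(np.corrcoef(cs_knowledge[grad], bio_knowledge[grad])[0, 1], 2))
      ```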

  5. Eli says:

    The closest thing I can identify to a General Factor for Correctness is consilience: small truths are interwoven with each other in a massive number of ways, and the more knowledge you gain of various domains, the more you can spot the generalities to abstract out. Math actually consists of an entire field devoted to handling the abstracted generalities all on their own, having thrown away the concrete instances. But as a general rule for daily life, apparent propositions, domains, or belief-systems that only really agree internally, without being reconcilable with other domains of confident knowledge, are often more likely to range from overblown to pseudo-intellectual, or even to intentional falsehood, than to be unusually accurate “prophecies”.

    A mind that can more efficiently compress its specific experiences into abstracted models will achieve a lower generalization error — this is actually more-or-less a theorem of statistical learning theory.
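
    One way to make that concrete is the usual Occam-style bound for a finite hypothesis class: among models that fit the data equally well, the one with the shorter description gets the tighter guarantee on out-of-sample error. A rough sketch (the description lengths here are invented):

    ```python
    from math import log, sqrt

    def occam_bound(description_bits, n_samples, train_error=0.0, delta=0.05):
        """Simplified Occam/Hoeffding bound: with probability >= 1 - delta, true error
        <= train error + sqrt((bits * ln 2 + ln(1/delta)) / (2 * n))."""
        return train_error + sqrt((description_bits * log(2) + log(1 / delta)) / (2 * n_samples))

    # Two theories that both fit 1,000 observations perfectly:
    print(occam_bound(description_bits=50, n_samples=1000))    # compact theory   -> ~0.14
    print(occam_bound(description_bits=5000, n_samples=1000))  # pile of epicycles -> ~1.3 (vacuous)
    ```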

    So: a guy says Bigfoot caused 9/11. This completely fails to relate to anything else I’ve ever learned, experienced, or even just heard. He’s probably just nuts.

    Yet, so: a guy says AI could destroy humanity. He explains the reason is because there’s no physical force compelling the machine to care about us, so why wouldn’t it just do what it wants? Well, I’ve certainly never seen physical forces compelling moral obedience, and I have seen psychopaths who just do what they want because they fail to possess mental machinery for caring about others, so actually, this seemingly crazy proposition is pretty consilient!

    By this metric, notably, you can immediately notice that Thomas Friedman is full of shit.

    • ADifferentAnonymous says:

      I can’t help imagining that Schmoeism predicts population boom and increased nautical investment in pre-Clovis conditions.

    • Steve Sailer says:

      Right, truths tend to connect to each other, while lies, spin, and political correctness tend to be dead ends.

  6. TomA says:

    The mental trait that you describe (and speculate about its actual existence) would likely have conferred a fitness advantage on our evolutionary ancestors. Making successful decisions in the face of great uncertainty could keep you alive long enough to reproduce.

    • Scott Alexander says:

      I don’t think that’s the right level on which to think about this. It’s like saying “being good at things is evolutionarily advantageous”. Well, so it is, but you can’t evolve “being good at things” as a trait directly. You have to see what the structure of being good at things is, how each different thing evolved, and what factors have led to the preservation or extinction of individual differences in them.

      • TomA says:

        The mental trait would be a subconscious integration of knowledge and deduction that emerges into consciousness as an actionable prediction. In the ancient ancestral environment, when this fails, presumably you die young. When it succeeds, you pass on your genes. An analogy could be the anthropomorphised descriptor of cleverness in some animal species such as foxes.

      • Pat says:

        I believe you can, however, evolve prediction traits by rewarding a neuronal structure that correctly predicts what inputs will be received in the future – see e.g. the Memory-prediction framework.

        There’s a theory that ‘Ecstatic Seizures’ work by flooding the prediction component with ‘you’re correct!’ messages, which brings on the euphoric feeling (and which matches my own experience).

    • Not Robin Hanson says:

      Ehh. The tribe can survive being wrong better than you can survive being on the wrong side of the tribe.

      • TomA says:

        Or you become the leader of the tribe. There is no evolutionary advantage to being persistently stupid.

        • CatCube says:

          I’m not sure how you look at politicians worldwide and assume that being right is more likely to make you a leader than being good at stroking the egos of others.

          • blacktrance says:

            Being right need not mean taking the right positions – an unscrupulous politician who’s right about what positions to take in order to win, without being pressured to enact something disastrous and reputation-destroying, has an advantage.

        • Not Robin Hanson says:

          There’s no evolutionary advantage to being persistently on the wrong side of the tribe either. The question is which is less likely to prevent you from reproducing.

          If you are on the wrong side of the tribe, you cannot be leader. No matter how right you are.

          • TomA says:

            During times of abundance, status quo behaviors are rewarded within the tribe because they promote cohesion of interests. However, during times of scarcity, such as those driven by environmental factors, innovative behaviors are necessary for rapid adaptation. As hunter-gatherer tribes moved into northern Europe and confronted radically changing seasonality (and the associated abundance-scarcity cycling), they needed to evolve innovative thinking traits, such as described in this post. Hence the correlation with IQ increase.

          • TMK says:

            Europe is actually one of the stablest environments worldwide. If you want to look for somewhere prone to large unpredictable changes, look at Australia or the like; by that logic you should find the smartest people on Earth there, not in Northern Europe.

      • Unknowns says:

        Exactly. This is why you won’t find a “correct contrarian cluster” in politics. In the ancestral environment they would all be dead.

      • Autolykos says:

        Yup. The only thing people hate more than a smartass is a smartass who’s right. This is definitely not a trait that would increase your fitness in the ancestral environment.
        I don’t really grok why most people distrust anyone who looks smarter than them, but it’s easily observed. I can’t help but find that strange; trusting stupid people seems a lot more dangerous to me.

        • One reason not to trust smart people might be concern that they are smart enough to fool you into acting in their interest and against yours.

        • TomA says:

          The modern derogatory concept of smartass is a luxury born of our current affluence (and the near-total extinction of existential hardship). Think survival competence instead, which is how that trait would have been manifested in that era.

        • Jimmy says:

          A smartass is someone who pushes you under him in status by showing that you’re wrong. If his arguments aren’t persuasive, it’s an annoyance. A smartass that is actually right can be a real threat to your status.

          If you’re a smartass that’s right, you might be really good at crushing ideological enemies, but you’re not making any new friends by doing it, and friends are important for politics.

          The problem isn’t with the “being right” part, it’s with the hostile and not-thought-through political strategy of trying to push everyone under you instead of making friends.

          Of course, from the correct smartass point of view, it’s seldom a “hostile” act (at least, not admitted to oneself as one). It’s seen as “I want them to see the right answer! I’m trying to help!”. However, they also see the wrong person as *already* lower than them in status. They might not be judgy about it, but they tend to internally run on rules like “people who are right should be listened to, even if they’re not as popular”, so their status rankings clash with the rest of the tribe and result in conflicts when it comes up. The smartass generally doesn’t have any respect for the existing system, seeing it as unfair and stupid, so they’ll act like it doesn’t exist – like they expect everyone to bow to *their* idea of how status should work. And they’ll end up arrogantly challenging the entire system, making more enemies than friends, and being somewhat bitter and resentful that their not-so-smart political strategy didn’t work as it “should”. And of course, none of this is to say they’re *wrong* about their system being better, necessarily – it’s just that it’s irrelevant. No one is asking them.

          One strategy is to shut one’s mouth and begrudgingly accept one’s “unfairly” lowered status. Another is to run away saying “screw you all, I’m going home”, and try to play with a few like-minded individuals where you can agree on more argumentative norms (or whatever it is).

          However, there’s also one where you try to understand *why* it’s so unfair. The one where you try to understand the ins and outs of how even “wrong” people see things so that you can work with them and make friends. And crush your enemies only when you can actually win at acceptable cost. In short, taking the blinders off and actually playing politics without selling out on your principles or running away from the problem.

          Personally, I think there’s a place for all three responses. If you can get the last one to work though, it has some nice advantages.

          At least, that’s how it looks to me, as a recovering self-identified “smartass who’s right” (who still kinda identifies as a smartass who’s right, but one who is *somewhat* more selective about when to be all “in yo face” about it to people, and somewhat more likely to let people “be wrong”).

          • John Schilling says:

            If you’re a smartass that’s right, you might be really good at crushing ideological enemies, but you’re not making any new friends by doing it

            You’re making friends from the less-clever enemies of the people you are crushing; isn’t that kind of Jon Stewart’s and Stephen Colbert’s entire shtick? And it seems to me they have achieved real political power in the process.

    • onyomi says:

      Being able to accurately perceive and predict the physical world definitely confers a survival and reproduction advantage up to a point (though there are cases in which truly accurate assessment of, say, your own tribe’s goodness might be disadvantageous, and it is in such places we find predictable biases). That fact is one of the best reasons to think our perceptions at all correspond to some really existing world.

      But like any other trait, we’d expect some people to have more of it than others, and for it to be more or less advantageous in different environments.

      • Adam says:

        The evolutionary advantage humans (and really, all living animals) have is pattern-matching to danger. Seeing every streak of yellow as a lion is advantageous to the oryx even when it’s wrong half the time, and this is generally true of any broad class of cases where the cost of being wrong in one direction is death and the cost of being wrong in the other direction is minor inconvenience. Evolution overfits to the high-cost consequences.
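
        A back-of-the-envelope version of that asymmetry (the costs and probabilities are invented): when one kind of error is catastrophically expensive, the policy that minimizes expected cost happily tolerates a very high false-alarm rate.

        ```python
        p_lion = 0.02        # most yellow streaks are not lions
        cost_flee = 1        # a wasted sprint
        cost_eaten = 10_000  # game over, genetically speaking

        expected_cost_ignore = p_lion * cost_eaten   # 200
        expected_cost_flee = cost_flee               # 1

        print(expected_cost_flee < expected_cost_ignore)  # True: flee, even at a 98% false-alarm rate
        ```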

    • bbartlog says:

      Is being correct (in a way that is at variance with received wisdom) about uncertain propositions really so advantageous? Are you sure that being small-c conservative and accepting the popular opinion isn’t generally better for you?

      • albatross says:

        Yeah, it’s hard to enjoy the eventual vindication you get for being right after that bit where the other villagers burn you at the stake for your correct contrarian opinions.

    • People have a wide range of abilities. Evolution is too slow and random to optimize for every trait that might be good.

      • TomA says:

        That is true, but the traits that are extant are the ones that made it through the gauntlet of chance and natural selection. It’s very difficult to account for the failures, because they are mostly lost to ancient prehistory.

  7. Brian says:

    It may be worth discussing anti-factors here. Anderson, Lin, et al. from Wharton found “Harbingers of Failure”:

    We show that some customers, whom we call ‘Harbingers’ of failure, systematically purchase new products that flop. Their early adoption of a new product is a strong signal that a product will fail – the more they buy, the less likely the product will succeed.

    Clearly, market performance is (however we care to define “socially constructed”) a culturally defined feedback loop constrained by the need to move real stuff around. But this sort of data mining should be able to find *correlations* in other sufficiently large markets-of-ideas. I’m not sure if we can draw a useful link between market-performance prediction and “general correctness”, but it may be worth discussing why this sort of thing *isn’t* a useful concept to port over.

    • Thanks for the link. I wonder whether it suggests that people who are enthusiastic about products that have failed should be very cautious about starting businesses which are intended to sell to the general public. Nothing wrong with people like that looking for a niche market.

      • Steve Sailer says:

        I have such a long track record of liking products that failed in the marketplace that my wife suggested I start a market research firm in which I would be the sole owner, manager, and respondent. Firms would show me products they were considering introducing and if I really, really liked it, then they would break the mold, fire the guy who came up with idea, and bury any existing inventory at Yucca Mountain.

        • Deiseach says:

          I do that with television programmes.

          “I love this new show!”

          “Damn. That means it’ll be cancelled after the first season”. 🙂

        • Airgap says:

          Steve is just humblebragging about his superior taste; ignore him.

  8. Jiro says:

    To give a reductio ad absurdum, if you have some mental disorder that causes you to live in a completely delusional fantasy world, you will have incorrect opinions about everything at once, which looks highly correlated, but this doesn’t necessarily prove that there are correlations among the people who are more correct than average.

    Yes, it does, just not by very much. If people in delusional fantasy worlds are more likely to have incorrect opinions than average people, it follows that people who aren’t in delusional fantasy worlds are less likely to have incorrect opinions than average people.

    You are correct if by “average” you mean the mode, however. Since few people are in delusional fantasy worlds, the mode is “not in a delusional fantasy world”; delusional people are worse than the mode, and nondelusional people are identical to it.

    The first site says the Blue Party will win by 5%; the second site says the Green Party will win by 5%. You look up the authors of the two sites, and find that the guy who wrote the first is a Young Earth Creationist. Do you have any opinion on who is going to win the election?

    1. Yeah. But this falls under the third problem. Also, I would assume that the guy is more likely to be unreliable because he is a creationist, but I still need to figure out which *direction* he’s unreliable in. Creationism probably means he’ll always underestimate or always overestimate one party’s chance of winning, not just that he misestimates.

    2. It is generally a bad idea to expend lots of resources based on personally being convinced that something is true when it has not been checked by others and otherwise stood up under testing and probing, since you have some probability of being incorrectly convinced. So no.

    3. If some factor leads to people both believing in a particular economic theory and believing in a particular archeological theory, it may just be that the factor got lucky. Imagine an extreme case where every economist flips a coin. If the coin comes up heads, they believe economic theory A and archeological theory B. If it comes up tails, they believe ~A and ~B. It then turns out that B is correct. Should I then believe A, on the grounds that all the economists who were correct about B also believe A?
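
    A quick simulation of that extreme case (all numbers arbitrary): when belief in A and belief in B come bundled on one coin flip, deferring to whichever camp turned out right about B tells you nothing about A.

    ```python
    import random
    random.seed(1)

    hits, n = 0, 100_000
    for _ in range(n):
        a_true = random.random() < 0.5   # is economic theory A actually correct?
        b_true = random.random() < 0.5   # is archaeological theory B actually correct?
        # One camp believes "A and B", the other "not A and not B", decided by a coin flip
        # that has nothing to do with the truth. Whichever camp was right about B, take
        # its word on A: that camp asserts A exactly when B happens to be true.
        camp_right_about_b_says_a = b_true
        hits += (camp_right_about_b_says_a == a_true)

    print(hits / n)   # ~0.5: the B-winners' opinion carries no information about A
    ```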

    • DanielLC says:

      > Yes, it does, just not by very much. If people in delusional fantasy worlds are more likely to have incorrect opinions than average people, it follows that people who aren’t in delusional fantasy worlds are less likely to have incorrect opinions than average people.

      I suspect you misread this. There is no correlation in correctness among people who are more correct than average (i.e. ones who are not in a fantasy land). They are less likely to have incorrect opinions than average people, but there is no correlation among the defined group.

    • albatross says:

      It depends on whether the delusional people are a small minority or a majority even of subject-matter experts. If the latter, then finding that small non-delusional minority is worthwhile.

  9. Unknowns says:

    I don’t think we can reasonably deny that there will be at least one general Factor of Correctness, and also at least one Factor of Incorrectness (e.g. insanity). But it is questionable whether that factor will apply to all areas of thought at once, as Eliezer seems to suppose, and it seems to me we have good evidence that it will not. In particular, the fact that political parties are so “unanimous” in their opinions indicates that it won’t. Because the political party issue shows one of two things: 1) one political party is absolutely right about everything; or 2) people adopt political opinions based on party affiliation, not reality. And 1) is clearly false, so the answer must be 2).

    And if 2) is the case, then if the general factor applies to politics, we should find a “correct contrarian cluster” of people who may belong to a political party or not, but have a set of opinions where they are consistently disagreeing with their party because they are right and the party is not. I don’t think anyone can find a substantial case of this, so I don’t think there is any “correct contrarian cluster” in politics.

    • Jiro says:

      I don’t think anyone can find a substantial case of this

      Nuclear power.

      Also atheists who were raised religious. (People reject their religions and become atheist; they don’t, in the same proportion, reject their religion to become another religion that is as far away from their initial religion as atheism is.)

      Of course, this has a problem: if a cluster of people consistently disagree with their party on a bunch of issues, they’ll form another party. You will then observe this as a case of people agreeing with their party, not disagreeing with their party, and you won’t be able to distinguish it from people who joined the party first and adopted the beliefs of the party second. Libertarianism may actually be in this position–Scott is a blue and not a libertarian, but when you look at the issues he disagrees with blues on, they are pretty much all issues that libertarians agree with, and not, for instance, non-libertarian conservatives.

      • onyomi says:

        The fact that libertarians don’t agree entirely with either major party, but agree with one party on some things, the other party on some other things, and neither party on some things, is, imo, a point in their favor.

        Of course, one could cobble together many other conceivable combinations of positions other than libertarianism that are the same, but none come to mind right now that are similarly influential and/or logically consistent today.

        For a similar reason I tend to feel slightly uneasy when I find myself agreeing 100% with anyone. I try to find some flaw, somewhere. Of course, it’s generally a good practice to try to poke holes in one’s own view, and maybe this is also just me trying to differentiate my own thought just to feel special (and I do, on occasion, read something with which I have 0 quibbles), but there’s also this general notion I have that, “no one person can be 100% right, so reality is always going to be slightly different.”

      • Unknowns says:

        Scott Alexander is probably more rational about politics than anyone else I know, but this suggests he has the same kind of problem (irrationally attaching to group opinions). Libertarians basically have their opinions determined by one overarching general principle, and reality doesn’t work that way, so they can’t be right about everything. The blues won’t be right about everything either, so this to some extent supports Scott, but on the other hand you can’t really think that the reds will be wrong about everything, so why doesn’t Scott have any red opinions?

        • Tom Richards says:

          On the other hand, it does seem like there are multiple issues in which libertarians represent correct contrarian clusters within conservative political parties – Gary Johnson within the US Republicans and Daniel Hannan and (formerly) Douglas Carswell within the UK Conservatives, for example. And interestingly, Carswell now appears to lead a correct contrarian cluster on a number of issues within UKIP, and those issues are by no means all the same as the ones on which he was right to disagree with the Tories.

        • Jiro says:

          The point is that Scott is not a libertarian (and clearly has areas where he disagrees with libertarians), yet in the cases where he does disagree with his party, these disagreements almost always end up as disagreements in the libertarian direction, and not in other directions. This suggests that libertarianism is such a contrarian cluster.

          • Glen Raphael says:

            Expressing views that are literally correct is a luxury that actually-electable candidates and parties can’t afford. Libertarians can afford to say true things in public ONLY because they have absolutely no chance of getting elected.

            If Libertarians ever seemed to have a chance of winning office in large numbers, their elections would MATTER enough that their electoral process would select for candidates who pander more. The candidates would start saying what their pollsters think will be popular with the electorate, the electorate would preference-falsify to pretend to believe what they’re supposed to believe, and the positions would all become unmoored from reality as much as mainstream republicrat views are.

            Whereupon some other party would have to take on the role of saying things that are actually true and not caring whether it’s popular – possibly the Green Party.

        • Anon256 says:

          You are aware that Scott wrote the FAQ about why not to have your opinions all determined by Overarching Libertarian Principle?

    • brad says:

      And if 2) is the case, then if the general factor applies to politics, we should find a “correct contrarian cluster” of people who may belong to a political party or not, but have a set of opinions where they are consistently disagreeing with their party because they are right and the party is not. I don’t think anyone can find a substantial case of this, so I don’t think there is any “correct contrarian cluster” in politics.

      I don’t really understand the claim. Do you think that everyone in the country agrees with either all of the Democratic Party’s positions or all of the Republican Party’s positions? Because that’s certainly not the case.

      • Jiro says:

        The idea is that people disagree with the Democrats and Republicans, but they don’t *randomly* disagree. People who disagree with their own party disagree on particular issues and in particular directions. So you get Republicans who oppose drug laws and Democrats in favor of nuclear power. But you don’t get Republicans who want to loosen the standards of evidence for rape accusations at colleges, or Democrats who want school prayer, at least not in the same quantities.

        This lets you distinguish “people believe this because they are smart” from “people believe this because they believe whatever their party says”.

        • albatross says:

          Is there data that backs this up, or is that just your impression? It’s extremely easy to get a weirdly skewed picture of opinions of groups like Democrats or Catholics or gun owners by reading media sources.

        • brad says:

          It seems like you are disagreeing with Unknowns. He said “I don’t think anyone can find a substantial case of this” and you are saying well we have these common cases of people disagreeing with their parties (drugs & nuclear power). I agree that those as well as several other areas are common, and depending on what level is set for “substantial” I’d say there are many more.

          So either Unknowns overlooked a very common phenomenon or we both misunderstood what he was trying to say. That’s what I’m trying to figure out.

        • Fairhaven says:

          There is no evidence for your assumption that political disagreements with one’s party fall only into the conventional categories depicted in the media.

          Many religiously conservative blacks and Hispanics believe in prayer but vote with progressives because they want the government benefits or they have been emotionally manipulated by race-baiting. Without this religious minority population, the blues would have trouble winning elections.

        • Nornagest says:

          I would bet my shirt that I could find Democrats who want school prayer. The relative silence in the media there is because of a stronger party line, not because there isn’t anyone signed up for the party who disagrees with it.

          Republicans who want to loosen standards of evidence for rape accusations… those are probably rarer, because that’s a position native to a certain flavor of feminism, and it’s harder to find Republicans who’re serious about that than it is to find Democrats who’re serious about public religion. But they probably exist, too.

    • youzicha says:

      I think an alternative model is to say that there is still a “general correctness factor”, but there is also a “politics factor”, and a given individual’s opinion is the sum of their loadings on both factors, plus noise. So the correctness factor still applies to political topics, it just usually gets swamped by the politics factor. From an inference point of view, that would be fine (provided you have enough data), you would be able to do a Netflix-style factorization and extract both.
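
      A toy version of that model (all numbers invented): generate opinions as correctness-loading plus tribe-loading plus noise, then check that a rank-2 factorization pulls both factors back out even though the politics factor dominates the raw variance.

      ```python
      import numpy as np
      rng = np.random.default_rng(0)

      n_people, n_questions = 500, 60
      correctness = rng.normal(0, 1, (n_people, 1))   # latent correctness factor
      tribe = rng.choice([-1.0, 1.0], (n_people, 1))  # latent politics factor

      truth_loading = rng.uniform(0.2, 0.5, (1, n_questions))  # every question loads a little on truth
      tribe_loading = rng.uniform(0.0, 2.0, (1, n_questions))  # politicized questions load a lot on tribe
      opinions = (correctness @ truth_loading + tribe @ tribe_loading
                  + rng.normal(0, 0.5, (n_people, n_questions)))

      # Netflix-style low-rank factorization via a rank-2 SVD of the centered opinion matrix.
      u, s, vt = np.linalg.svd(opinions - opinions.mean(axis=0), full_matrices=False)
      scores = u[:, :2] * s[:2]
      for k in range(2):
          print(abs(np.corrcoef(scores[:, k], tribe[:, 0])[0, 1]),
                abs(np.corrcoef(scores[:, k], correctness[:, 0])[0, 1]))
      # Typically: component 0 tracks tribe, component 1 tracks correctness.
      ```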

      • Unknowns says:

        After thinking about it, I agree this is likely.

        It also corresponds with what I actually do, e.g. I in fact give more weight to Scott’s political opinions because they are his opinions, but not as much extra weight as I give to his other kinds of opinions.

  10. Nesh says:

    I would guess that one of the easiest ways to be more correct than usual is to de-simplify problems by acknowledging the fuzziness of the definitions and breaking things into components. For example, rather than supporting or opposing the serotonin hypothesis of depression, have a list of hypotheses about both correlation and causation for each serotonin receptor type with respect to happiness and depression. The problem is that this is inefficient for most problems and isn’t applicable to questions about outcomes in complex systems like elections, but being aware that your heuristics will fail in some edge cases seems like a good place to start.

  11. Alraune says:

    These people aren’t succeeding because they parrot the experts, they’re not succeeding because they have more IQ or education, and they’re not succeeding in some kind of trivial way like rejecting things that will never happen. Although the article doesn’t specify, I think they’re doing something more than just being well-calibrated. They seem to be succeeding through some mysterious quality totally separate from all of these things.

    Are they higher-level Keynesian beauty contest players?

  12. Sam says:

    1. Obviously I ignore both sites and go check Ladbrokes or literally any prediction market.

    2. Why would I want a Nazi-melting box? I don’t hang out with Nazis and have no need to melt anyone.

    3. Not at all. Both are almost certainly wrong in important ways, knowing economists. Nor do I care what either thinks about archaeology. And can someone tl;dr https://en.wikipedia.org/wiki/Settlement_of_the_Americas for me? Is the validity of “Clovis-first” even a question on which an (honest) expert should have more than 75% confidence? If not, the counterfactual behind #3 basically contains no information.

    4. How does one “do better” on a test the answers to which are controversial? According to who?

    Tetlock’s research just validates Bryan Caplan’s ultimately rather banal observation that you shouldn’t really listen to anyone who expresses an opinion on a highly uncertain and controversial issue but isn’t willing to bet on it. In my opinion, the most surprising related research finding (Servan-Schreiber) is that even betting for abstract stakes like reputation in a particular community will work, provided that reputation is well-quantified. I also find it pretty unsurprising that CIA analysts with their classified info can’t beat the best amateur, uncleared forecasters. The vast bulk of classified info probably shouldn’t be secret, but also isn’t especially relevant or even interesting, yet the fact of its classification will likely lead someone with clearance to overweight it. Moreover, different elements at the CIA face different political incentives: career civil service types probably face the least career risk by being exceptionally well-calibrated, but have little incentive to be highly discriminating, whereas fast-track political appointees angling for a cabinet job probably have the exact opposite incentives.

    Oh, and BTW, if you want to see prediction market addicts forecasting sports, weather, industry trends, etc., come join us at Inkling or a similar market. GJP happened to focus on international affairs because that’s where the money is. They also put a lot of resources into training, team-building, etc., and ended up with an exceptionally successful pool of superforecasters. But in many ways international affairs is a terrible testbed for studying correctness, because the questions that turn out to be interesting often can’t be or fail to be formulated in the right manner before the events of interest take place. (For example, you can ask if there will be a coup in country X by date Y, but first please produce a definition of coup which uncontroversially either applies or does not apply to every single political power shift in history.) Sports, weather, etc., are easier domains in which to formulate precise questions, provide a wealth of statistical data for coming up with good predictions, and yet still abound with ill-informed pundits all too eager to make lousy predictions.

    • Scott Alexander says:

      I’m not interested in prediction markets per se (well, I am, just not in this context), I’m interested in the observation that certain individuals seem to consistently do very well in them.

      Besides, if I ever get into prediction markets, it will be because I finally overcame my procrastination and decided to win some easy money betting against Bernie Sanders.

      (and possibly Biden running for President. Right now it’s at about 50%. Does anyone seriously expect him to do this?)

      • Sam says:

        Well I am looking forward to reading Tetlock’s book on superforecasters when it comes out this fall, but I would be surprised if it contains anything too surprising.

        On your aside: I’ve made much more money by betting against Hillary. (For the general, not the primary.) Her fans think she’s got a 65% chance which I don’t buy, and her haters will occasionally bid that down to 45% or so. On a good day I can get in between. There is not so much to be made by shorting Bernie, even at a place like predictit.org where his market price is an astounding 23%, factoring in fees, market depth, etc.

        • Deiseach says:

          Paddy Power will give you 1/7 for Hillary winning the nomination and 7/1 for Bernie winning it; Joe is 16/1 which might be a good price if you want to chance an each-way bet.

          For the Republicans, it’s Jeb Bush 7/5 which sort of seems right to me (I don’t think it’s a good idea for the Republicans to pick him, but he has the family machine to back him up) and Marco Rubio at 7/2, which had me going “Marco who?”

          As for winner, they’re giving Hillary as 10/11 and Jeb as 3/1. Hmmm – I don’t know. I think Sanders is more likely to split the Democratic vote if he insists on running rather than graciously standing aside for the good of the party and that’ll weaken Hillary going in; I could see disgruntled Sanders supporters simply not bothering to vote at all rather than vote for her, which would then be handing the election to the Republicans.

      • LCL says:

        I’d rate Biden running for President around 35%. I see the yes side of the bet as mostly counting on two factors:

        1. Most people in his circle are telling him to do it
        2. He is likely to listen to those people

        1 is a reasonable assumption for any prominent politician considering a run. You’d expect their circle to comprise people who think the politician is great or at least has something to contribute. Plus the self-interest of a small chance that they’ll end up being in the circle of the President.

        2 would normally be a sticking point for someone with Biden’s political experience, as you’d expect him to realize it’s a low percentage political move. But the bet there would be that he’s looking for a distraction from grief and a way to return a sense of purpose to his life after the saga of his son’s illness and death. Not because of anything specific about Biden, but just because those are things people in general are often looking for in similar situations. A presidential run is certainly a distraction and certainly infused with purpose. It may make him more susceptible to listening to his boosters than he normally would be.

      • John Schilling says:

        At this point in Biden’s career, his choices come down to running for president, or becoming a forgotten and impotent something-emeritus. He’s too old, and his reputation too weak, for him to either start down a new path or wait until 2020.

        It is possible that he’s ready to wind down his career and focus on non-political matters for the remainder of his life. But if a career politician’s choices come to the greatest job in politics ever and forgotten emeritus-something, I wouldn’t bet everything on his choosing “forgotten emeritus-something”. If an otherwise impotent nobody has even a long shot at more power than any other human being ever, he might just go for it.

        It would, of course, be an extremely long shot at this stage. But his campaign, at this stage, would need to be little more than a placeholder campaign as the Democratic Alternative Who Isn’t A Crazy Socialist Just In Case Hillary Drops Dead; he doesn’t have to undertake the herculean effort of building a campaign that can defeat Hillary because that’s not going to happen, and so he can expect to take over the working bits of Hillary’s campaign if his own is ever really going to make a go of it. And he doesn’t have to plan on staying in office past 2020, because that’s not going to happen either.

        • onyomi says:

          I think one thing which has become very apparent with the current Republican field is that running for president has become a very winning life choice, even (or maybe especially) if you fall into that large not-entirely-implausible-but-still-not-likely-to-win category which now seems to include most senators and governors.

          There is only shame if you win the nomination and lose the general. There is absolutely no shame in losing the nomination unless it was viewed as yours to lose in the beginning (as it is with Hillary now). On the contrary, running an even remotely plausible campaign for the nomination greatly increases your speaking fees, your probability of getting a tv show, of being invited on tv shows to comment all the time, etc.

          I think there are fewer democrats doing what the republicans are now doing only because the establishment is so settled on Hillary that running against her might be viewed as an act of party disloyalty. Though I do wonder if there’s something about conservatism or the GOP more generally that has started to encourage this for them in particular. Maybe conservatives, though they have a low view of government in the abstract, have an even higher view of the majesty of the office of the POTUS than do most liberals, meaning that even trying to run for the office puts you in that lofty “potential president material” category.

          • Jaskologist says:

            It’s much less meta than that. The past few election cycles wiped out a lot of the potential Democratic talent. There are currently 31 Republican governors to 18 Democratic governors, 31R:11D state legislatures, 54R:44D senators, and 246R:118D House members. The problem is compounded by the fact that the ones who got wiped out were more likely to be the newer, younger ones with potential. Now they’re just “that guy who couldn’t win his own state.”

          • ddreytes says:

            I think that a large number of Democrats aren’t doing it this year specifically because of Hillary, yes. It’s hard to overstate the degree to which Hillary is dominating the Democratic establishment ATM.

            But I also think that the incentives are very different for Republicans and Democrats – not so much for an ideological reasons, but for two other reasons.

            First, my sense is that relatively speaking, Democrats tend to care more for party loyalty, Republicans for ideological loyalty. That makes it much easier to justify a presidential run if you’re a Republican.

            Second, I think the media landscape is dramatically different for the right and the left, which changes the career paths, and hence the incentives to run for President if you don’t have a good chance to win, on each side.

            EDIT: @ Jaskologist – while there’s some truth to that, I think there’s also a reasonable number of people who would probably or certainly be running this year if Hillary weren’t. It is certainly not the case that every potential candidate in the Democratic Party is running.

      • Steve Sailer says:

        Scott,

        My impression of Tetlock’s Good Judgement Project is that the results have been a little less counterintuitive than that. The winners tend to be very smart, very well-educated people who are fascinated by foreign policy and put in long hours mastering the up-to-the-moment situation in various countries around the world.

        I signed up to participate, but decided it would be too much work for me to achieve even mediocrity among a bunch of first-rate world affairs junkies.

      • Deiseach says:

        It’s Joe Biden, he’s likely to do feckin’ anything.

        I agree that you’ll clean up on Bernie Sanders, though. Bet big! Bet the next six months’ rent!

    • Shieldfoss says:

      4. How does one “do better” on a test the answers to which are controversial? According to who?

      Wait until the answers are no longer controversial, see who the experts now side with.

    • RCF says:

      Why do we need a prediction market for sports? Don’t they already exist in the form of bookmaking?

  13. Michael Watts says:

    1. Five Thirty Eight is down the night before an election, so you search for some other good sites that interpret the polls. You find two. Both seem to be by amateurs, but both are well-designed and professional-looking and talk intelligently about things like sampling bias and such. The first site says the Blue Party will win by 5%; the second site says the Green Party will win by 5%. You look up the authors of the two sites, and find that the guy who wrote the first is a Young Earth Creationist. Do you have any opinion on who is going to win the election?

    No.

    3. Schmoeism and Anti-Schmoeism are two complicated and mutually exclusive economic theories that you don’t understand at all, but you know the economics profession is split about 50-50 between them. In 2005, a survey finds that 66% of Schmoeist economists and 33% of anti-Schmoeist economists believe in pre-Clovis settlement of the New World (p = 0.01). In 2015, new archaeological finds convincingly establish that such settlement existed. How strongly (if at all) do you now favor one theory over the other?

    Relating this to the real world, I’d expect to learn that the survey results were something like this:

    | Schmohist | Anti-Schmohist
    --------------------------------------------
    Pre-Clovis support | 8 | 6
    Pre-Clovis against | 4 | 12
    No opinion | 188 | 182

    …and I’d be unlikely to want to draw any conclusions from that. If the economists do actually all have opinions on Pre-Clovis settlement of the Americas, that would be because it’s been politicized (either generally, or theoretically just among economists), and I wouldn’t want to draw conclusions from that, either. The opinions that end up in political bundles are not, I believe, generally viewed as related to each other, instead achieving high correlations because of group enforcement.
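
    As a rough sanity check on those invented numbers: restricted to the economists who expressed an opinion at all, that split is nowhere near the p = 0.01 in the prompt.

    ```python
    from scipy.stats import fisher_exact

    #                     Schmoeist   Anti-Schmoeist
    # pre-Clovis support       8            6
    # pre-Clovis against       4           12
    odds_ratio, p_value = fisher_exact([[8, 6], [4, 12]])
    print(odds_ratio, p_value)   # odds ratio 4.0, p ~ 0.13 -- not much to update on
    ```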

    There’s an essay on Less Wrong that specifically addresses this question: http://lesswrong.com/lw/gt/a_fable_of_science_and_politics/ . I’ll excerpt at length:

    we shall suppose the first Undergrounders manage to grow food, find water, recycle air, make light, and survive, and that their descendants thrive and eventually form cities. Of the world above, there are only legends written on scraps of paper; and one of these scraps of paper describes the sky, a vast open space of air above a great unbounded floor. The sky is cerulean in color, and contains strange floating objects like enormous tufts of white cotton. But the meaning of the word “cerulean” is controversial; some say that it refers to the color known as “blue”, and others that it refers to the color known as “green”.

    In the early days of the underground society, the Blues and Greens contested with open violence; but today, truce prevails—a peace born of a growing sense of pointlessness. Cultural mores have changed […] The conflict has not vanished. Society is still divided along Blue and Green lines, and there is a “Blue” and a “Green” position on almost every contemporary issue of political or cultural importance. The Blues advocate taxes on individual incomes, the Greens advocate taxes on merchant sales; the Blues advocate stricter marriage laws, while the Greens wish to make it easier to obtain divorces; the Blues take their support from the heart of city areas, while the more distant farmers and watersellers tend to be Green; the Blues believe that the Earth is a huge spherical rock at the center of the universe, the Greens that it is a huge flat rock circling some other object called a Sun. Not every Blue or every Green citizen takes the “Blue” or “Green” position on every issue, but it would be rare to find a city merchant who believed the sky was blue, and yet advocated an individual tax and freer marriage laws.

    One day, the Underground is shaken by a minor earthquake. A sightseeing party of six is caught in the tremblor while looking at the ruins of ancient dwellings in the upper caverns. They feel the brief movement of the rock under their feet, and one of the tourists trips and scrapes her knee. The party decides to turn back, fearing further earthquakes. On their way back, one person catches a whiff of something strange in the air, a scent coming from a long-unused passageway. Ignoring the well-meant cautions of fellow travellers, the person borrows a powered lantern and walks into the passageway. The stone corridor wends upward… and upward…

    Now history branches, depending on which member of the sightseeing party decided to follow the corridor to the surface.

    Barron thought of the Massacre of Cathay, where a Blue army had massacred every citizen of a Green town, including children; he thought of the ancient Blue general, Annas Rell, who had declared Greens “a pit of disease; a pestilence to be cleansed”; he thought of the glints of hatred he’d seen in Blue eyes and something inside him cracked. “How can you be on their side?” Barron screamed at the sky

    Daria stared down the calm blue gaze of the sky, trying to accept it, and finally her breathing quietened. I was wrong, she said to herself mournfully; it’s not so complicated, after all. She would find new friends, and perhaps her family would forgive her…

    “Stupid,” Eddin said, “stupid, stupid, and all the time it was right here.” Hatred, murders, wars, and all along it was just a thing somewhere, that someone had written about like they’d write about any other thing.

    It’s hard to read that essay as support for the idea that when the devotees of one idea largely agree on one side of a different contentious idea, the fact that they’re right about the first one suggests they’re also right about the second. Sure, this is a constructed example, and quite arguably an artifact of politicization, which does things this way on purpose. But I don’t see how you can avoid both the problem of the issue being politicized, and the problem of extremely small sample sizes in the group of “has opinions on two very different issues”.

    • Nita says:

      Relating this to the real world, I’d expect to learn that the survey results were something like this:

      | Schmohist | Anti-Schmohist
      --------------------------------------------
      Pre-Clovis support | 8 | 6
      Pre-Clovis against | 4 | 12
      No opinion | 188 | 182

      YES. Thank you.

      And when you select for controversial opinions on several issues, your sample will end up even smaller and weirder.

  14. Samuel Skinner says:

    “3. Schmoeism and Anti-Schmoeism are two complicated and mutually exclusive economic theories that you don’t understand at all, but you know the economics profession is split about 50-50 between them. In 2005, a survey finds that 66% of Schmoeist economists and 33% of anti-Schmoeist economists believe in pre-Clovis settlement of the New World (p = 0.01). In 2015, new archaeological finds convincingly establish that such settlement existed. How strongly (if at all) do you now favor one theory over the other?

    4. As with 3, but instead of merely being the pre-Clovis settlement of America, the survey asked about ten controversial questions in archaeology, anthropology, and historical scholarship, and the Schmoeists did significantly better than the anti-Schmoeists on 9 of them.”

    Isn’t communist versus capitalist an example of why this wouldn’t work? If one position is ideological, then like all political groupings it will bring along its bundle of correct and incorrect positions.

  15. Varqa says:

    To me, the second situation (with the man who claimed that Bigfoot caused 9/11) seems much more clear-cut than the other scenarios, and I think the reason is that he clearly has some additional piece of information that you (and most other people) don’t have. In the other situations, the same information is available to everyone, and their differing opinions depend more on how they interpret it. These look like two fundamentally different types of situations.

    It seems like the second scenario (hidden information) is less common, especially since people will generally share their information in support of their arguments, so everyone ends up having the same information again. However, “ability to find novel pieces of information” seems like an ability that could definitely contribute to a general factor of correctness.

    • RCF says:

      How is it clear that he has private information? And wouldn’t that make him being correct about Bigfoot give less information about the Ark?

  16. Earthly Knight says:

    1. I would favor the pollster who believes in evolution over the pollster who does not. Any interpretation of data confers numerous degrees of freedom on the interpreter, and I am a lot more confident that the evolutionist’s judgments will not be driven by wishful thinking. Also, hackneyed joke about how it’s a logical truth that the Green Party will never win an election so P=0.

    2. Learning that bigfoot caused 9/11 would so undermine my confidence in the media, the government, common sense, and natural science that I have difficulty imagining what my resulting belief set would look like. I am not sure I would still be confident that my name is E.K. I think this scenario might just be too weird to elicit useful responses.

    3. If I found out that one sect of economists had correctly predicted the discovery of pre-Clovis settlement while another had not, my first, second, and third instincts would be to look for explanations that make it a spurious correlation. Maybe the Schmoeist economists are more liberal, and pre-Clovis settlement comports well in some subtle way with liberal ideology. Maybe risk-takers are drawn to Schmoeism, and believing in pre-Clovis settlement was, at the time, a pretty big risk. Only after all of the hypotheses that don’t depend on some sort of cross-disciplinary prescience are ruled out (and good luck with that!) would I seriously entertain the idea.

    4. If the Schmoeist economists consistently outperform the anti-Schmoeists on a wide array of scientific predictions, I would be inclined to think they were more reliable and subscribe to the Schmoeist newsletter, yes.

    • Deiseach says:

      Any interpretation of data confers numerous degrees of freedom on the interpreter, and I am a lot more confident that the evolutionist’s judgments will not be driven by wishful thinking.

      Oh really? And the evolutionist-pollster might not be hoping, for example, that the Pomegranate Party will win because they’re running against the Sacrifice to Baal Party, and the Baalists are those crazy creepy religion-obsessed nutjobs while the Pomegranates are nice and liberal and recycle and worry about global warming and really they’re just the right kind of folks – so if maybe perhaps a tiny tiny bit of data massaging makes them look like they’re doing a small bit better than they actually are, that may encourage the floating voters to go Pommy not Baal?

      You’re absolutely sure that could never, ever happen? Because look at the results of the recent British election and how everyone that counted as a pundit was convinced the Tories under Cameron would get hammered and that didn’t happen in reality.

      • Earthly Knight says:

        Everything you say is totally possible. But in the case of the creationist I have strong evidence that his judgment is often distorted by wishful thinking. I have no such evidence for the other pollster.

  17. knz says:

    I’m reminded of the story about how unscrupulous hedge funds sometimes create lots of parallel, slightly different funds at the same time. One of these will, by sheer chance, inexplicably beat the market five years in a row. The hedge fund goes on to tout that fund as evidence that they’re investing geniuses.

    Let’s say there’s an issue that smart people at large are split 50-50 on, and tribe X is really convinced that one side of the issue is right. Then it turns out that tribe X is right. There are two kinds of possible explanations:

    1) tribe X collectively reasoned out the correct solution, because they’re better at being right
    2) tribe X chose their side based on a variety of cognitive biases or life experiences common to tribe X members. or, a small number of influential but possibly fallible tribe X members chose that side, and it spread as a cultural meme to everybody else (e.g. Eliezer and many-worlds), etc. (i.e., any factor that has nothing to do with how smart or ‘generally correct’ tribe X members inherently are)

    Seeing that tribe X was right certainly causes a Bayesian update that makes (1) more likely than before. But I think in any case, the prior for (1) is very, very low, exactly _because_ people at large are split 50-50. If an issue is 50-50, there must be compelling evidence and good arguments on both sides.

    The result is that if you ran this kind of experiment on, say, ten 50-50 issues and 1000 tribes, and you found one tribe that got all the issues right, I would still think it was far more likely they got lucky.
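
    The arithmetic behind that intuition, assuming the tribes are effectively guessing independently at random:

    ```python
    p_perfect_by_luck = 0.5 ** 10                        # one tribe nails all ten 50-50 calls
    expected_lucky_tribes = 1000 * p_perfect_by_luck     # ~0.98 such tribes expected
    p_at_least_one = 1 - (1 - p_perfect_by_luck) ** 1000
    print(expected_lucky_tribes, p_at_least_one)         # ~0.98, ~0.62
    ```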

    • brad says:

      The problem is more insidious when it isn’t one hedge fund company deliberately seeking to employ a deceptive strategy, but 10,000 independent hedge funds all starting out in a given year. The five year survivors may well be honestly convinced that they are investing geniuses rather than lucky.

  18. onyomi says:

    I think almost everyone makes these kinds of judgments all the time, consciously or subconsciously, and for better or for worse. I tend to assume people who make a lot of spelling and grammar errors are more likely to be wrong, for example. This may be a helpful shortcut, but there are definitely pitfalls as well: people tend to assume that people with certain accents (British) are smarter than others (Southern), that people who agree with them on one highly questionable issue (religion) are more trustworthy on other questions, etc.

    • Jon Gunnarsson says:

      This may be a helpful shortcut, but there are definitely pitfalls as well: people tend to assume that people with certain accents (British) are smarter than others (Southern)

      Are you sure that’s false?

  19. Loquat says:

    Do you consider Yudkowsky himself to be someone who might qualify as having a high General Factor of Correctness? Because I haven’t been able to take him seriously ever since finding out about Roko’s Basilisk – it’s a perfect example of smart people taking lots of individually-logical steps to come to a completely ludicrous and insane conclusion.

    Also, if a small group of people prove to be consistently really good at predicting not only major international events but also “sports games, industry trends, the mean global temperature in 2030, or what the next space probe will find” and they’re not terribly well-educated or well-informed about the things they’re so consistently correct about – it’s possible they’re actually making decisions in a way that everyone else could theoretically learn to imitate, but it’s also possible they’re just psychic.

    • brad says:

      You don’t even need to go off into overly specific predictions of the far future. You can just look at the linked article and the bizarre insistence that the many-worlds interpretation is a “slam dunk”. Given that there’s no evidence for it (though none against it either), having such high confidence strikes me as irrational.

      Maybe, *maybe*, if there were enough backward looking predictions a la Pre Clovis to demonstrate a remarkable track record, but if there had been I expect it would have been prominently trumpeted in the post.

      To put it another way, even if the correct contrarian cluster exists, I’d be skeptical of anyone’s claim to be in it without some actual evidence.

    • Saint_Fiasco says:

      >Do you consider Yudkowsky himself to be someone who might qualify as having a high General Factor of Correctness?

      I don’t think you can, because Yudkowsky’s contrarian opinions are on matters that are not settled yet. If we wait until some of his weird theories are confirmed/disconfirmed, we will then be able to say what his Factor of Correctness is.

    • MicaiahC says:

      Wait, what? Yudkowsky banned someone for what was basically LW-specific trolling; how does that indicate he’s wrong re: the basilisk?

      • Loquat says:

        Because he didn’t ban on grounds of simply trolling, he banned on grounds of the idea being intrinsically dangerous, and according to RationalWiki he still doesn’t believe it’s safe to talk about the concept of “acausal trade with possible superintelligences”.

        • Nornagest says:

          RationalWiki is not what I’d call a reliable source on this topic, but I’ve seen references to the basilisk blanked as late as a year ago. I think Roko deleted his own account, though, albeit only after Eliezer threw a fit and blanked a bunch of his posts.

        • RCF says:

          Cite for the claim that he believes that it is intrinsically, rather than contingently, dangerous?

        • Anonymous says:

          He banned discussion because specific people had nightmares.

    • 27chaos says:

      I am not in love with Eliezer, but he doesn’t believe in Roko’s Basilisk.

      • Jiro says:

        Eliezer doesn’t believe in the exact version of Roko’s Basilisk that was causing trouble, but he does believe that Roko’s Basilisk-like ideas are serious threats.

        • Izaak Weiss says:

          Believing that Basilisks are possible doesn’t seem that crazy. Human minds probably have exploitable flaws.

          • James Picone says:

            That’s not the crux here – Roko’s basilisk was only metaphorically a basilisk, in that it made people upset and part of the argument was that merely being aware of the argument made things worse.

            Broadly speaking, the idea was that future AI might acausally punish you for not doing everything you can to make future AI happen.

          • Airgap says:

            Except they aren’t exploitable by Basilisks because most people think Roko’s Basilisk is retarded. Only nuts like Eli are affected. For most people, Basilisks have no persuasive power, so “Acausal Punishment” is just sadism. The simplest solution is to ban anyone who doesn’t think Basilisks are retarded from working on AI. Arguably, this would have no effect on MIRI.

            It strikes me that the best argument for banning discussion of Basilisks is that it allows you to spring it on prospective AI researchers during an exam. “Consider the following AI risk…[Basilisk]. Discuss.” The correct answer is: “Are you serious? Is this a joke? Because it sounds like one.” Either that or Eli is attempting to capitalize on intellectual envy (“I’m smarter than Eli at something! Hooray!”) to further immunize the population of the world against the Basilisk. Because 99.9999% isn’t good enough for existential risks or some shit.

            Of course, the real explanation is that everyone has off days, but the other explanations are much more fun.

    • RCF says:

      Can you unpack “psychic”?

  20. discorded says:

    It may be relevant that the tails come apart and so we should be wary of expecting the best economists to also be better at thinking about anthropology than the almost-best economists, for instance. That is, there may be a correctness factor for the general population that doesn’t work the same way for experts. If there’s a correctness factor for economics and another one for anthropology and they’re correlated, they may nevertheless be uncorrelated or even negatively correlated among the best economists.

  21. E. Harding says:

    1. No opinion
    2. I’d start digging as fast as I could.
    3. By a small to moderate margin.
    4. By a good to moderate margin.

  22. kenzo says:

    > I think in a sense this is the center of the entire rationalist project.

    And that’s why the central rationalist text is Tetlock’s Expert Political Ju– wait, what? who? *Many Worlds*?

    Seriously, though. A better criticism is that even if there’s a general factor of correctness, that doesn’t mean that if you look at a particular subgroup that’s correct in one particular area they’re more likely to be correct in another particular area. (Statistics! Our intuitions are bad.) As a perfectly plausible mechanism and relevant subgroup-drawing, experts may know a lot in their own fields, but are actually worse when they venture an opinion at all (or at least a contrarian one) on expertise-requiring questions in other fields, because you only have so much time to become expert in things. [edit: discorded beat me to it above]

  23. Zach Pruckowski says:

    Economists who believed in pre-Clovis American settlements may have just sat at the faculty luncheon with a particularly clever and contrarian anthropologist/archeologist/historian and been convinced of it. If they didn’t carefully weigh all the evidence and make the correct conclusion, that would be a major confounding factor. (Of course, if a person has a network of people who tell them correct contrarian things, then that’s useful too)

    • Scott Alexander says:

      I agree there’s certainly noise, I’m asking if there’s also signal beneath it.

      • Peter says:

        I think another question is, “if there’s signal, how much?”

        It’s possible for there to be some weak cue that lets you predict something better than randomly guessing, but not necessarily much better. In principle you could integrate it with a bunch of other weak cues, and possibly even some strong cues, and get a better result than if you didn’t use that cue… but in practice the gains might be so small that the noise is likely to mess things up more than the signal helps.

        I’ve often had this problem when working with machine learning. Yes, there are lots of techniques that are reasonably robust against throwing lots of “junk features” at them, but “reasonably robust” isn’t the same as perfect and I’ve often got slightly better results by keeping the weakest cues away, even if the weakest cues on their own allow predictions substantially better than chance.
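
        A rough sketch of this effect, assuming scikit-learn is available (here the extra columns are pure noise rather than weak-but-real cues, and the outcome varies with the seed, but the flavour is the same):

```python
# Toy demonstration: a classifier given only the 5 informative features
# frequently (not always) scores at least as well as the same classifier
# given those 5 plus 45 pure-noise columns.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# shuffle=False keeps the informative columns first, so X[:, :5] is "strong cues only"
X, y = make_classification(n_samples=300, n_features=50, n_informative=5,
                           n_redundant=0, shuffle=False, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with_junk = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
strong_only = LogisticRegression(max_iter=1000).fit(X_tr[:, :5], y_tr).score(X_te[:, :5], y_te)
print(with_junk, strong_only)
```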

      • Zach Pruckowski says:

        I may not have made my case well with that example. Let’s try the Real Life example of me. If I had been asked my opinion of pre-Clovis American settlements at any point between 2003 and reading this SSC post, I would have said “Yes”, including at the times it was a contrarian opinion. However, I would have said “Yes” because “Clovis reigned in the 5th century, and obviously the Native Americans must have been in the Americas before that”. So I would have given the correct contrarian answer completely by coincidence. Worse, I would have had high confidence despite not remotely understanding the question.

        If you’re going to use holding Correct Contrarian opinions as a proxy for general ability to discern truth, you’ve got to ensure that the contrarian is holding the correct opinion because he/she properly utilized their truth-discerning mechanisms. Unless you’re going to tell me that reading the first paragraph of this comment makes you more likely to trust my opinions in my areas of expertise (local politics/campaigns and software).

        • Have a simple, possibly boring heuristic– it’s reasonable to assume that ancient things started earlier than is currently believed. Our knowledge of the past is very incomplete, which means that there’s a good chance that earlier examples of whatever will eventually turn up. There are common sense limits– no agriculture before the existence of life– but we aren’t anywhere near those limits.

    • The ability to figure out which contrarian academic is likely to be right is one of the things that might make you both successful in your own field and better at reaching conclusions in other fields. More generally, the ability to distinguish honest arguments by people who care whether they are right from attempts to persuade by people who care whether you reach their conclusion, is an important input to making sense of the world.

      • LCL says:

        I agree, this example would be a good illustration of the skill underlying the correct contrarian cluster (if there is one). To the extent such a thing exists, it would be largely based on skills related to evaluation of evidence and sources. Hearing a good argument from a good source and finding it convincing is a demonstration of that skill.

        I don’t think it’s very useful to think about the correct contrarian cluster in terms of polymaths who accumulate world-class expertise in numerous fields. Even if there do turn out to be a few of those, it’s not really actionable. World class multi-field polymaths would be difficult to cultivate. And you can’t just ask them about a subject in a field they haven’t studied yet; since their correctness is based on deep study, you’ve got to give them time to study it.

        Much more useful would be some factor of source and evidence evaluation that allows people to judge correctness from hearing/reading divided expert opinion, even second or third hand like through journalists’ reports. If this exists, it’s likely to be teachable. Or at the very least you can find people with a lot of it, give them a field they haven’t studied and some cursory information about it, and learn something useful from their impressions.

  24. Shmi Nux says:

    Huh, I originally misinterpreted the Correct Contrarian Cluster as Correct (Contrarian Cluster), whereas it’s the Cluster of Correct Contrarians. No wonder that Eliezer’s term didn’t make sense to me.

  25. D says:

    with higher belief in near-term global catastrophic risk (-0.8, p – 0.01)

    Is it really -0.8? Perhaps -0.08?

    This paper by Tetlock et al. is relevant here. They had a bunch of people forecast geopolitical events over two years. Then they looked at how various individual differences variables and experimental manipulations (e.g., putting people in different kinds of teams) correlated with prediction accuracy. Their main results are in Table 3; the signs are negative because lower Brier scores are better. However, I am somewhat skeptical of these results as the structural equation model they use (Figure 4) seems to be misspecified (several Heywood cases).
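
    For anyone unfamiliar with Brier scores, a minimal sketch of the binary version (the study uses the generalisation to multiple outcome categories, but the idea is the same):

```python
def brier(forecasts, outcomes):
    """Mean squared gap between forecast probabilities and 0/1 outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

print(brier([0.9, 0.8, 0.3], [1, 1, 0]))  # ~0.047: confident and mostly right
print(brier([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.25: permanent fence-sitting
```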

    • RCF says:

      Or +.8? It doesn’t make sense to say that there’s a correlation with higher belief, and then give the correlation as a negative number.

  26. Tom Richards says:

    1. Yes. Whatever I already thought. My prior confidence in pollsters is low enough that knowing one of them has a borderline-certifiable belief in another area doesn’t make me think him or her much less reliable than the other.

    2. I dig. The Bigfoot 9/11 discovery has already, as I mentioned above, made me much less confident in my previous overall worldview, making unlikely predictions by anyone significantly less unlikely. Even with the tip from the Bigfoot-predicting tramp, I probably only rate my chances of finding the Ark somewhere around… 15%, maybe, but that’s a high enough probability to recommend digging given the potential payoff.

    3. Negligibly-very slightly.

    4. Meaningfully.

    As an aside, I’m pretty sure the general factor of correctness is something Pratchett intends us to think of Vetinari as scoring astonishingly highly in.

  27. Smithely says:

    If you had asked a white racist what would happen in Zimbabwe/Rhodesia once (black) majority rule came into effect, I am strongly confident that the racist would probably have made a better prediction than an anti-racist. They would have been ridiculed and vilified by the anti-racists for this, but time has shown they would have made more accurate predictions.

    What does this tell us? Is racism correct? If racists can be right about some things, are they right about others?

    Today’s racists are far more likely to deny that climate change is happening. They are ridiculed and vilified for this in the same way our Rhodesian racist (who was right) would have been about his “racist” predictions.

    Even people who have a good record of correctness on some issues can have blind spots on others. We are all influenced by ideologies and social groups to which we belong.

    A person who is correct on all the dry, scientific, non-political issues can throw their rationality out the window when it comes to issues with a political dimension (which seems to be pretty much everything now).

    Correctness probably rewards cynicism and social detachment. Most people probably wouldn’t want to pay the costs required to be correct on everything.

    • Alraune says:

      What does this tell us? Is racism correct? If racists can be right about some things, are they right about others?

      “Revolution creates freedom” factoid actually statistical error. Average revolution creates 0 freedoms. Washington George, who lived on his own continent full of natural law theorists, was an outlier and should not have been counted.

    • Matt M says:

      Indeed. I think ultimately this boils down to a question of which is more important – correctly predicting an outcome or the “quality” of your reasoning?

      Let’s say that tomorrow, we stumble upon some sort of completely scientific and incontrovertible proof that climate change is entirely fake and represents zero threat to humanity whatsoever and never will. What would the reaction be by those who, previously, were fervent believers in climate change? An apology? An admission of a mistake? Compliments to the foresight and prescience of their former opponents?

      Not bloody likely. Almost certainly, they would grudgingly admit being incorrect, but immediately follow it with something like “Those deniers just got lucky. Based on the evidence we had at the time, advocating for intervention against the threat of climate change was still the most reasonable and ‘correct’ position to take.”

      My guess is that this is how most of the world sees racist Rhodesians – as idiots who got lucky. And there probably aren’t enough major issues like this that show a huge level of contention followed by some ultimate outcome where predictive ability can be properly judged to form a large enough sample size to draw any meaningful conclusions on the data.

      • RCF says:

        Judging by my interactions with pro-AGW advocates, they would refuse to accept the evidence, declare anyone who presents the evidence a stooge for oil interests, present specious arguments for why it should be ignored, and then threaten banning for anyone who tries to explain the problem with their arguments.

    • Anecdatum: a racist once told me that if black South Africans got the vote, the situation would absolutely definitely turn out the same as Zimbabwe.

    • Airgap says:

      Do you think white South African/Rhodesian racists would have been best described as cynical and socially detached? Or racists generally? How many do you know? I think what’s really going on here is that most of the detached, cynical racists already comment at SSC. To form a different opinion, you’d have to seek racists out, and why would you?

  28. mister k says:

    My problem with this is subject expertise vs. generalism. For instance, take EY’s oft-repeated claim that someone who accepts many-worlds is more likely to be rational: such a person would, at the very least, have had to look into the question. That is not a free use of one’s time, and it detracts from study in one’s chosen field. That is, the more you know about other fields, the less you know in your chosen field compared to a hypothetical you who devoted all their time to it.

    This idea, specialism vs generalism, is something I’ve noticed in day to day life. During my academic studies, some colleagues would essentially devote all their time to their field. As a result, they were better at it than I was. I am fairly confident that I had a broader base of knowledge than them, but if it came to a question about this particular field, they were more likely to be right than I was.

    If an economist was right about the Clovis sites, I might wonder whether they’d spent some time studying the subject to get there, which might make them less expert in economics.

    • MartinW says:

      That sounds a bit like the RPG fallacy.

      Yes, in theory everybody has a finite amount of time available for studying and learning things, making knowledge a zero-sum game. But in practice, some people are just astonishingly good at picking things up quickly, while others will never be more than mediocre in any field even if they spend every waking hour of their life on learning about it. (Sorry, Malcolm Gladwell.) I doubt that raw amount of time spent on studying is very often the limiting factor in how much someone can know.

      In fact, it seems that expertise in one area and above-average knowledge/skill in other areas are very often positively correlated. In my experience, the more competent someone is in their day job, the more likely it is that they have at least one unrelated hobby or interest which they also excel in.

      E.g. take Richard Feynman, who was a world-class physicist and a semi-professional artist and who decided to go and study Mayan hieroglyphs one day and almost immediately discovered some things which the experts in the field had missed. If I learned that Feynman, after spending just a few weeks of learning about some new area of science which he had known nothing about beforehand, had formed an opinion about it which was contrary to the expert consensus in that field, I would be willing to bet more than even money that Feynman would turn out to be right in the end.

      • mister k says:

        I really don’t think this is generally true. There certainly exist individuals of such extreme intelligence that they do well in multiple fields, but, actually Feynman is a great example. In one of his books he mentions that he and his fellow physicists had spent lots of time inventing clever new stats techniques, only for him to go talk to a statistician who was already aware of them! The point being that as smart as Feynman was, he alone wasn’t able to go into a field he wasn’t familiar with and invent things no-one had ever seen before. So, using this principle you might find Feynman making incorrect predictions about statistics based on his more limited knowledge and then conclude he wouldn’t make very good predictions in physics!

        Yes, a very smart person has the ability to master lots of subjects, but not the time! Feynman was good at lots of things, but he was expert at one.

        • MartinW says:

          But note that in your anecdote, although it turned out that Feynman had been re-inventing the wheel, he wasn’t actually wrong — the techniques he had discovered were valid and useful, just not as original as he’d hoped. And in the case of the Mayan hieroglyphs he did in fact discover something new which was later accepted as valid by experts in Mayan archeology.

          Anyway, my point was not that Feynman could just walk into any field and start second-guessing the experts there. My point was that his reaching impressively high levels in several other fields, does not appear to have come at the cost of his accomplishments in physics (unless you want to claim that he would have won two Nobel prizes if he had focused himself more).

          And if he did take a contrarian position on a topic which he had only started investigating a few weeks before, then based on his track record I would assign a pretty high probability to the hypothesis that he had indeed discovered something new which all the domain experts in that field had previously missed.

        • This corresponds to my father’s story about Leo Szilard. Szilard would come to my father (they were both U of Chicago professors) with some idea in economics. It was generally right, but something economists already knew.

          • Jaycol says:

            @David Friedman It’s interesting you bring up that anecdote, since when I try to think of an anecdotal counterexample to yours, I actually would tend to think of someone like you yourself. If I wanted to counter the “RPG Fallacy” mentioned above (which I think is a great name), I would point to someone who manages to be an expert in price theory and legal theory and historically-informed medieval cooking, combat, and crafts, who has dozens of science fiction concepts and ideas for practical inventions going back decades visible online, who can explain both trial by ordeal and law without the state in comprehensible and rather convincing ways, who remembers and can draw from this bank of knowledge as well as from the millennia of the shared Judaic tradition at will to illustrate a point in conversation, and who manages to teach, travel, talk, and still be a regular contributor in numerous online conversations like this one on at least a weekly basis, with insightful comments to contribute to each. It’s not obvious to me that you are seriously substituting the possibility of being an even better world’s-top-price-theorist when you decide to design a better rope bed.

            This is not just flattery; it really does seem like there is some more general factor at work than just expertise in a field or two. Yet for another totally unscientific and anecdotal illustration, I go to U. of Chicago and my girlfriend works at a pet supply store on 55th street in Hyde Park. Their clientele thus include a number of Nobel Laureates and other experts, but while that seems to be related to how much they are willing to invest in their pets (it is a specialty store and their staff are pet nutrition experts), the smartness of these clients does not necessarily predict the quality of their pets’ diets. Just last week my girlfriend had to help a certain laureate who helped found the Institute bearing your father’s name, when he came in with a seemingly malnourished labradoodle with stomach issues that she thinks may have been caused in part by how this respected economist fed it.

            It’s not immediately apparent from our (completely unsystematic) examination of anecdotal evidence that either one case (expertise in x as anticorrelated substitute to expertise in any nonspecific field ~x) or the other (expertise in x positively correlated with expertise in any nonspecific field ~x) is more common, or the rule rather than the exception. And expertise is distinct from novelty, and novelty from correctness. Still, you seem not only to be able to become an expert in a variety of fields, but to use expertise in one to contribute novelly to others (e.g. in re: trial by ordeal, many historians might simply stop at “well the priest really believed in God, the past is a different country, we’re limited by the Enlightenment episteme, etc.” and not go on to “yes, but it really does seem like God miraculously favored the defendant a lot more often than one would expect!”–I expect that it required an economic intuition to look at this historic legal process from the perspective that explained why it unexpectedly worked). And so despite the difference between your theories being plausible and them actually being empirically correct, sometimes I am tempted to use your position as a barometer for matters like AGW, etc., even though using that principle consistently with anyone seen as smart might end up making my dog sick.

          • Jaycol:

            While I may in some context have explained the theory about why trial by ordeal worked, I didn’t come up with either the theory or the evidence. Both are the work of my friend Peter Leeson.

    • keranih says:

      In my field, the most respected figures are, in general, specialists in a given area. In my experience, they are very clear on what is and is not in their area, and in what areas they defer to the opinions of others.

      In a way, having a definitive opinion is a sign of being a generalist with limited knowledge.

      Perhaps a “general factor of correctness” is a social skill of determining what sort of other people to listen to, which may mean “only listen to people who don’t make definitive statements” or “only listen to people when they put a great deal of caveats and preconditions on their statements.”

      I could see where, over time, this would repeatedly lead to a confidence interval that did not include the null.

  29. James Babcock says:

    The continuous, weak-evidence version of this effect is very hard to apply. But the negative selection version works very well. It’s simple: whenever you read something that’s stunningly stupid, close the tab and pretend you’d never read anything by that author. Do this consistently while also keeping careful track of which sources are primary sources and which sources are secondary sources, and you will sometimes find that what looked like a 50/50 split among experts was actually a 100/0 split among non-stupid experts, balanced out by secondary sources and an occasional idiot.

    • Gilbert says:

      So you expect there to be a class of people who never say anything stunningly stupid? If you interpret “stunningly” strictly enough for that to be true, I suspect you won’t get to eliminate enough experts for the heuristic to be worthwhile.

    • Rowan says:

      I expect the main effect of a heuristic like this is false positives on “stunningly stupid” causing you to disregard the opinions of those in opposing tribes.

    • This is reminding me of Geneen Roth, a writer of self-help books. I found her books engaging and initially plausible.

      She wrote a book about figuring out *exactly* what you want to eat, then eating the least tolerable amount. Her weight was low normal and life was good.

      Then she discovered she had a systemic yeast infection, and most of the foods she liked were actually making her sick. One of the few foods she could tolerate was peanut butter, and she gained weight.

      She developed an emotionally stressful method of emotional change and losing weight and she made a bunch of money from it.

      She lost all the money to Bernie Madoff, and wrote a book of financial advice.

      At that point, I was done with Geneen Roth. Perhaps I should take another look at her books to see if I can figure out if there’s something she’s getting wrong in general, and if it’s a mistake I’m making.

    • Airgap says:

      whenever you read something that’s stunningly stupid, close the tab and pretend you’d never read anything by that author.

      The problem is people keep linking me to it. And then I have to explain why it’s stupid. I’m starting to think pretending I believe popular experts is a better solution.

  30. Andrei says:

    I first thought that “Clovis” referred to the Merovingian king and that “pre-Clovis settlement” meant something like “it would have been preferable if Europeans had colonized the Americas before Clovis was king”.

  31. Ith says:

    They seem to be succeeding through some mysterious quality totally separate from all of these things.

    After reading through the Psychology Today article linked from the Wikipedia page on the Good Judgment Project summarizing the research, the qualities required don’t seem that mysterious to me.

    From https://www.psychologytoday.com/blog/the-sports-mind/201505/whos-best-predicting-the-future-and-how-get-better :

    Their studies show that:

    – Individuals with above-average IQ scores tend to perform better on political forecasting tests, and
    – Those who possess more relevant crystallized intelligence—for example, larger vocabularies and a more sophisticated understanding of current events and world affairs—outperform people with less.

    Further, good political forecasters tend to be more open-minded. They are more willing to adjust their beliefs in light of new evidence and less susceptible to holding onto opinions dogmatically. The most accurate forecasters also possess a deterministic world view. They understand the world through the laws of probability, rather than belief in supernatural mechanisms, such as fate, destiny, or providence.

    Good forecasters possess a larger appetite for intellectual challenges. They are drawn to problem solving and score higher on scales measuring one’s “need for cognition.” They have a competitive streak and become personally invested in getting the right answers. And they exhibit an active desire to outperform other test-takers.

    According to the researchers, the interpersonal context in which forecasters make their predictions matters, as well. Elite forecasters are more likely to seek out conversation with other forecasters. They enjoy discussing their pet theories and are more likely to probe the knowledge of others.

    So basically, good forecasters are people who:
    – Want to have beliefs about the world that accord with reality instead of wanting to confirm their existing beliefs
    – Primarily rely on evidence, actively seek out new evidence, and are willing to update their beliefs in light of that evidence
    – Have the knowledge and intellectual capacity and skills to act on their desire to be correct about reality

    Note: Haven’t commented here before, so apologies in advance for any mistakes in formatting.

    EDIT: Link to full paper is here: http://pps.sagepub.com/content/10/3/267.full.pdf

    The summary above seems to be basically accurate, although the paper obviously has more detail. The researchers have included some data on the characteristics of superforecasters vs. others, so for anyone interested in this it’s well worth a read.

    • Ith says:

      Hm, I just get a subscription page when trying to access the paper now; luckily I saved a copy. I suppose I’ll take this as a sign that it was my fate to read that paper so that my beliefs could be supported and reality bent the rules for me.

      • LCL says:

        Were you previously using wireless at a library (or perhaps on a college campus)? I’ve found that some publishers recognize connections coming from a subscribing library and grant access even if you don’t go through the library page or log in. Sage might have been one.

    • Steve Sailer says:

      For whatever reason, it’s hard to find the Good Judgment Project’s old questions online, so the project sounds more mysterious than it ought to. But when I finally did get to read the questions, it turns out that they tend to be of the “Will the current Grand Poobah of Lower Slobbovia still be in office on December 31, 2015?” variety.

      If you want to do better than pure guessing, it helps to study up on Lower Slobbovia current affairs, history, economics, constitution, rumors, etc. And it would help to be able to read Slobbovian and maybe have visited Slobbovia. (Or find a betting website and just delegate your answer to that.)

      My impression was not that you could ace the contest by having some kind of rare insight into how the world works in general or which way the winds of history were blowing. Instead, it looked like a huge amount of burning the midnight oil.

  32. Hackworth says:

    “when they go home, they take off the lab coat and relax with some comfortable nonsense. And yes, that does make me wonder if I can trust that scientist’s opinions even in their own field ”

    This only shows the author’s own biases against “comfortable nonsense”. What even falls under that term? Doing hard drugs? Monthly binge drinking? Watching Judge Judy religiously? Reading shallow works of fiction? Working on your classic car? Playing table games with your friends? Having and keeping friends in the first place? Doing charity work unrelated to your field? Having any hobbies at all? Would anyone question Richard Feynman’s abilities in researching and teaching theoretical physics because in his spare time he was also a painter, drummer, safe-cracker, nightclub regular, and womanizer?

    Nearly everyone occasionally needs a break from their primary field of interest, but that does not automatically call into question their abilities in that primary field. The only people I can think of who are both exceptional at what they do and never need a mental break from it by doing something else are the ones I’d call savants.

    • Shieldfoss says:

      What does even fall under that term?

      Believing or disbelieving things based on religion/politics/other instead of using a basis of evidence and rational thought.

      Would anyone question Richard Feynman’s abilities in researching and teaching theoretical physics because in his spare time he was also a painter, drums player, safe cracker, night club visitor, and womanizer?

      None of those are true beliefs and none of those are comfortable nonsense beliefs, because none of those are beliefs.

      • Hackworth says:

        The quote I commented on was not limited to beliefs and ideas. It was about “[taking] off the lab coat and relax[ing] with some comfortable nonsense.” The distinction between beliefs and activities is artificial; in the end, everything we believe and everything we physically do is for the benefit of our brain anyway. Outside of non-scientists’ imagination, scientific work is far from flashy and exciting all the time. It can be boring and repetitive, and the road to success (if success happens) is paved with setbacks. Scientists are still human, and humans need physical and mental distractions if they don’t want to burn themselves out on their job. If Judge Judy or the Holy Mass will do the job without interfering with their work, then by all means, let them.

        But it doesn’t matter anyway; the argument works the same for pure beliefs and ideas. The Wikipedia list of Christian thinkers in science shows that believing in science and believing in religion are not mutually exclusive, and that you can hold both and still be notable in science. There are people who are able to distinguish between those two aspects of their lives, and I harbor the belief that this is the real meaning of rationality. It’s not about shutting out all superstitious thought, the way Catholicism would like you to shut out all heretical thought, but about being able to separate the two in a real way, so that one does not obviously interfere with the other.

        • Shieldfoss says:

          The quote I commented on was not limited to beliefs and ideas.

          Yes it was. “Nonsense” is a truth value and only makes sense in the context of things that can have truth values.

          The Wikipedia list of Christian thinkers in science shows that believing in science and believing in religion don’t have to be mutually exclusive to be notable in either.

          This would not surprise the author, since that is his entire point.

          There are people who are able to distinguish between those two aspects of their lives, and I harbor the belief that this is the real meaning of rationality.

          You do you, I guess. Meanwhile, the author is trying to find a consistent method of separating truth from nonsense.

  33. Commenter says:

    “The fourth problem: is there a difference between correctness and probability calibration? Suppose that Alice says that there’s a 90% chance the Greek economy will implode, and Bob has the same information but says there’s only an 80% chance. Here it might be tempting to say that one of either Alice or Bob is miscalibrated – either Alice is overconfident or Bob is underconfident.”

    In the long run you can figure this out with a logarithmic scoring rule: penalize each forecast by the negative logarithm of the probability assigned to what actually happened, and compare totals (lower is better). If Alice says 90% chance of collapse and Bob says 80%, then if there’s a collapse Alice is penalized -ln(.9) and Bob -ln(.8), so Alice comes out ahead; if there isn’t, Alice is penalized -ln(.1) and Bob -ln(.2), so Bob does. I guess you could call these points “nats” or whatever.
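
    A minimal sketch of the standard logarithmic scoring rule for a single binary question (the penalty is -ln of the probability assigned to what actually happened; lower totals over many questions suggest better calibration):

```python
import math

def log_penalty(p_collapse, collapsed):
    """Penalty is -ln of the probability assigned to what actually happened."""
    return -math.log(p_collapse if collapsed else 1 - p_collapse)

for name, p in [("Alice", 0.9), ("Bob", 0.8)]:
    print(name, round(log_penalty(p, True), 3), round(log_penalty(p, False), 3))
# Alice: 0.105 if collapse, 2.303 if not; Bob: 0.223 if collapse, 1.609 if not.
```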

    • Gilbert says:

      This breaks down if
      a) calibration is harder on some problems than others (e.g. everybody is well-calibrated on dice-throws) and
      b) some people make more predictions in some of these categories and some more in others.

  34. Gilbert says:

    Three somewhat disconnected points here:
    1. The math guarantees you’ll find a general factor for basically anything. (Yes, it’s more complicated than that, but not interestingly so.) The question then is if it is something real or just an artifact. Interestingly, the realness of a general factor for predictions would be unusually testable. Basically one would measure it for past testable predictions and then remeasure for then-testable now-future predictions twenty years later. If it stays basically stable it’s probably real; if not, not. My guess is factors for specific fields (economics, archeology…) will be stable, the general factor won’t.
    2. Yudkowsky claims the domain experts are clearly wrong on several otherwise basically unrelated things. This basically requires the general factor of correctness not only to exist but to be really important.
    Because if it is that important, then it may be the main explaining factor for some real people’s opinion sets and then someone claiming the experts are obviously wrong on lots of things might just have a high general factor of correctness. On the other hand, if that factor is not a very important thing, that option goes away and someone claiming the experts are obviously wrong on lots of things is almost certainly a crank.
    3. If you identify the general factor of correctness with rationality, it would basically be explained by biases affecting people more or less. The individual biases are mostly measurable. Carrying on from there, there are two possible stories:
    -There could be a general factor for non-biasedness which then should be a less noisy version of the factor for correctness
    -and/or there could be some biases affecting practical judgments much worse than others.

    • The math guarantees you’ll find a general factor for basically anything. (Yes, it’s more complicated than that, but not interestingly so.)

      Yes. This is an important point that deserves promotion. But I either don’t understand, or disagree with your second point that:

      The question then is if it is something real or just an artifact. Interestingly, the realness of a general factor for predictions would be unusually testable. Basically one would measure it for past testable predictions and then remeasure for then-testable now-future predictions twenty years later. If it stays basically stable it’s probably real; if not, not. My guess is factors for specific fields (economics, archeology…) will be stable, the general factor won’t.

      I’m not an expert in factor analysis, but based on what I know, my intuition is that there isn’t a strict dichotomy between “real” and “artifact.” And what you’re proposing in order to quantify a factor’s “reality” sounds like test–retest reliability. The reality of a concept is more like validity. I think your proposal measures something important, but not reality/validity.

      p.s. For what it’s worth. In 2012 I was made one of the Good Judgment Project’s first superforecasters and starting in 2013 I joined the GJP statistical research staff.

      • Whoops. I spoke too soon. I don’t think it’s true that a general factor always appears. It will always appear when sub-tests are all positively correlated, but I’m not sure we have good reason to think all sub-tests of “correctness” will be positively correlated.

        http://bactra.org/weblog/523.html

        http://infoproc.blogspot.com/2013/04/myths-sisyphus-and-g.html

        Regardless though, I think the main point is that the existence of a general factor would be far from sufficient to prove that the general factor is very useful.

      • Gilbert says:

        Short answer: you’re probably right.
        Long answer:
        In principle I think there is a fairly clear distinction between causal (“real”) and non-causal factors: if you adjust for all the causal factors, the residual correlations should vanish. Of course, in practice tests for that have low power, so if you postulate a sufficiently complex hierarchy of factors you won’t be able to prove its unreality. But in principle you could tell, you just need infinitely many test subjects! 🙂

        On prediction, my basic thought was that a general factor of whatever will be stable if the correlated variables are stable. So e.g. an IQ test can be reliable because it is made of tests for arithmetic, geometry, verbal stuff etc., all of which would be individually reliable. If all the subscores oscillated widely with the general factor staying stable, that would actually be a good argument for g being real.

        Now on predictions I thought we wouldn’t have that problem, because then a high general factor of correctness would actually be a prediction about the future, i.e. an actual experiment. Thinking again, you’re probably right about that being too optimistic. The problem is that expertise in specific domains is probably real and thus stable and that would give us the same problem as with psychometric tests.

    • Smoke says:

      The math guarantees you’ll find a general factor for basically anything. (Yes, it’s more complicated then that, but not interestingly so.)

      Can you explain/link to this math?

      • Gilbert says:

        Like Michael Bishop said, it’s technically for every group of positively correlated variables.

        You can stretch that further if you find excuses to ignore some correlations. For example, some people here thought experts would do worse outside of their area of expertise. If that turns out to be true, we can look for a general factor of non-expert prediction. And then if we find some category of prediction nobody is very good at (say games of chance or specific economic predictions), we can just declare that category not very p-loaded. So if a general factor sounds plausible, then so will some selection of basic variables that guarantees its existence; basically, the data will confess to almost any general factor you would bother looking for.

        For example, psychiatric diseases are prone to comorbidities, i.e. mostly positively correlated. Of course some of them are defined as opposite excesses, so you would probably set your original variables as disturbances of healthy mind functions. Voilà, now you’ll get a general factor for insanity.
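
        A small numpy sketch of the point about positively correlated variables, using PCA as a crude stand-in for a proper factor model; the shared-influence weight of 0.6 and the eight “tests” are purely illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
shared = rng.normal(size=(5000, 1))                   # one shared influence, of whatever origin
tests = 0.6 * shared + rng.normal(size=(5000, 8))     # eight weakly correlated "tests"

corr = np.corrcoef(tests, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)               # eigenvalues in ascending order
print(eigvals[-1] / eigvals.sum())                    # first "factor": roughly 3x the 1/8 baseline
print(np.sign(eigvecs[:, -1]))                        # and it loads with one sign on every test
```

        The mathematics hands you a first component either way; whether anything causal sits behind it is the separate question.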

        • Steve Sailer says:

          Yes, but that seems like it would be a sensible finding: people who are expert in one difficult field tend to be more expert in other fields than people who aren’t expert in any fields. And games of chance are outside the realm of potential expertise.

          This seems pretty inevitable, but a lot of people don’t understand it to their pocketbook disadvantage. Many people believe that there exist expert systems for playing the state lottery or whatever that will give them better than random results. Most of those credulous individuals, however, are not experts in any professional field. Conversely, people who are recognized professional experts in some individual field are much less likely to believe in the existence of expert systems for beating the lottery.

          The reality of this is hard to recognize, however, because most people who think about this kind of question use examples of quite restricted range rather than the general population. For example, dentists are said to be notoriously poor investors in real estate developments, often getting fleeced by professional developers into putting their money into projects where the lead developers have crafted the contractual terms excessively in favor of the expert insiders (themselves).

          Okay, but this example is leaving out pretty much the bottom 50% or so of the population, the folks who might imagine that somebody has an expert system for beating the lottery. Compared to the general population, dentists on the whole would probably score above average in almost any field of expertise.

  35. The psychologist Keith Stanovich is constructing a rationality quotient test. Not sure if this is quite what you’re after, though.

    http://io9.com/a-test-to-measure-how-rational-you-really-are-609412488

    Also, ClearerThinking (led by Spencer Greenberg) are working on a similar project.

  36. US says:

    Time spent on topic X is time that cannot be spent on topic Y. The existence of a hypothetical ‘general correctness factor’ seems to also implicitly be an argument that time spent on a topic should not be much related (in the extreme case: unrelated) to belief accuracy on that topic. This seems wrong.

    One might also think of the correctness factor as an amplification parameter which modifies belief accuracy based on signal strength; a ‘people high in ‘general correctness’ can do more with less’-type thing. You still come across the problem that you need a signal to amplify, and it seems to me that experts have such a lot of knowledge about the topics in which we’re interested that in general the disparity in initial signal strength between the ‘right-in-general guy’ and the ‘expert’ should be so high as to make the (non-field-knowledge-related) amplification irrelevant. I’m not saying knowledge about one topic cannot be applied to increase accuracy on other topics, but the topics need to be related, which brings into serious doubt the extent to which such an effect can ever be as ‘general’ as seems to be implicitly assumed; it seems plausible to me that one should be able to use knowledge of mathematics and statistics to improve judgments on e.g. a variety of medical topics, whereas it seems less clear just how statistics would help you evaluate the accuracy of historical sources dealing with the true state of affairs in 16th-century Naples.

    A related problem which may have been mentioned in other comments is that how you evaluate correctness is not perfectly clear, and ideally certainly more complicated than it is made out to be in the post. People can be right about stuff for the wrong reasons, or they can be wrong for the right reasons (e.g.: ‘not enough evidence’).

  37. lumenis says:

    The corporate culture at Amazon.com includes the veneration of a list of “Leadership Principles”. One of these principles is “Be Right a Lot”.

    Unlike meaningless but externally similar lists at every large corporation (e.g. integrity, excellence, obsequiousness), the Leadership Principles are part of *actual* daily conversations about what the right/Amazonian thing to do is.

    This happens more often, admittedly, with Principles like “Customer Obsession” and “Frugality”, which are applicable to individual decisions. However, it does also lead to occasional discussions of what it even *means* to “Be Right a Lot”, how that’s in conflict with having a “Bias for Action”, and how the heck one even goes about assessing such a propensity in an interview.

    • Smoke says:

      What secret does Amazon have, that other companies lack, for getting people to actually pay attention to its corporate manifesto?

  38. Hopefully I didn’t misread the post (bit short on time), but I think one other issue is where the available “evidence” is actually noise and people’s opinions are just noise. I think stock market prediction is sometimes a little like this. Often people predict up or down, or they predict a crash or correction is coming “soon”, and if you have a decent number of punters, some will be correct by chance alone (for example, with binary up-or-down calls over five periods, 2^5 = 32, so you only need 32 people to “discover” someone with the prophetic ability to have predicted the last five trends). If you select those people as having a high GF of correctness, you’re in trouble. I’m guessing you could detect the noise-like nature of the evidence from the distribution of opinions, but I haven’t thought about it a lot and my stats isn’t going to win any awards either.

  39. Patrick L says:

    No one is going to point out that archaeologists are pretty split on the pre-Clovis stuff, about 2/3 for and 1/3 against? Much of the evidence for pre-Clovis is pretty loose, and the field puts up with it because they generally like the people proposing it. Which isn’t to say there aren’t one or two good sites and some good work done… but the science is far from settled.

    On the topic at hand, I’ve met some people I consider to be lucky, as if they had a ‘luck stat’ like in an RPG. As a rational person I can lie to myself and say that memory is faulty, that we forget data that would contradict our experience, and that we are more prone to seeing coincidences than not. This doesn’t change the fact that saying the rational answer *feels* like a lie. It feels and looks like luck exists.

    There’s a theology term, preternatural, which refers to something that doesn’t violate divine law but goes against the patterns of normal phenomena. High levels of coincidence, unexplained foresight, and other things that can be attributed to luck would all fit that description. Preternaturalism was historically associated with witchcraft and the devil; perhaps, if it had a genetic component, it has been artificially selected out of humans. Those who had more preternatural luck were burned at the stake.

    The problem with this is that it basically means “these people are born more compatible with seeing the code of the matrix than you are”, which isn’t very useful. I mean, it is in the sense that you can look up the answer in the back of the book just by asking them, but having the answer isn’t anywhere near as useful as the how and why something happened. If you have no knowledge of the underlying system, and all you have is correlations, once the correlations break down you’re left with less than no knowledge.

    • Sarah says:

      I am a “lucky” person. Like, eerily lucky. I used to think it was Providence. I now think it’s a combination of

      *some actual luck (affluent parents, high IQ)
      *something like “resourcefulness” (a sort of general “survivor quality”, being good at making things work out for myself; my dad’s description was “if somebody dropped you in Greenland, you’d find your way home”)
      *something like “gratitude” (noticing things that go well for me, being able to appreciate extremely positive experiences)
      *the Matthew Effect making good luck and good decisions compound

      • kernly says:

        *some actual luck (affluent parents, high IQ)

        Beauty is more important than either of those. And you’ve got it, unless you faceplanted into a lathe or something recently.

        I wonder how many ugly high IQ people from rich families think of themselves as “lucky.” My guess – not many. Perhaps I am too cynical. But then IMHO the primary “luck” these days is not wealth or beauty but how aggressive your neoplasms are and how many decades it takes them to show up. Don’t count your decade-eggs before they’re laid!

      • Steve Sailer says:

        Also, some of luck is having a sense of which games not to participate in. For example, I am a poor real-time decisionmaker, so I’m a cautious driver, and I have never ever felt the urge to become a pilot because I would likely be an unlucky one.

        John McCain, who lost five airplanes, might be an example of somebody who should have resisted family example and gone in for a different career than naval aviation where bad luck would be less expensive. On the other hand, he’s a tough guy who survived losing five airplanes, so he’s got that going for him.

    • Jonathan Paulson says:

      Whenever I hear about people being “lucky”, I immediately think of this study:

      Wiseman gave both the “lucky” and the “unlucky” people a newspaper and asked them to look through it and tell him how many photographs were inside. He found that on average the unlucky people took two minutes to count all the photographs, whereas the lucky ones determined the number in a few seconds.

      How could the “lucky” people do this? Because they found a message on the second page that read, “Stop counting. There are 43 photographs in this newspaper.” So why didn’t the unlucky people see it? Because they were so intent on counting all the photographs that they missed the message.

      • AJD says:

        I can’t find the original paper this is based on, but I wonder how it distinguishes between the possibilities that the unlucky people didn’t notice the message and that they didn’t trust it.

        • Vaniver says:

          I can’t find the original paper this is based on, but I wonder how it distinguishes between the possibilities that the unlucky people didn’t notice the message and that they didn’t trust it.

          As I recall, it did the obvious thing: they asked them if they saw the message.

  40. kuudes says:

    Re Lesswrong survey: I recently plotted Lesswrong survey 2014 general opinion on logit scale to http://www.leijuvakaupunki.fi/images/box/lw2014pestlogit_correct.png

    Confidence in the opinion increases from left to right, so that a probability estimate of 25%, for instance, comes out at roughly x = -1. Towards the top is the prevalence of the opinion in the survey population, so that if 25% of the population hold the opinion to be less probable than x, then y ≈ -1. The succession rule may twist this a bit, or not; it adds 1 to the occurrences and 2 to the total number of observations to correct for 0% and 100% answers.

    So global warming is, on the whole, held to be the most credible proposition, and religion the least credible, by the LessWrong survey population. There seems to be a cluster of religion, supernatural, and god, of which god is held most credible and religion least credible.
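
    A minimal sketch of the transformation as I read it (the function name is mine): Laplace’s rule of succession smooths the proportion holding an opinion, and the result goes on a logit scale, which keeps 0% and 100% answers finite.

```python
import math

def smoothed_logit(holders, respondents):
    p = (holders + 1) / (respondents + 2)   # rule of succession
    return math.log(p / (1 - p))            # logit

print(smoothed_logit(250, 1000))   # ~ -1.1, matching "25% comes out around -1"
print(smoothed_logit(0, 1000))     # ~ -6.9 rather than negative infinity
```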

  41. Sarah says:

    An *apparent* “general factor of correctness” will arise if there are such things as broadly applicable principles.

    For example: physicists, for good or for ill (and in my opinion mostly but not always for good) frequently attempt to explain things in other fields. So do economists. Frequently, what they’re doing is taking some mathematical tool (power laws, statistical mechanics, etc) and applying it to new things. If certain tools have broad applicability, then someone who uses them everywhere will have an overall tendency to be right when “expert consensus” is wrong.

    In a more subtle and subjective sense, it’s possible that certain *mental tactics* are broadly applicable. Anything from simple stuff like “check the data” and “consider that you might be wrong” to aesthetic sensibilities about “elegant” solutions. If there are certain tools of correct thinking, then people who use them will come up with better ideas across the board.

    Some tools of correct thinking are common among educated and conscientious people. (“Check the facts”, “listen to people who disagree with you instead of just insulting them”, etc.) Some are rare, and may not even be articulated yet. It may be that the secret of being Ramanujan is “oh, I just Fwibble”, and the problem is that nobody has yet specified *how to Fwibble.*

    I don’t think there’s any evidence as yet that there’s *one* tool rather than many. “The scientific method” is a pretty generally applicable tool; so is “try statistics on it.” I’ve known a few people who are *weirdly* good at getting correct intuitions on technical topics they know nothing about, and they may be using some nonverbal, “intuitive” part of their brains in a “generally correct” manner.

    • Adam says:

      This reminds me of how, for instance, high-energy physicists came to dominate derivatives trading. It isn’t a general factor of correctness. It’s that applied math works a lot better than intuition, even very finely tuned intuition and a great business and finance education, at more things than just physics.

    • Steve Sailer says:

      Speaking of “try statistics on it,” my impression is that there remain large potential gains among the general population and even among academic elites in further propagating basic insights such as regression toward the mean, a concept that was weirdly late in scientific/philosophic history in being enunciated: Galton finally drew sustained attention to it in the late 19th Century.

      I can recall not being told until my second year of MBA school in 1982 in the most advanced marketing modeling course offered at UCLA that a finding that A correlates with B could be due to:

      – A causes B
      – B causes A
      – C causes both A and B
      – Coincidence
      – Error or bias
      – Fraud, etc.

      Having that kind of checklist burned into my brain has proven very helpful over the years.

      I think since then the educated public has gotten a little better about understanding this, but still has a ways to go.

      In general, a lot of people are obsessed with “law” rather than with “tendency,” and generally turn off their brains as soon as they notice an example proving something isn’t a Law.

    • J. Quinton says:

      This reminds me of the test I took to get into linguist school in the Air Force. Apparently I scored high enough on the ASVAB that I was offered the chance to take the DLAB, which tests how well you can learn a language by giving you a set time period to learn a fake one.

      The rationale for offering the DLAB seems to be based on the idea of a general factor of correctness.

  42. Anon. says:

    I think the Duhem-Quine thesis (AKA confirmation holism) is a useful tool here. Theories are not tested in isolation; any test of a theory is also a test of its assumptions “all the way down”. A confirmation of one piece is actually a confirmation of the entire “web of belief”, and therefore strengthens all other pieces as well.

    The General Factor of Correctness emerges not as a property of people, like IQ, but of systems of belief.

  43. Jonathan Paulson says:

    “Outside the laboratory” doesn’t seem very paradoxical to me. It seems like exactly what you would expect with many different professions that require largely disjoint sets of knowledge and skills. In particular, the argument of “Outside the laboratory” seems like evidence that what you need to be a good e.g. biologist is depth in biology, not in a more general skill like epistemology or rationality.

    The Tetlock result is really intriguing; I have no idea how it could be true. Is there a plausible explanation?

  44. Sarah says:

    Basically, this is a question about “to what extent can ideas generalize”, or “how simple is the world”, or “is all knowledge just domain knowledge.”

    The human brain is evidence against the strongest possible statements of the “all knowledge is domain knowledge” thesis. We have *one*, rather small, organ which is responsible for everything humans know. We seem to have a repeating *pattern* in cortical architecture, which makes some researchers suspect that there’s a “single cortical algorithm.” If knowing about A really told you nothing about B, then we’d need specialized “hardware” for every task. We don’t. The universe is at least *somewhat* intelligible. In a very weak sense, having a human brain is a “general factor of correctness.”

    I think of this in sort of a PCA sense — are there “principal components” that explain a lot? Can you capture a lot of information about the world in a “sparse” or “compressed” way?
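
    A small sketch of the PCA framing (an added illustration, not Sarah's; the data are synthetic): observations that secretly live near a two-dimensional subspace hand most of their variance to the first two principal components.

        import numpy as np

        rng = np.random.default_rng(0)

        # 1000 observations of 20 "opinions", secretly driven by 2 latent factors
        # plus independent noise.
        latent = rng.normal(size=(1000, 2))
        loadings = rng.normal(size=(2, 20))
        data = latent @ loadings + 0.3 * rng.normal(size=(1000, 20))

        # PCA via SVD of the centered data.
        centered = data - data.mean(axis=0)
        _, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
        explained = singular_values**2 / np.sum(singular_values**2)

        # Usually well above 0.9 here: a "simple" world is one where a few
        # components carry most of the signal.
        print("variance explained by first two components:", explained[:2].sum())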

    The answer is quantitative rather than qualitative. I’ll find myself in arguments with radical empiricists saying “no, really, not everything is a special case, you *can* generalize, THE HUMAN MIND IS A THING” but it would also be true to argue with radical rationalists that “no, really, not everything is just an application of your general theory, the universe isn’t as friendly as that, I fucking *dare* you to try your theory on this problem.”

    The world is *pretty* simple but not *perfectly* simple. If you want to quantify that, you need to talk about actual Kolmogorov complexity or power spectrum or whatever.

    • kernly says:

      We seem to have a repeating *pattern* in cortical architecture, which makes some researchers suspect that there’s a “single cortical algorithm.” …If knowing about A really told you nothing about B, then we’d need specialized “hardware” for every task. We don’t.

      We’ve certainly got some specialized “hardware.” Some parts are much more important than others, and different parts do different things. And the universe is somewhat intelligible… But some parts of it are much more intelligible than others. And we often make things intelligible, or communicable, with analogies to the sorts of things our brains handle well.

      The world is *pretty* simple but not *perfectly* simple.

      Some things (most things?) can be dealt with practically when defined simply, and that class of things contains everything we have knowledge of. Nothing is actually simple. Some things are simple to deal with when we’re careful not to set our standards too high.

      • Steve Sailer says:

        The glass is usually part full and part empty.

        That’s been one of my mantras ever since I laboriously made my way through Arthur Jensen’s “The g Factor” in 1998.

        It’s not a very exciting revelation that with most propositions in human affairs you can legitimately talk about how it’s partly true and partly false, but it’s a useful notion to keep in mind.

  45. JoPo says:

    One possible revision of the “always agree with expert consensus on everything” heuristic:

    “Always agree with expert consensus on everything *except* when it is likely skewed by signaling.” In which case it is always interesting to see which Blue Tribe experts are willing to make concessions to the Red Tribe and vice versa. So “Weigh more heavily the opinions of experts who are willing to send negative tribal signals.”

    Corollary: “Weigh more heavily the opinions of non-angry experts.” There are a surprising number of angry experts.

    • Steve Sailer says:

      A major problem we have today is that there are certain forms of bias you aren’t supposed to notice. For example, during the 1970s sociobiology wars, you saw academics named Gould, Lewontin, Kamin, and Rose being very angry at academics named Wilson, Hamilton, Smith, Dawkins, and Williams, and at their scientific forerunners with similar surnames.

      Of course, there were exceptions such as Trivers, but still …

  46. sarah says:

    Perhaps related is the SciCast project, which tried to answer a range of prediction questions about science and technology by crowdsourcing the answers (like the IARPA project cited here). It’s also funded by IARPA. They’re not open for submissions right now, but every so often they’re looking for new questions to answer, and votes/predictions on those questions. See more at scicast.org.

    “Unlike other forecasting sites, SciCast can create relationships between forecast questions that may have an influence on each other. For example, we may publish one question about the volume of Arctic sea ice, and another about sea surface temperature. Forecasters can later link the two questions together, and make their forecasts for ice volume depend on sea temperature. Once they are correlated, SciCast will instantly adjust ice forecasts whenever the temperature forecast changes!”
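
    The linking mechanism reads like a miniature Bayesian-network update. A toy sketch of the idea (an added illustration, not SciCast's actual engine; the questions and numbers are invented):

        # Question A: "Will sea surface temperature be above normal?"
        # Question B: "Will Arctic sea ice volume set a record low?"
        # A forecaster supplies B conditional on A:
        p_ice_low_given_warm = 0.70
        p_ice_low_given_cool = 0.20

        def ice_forecast(p_warm):
            # Law of total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A).
            return p_ice_low_given_warm * p_warm + p_ice_low_given_cool * (1 - p_warm)

        print(ice_forecast(0.50))  # temperature at 50% -> ice forecast 0.45
        print(ice_forecast(0.80))  # temperature rises  -> ice forecast jumps to 0.60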

  47. For those interested in the Superforecasters, we are having a conference and meetup in London on the 24th October which will give answers to some of the questions raised here https://www.eventbrite.co.uk/myevent?eid=17800174802

  48. Jordan D. says:

    Let’s see-

    1) Yes, but-

    I’m prepared to draw an inference in favor of the second site based on that fact, but not a very strong inference. It would color my perceptions only in the exact situation you posit, where I have no other exposure to the data. Even knowing the positions of the parties would lead me to make my own guesses based on past experience, and this data wouldn’t be enough to overcome my own self-assurance.

    2) Maybe.

    I don’t want to make a special snowflake answer here, but I find this question harder than the others. If someone predicts that bigfoot exists AND did 9/11, that’s very powerful evidence that they have access to information which I don’t. Ordinarily, I would see that kind of information as uncorrelated with the otherwise-distinct assumptions you need to make to locate the Ark underneath EPCOT… but the fact that Bigfoot did 9/11 tells me that my model of reality is disastrously wrong anyway.

    So ordinarily I’d say ‘This man has shown himself to be very right about past things, but the chances of THIS prediction being right are so incredibly low that I can’t update to positive probability’. And then an invisible garage dragon would eat me or something because bigfoot did 9/11 and none of my priors are trustworthy.

    So I probably would go to Florida, under the suspicion that I’m actually insane or something, and neither finding nor not-finding the Ark of the Covenant would make me calm about the whole situation.

    3. I don’t favor either from that information.

    But I think that question could still prove your point in another way. Like, if the anti-Schmoeists happen to be somewhat more correct about an obscure point of precolonial history, I don’t consider that literally no information, but I do think it’s more likely to be noise than signal. Beyond the obvious questions of what kind of reasoning and scholarship are signaled by historical knowledge as opposed to economics, this feels likely to be the artifact of another correlation. Maybe most Schmoeists are Communists and most anti-Schmoeists are fascists, and something about those political viewpoints makes each more prone to motivated reasoning in favor of or against pre-Clovis settlements. (This does not, of course, disprove the possibility that a preference for communism or fascism is connected to General Correctness.)

    But if anti-Schmoeists are mostly in favor of pre-Clovis theories AND support a more nuclear-heavy grid AND oppose artificial electric deregulation legislation, I’m more likely to listen to them. And I think that’s pretty much your position anyway.

    (…is what I would like to say, but economics is a field full of motivated reasoning landmines. Even trying to avoid them, I’d probably end up supporting whichever theory led to the results I liked best.)

    4. Oh whoops I answered 3 before I read this. Yes, I would feel a little better about the Schmoeists then.

    But I would still be worried that all of these things come from a ‘history and culture correctness’ mechanism which is only loosely affiliated with economics.

  49. DavidS says:

    This might be more about over-confidence, but I’m sure I saw something which argued (from some survey/study) that the best people at making predictions in a field were those in related fields, compared both to specialists and more distant people. So people whose life’s work was studying Iranian politics made worse predictions about Iran than people whose life work was studying Egyptian politics, and vice versa, although both were better than those who studied Jane Austen. I think the thing I read argued that the problem was that the specialists tended to put too much weight on the details only they knew which were often less important than the obvious stuff. E.g. they would focus on the personality of a new Minister or the impact of a recent protest whereas others looked at very broad trends.

    More generally, my gut feeling about this is that among intelligent people, the two things that lead to their being wrong are either
    a) not taking the ‘intellectual responsibility’ to actually think about things themselves, and instead just deferring to the expert/tribe position. I associate this with ‘comfortable’ establishment types, and it’s probably more frequent.
    b) being so convinced of their own insight that they over-estimate how much weight to give to their own arguments/insights over others. I associate this with an autodidact tendency – at a personal level I’ve seen it in people who were very bright but didn’t go to universities/jobs where they met intellectual peers. This is obviously more frequent with rationalists/contrarians.

    So I guess that tending to be right might be steering between these: people who actually think about things for themselves, but don’t overvalue their own unique insight.

    As a side point, I use ‘What Scott Alexander thinks about things’ as a short-cut to guessing what’s true about things I’m not informed about. With a few exceptions where you seem to have a stronger than usual “personal” take on it – specifically ‘Social Justice’ stuff.

  50. moridinamael says:

    For me, this recalls how the usage of the word “bias” has shifted.

    Around here I think we’re all on board with the idea that we’re not perfect reasoners encumbered by weird neural glitches; we’re actually just a bag of modules and specific algorithms which interoperate to sometimes yield correct results.

    Lightning strikes a tree and creates a great fire which wipes out all the local flora and fauna. The leader of the tribe holds a council to determine what should be done.

    Bandmate 1 suggests that the band should relocate far away from trees so this never happens again.

    Bandmate 2 advocates for making more sacrifices to the Lightning God to avoid further punishment.

    Bandmate 3 points out that this type of thing happens very rarely and that doing nothing at all is the easiest option.

    The first suggestion will technically solve the problem but is an overreaction. Having no other information, I would predict (with low confidence) that Bandmate 1 would also promote avoiding all berries because one time they got sick after eating some berries, to cease hunting wildebeest because one time their cousin was gored, and to set a night watchman outside the cave every night because one night three years ago a bear raided some food stores. I might loosely refer to Bandmate 1 as anxious. Their plans for the future are skewed because their probability estimates are all skewed in a mutually correlated way in favor of absolutely guaranteeing security and safety.

    The second suggestion rests on a poor model of the world. However, other than wasting whatever resources are involved in making a sacrifice, it actually addresses the problem just as well as the third suggestion of “do nothing” except it makes the band feel like it’s doing something. Bandmate 2 may also believe that the recent drought was caused by the appearance of a handful of recent bad omens. In other words, Bandmate 2 is the incorrect contrarian cluster, the conspiracy theorist who manages to always be wrong in unfalsifiable ways.

    The third suggestion optimally conserves resources and recognizes the futility of taking action, but such a phlegmatic attitude is prone to go catastrophically wrong every once in a while. What if there really is a Thunder God, after all, and the decision to do nothing actively dooms the band? Where Bandmate 1 is anxious, Bandmate 3 may be neurologically biased in favor of conservative predictions, believing the world to be a pretty safe place at a gut level. But just because Bandmate 3 was right this time doesn’t mean Bandmate 3 is a superpredictor. Bandmate 3 may also be prone to insisting that the game will come back because they always come back, that the rains will come soon because droughts are unusual, that hunting wildebeest is safe because they’ve personally never seen anyone gored.

    So that was my off the cuff typology of why people are usually bad predictors for psychological reasons. I would love to see a psychological analysis of the superpredictors in the Good Judgement Project – do they have a unique disposition? Is that correlated?

    • I am Bandmate 4, and I recommend moving away from the area where all the flora and fauna are dead.

      • moridinamael says:

        Bonus round: a week after your band vacates the area, a meteor strike destroys everything that remained. How should the band interpret this information? How WILL the band interpret this information?

    • onyomi says:

      The efficacy of option 2 may help explain the evolutionary success of religious thinking: in many situations, the best course of action may be to do nothing substantive, yet take relatively low-cost actions which make you feel you are doing something. Compared to taking potentially high-cost, high risk substantive action, or doing nothing but living in dread, option 2 may actually be the best.

      • houseboatonstyx says:

        While making the sacrifice at the obvious location, ie the remains of the tree, Bandmate 2 notices something flickering. When he tries to pick it up, it burns him, and he shouts “The Lightning God has sent a message. We must all gather here right now!!”

        When they do, Bandmate 5 thinks “Hey, that stuff might be useful.” Several months later, Bandmate 2 says “Let’s make sacrifices to the Lightning God who gave us this gift.”

  51. Matt M says:

    “If they can beat the experts in those fields, then I start really wondering what their position on the tax rate is and who they’re going to vote for for President.”

    May I ask why? What exactly are you implying here?

    I feel like to a certain extent, we’re confusing “the ability to be correct about what really happened” with “the ability to forecast how popular opinion will move.” A “generally correct” person could predict who will be elected President, but does that mean this candidate is somehow superior to other candidates? Does that make their position on tax rates the best position? What exactly would you do with such information if you had it?

    Like, to take the Bigfoot 9/11 example, let’s say he phrases his comments very specifically, and the guy says: “Ten years from now, everyone will acknowledge that Bigfoot caused 9/11.” Let’s say that he is correct about this, ten years pass, and for whatever reason (perceived evidence, or maybe just the rhetorical ability of the pro-Bigfoot crowd), 95% of surveyed Americans do in fact believe that Bigfoot caused 9/11.

    Does this mean that the original predictor was right? Well sure, he was right about how opinion would change. But does this mean Bigfoot *actually caused* 9/11? Well no, not necessarily. If, one year later, we stumble upon some sort of “smoking gun” solid evidence that shows Bigfoot didn’t actually cause 9/11, but most people refuse to change their opinion on the issue, was the original prediction right or wrong? To what extent has this man shown the ability to be “generally correct?” How could I utilize his abilities for anything other than evil (like say, a politician who has no deeply held positions, and only wants to win elections, and could really benefit from having someone on his staff who can accurately forecast public opinion 10 years into the future)?

    • onyomi says:

      This is a good point, though part of the weakness of the whole enterprise is that expert consensus or overwhelming public opinion are the only yardsticks for determining correctness. So, if someone predicts “10 years from now, 95% of experts will think Bigfoot caused 9-11,” that prediction will, in ten years, be indistinguishable from “Bigfoot did, in fact, cause 9-11,” in terms of judging that person’s “correctness factor.”

      • Matt M says:

        Exactly.

        But if the prediction is “Bigfoot did in fact cause 9/11”, at Year 10, when 95% of experts agree, this person will be judged as accurate.

        But if new evidence is found at Year 11, and all the experts change their position, this person would then be judged as inaccurate.

        How “correct” you are about something depends on both the evidence and the expert consensus (these are often, but not always, correlated…) *at any given moment in time*. Someone who appears to be “generally correct” today could be “generally incorrect” tomorrow if popular and/or expert opinion moves on a few key issues.

        • onyomi says:

          I think this is why Scott chose the example of an archaeological question which is currently, say, 50-50 split, but which one can imagine being pretty decisively answered by some new discovery. Only if you were right before the new discovery do you get the correctness points. In other words, there probably are good criteria for judging this, but there may be fewer uncontroversial criteria than we’d like or expect.

  52. Vilgot says:

    Not that I read Less Wrong a lot, but it surprises me that I’ve never seen Less Wrong people (explicitly) discuss the work of Keith Stanovich. I’d recommend his book “Rationality and the Reflective Mind” for some very interesting reflections regarding intelligence and general rationality, from the perspective of a philosophically inclined researcher in biases, heuristics, and decision making. I think he’s brilliant. Currently reading another book he wrote, “The Robot’s Rebellion”. You guys should check him out! He seems right up your alley.

  53. Glossy says:

    If ability to evaluate evidence and come to accurate conclusions across a broad range of fields relies on some skill other than brute-forcing it with domain knowledge and IQ, some skill that looks like “rationality” broadly defined, then cultivating that skill starts to look like a pretty good idea.

    There’s no question in my mind that on average men are more rational than women. And that this is independent of IQ and education level. Men are more object, abstraction and fact-oriented and women are more people-oriented. Women can only really be interested in individuals. Not necessarily individuals they know personally. They ARE interested in celebrities. Unlike women men can find physical objects, facts and abstractions intensely interesting. When women are interested in objects (clothes let’s say) it is strictly for the impression that those objects will make on the people they know. They’re not interested in these objects in themselves.

    Obviously nerds are more fact, abstraction and object-oriented and less people-oriented than other men. Does that mean that nerds are more rational than other men if we control for IQ and education level? Maybe. If there is a difference, it’s definitely smaller than the male-female difference in rationality though.

    The people who do well in the interpersonal world operate subconsciously in it, through intuition. If you ask them how they do it, they would not be able to tell you. Conscious reasoning (“if A is true, then B must be false”, etc.) is more common and useful in the fact, object and abstraction worlds.

    People can be charmed, cajoled, guilt-tripped. Facts are implacable. The skills one needs to do well with people are very different from the skills one needs to do well with cold, hard facts. If there is a rationality factor independent of IQ and education, then I would guess that it’s related to this people-objects spectrum of mental orientation.

    There is a theory that people of northern European background are more rational (Finns are an extreme, I guess) than others because they lived on isolated homesteads for millennia. The northern climate could not support high population density for farmers.

    If you live in a big village, you have to be political to survive. You have to be good at influencing people. If you live on an isolated homestead, your struggle for survival is mostly conducted against the inanimate forces of nature. This could have made such people more object-oriented and less people-oriented than others.

    • “There’s no question in my mind that on average men are more rational than women. And that this is independent of IQ and education level. Men are more object, abstraction and fact-oriented and women are more people-oriented. Women can only really be interested in individuals. Not necessarily individuals they know personally. They ARE interested in celebrities. Unlike women men can find physical objects, facts and abstractions intensely interesting. When women are interested in objects (clothes let’s say) it is strictly for the impression that those objects will make on the people they know. They’re not interested in these objects in themselves.”

      It’s interesting that you start with a probabilistic statement, and then go to absolutes (“Women can only really be interested in individuals.”) which imply a degree of telepathy you haven’t got. This is not a good argument for your rationality.

      • Glossy says:

        All the women I’ve known could only be interested in individuals. The degree of the ability to be interested in objects, facts and abstractions that I’ve observed among women does not vary. It’s like the ability to get pregnant in men.

        It’s typical that out of the several points I made in the above comment you chose to dispute the one that almost certainly offends you personally.

        Men get personally offended all the time. But some men have the ability to sometimes rise above that and to consider things in the abstract instead. I have never known any women who showed any sign of such an ability.

        How do I know that no man has ever gotten pregnant? I don’t. I’ve never known or read about any men who have. But if I wanted to make an effort to express myself in the strictest way possible, I would avoid absolutes when talking about male pregnancy.

        Making that kind of effort is not always worthwhile though. Actually, it’s a waste of time most of the time. If I was always worried that someone who was personally offended by one of my points would nitpick them in this fashion, everything I wrote would become unreadably verbose and would take several times longer to write. And the points I made wouldn’t get better for it.

        • Deiseach says:

          “All the women I’ve known” does not encompass the attitudes, interests or views of “All the women I don’t know”, much less “All the women currently alive on the earth, or who have ever existed”.

        • ddreytes says:

          The point was reasonable and grounded in the specifics of what you were saying. You did proceed really quickly from probabilistic statements to absolute statements about the internal processes of a very large group of people. Don’t be daft. One doesn’t have to be personally offended to notice that – really it seems to me that you’re the first one here to talk in terms of offense & political pre-commitments.

          And the analogy to pregnancy is not, I think, well-formed. If nothing else, it’s much easier to determine whether or not someone is pregnant than it is to classify their mental processes. And we have access to a vastly larger body of evidence in the one case than in the other (your personal subjective observation vs the entire history of the human race).

          To be clear here, I’m not saying your argument is either true or untrue. But I don’t think you can assign such a high degree of certainty to it. I don’t think it’s good to assign high degrees of certainty on the basis of “I have never known any woman who did this.” & it bothers me that your immediate response is to start talking as though the only reason anyone could disagree with you is because they personally take offense.

          • Glossy says:

            “And we have access to a vastly larger body of evidence in the one case than in the other”

            This is not true. The history of science, mathematics and other fields that require rationality is evidence. Female contributions to them have always been negligible. Public stereotypes are evidence as well. Folk wisdom about human nature is pretty much infallible. By the way, if you disagree with that, please cite a case where you think it’s wrong.

            “If nothing else, it’s much easier to determine whether or not someone is pregnant than it is to classify their mental processes.”

            Pregnancy is not apparent for months after it starts. Irrelevant nitpick? Sure, but you’re defending someone else’s irrelevant nitpick here – the one about my use of absolutes when comparisons would have been more factual. Please try to avoid throwing stones from glass houses.

            And on that note, where did you suddenly get all that certainty of yours? Much more difficult to determine? I thought you were against certainty. Much? How much? More? Why not less? And based on what evidence?

            I admit that this is not how real discussions of real issues ever look, but if you want to play this game, I can play it too.

          • ddreytes says:

            I would posit that the number of individuals involved in the history of mathematics, science, and other fields that require rationality is significantly less than the number of human beings in recorded history.

            I mean, you can argue about precisely how much evidence there is for the rationality / gender hypothesis, but I think no matter what the exact number is, it’s going to be hugely off from the amount of evidence for the no-male-pregnancy hypothesis. I just don’t think it’s a good analogy.

          • Glossy says:

            Not hugely off. 0% vs 0.1% or 0.5% or whatever it might be. If you want real statistics, there’s Murray’s Human Accomplishment.

            Most stats-gathering efforts, including Murray’s, have an error rate. What would the errors look like? People (both men and women) getting credit for stuff they didn’t do or the current thinking in the field about the relative importance of various contributions being wrong.

            What is the likelihood that the error rate of Murray’s method is higher than the share of female contributions that he recorded? That likelihood isn’t low at all. If his error rate is 5%, it would probably tower above the female contribution rate as determined by his method.

          • Science says:

            Public stereotypes are evidence as well. Folk wisdom about human nature is pretty much infallible.

            I guess it’s a big internet and I shouldn’t be surprised that somewhere, someone actually believes vox populi, vox dei, but I didn’t expect to find such a person here.

            For reference, the full quote is “Nec audiendi qui solent dicere, Vox populi, vox Dei, quum tumultuositas vulgi semper insaniae proxima sit.” – roughly, “And those people should not be listened to who keep saying ‘the voice of the people is the voice of God,’ since the riotousness of the crowd is always very close to madness.”

          • Deiseach says:

            Folk wisdom about human nature is pretty much infallible.

            So, Glossy, what colour are your eyes? Which foot do you dig with? Oh, we know all about the likes of you and what you’re like and what you do and can’t or won’t do…

          • Protagoras says:

            @Glossy, This does give me a weird feeling of deja vu to an essay by one of my least favorite philosophers, the late and I hope mostly unlamented David C. Stove. He wrote an essay arguing for the intellectual inferiority of women. Among the terrible arguments he deployed was the one you give, citing the lack of contributions from women. Like you, he seemed blissfully unaware of the fact that recent historians who have looked for contributions from women have found quite a lot of them; there seems to be a depressing pattern to stories about female accomplishments, where even if they were recognized in their own time (as happened more often than one who hasn’t actually studied the history might think), later generations forget them, or more likely just forget that women were involved.

          • RCF says:

            Calling this a “nitpick” is absurd, and suggests a serious lack in General Factor of Reasonableness. And if we’re arguing from personal experience, no black man I have ever met has been in the NBA.

          • Doug S. says:

            Folk wisdom about human nature is pretty much infallible.

            Folk wisdom about human nature differs among cultures. Ancient Greek and Roman writers said that women want sex more often than men. Today in the United States, the folk wisdom holds that men want sex more often than women. Contradictory folk wisdom can’t be infallible.

          • Anatoly says:

            Protagoras: I’m surprised that you attack Stove’s essay so harshly. I also thought of it while reading this thread of comments, but more by way of contrast than similarity. Stove’s arguments, I think, deserve to be taken seriously, whereas – to paraphrase – “I have never encountered a woman interested in anything abstract, and it’s fair to say that within a margin of error no women ever are”… does not.

            I disagree that women are inferior to men intellectually, but I also think that such a thing, however distasteful to me it might be, is not a priori impossible either logically or biologically. It makes sense therefore to confront the strongest possible argument for it. I found such an argument in Stove’s essay. I disagree that it is as terrible as you seem to think. I see a number of flaws in it, and ultimately find it unpersuasive, but it also clarifies and sharpens the issue and successfully rebuts several “naive-lazy”, so to say, arguments for the other side. If I had never read it and thought hard about it, my own opinion would be less informed.

            You’re making a fair point re: women’s historical contribution, but perhaps you’re carrying it too far. Recent historical writing about women’s contributions in the past is subject to biases and fashions just as any historical writing is. This isn’t to say that all such writing is “PC” and politicized and untrue, etc. I just don’t think this point rebuts Stove conclusively.

            To sum up: I personally have found Stove’s essay to be the strongest argument I read against women’s intellectual equality with men, and one that I had to treat seriously, and fairly, rather than dismiss out of hand. I don’t think it’d be better if it didn’t exist or I never read it.

    • J. Quinton says:

      “There’s no question in my mind that on average men are more rational than women. And that this is independent of IQ and education level. Men are more object, abstraction and fact-oriented and women are more people-oriented.”

      This assumes that rationality equals ability for abstraction, and more subtly that irrationality is dealing with people. As far as I know, and I think a lot of the community here might agree, rationality is about winning consistently. If being able to know people and what they want helps you attain your goals more consistently than ability for abstraction, then women (in your formulation) would be more rational than men.

  54. SUT says:

    Hypothesis: General correctness is the ability to compensate for the inherent bias of politicization that plays into every field’s consensus.

    Even on the extreme end of the spectrum of ‘no-politics’ – e.g. the Clovis question – there are reputations and careers built on one answer. Then there are the preferred conclusions that native people want: namely, that they are descended from the original settlers. This is what I gather about the issue from DNA USA by Brian Sykes.

    Another example will help to illustrate how to be Generally-Correct about archaeology: Piltdown Man.

    Without using your scientific knowledge, just your cultural knowledge, do you think the following will hold up: The year is 1912 and an evolutionary link between ape and man has been found on a British Isle! Thus lending credibility to a separate evolutionary lineage for Europeans.

    I think the problem for many experts, is that they aren’t able to go “meta” on an issue like this and see the issue in context of the biases of the day.

    • Deiseach says:

      Oh, I think Piltdown Man is my favourite hoax! Though I do feel sorry for Teilhard de Chardin – this scandal obviously didn’t help him when he was getting in trouble with the Congregation for the Doctrine of the Faith over his “Cosmic Christ” ideas 🙂 (I’ve never believed he was the hoaxer, because frankly he never struck me as having that kind of sense of humour, or much of any kind of one.)

      It’s also a big part of why I’m so sceptical about evolutionary psychology explanations, or at least the pop versions of the same. See how fast there were diagrams and “artist’s impressions” and hypothetical reconstructions of the mental, social and civilizational levels of Piltdown Man, who turned out in the end not to exist at all? A lot of evo-psych seems to me to take the same approach: well, here is this thing in modern human behaviour that needs to be accounted for. So first we’ll assume it’s always been this way, or at least part of humanity for hundreds of thousands of years. So to survive this long, it must have had some evolutionary advantage, else it would have died out with its possessors. So we posit an explanation for why men are promiscuous, or women like pink, or people have some kind of unreasonable bias against eating raw dead raccoon that’s been scraped off the road after stewing in the sun for three days.

  55. kernly says:

    Making good guesses is much, much less important than nailing down certainties. Maybes carry much less water than sureties. Advancements come from taking what used to be uncertain environments and making them certain. I am deeply skeptical of the notion that getting better at dealing with uncertainty is a critically important endeavor. We’ve all got brains that evolved to deal with uncertain environments. Sure, some might be better at it, some might be worse. But it just isn’t that big a deal. You don’t get from the stone age to now with better and better guesses. You get here, and to a better future, by accumulating certainties. Some fields don’t seem to make for productive certainty-mining. Long term political forecasting, for example. That doesn’t make such subjects less important, but it very much does make it so that knowledge is less productive there.

    Let’s take any task that produces value. Say, making tortillas. I’ve gotten OK at that over the last few weeks. So I’ll get two people behind me when I make them – some random guy, and a Probability Estimation Genius – and ask them various questions. Is this the correct amount of flour? Have I added too much water? Is this the correct amount of salt? Is this the length of time I should knead the dough? Is the pan too hot? Probability Estimation Genius will make better guesses, and will be much better at estimating how often he will be wrong than the random guy. But the probable result of following his advice and the advice of the random guy is the same – shitty, worthless product. In order to get something valuable, you need to know what you’re doing. If you don’t KNOW how the dough should look/feel/taste, if you don’t KNOW how hot the pan should be, etc, you’re not going to produce value. The value comes when you figure that stuff out and don’t need to guess anymore.

    There’s an argument to be made that Probability Estimation Genius is still better off, because he is going to figure out the right way to do things quicker. Perhaps. But how much better off? How much time do you save, how much performance is gained, by being better at guessing than your peers? I would say it is heavily dependent on what kind of environment you’re in. But going back to my earlier point, I think that environments where it’s terribly hard to nail things down, where guesses are very often the best you can do, are much less productive than environments where you can methodically nail down one thing, then the next, then the next.

    Speculative markets would be the prime example of the first kind of environment, engineering the prime example of the second. Probability Estimating Genius will count coup in the stock market, make a big profit off of their abilities. Someone with great deductive prowess, and an excellent memory, will probably do better in the market than your typical shmuck, but he won’t approach PEG’s profits. But who will be the better engineer? And which field’s advancement yields more fruits for humanity?

    ¿Por qué no los dos? Why not be a great deductive mind, and a great estimator of probabilities? Well, I guess my question is, which is more productive – gathering and feeding more facts and arguments into your deductive engine, or getting better at guessing? I don’t think the answer is obvious, but my feeling and working assumption is that the former is more productive than the latter, for the same reason that advancements in engineering seem to yield more fruit than advancements in speculative techniques. What you can nail down is more important than what you can’t, because you simply can do more with it. And if you want to move the needle on something that can’t be nailed down, which everyone does, my answer is to drill down to some part of it that can be nailed down, and work within that.

    • Josh says:

      This! I’m a software developer, and I’ve been working with a super-bright less experienced guy, and the main thing I’ve been trying to teach him to up his game is STOP GUESSING!! Or rather, know when to switch between inductive and deductive reasoning because if you’re in a situation where you need to be deductive and you’re using inductive, you will go in circles forever.

  56. Glossy says:

    An extremely people-oriented person could pick the right side in a scientific controversy by intuiting which side’s proponents are BSers moved by ulterior motives. To be able to choose the right side in a scientific or technical field through first principles or by examining the available evidence it would help to be object, fact and abstraction-oriented.

    • LCL says:

      I like the people-oriented view. Extend it to encompass a good intuitive sense of human cognition patterns, and you can judge not just “who seems like a motivated BSer” but also “who seems like a rigorous thinker and who seems like a sloppy or hasty one.” That might get you most of the way to the right side of the controversy, just via knowing who to listen to.

      The only issue is that people in the field can also tell who’s a BSer and who’s a rigorous thinker. Probably better than you can because they’ll catch the BS or rigor of technical points and you’ll miss it. That’s a big part of how consensus forms. So I don’t know how likely it would be to beat consensus with such an approach.

      I guess you’d be looking for someone with such developed “people sense” that they are superior judges of BS or rigor despite being too ignorant to judge technical points. Like they can infer it from writing style or organization or word choice or facial cues or others’ reactions or something non-field-specific.

      That level of people sense would be pretty amazing. But I wouldn’t be totally surprised to find it. Any gene or meme that boosted intuition about the trustworthiness of information from other people would have been the subject of huge selection pressure since the invention of language. We might by now harbor some pretty amazing capabilities in that regard.

  57. MartinW says:

    It’s interesting to read that old article from Eliezer again. He correctly identifies the problem that in order to identify the “correct contrarian cluster” in the first place, you need to have some objective way to verify the correctness of a contrarian claim.

    Scott, in today’s post, identifies the same problem and proposes to look at claims which used to be controversial, but have since been resolved, thanks to new evidence, to the satisfaction of the experts in the relevant field. So e.g. if we take it as a given that the existence of pre-Clovis settlements is considered uncontroversially true today, then we can use that to find people who predicted that outcome when it was still a controversial claim, and look at their predictions in areas where the jury is still out.

    However, Eliezer addresses the same issue by giving three examples of things which he considers “slam dunks”, none of which is in fact generally accepted as uncontroversially true today! His three examples are atheism, the many-worlds interpretation and P-zombies. I think it’s fair to say that none of these can be fairly described as something which used to be controversial but where one side has clearly won the battle since then.

    (Probably most people on this site, including me, will agree that atheism is true with 99.9999999999% certainty, but if you believe it’s a settled question then I want to introduce you to a few billion people who would disagree quite strongly. And obviously there are lots of quantum physics experts who disagree with Eliezer’s stance on MWI. Not sure what percentage of mainstream philosophers would side with him on P-zombies, but it’s not really the kind of thing where new evidence might prove that a formerly-controversial position turned out to be correct.)

    So, Scott is saying: if someone has a track record of correctly making contrarian predictions which are later vindicated by new evidence, you should pay extra attention to what they say about topics where the experts aren’t sure yet. Eliezer is saying: if you agree with me on a topic where I am very certain that I am right, even though half of the world’s population has a different opinion, then I will assume that you are correct on other topics (where I do not have enough knowledge to directly judge the evidence by myself) as well.

    That’s a rather important difference.

    • LTP says:

      As for p-zombies, the PhilPapers survey of anglophone philosophers shows that p-zombies are controversial and not a slam dunk (scroll to the very bottom).

      ~47% say p-zombies are conceivable but not metaphysically possible
      ~25% say p-zombies are inconceivable
      ~18% say p-zombies are metaphysically possible
      (the rest fall under “other”)

      Like most big philosophical questions, there is division and no clear “slam dunk” either way, as much as Yudkowsky probably wishes there was.

      The other thing to note about Yudkowsky’s examples is that none of them make empirically testable predictions about the world, and it’s unclear how they could be resolved by empirical evidence, so I’m not even sure if they’re relevant to the issue in the rest of the post. P-zombies and atheism are philosophical positions, while MWI is an aesthetic interpretation of some mathematical models of quantum mechanics that is not empirically testable (if I understand it correctly).

      • Protagoras says:

        So according to the survey, 72% of anglophone philosophers say p-zombies are metaphysically impossible. Given the generally contrarian nature of philosophers, I’d say that’s actually pretty impressive (and many of the 10% in the “other” category were surely people who basically also think p-zombies are metaphysically impossible but wanted to vastly exaggerate their quibbles in order to be special snowflakes).

      • jaimeastorga2000 says:

        Atheism does make empirical predictions about the world, as Yudkowsky points out in “Religion’s Claim to be Non-Disprovable” and “Beyond the Reach of God”. In fact, there is a whole book called Why Won’t God Heal Amputees? which uses this as its main argument against Christianity.

        • LTP says:

          Well, the existence of a Christian God may lead to empirical predictions, but the existence of God in a philosophical sense, where no claims about God being active in the world at all are made, is *not* empirically verifiable.

          • RCF says:

            Well, sure, if you strip the word “God” of all practical meaning, then God’s existence is non-falsifiable. But then if someone believes that it is reasonable for their worldview to contain a term that does not refer to any meaningful concept, then while we can’t conclude that they hold an empirically false position, we can conclude that their thinking is rather suspect.

        • Jaskologist says:

          I don’t actually see empirical predictions in either link. Link 2 is just a long-winded problem of evil. Link 1 is a more novel argument, but not a prediction so much as a description, and a not terribly accurate one at that. Besides this gem

          In not one single passage of the Old Testament will you find anyone talking about a transcendent wonder at the complexity of the universe.

          – which is precisely what I had in mind when I mentioned very basic factual errors – the basic gist seems to be that the OT “routinely” has big showy miracles while the NT sticks to small-time, little stuff. But that’s not really an accurate description. The OT describes a quick blast of very large-scale, showy miracles, all of which happened in the space of Moses’ lifetime.

          These stick out to us because of mental biases, etc, but that’s not how most of the books go. Kings and Chronicles, which cover vastly more history, are very light on the miracles, most of which take the form of “God was with us, so we won the battle.” Multiple kings come and go without any supernatural event worth writing down. Even the great Elijah only gets 14 recorded miracles, many of them minor, natural events, only witnessed by one person, or better classified as predictions than miracles. (Elisha gets twice that, but they include things like “made an ax head float.”) Granted, Elijah did indeed call down fire from heaven, but this is notable precisely because it was so rare. Did the great King David have any miracles at all? Jeremiah doesn’t; he gets stuck doing performance art. There are none in Ruth, or Esther, or Nehemiah.*

          Jesus, on the other hand, does miracles left and right, often before crowds, and the public execution followed by resurrection wasn’t all that subtle. The apostles also get numerous miracles ascribed to them. The pattern Eliezer wants to claim isn’t there.

          *Epistemic status: off the top of my head. I haven’t reread them recently to double-check.

          • Adam says:

            I don’t know about other religions, but Christianity does at least make one testable prediction: the second coming. If someday the world ends and Jesus isn’t there, no rapture, whoever is still around to get turned into paperclips will finally know.

    • I’m amused that there are atheists who believe the simulation hypothesis is plausible. You don’t quite get a triple omni God (omnibenevolence is unlikely), but two out of three ain’t bad.

      • Wrong Species says:

        The idea of a superpowerful creator (whether that’s a god, alien, or programmer) isn’t inherently implausible; it’s just that there isn’t any evidence of it (from an atheist POV).

        • John Schilling says:

          The same was true of, e.g. pre-Clovis Americans, not too long ago. Or the existence of Antarctica. Or an alien supercomputer simulating our entire perceived universe.

          Disbelieving in any of these things may be rational. Asserting with extremely high confidence that they do not exist, rather less so.

          And in the case of atheism vs. the simulation hypothesis, the simulation hypothesis is a subset of theism. Whoever is running the supercomputer that simulates our universe, is our god by most non-sectarian definitions. So if someone assesses p(simulation) > p(theism), it seems likely they are being biased by the framing of the questions.

      • stargirl says:

        Even if we are in a simulation the people running the simulation are probably not all powerful or all knowing. Merely very, very powerful and very knowing.

        It’s not clear they could predict the future. Humans cannot always predict how our own machine learning algorithms will behave. In addition, it’s unclear whether the people running the sim can get the sim to behave however they want. I cannot get Excel to do all the things I want it to :).

        I agree with the gist of your post though.

        • RCF says:

          If they’re simulating the entire wavefunction in accordance with MWI, it’s not clear to me that they necessarily have any meaningful information about the universe. Could you, looking at a wavefunction, tell that there is a branch in which there are conscious beings?

      • rsaarelm says:

        Not sure how you’re getting omniscience either. Someone running a simulation of the universe never having noticed that the Earth has formed inside it and has had some mildly interesting stuff clump up on the surface seems perfectly plausible.

    • John Schilling says:

      Probably most people on this site, including me, will agree that atheism is true with 99.9999999999% certainty

      Assigning P=0.999999999999 to anything this side of “cogito ergo sum” is either an act of faith, signaling, or a major calibration error. No human being I know of has anything remotely like enough accumulated experience with reality to assign even ten nines to the hypothesis, “my perceptions correspond to an objective material reality”; how are you getting two nines beyond that for any subordinate detail regarding the possible reality?

      • MartinW says:

        Calibration error. Maybe some signaling, too.

      • RCF says:

        So, if you were to pick a random string, you would assign a greater than 10^-10 probability to the possibility that it will be my private encryption key?

        According to Solomonoff Induction, any hypothesis that takes more than 34 bits to specify should be assigned a probability less than 10^-10.
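
          (A quick check of that figure, assuming the usual 2^-(description length) style of prior: log2(10^10) ≈ 33.2, so a hypothesis whose shortest description needs 34 bits gets roughly 2^-34 ≈ 5.8 × 10^-11, which is indeed below 10^-10.)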

        • John Schilling says:

          As with the “fair” coin that comes up heads in twenty consecutive flips, you always have to consider the possibility that the “random” number, isn’t. Amusingly for your choice of examples, that’s particularly important to keep in mind in cryptography.

          Do I have actual psychic powers, operating subconsciously? Unlikely, but not 1E-10 unlikely, and if I’m picking numbers while contemplating your private key, that’s what I’m going to get.

          Did I happen to see your private key at some point and consign it to some obscure but reliable corner of my memory? I don’t think so, but in any context where I’d actually go through with this exercise, maybe not 1E-10 unlikely.

          Am I in fact the Lord God Almighty, Creator of Heaven, Earth, and this Entire Solipsistic Universe? P > 1E-10, in which case your private key is whatever I damn well say it is. And I may be in need of a smart, open-minded rationalist psychologist, even if an imaginary one…

          • Froolow says:

            I’d be happy to bet with you at odds you would consider astronomically good that you can’t guess my twenty digit random key (numbers, punctuation, upper and lowercase letters).

            If your claim is that the probability of this guess is somewhere north of 1E-10, then my paying £10bn on a £1 wager should be better-than-fair odds for you (that is correct, right? I get a bit confused with American and UK billions).
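
            (Checking the arithmetic on those terms: at p = 1E-10, the break-even payout on a £1 stake is about £1 / 1E-10 = £10^10, i.e. £10 billion on the modern short scale, so £10bn-to-£1 is roughly the fair price, and any credence above 1E-10 makes the bet better than fair for the guesser.)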

            Since I sadly don’t actually have £10bn, we might have to change the rules of the bet slightly (you take as many guesses as you need to guess my key, whereupon I pay you £100, but I charge you a fraction of a penny each guess).

            If you’re prepared to take this bet I’d be more than happy to set it up with another Codexian to act as an escrow.

          • John Schilling says:

            Unfortunately, the contrarian hypotheses here mostly involve my “random guesses” not actually being random. So the slight rules change where we bin a few million of my guesses and linearly add probabilities, doesn’t work. If I have latent psychic powers just waiting for the right opportunity, I’ll get your key in one or a very few guesses, and I’ll get it whether it is twenty or two hundred digits. It’s a one-shot wager no matter how many digits I generate or how I chunk them.

            So the actual odds, independent of the number of guesses, are your payout divided by the sum of my wager and the transaction cost of setting up the deal, with the latter probably in the tens of dollars equivalent. For the nominal billion-to-one odds, this requires you being willing to put up tens of billions of dollars and me being willing to trust you.

            Oh, and if you ever actually find yourself with tens of billions of dollars and a proposition that risks all of that against a small gain, it doesn’t matter how surely the laws of physics or mathematics or reason itself say you can’t lose – with P>>>1E-10, you’re being conned by someone smarter than you.

          • Froolow says:

            @John Schilling

            That seems fair enough, but I still think your beliefs about probability are such that we can find a way to make a bet both of us think has a positive expected outcome. Part of my problem is that your beliefs about probability seem slightly nonstandard, so I can’t be sure what you actually think. My model of your model is: “All laws of probability hold, so all propositions have a (true) probability of occurring with something between 0 and 1 likelihood, inclusive. However, since humans can never even approach that level of certainty, at the extreme ends a human could only ever make claims with between 1E-10 and 1 – 1E-10 certainty. Thus a bet which Froolow would only take at 1E-11 odds, I would take at 1E-10 odds”. Is that roughly right?

            If so, then perhaps we could change the bet so that we say, “Each n-bit random key has a (very low) probability of being the root code to the entire universe, granting the reader Godlike powers. Those powers include, by specification, the ability to know any other random key the wielder desires.” Given any particular random key, I think this probability is basically 0 – certainly I would bet I would be wrong fewer than one in a trillion times. However, if I’ve understood you correctly, although you too think this probability is basically 0, you would be prepared to bet you will only be wrong one in ten billion times. I think we would both agree – if it is actually true in the first place that the universe is a simulation with a root code embedded in an n-bit random key – that the probability of any particular random key being the root code is independent of the probability of any other random key we might have tried (perhaps we should stay away from 64/128/256-bit keys, because that assumption might not be true for the keys commonly used in crypto).

            Consequently we *can* linearly add the probabilities that “this particular random key is the root code to the universe”, and can make a bet similar to the one I suggest above which both of us think is positive expected value, but which both of us could actually pay out on if we lose.

            I kind of feel that if we can’t find a bet where our different intuitions about probability demand that we place different bets, we actually *don’t* have different intuitions about probability, and you perhaps shouldn’t have been so confident when correcting MartinW – if your belief that any bet under 1E-10 odds is a sucker’s bet never causes you to take any action different from mine, then it seems more like the belief that parallel lines do or don’t meet at infinity, which is to say a mathematical curiosity rather than a genuine belief about the world.

    • Jaskologist says:

      I think that’s a big part of the reason people are making a big deal of the article. The underlying idea has some merit, but the actual items EY claims are slam-dunks and says we should judge truth claims by are crazy. And Scott is right, this does cut to the heart of Rationality. Rationality claims to be able to make you a better thinker, and better able to gauge what level of certainty you should have about certain claims.

      So you delve into the writings of its primary prophet, and learn that he declares with total certitude that MWI is a slam-dunk. But then you check with people who study quantum physics, and they say it looks like he has the kind of understanding of QM that you’d get from the intro class – and also makes all the classic mistakes that you would expect from a beginner. The term “Dunning-Kruger” is used.

      And, well, you don’t know much physics yourself, so who can say? But you do know some things about religion, which EY harps on constantly. And there, you notice that again he makes very basic factual errors, the type that should be obvious to anybody familiar with the source material. Again, he declares these errors as slam-dunk facts.

      As for p-zombies… well, can you bring yourself to care enough about p-zombies to even decide what would constitute a definitive resolution to that problem? Is that really the make-or-break issue?

      At that point, using his own test, you have to ask, “Why should I expect Rationality to work for me? It doesn’t look like it worked for you.”

      • LTP says:

        On p-zombies, while many philosophers agree with Eliezer, his actual argument against p-zombies isn’t itself very good or conclusive. So for him to say it is a slam-dunk with only a link to himself harms his credibility.

        (Side note: I’m not super well studied on the issue at the moment, but my understanding is that while p-zombies themselves are a seemingly silly thought experiment, one’s opinion on that thought experiment corresponds with a certain set of views that has profound implications for the philosophy of mind and the philosophy of cognitive science. Still, I don’t see any resolution to it in the near term, so I agree it’s probably not something worth talking about unless you are really interested in philosophy, and it certainly isn’t relevant given the rest of the contrarian cluster post)

    • Deiseach says:

      Probably most people on this site, including me, will agree that atheism is true with 99.9999999999% certainty, but if you believe it’s a settled question then I want to introduce you to a few billion people who would disagree quite strongly.

      Probably that depends on your definition of “most”. There’s certainly me, and I think a couple of others as well, hanging around the joint who are theists at the very least (and horrible rotten stinkin’ flat-out actual believers at the worst). I don’t know if that’s one sleeping dog we should let lie and not ask who is and who isn’t 🙂

      • MartinW says:

        Indeed, I certainly wasn’t claiming that there are no theists here at all. But I’d be extremely surprised if they were a majority, which is what “most” means.

        It seems reasonable to assume that the demographics here are similar to those of Lesswrong, and according to the 2014 survey that group consists of 80% atheists (and another 10% agnostics). So “most people here will agree that atheism is true” would seem to be a safe claim, although I have already admitted that I should have released my finger from the ‘9’ key a little earlier.

        • RCF says:

          I think that “most” means more than just “majority”, although 80% certainly qualifies.

        • Deiseach says:

          I’d imagine most people on here are some flavour of agnostic/atheist, but the 99.9999999999999 certainty there is nothing else out there, Mulder? I don’t know – after all, we seem to be worryingly familiar with Bigfoot, its habits and locations and preferences 🙂

          • Glen Raphael says:

            Atheism isn’t “certainty there is nothing else out there”. Rather, it’s certainty that if there WERE “something else out there” it wouldn’t be the thing Christians call “God”…because the Christian concept of “God” is less likely to exist than the Christian concept of Santa Claus.

            Any “God” that /actually existed/ would have specific traits we could speak sensibly about. It wouldn’t be defined to possess mutually-contradictory attributes. It wouldn’t require “faith”. It wouldn’t be an exercise in wish-fulfillment. If we called the new thing “God” we’d have to invent a retronym to describe that other thing called “God” that ancient people imagined and told tall tales about.

          • John Schilling says:

            So, atheism really is just “Anti-Christianity”. Got it.

            I can guess what you are trying to say here, and I don’t think you really understand the breadth of Christian thought re: possible conceptions of “God”. But do you understand how it looks when you frame it this way?

          • Glen Raphael says:

            Saying “I’m an atheist” can’t mean “I don’t believe in EVERY thing that anybody ever called ‘God'” because: Spinoza. If you choose to define God as “the universe” – and some have done that – then “God” “exists” by definition because the universe exists. So to say “God” DOESN’T exist, you have to have something kind of specific in mind; the concept can’t be entirely open-ended.

            What *I* have in mind is the judeo-christianesque concept of a God that is omnipotent, omniscient, omnibenevolent, “created the universe”, and cares whether humanity exists. I fail to have a belief that a thing resembling that exists, but that doesn’t mean something else couldn’t exist that some people might choose to call “God”. (Again: Spinoza.)

            Since the idea of God I’m most familiar with is basically Santa Claus with all the silliness knobs turned up past 11, that’s something I’m pretty comfortable disbelieving in. I know I disbelieve in that God, but whether or not I disbelieve in other possible Gods would have to depend on how they’re defined.

            Does that help?

          • houseboatonstyx says:

            @ Glen

            Intending to smart-ass your first comment, I typed —

            “Specifying ‘the Christian God’ would leave the field open for” /insert examples/

            — then very casually went looking for some equally colorful examples and couldn’t find any comparable ones in English sources. (I rejected those that sounded like they had been translated by missionaries.) The only comparable one I found was from a Buddhist* disputing the concept.

            Hindus have Brahma, Vishnu, and Shiva, (Creator, Preserver, and Destroyer) who are interested in humans. But when an English source went up a level to a One God, suddenly it becomes as abstract as the higher concepts in Christianity.

            What *I* have in mind is the judeo-christianesque concept of a God that is omnipotent, omniscient, omnibenevolent, “created the universe”, and cares whether humanity exists.

            Still I stumbled over ‘judeo-christianesque’. Perhaps ‘Jehovah-type’ or ‘Jehovah-level’ would fit that sentence, but the following might make your meaning more clear:

            ‘the concept of a Jehovah-level God that is’

            * http://www.budsas.org/ebud/ebdha068.htm

      • keranih says:

        Oh, come on, let’s ask. Are you a believer?

        It’ll be fun, opening up the debate about if saying “no” in a hostile environment (or simply not speaking up) is a rational choice, an ethically sound choice, or just a choice that would fog the data to the point of being unuseable.

        (For the record – I do my best to follow the Man, and I am part of a Roman Catholic congregation.)

        • Bugmaster says:

          It’ll be fun, opening up the debate about if saying “no” in a hostile environment (or simply not speaking up) is a rational choice…

          From what I’ve seen, in most places on the Internet the decision to stay quiet and go with the group consensus is not merely a “rational choice”, but rather, “the only choice that makes any sense at all, what is wrong with you, do you really want to get fired from your job, SWATted, and blacklisted from everywhere ?”. Scott wrote a post on that whole topic just a few days ago.

          • Nita says:

            So, in most places on the Internet you get SWATted and fired for disagreement with the popular opinion? Wow, we must be using two very different Internets.

          • Katherine says:

            There are opinions regarding which that is true. Do you really think that belief in gods is one of them? On SSC?

          • Deiseach says:

            I actually wouldn’t be worried about admitting to religious belief on here (well, it’s too late to chicken out now, isn’t it?)

            But there are definitely other places on the Internet where, because I am a traditional Catholic (or indeed any kind of Catholic or Christian), I wouldn’t say boo to a goose about it because although I don’t have to worry they’d try to get me fired from my job etc., I know the level of vitriol and abuse would not be worth it. And I don’t mean explicitly atheist sites, either.

  58. Glossy says:

    Some people must be more willing to accept unpleasant facts than others. The truth is often unpleasant. By that logic cultures and individuals that like flattery and sappy melodrama should be less rational than those that hate them.

    I think that public stereotypes are the most valuable source of information in sociology. It’s the wisdom of crowds. Millions of observations coalescing into conclusions the way gazillions of water molecules coalesce into rivers.

    The stereotype of men being more rational than women is ancient and universal. And yes, women like sappy melodrama more than men.

    • stillnotking says:

      Liking sappy melodrama doesn’t necessarily mean you think real life is a sappy melodrama. As for women liking it more than men, well, Star Wars.

    • PSJ says:

      This seems to be the second time in the same comment block that you’ve talked about the comparative rationality of men and women. In both posts, you also make little more than the vaguest gestures towards addressing the topic of the main post. The first time, somebody challenged you to defend your claims, which you did by referencing “all the women I know” as good evidence.

      This combination of facts leads me to strongly believe that you have no intention to add to the discussion by saying things that are kind, true, and necessary, but are rather looking for a soapbox.

      Moving on to actual substance, even if your conclusions were true, they seem to be more aligned with discussion about a “General Factor of Correctness” rather than a specifically contrarian measure of correctness, so I’m not sure that would be particularly relevant to the discussion.

      Nevertheless, I’m not sure your argument follows particularly well. Some people are less likely to accept facts because they are unpleasant. I agree with you there. Others are more likely to accept facts because they are unpleasant (see: doomsday predictors, conspiracy theorists, me in middle school). Generally, however, people are tuned to prefer pleasant facts in most senses of the word “pleasant.” Confirmation bias, self-serving bias; I’m sure you are familiar. However, it seems like a jump to go from there to “people who like sappy melodrama are less rational than those who hate them” and from there to “women are less rational” (admittedly, I’m not sure if you are arguing in that direction or if you are taking “women are less rational” as fact and using that to explain a penchant for sappiness).

      In fact, research suggests that (at least in romantic situations), men are more likely to exhibit self-serving bias. (Source) So if we were to accept a tendency to prefer pleasant facts as evidence of less “rationality,” it seems that you are not on the steadiest ground in suggesting that women are less “rational.” In fact, you, as a man, might consider whether your belief that men are more rational might itself be a wonderful example of self-serving bias!

      More research suggesting that men are more prone than women to a number of irrational biases

      I’m not trying to say that women are necessarily more rational than men, but simply that the converse is far from proven. Your assertion that “public stereotypes are the most valuable source of information in sociology” has an inherent problem. Millions of observations, all affected by bias, don’t necessarily lead to truth. Stereotypes about women and other races as recently as one or two hundred years ago were exaggerated, often to the point of postulating that one group or another was inherently incapable of rational thought/understanding politics/performing well in universities. These stereotypes were shown to be wildly inaccurate. Why are you so confident that modern stereotypes are particularly more accurate?

      You talk about “ancient and universal,” but the simple fact is that stereotypes change significantly in relatively short periods of time.

      • Doug S. says:

        These stereotypes were shown to be wildly inaccurate.

        I’m not entirely sure. Given the circumstances that these people found themselves in, the stereotypes may have accurately described behavior. For example, when ancient Greek and Roman writers described the shortcomings of women, the “women” they were describing were basically teenage girls: the average age of first marriage for men was about 30, while the average age of first marriage for women was around 13. The only women they’d encounter who weren’t substantially younger than them would be their mothers or high-class prostitutes. Even today, thirteen year old girls are not particularly known for their rationality.

        As for old racial stereotypes, “laziness” seems like fairly rational behavior given the incentive structures facing plantation slaves…

        • Peter J. says:

          I agree! My language was over-broad.

          I should have said something like, “the attribution of contingent properties in a group as necessary properties of that group has repeatedly been shown to be untrue”.

      • Steve Sailer says:

        How many stereotypes are wrong at the directional level rather than just at the magnitude level? I’ve found several stereotypes over the years that were backward from reality, but then I look for them.

  59. shemtealeaf says:

    Scott,

    A slight tangent, but I’m curious how you analyzed the calibration skill from the Less Wrong survey data. I read your analysis that you posted along with the survey results, and I didn’t really agree with your assessment (I believe there were a few people in the comment thread there who shared my thoughts).

    Wouldn’t you expect even correctly calibrated people to be overconfident on hard questions and underconfident on easy questions? For instance, if I ask an obscure question about someone in the bible, most people will have a low confidence in their answer. However, if it turns out that the answer is actually Jesus, a lot of people will guess correctly anyway and look underconfident. I would expect this to show even well-calibrated people as underconfident on any question where the correct answer is among the most common things that someone would guess even if they didn’t actually know the answer. Conversely, if I ask a question that seems straightforward but actually contains some hidden assumptions or has an unexpected answer, people will get it wrong and appear overconfident.
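
    To make that concrete, here is a toy simulation of the mechanism (all the numbers are made up): each respondent either knows the answer or falls back on a guess, and states a confidence that assumes a typical lucky-guess rate. On questions where the default guess happens to be right, the group looks underconfident; on trick questions, overconfident – even though nobody’s self-assessment has changed.

        import random
        random.seed(0)

        def accuracy(n, p_know, p_guess_right):
            # Fraction correct when each respondent either knows the answer
            # (probability p_know) or falls back on a guess that happens to be
            # right with probability p_guess_right.
            hits = sum(1 for _ in range(n)
                       if random.random() < p_know or random.random() < p_guess_right)
            return hits / n

        n = 200_000
        p_know = 0.3
        stated = p_know + (1 - p_know) * 0.25   # confidence assumes a 1-in-4 lucky guess
        print(f"stated confidence: {stated:.2f}")

        # Obscure question whose answer is everyone's default guess ("Jesus"):
        print("guess-friendly question:", accuracy(n, p_know, 0.70))  # looks underconfident

        # Question with a hidden assumption, where the intuitive guess is wrong:
        print("trick question:         ", accuracy(n, p_know, 0.02))  # looks overconfident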

    On the Less Wrong survey, I think the Obama, Norse God, and maybe the cell biology questions all fit the model of “one of the first answers that comes to mind as a guess turns out to be correct”. The planet density question has an unexpected answer (at least to me), and the computer games question contained hidden assumptions.

    Also, on a more general level, is there any evidence that ‘ability to assign confidence in trivia questions’ is well-correlated with ‘ability to assign confidence in correctly analyzing complicated information’?

    • RCF says:

      The computer game question was bizarre. Angry Birds has more downloads than Minecraft. I guess one can still ask “How well calibrated are people to whether they will be able to guess what Scott thinks is the most popular computer game?”, but it seems like a strange thing to ask about. Will the next survey ask people to calibrate how confident they are about what Scott’s favorite color is?

  60. Wrong Species says:

    I think it’s important to separate the scientific theories from other predictions. Our schmoeist and anti-schmoeist may be able to look at the facts and come up with a reasonable belief on pre-Clovis culture, but there is not any scientific way that we know of to predict international politics.

  61. Jaskologist says:

    This is one of those cases where we should be spending less time at the theoretical level and more time with actual examples. Take a peek through history. It should offer many, many people who turned out to be surprisingly correct. What other conclusions does this lead to? I think that’s what people are getting at when they bring up Newton or Pauling. If you really believe in this, you should be applying it to those guys, not just the topics Eliezer points to.

    (And historically, Eliezer’s requirement of atheism would not have served you well in truth-seeking. That would have meant ignoring most of the scientific fathers. You would have been Yes on Lysenkoism, and No on Big Bang. Also No on Bayes and his theorem. But this is hardly a valid sampling of historical personages.)

    • Deiseach says:

      And historically, Eliezer’s requirement of atheism would not have served you well in truth-seeking. That would have meant ignoring most of the scientific fathers. You would have been Yes on Lysenkoism, and No on Big Bang.

      I think that was one of the initial objections to the Big Bang theory; it sounded much too much like the one-off creation by God in religion and mythology, which offended a lot of people on the grounds that “This is letting a Creator back into scientific discourse by the back door!”. Add to that that it was propounded by a Belgian Roman Catholic priest, and it looked much too much like some religious jiggery-pokery going on 🙂

    • Jordan D. says:

      That could either be an objection to the notion per se or an objection to its use, though. Part of the problem I have with the four discussion questions is that they lampshade the plausible existence (well, or the intuitive-ness, anyway) of a General Factor, but highlight uses for it only when you have literally no other data to work with.

      I would take the objections you raise as evidence that if there is a General Factor, it provides pretty weak evidence by itself. I mean, I get that this is what the whole post is about – how to evaluate experts in a divided field where you don’t have the time, understanding or information to take a good look at it yourself. It’s just that I can’t think of any situations where I have so little information beyond ‘secondary beliefs of experts on both sides’ that this effect would push me to believe one way or another.

      (Actually I can think of one case – quantum mechanics. I have no domain knowledge and there’s not much chance that I’ll gain any. There I’m happy to believe people I consider smart for other reasons, but that’s primarily because my opinion on quantum mechanics will not change my life even one iota.)

  62. Troy says:

    The fourth problem: is there a difference between correctness and probability calibration? Suppose that Alice says that there’s a 90% chance the Greek economy will implode, and Bob has the same information but says there’s only an 80% chance. Here it might be tempting to say that one of either Alice or Bob is miscalibrated – either Alice is overconfident or Bob is underconfident. But suppose Alice says that there’s a 90% chance the Greek economy will implode, and Bob has the same information but says there’s only a 10% chance that it will. Now we’re more likely to interpret this in terms of them just disagreeing. But I don’t know enough about probability theory to put my finger on whether there’s a true qualitative difference.

    I don’t think I understand this paragraph. First, I would say that any two people who have the same evidence and assign different probabilities to P are disagreeing. I don’t see how calibration comes into it; disagreement seems compatible with them miscalibrating or not miscalibrating. Second, calibration, inasmuch as I understand it, is a property of a system of beliefs, not a single one. If I believe 10 things to .7 confidence and 7 are right, that set of beliefs is well-calibrated. But there doesn’t seem to be any sense to saying that my .7 belief that P in and of itself is well-calibrated — unless we just look at whether P is true or not, in which case calibration reduces to accuracy (closeness to truth, where truth = 1 and falsehood = 0).
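
    A tiny worked example of that distinction (made-up numbers): calibration is checked against the whole bucket of 70% beliefs, while accuracy is a per-belief quantity.

        # Ten beliefs held at 70% confidence, seven of which turned out true.
        beliefs = [(0.7, True)] * 7 + [(0.7, False)] * 3

        # Calibration: within the 0.7 bucket, how often were the claims true?
        hit_rate = sum(truth for _, truth in beliefs) / len(beliefs)
        print("stated 0.70, observed frequency", hit_rate)   # 0.7 -> well calibrated

        # Accuracy of a single belief: squared distance from the truth
        # (truth = 1, falsehood = 0), i.e. the per-item Brier contribution.
        for p, truth in [(0.7, True), (0.7, False)]:
            print("one belief's accuracy (squared error):", (p - truth) ** 2)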

  63. > Good Judgement Project […] average people

    I opine that there is a strong selection effect in the GJP, and that the participants are by no means average. They required a college degree! That’s “everyone is above average”, right there, by at least a standard deviation. And even at that there were more PhDs and master’s degrees than you would expect by picking at random from the college-educated population.

    > [correlations in LW survey data]

    Small p-values or not, these correlations are so tiny as to be uninteresting.

    • Steve Sailer says:

      Right, Good Judgment Project super forecasters tend to be very bright, very hard-working people who have the time to follow a lot of obscure foreign affairs topics.

  64. John Sidles says:

    Postulate  Persons exhibiting an exceptionally high “General Factor Of Correctness” will exhibit an exceptionally low incidence of the Out of the FOG forum’s “Top 100 Traits of Personality-Disordered Individuals” (PD traits)

    Top 100 Traits
    of Personality-Disordered Individuals

    Out of the Fog web-forum

    One common criticism of [the following] list of traits is that they seem so “normal” — more like traits of an unpleasant person than traits of a mentally ill person.

    This is no accident. Personality disordered people are normal people. Approximately 1 in 11 people meet the diagnostic criteria for having a personality disorder.

    Personality-disordered people don’t fit the stereotypical models for people with mental illnesses but their behaviors can be just as destructive.

    These descriptions are offered in the hope that non-personality-disordered family members, caregivers and loved-ones might recognize some similarities to their own situation and discover that they are not alone.

    (001)  Abuse-cycling …
    (002)  Alienation …
    (003)  “Always” and “never” assertions …
    (004)  Anger …
     — — — —
    (098)  Triggering …
    (099)  Tunnel vision …
    (100)  Verbal abuse …

    Rationale  PD traits are notoriously stable in the face of attempts to alter them … no matter whether the means of alteration are pharmaceutic, psychotherapeutic, or the exercise of ordinary free will. These PD traits are “sticky” in the sense that, sadly, for many people who are sincerely motivated to change, avoiding relapse is exceedingly difficult.

    In essence, PD cognitive ecologies are self-adaptive … perhaps this is why PD traits are so distressingly common?

    Conversely, persons exhibiting a paucity of PD traits plausibly possess cognitive skills that are effectively and adaptively “not wrong” in regard to real-world problems.

    Conclusion  Societies may be well-advised to choose leaders who do not exhibit PD traits, on the grounds that these leaders are more likely to exhibit an exceptionally high “General Factor Of Correctness.”

    Uh-oh.

  65. ddreytes says:

    My completely un-thought-through guess is that the source of any General Factor of Correctness is probably going to come from experience & judgment in evaluating sources and arguments as reliable or not, and in the ability to quickly build and correct mental models for things. It’s appealing because it’s not domain-specific and therefore generalizable, without being identical to genius or some kind of extraordinary mysterious mental power.

    Of course I could also be privileging my particular mental processes here.

  66. Quixote says:

    I think that maybe what this way of thinking about this subject misses is that many seemingly contested or closely contested questions are not actually closely contested when looked at by a disinterested party. The field of Frowlapology has believed X since the 1800s, based on then-brilliant work by a pioneering Frowlapologist. Later Frowlapologists have intellectually grown up learning X and with all their professors and professional colleagues believing theory X.
    Time passes and weird things that don’t quite fit theory X gradually pile up and eventually some young iconoclast proposes theory Y that ties everything up nicely. But everyone who spent 40 years of their professional career as an X theorist rejects Y and so do their grad students if they know what’s good for them.
    Then a reasonably intelligent individual from another field looks at Frowlapology and says, hmm, Y looks better to me. The individual could do the same in many fields, because the secret sauce that causes them to be correct is disinterest and detachment. To someone not from the field, what seems like a hard question within the field is actually an easy question. Such a person could amass quite a “correct contrarian” score but wouldn’t have any real advantage on questions which were actually hard questions.

  67. Consider the following scenario: There is a school of thought that makes a theoretical prediction based on what appear to be good reasons. For some reason, the evidence to back up said prediction does not seem to be forthcoming, which is cited by people disagreeing with it. To make matters worse, there appeared to be some evidence, but it turned out to be unreliable. Time passes … and something resembling evidence at long last shows up. On the other hand, it’s much less than the people who originally issued the prediction had in mind.

    How skeptical should we be about the prediction? In a related question, what is the track record of earlier predictions that fit the pattern?

    I can think of several predictions that fit the above pattern. One of them is believed by the Left. Another is believed by the Right. I am disinclined to take either that seriously. On the other hand, there are other predictions that I am inclined to take seriously that also fit the pattern.

  68. Vulture says:

    That Good Judgement Project thing reminds me of an old long-distance scam I once read about (which some people might still practice):

    So, let’s say you’re a bookie who offers bets on 5-horse races. Your first step, in that case, should be to find, let’s say, 5^4 marks (there’s one born every minute, so this shouldn’t be particularly hard). Make sure they don’t know each other, either. Now, you send out a bunch of letters to these marks; the first 5^3 marks get a letter saying “Here’s some friendly advice: Horse A will win the big horse race tomorrow, and I suggest you place some money on him. Signed, Iam A. Kahneman”. The next 5^3 marks will get a slightly different letter, which reads “Here’s some friendly advice: Horse B will win the big horse race tomorrow…” and so on. If you do this for each horse, sending a prediction of its victory to exactly 1/5 of your hapless marks, then the next day there’ll be exactly 5^3 marks kicking themselves that they didn’t follow the advice in your letter.

    Of those 5^3 marks, next time there’s a horse race, 5^2 of them will get a letter advising them to bet on Horse A, 5^2 of them will get a letter advising them to bet on Horse B…

    Eventually, you’ll have exactly 5 suckers who’ve gotten accurate predictions from you about 3 horse races in a row, and who will be perfectly happy to take a big, expensive bet from your associates on the next horse you predict.
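
    If the arithmetic is easier to watch run than to read, here’s a little sketch of the funnel (horse names and the random winners are obviously made up):

        import random
        random.seed(0)

        horses = ["A", "B", "C", "D", "E"]
        marks = list(range(5 ** 4))          # 625 marks, none of whom compare notes

        for race in range(3):
            # Split the surviving marks evenly; each fifth gets a different "tip".
            tips = {m: horses[i % 5] for i, m in enumerate(marks)}
            winner = random.choice(horses)   # the bookie never predicts anything at all
            # Only marks whose letter happened to name the winner stay in the pipeline.
            marks = [m for m in marks if tips[m] == winner]
            print(f"after race {race + 1}: {len(marks)} marks with a perfect record")

        # 625 -> 125 -> 25 -> 5 marks who've now seen three straight "correct" tips.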

    • Nornagest says:

      Classic scam, but the hard part these days would be getting 5^4 people to read their unsolicited mail.

  69. Ishaan says:

    Armchair prediction: When holding things such as general knowledge constant, it will come down to making accurate, gut-level evaluations of a hypothesis’s parsimony.

  70. Decius says:

    Can “believes that the anthropogenic component of climate change is currently unknown” or other [disagrees with experts in the field] beliefs be listed in either column before the expert consensus changes due to new evidence? Or should we look at people’s Brier score and make predictions off of that?
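
    (For reference, the Brier score is just the mean squared error of the stated probabilities against what actually happened; a minimal sketch with made-up track records:)

        def brier(forecasts):
            # Mean squared error between stated probabilities and outcomes:
            # 0 is perfect, 0.25 is what always saying 50% gets you, and 1 is
            # being confidently wrong every time.
            return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

        # (probability assigned to "it happens", did it happen?) - invented records
        cautious = [(0.6, 1), (0.6, 1), (0.6, 1), (0.4, 0)]
        bold     = [(0.9, 1), (0.9, 1), (0.9, 1), (0.1, 0)]
        print("cautious forecaster:", brier(cautious))   # 0.16
        print("bold forecaster:    ", brier(bold))       # 0.01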

  71. Albatross says:

    In example 2, I don’t dig. A broken clock is right twice a day. In examples 3 and 4 I am persuaded, more so in 4, but I’m biased because I’m an Anthropology major (also Business).

    In example 1, I’m sure I’m reading too much into it, but because the Green party rarely wins elections and because I strongly associate young earth creationists with the Red Tribe, I’d be somewhat persuaded by a red tribe person picking a Blue party win. Now, if the creationist is a member of the Blue party I take it back, but I tend to favor picks where people predict against their interests. Some exceptions apply, but obviously I am much more persuaded on Clovis dates by a scholar who previously argued for a more recent date than by someone who argued for an old date all along.

    I mean, example two is a classic contrarian who picks up credit whenever an unlikely consensus emerges. Thus I’m biased against the near-psychic early adopters and the bandwagon. Show me instead the invested expert who changed their mind relatively early and see what else they are contra on. This gives us a person who is capable of understanding the consensus and also capable of disagreement with it. Bigfoot Epcot guy is first, but I prefer instead the people who heard his theory and did the research.

  72. Anatoly says:

    I think it’s often implied that people we are evaluating as candidates for High General Correctness will approach questions in different fields in about the same way and with the same methodology. Whereas in practice there’s an incredible amount of motivated reasoning that probably often overshadows whatever benefit the Correctness is giving them.

    Moreover, since we’re looking at contrarian claims, specifically, the amount of motivated reasoning is even higher.

    Say we have a superpredictor named Sue, who in the past went against the expert opinion on two different topics in two different fields and ended up being right. We now examine other contrarian claims Sue has. One of them is “the claims of a small religious sect my family belongs to are literally true”. This is definitely an against-the-experts contrarian claim. How much more credence are we about to give it due to Sue’s past prediction successes? I guess not much. Assuming there is General Correctness, Sue’s past successes are evidence that she has an unusually high value of it, but still we expect that benefit to be absolutely swamped by motivated reasoning in the case of her family religion. But most contrarian claims Sue holds – and expresses an opinion publicly on – are probably more like “family religion” than “I have coolly and dispassionately examined the various pros and cons on this issue that’s not at all dear to my heart, and reached the following conclusion”. And, more importantly, we don’t know which are which.

  73. CJB says:

    Ok, so here’s my suggestion:

    We take a millennium prize problem’s solution, and keep it hidden. Then, we take a general population of non-mathematicians and sit them at a table – in front of them are two pieces of paper covered in whatever set of arcane symbols represents the solution to, say, the P=NP problem. They don’t even know if P=NP is true or not – they just know there are several sheets of paper with variant answers.

    Or better yet, multiple pieces of paper – half of which say “yes it does”, half of which say “no it doesn’t”. Only one piece of paper contains the true solution.

    Heck, you can do this with stuff that’s KNOWN. There are a million obscure but true theorems out there. Pick five at random, pick the most plausible-looking wrong ones, and see if people can reliably suss them out.

    The world is filled with obscure-but-proven knowledge – you can easily set it up so each person gets a different AREA – the first one gets an obscure question of CS, another one gets a question about protein formation, and so on. The chances that people would’ve read this particular paper published in the Romanian Journal of Astrophysics are very low.

    As a matter of fact – I’m pretty sure you could kick that off with an undergrad’s senior thesis. THEY don’t have to know anything about protein formation; what they have to know is that a certain protein observably does a certain thing, as stated in paper A, which demonstrated paper B to be wrong – and then see if people reliably choose A over B.

  74. Steve Sailer says:

    DNA data is a rapidly developing field, so we can look back to see who was right and who was wrong about new developments in population genetics.

    One of the more prescient books of recent times was Cochran & Harpending’s “The 10,000 Year Explosion”, which predicted, among much else, that modern humans would be discovered to be a little bit Neanderthal.

    http://isteve.blogspot.com/2009/02/my-review-of-10000-year-explosion.html

    You could go back to look up reviews of that book.

  75. Steve Sailer says:

    Here’s Paul Krugman’s 1996 takedown of Stephen Jay Gould:

    http://web.mit.edu/krugman/www/evolute.html

    I try not to have opinions on macroeconomics, since it ought to involve doing a lot of mental work that I am not in the mood to do. But anytime I’m in the mood to be snippy about Krugman, I try to remind myself that he dropped in briefly as an amateur to a field I know more about and quickly saw through its most overrated reputation.

  76. Steve Sailer says:

    I think reactions to ex-Harvard President Larry Summers’ controversial speech in 2005 might correlate well with a General Factor of Correctness related, at least, to human beings:

    http://isteve.blogspot.com/2005/03/larry-summers-math.html

    • PSJ says:

      I’m not sure this is a problem on your end, but every time I try to click through on a link on your blog, I end up on an unrelated advertising page. Do you know why that could be?

  77. Kromer says:

    If we are trying to identify characteristics and modes of thinking that allow individuals to excel at decision-making under uncertainty, a good complement to Tetlock-style experiments might be to analyze successful players at a game like Poker.

    Fundamentally all you’re doing in poker is predicting the future. Every time a player takes an action at the table (betting and sizing the bet/calling/folding) he is making a move that he believes maximizes his expected value given the remaining cards to be dealt and given other players’ likely response to his action. If you’re putting money into the pot, it’s because you believe either your cards are giving you higher equity than the other players’ cards, or you’re causing other players to fold cards with higher equity than your own.

    There are a lot of controls inherent to poker that resolve shortcomings in experiments like the GJP:
    – Specifying the question for prediction. The only goal in poker is to make money. No worries about too narrowly/widely defining the problem and conditions for a correct answer.
    – Incentives. Again, very few people come to a poker table without the primary goal of winning money. Motivated reasoning, signalling, and politics are not really in play.
    – Sample size. The long-run winners can be established easily, with millions of online hands played.

    I’ve seen a few common traits among players that are long-term winners, and unlike chess, IQ does not appear to be all-important. Calculating your own equity (aka implied pot odds) is trivial, but it is a foundational skill that must be learned. Reading ‘tells’ etc. is largely nonsense from movies.
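
    (For anyone curious, the “trivial” arithmetic looks something like this – a textbook flush-draw example, not anything from a specific hand; strictly speaking it compares plain pot odds with equity, implied odds being the same idea extended to money you expect to win on later streets:)

        def pot_odds(pot, to_call):
            # Fraction of the final pot you are putting in with a call; your
            # hand's equity must exceed this for the call to be profitable.
            return to_call / (pot + to_call)

        def flush_draw_equity(outs=9, cards_to_come=2, unseen=47):
            # Chance at least one of your outs arrives by the river.
            miss = 1.0
            for i in range(cards_to_come):
                miss *= (unseen - outs - i) / (unseen - i)
            return 1 - miss

        pot, to_call = 100, 25
        print(f"pot odds:          {pot_odds(pot, to_call):.0%}")   # 20%
        print(f"flush draw equity: {flush_draw_equity():.0%}")      # ~35%
        # 35% > 20%, so on these numbers alone the call is profitable.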

    The real skill is inferring a probability distribution (aka range) of hands your opponents are holding based on their betting action (the lines they take) at the table. This happens via a deductive process that incorporates information about the other player’s skill level, and tendencies you’ve seen among similar players in the past. To make these decisions, a player needs to draw on all his past experience – millions of hands – and find some general patterns that can be mapped to the current situation. There are some game-theory layers on top of this, but mapping those patterns is the foundation. And to internalize the correct maps requires a lot of reflection on past experiences, being very careful not to generalize from exceptional cases.

    TL/DR – the best poker players have data banks of generalizations that are more correct than other players’, and they use these as inputs into a probabilistic decision-making model. Players seem to accumulate the patterns via relentlessly scrutinizing past information as honestly as possible.

    The big question, I suppose, is whether any of the skills required to win an incentivized zero-sum game against other individuals generalize and map to other domains. I don’t really know… but as a way of thinking and structuring problems, I’ve seen a lot of parallels in my professional career.

    • Steve Sailer says:

      How much of success at poker is getting in games with bad players?

      Nate Silver talks about how he made a living as a Las Vegas poker player in 2005-2006 by exploiting tourists who were in over their heads. Then the tourist players disappeared at the end of 2006 and only pros were left, so he started losing money and quit in the spring of 2007.

      Interestingly, Silver could have used this experience to make an even more profitable bet in the financial markets: against mortgages, but he missed the connection between the popping of the housing bubble in California, Nevada, and Arizona at the end of 2006 and the disappearance of cash-rich tourists to fleece. When he wrote a book about his career in 2012, he still hadn’t noticed the connection.

      • RCF says:

        He mainly played online poker, not Las Vegas. And he attributed the change to the site he originally played at blocking Americans, leading him to moving to another site with tougher players.

        A copy of his book is hosted by Gwern Branwen (I believe this is the same person who posts as “gwern” here, but not 100% sure). I don’t know what the copyright status is. https://plus.google.com/103530621949492999968/posts/SEUx3tuyFka

        • Steve Sailer says:

          Silver still doesn’t get how the poker bubble was tied to the housing bubble. The “fish” he exploited tended to be people who were getting rich, or so they imagined, off the housing bubble. (For example, my barber in L.A. started spending about 25-30 days per year in Las Vegas around 2004 playing poker.) Silver blames those horrible Republican Congressmen regulating online gambling for the fish disappearing from the poker world at the end of 2006, but that was when the housing bubble started to pop.

        • Steve Sailer says:

          Here’s a macroeconomic history of the poker bubble:

          https://mises.org/library/monetary-origins-poker-bubble

  78. Steve Sailer says:

    One of my readers is a Super Forecaster in the Good Judgment Project, and kindly shared his (perhaps overly modest) insights:

    As Tetlock’s team keeps saying, doing well in this weird competition involves more than sheer luck. (I suppose that’s their biggest finding to date and they are doing all kinds of silly psychometric tests on us to see what they can correlate it to). Two examples:

    – In the first year, I finished high in my “experimental condition”, which had over 100 participants. All forecasts were individual in this condition. Top predictors from each group became “supers,” others were allowed to keep going as usual. The majority, I imagine, dropped out because it truly takes a lot of time. A few others who were near the top but didn’t make it to “supers” did well enough the next year to achieve the “super” status. Even if they “competed” within a pool of several thousand.

    – Last year, a particular group of “supers” beat everyone in the other groups by a largish margin. Today, this same team still has the best score even if “supers” competition is now among eight groups.

    And yes, the “supers” consistently beat everyone else, but I think it has a lot to do with self-selection for folks willing to google on a regular basis information pertaining to completely weird stuff like this:

    “Will China seize control of the Second Thomas Shoal before 1 January 2014 if the Philippines structurally reinforces the BRP Sierra Madre beforehand?” (The answer is supposed to come as probability and can be updated daily if desired.)

    As you can imagine, it requires more or less the same mentality as the one demonstrated by those tireless Wikipedia editors.

    http://isteve.blogspot.com/2013/12/tetlocks-good-judgment-project.html

    • Scott Alexander says:

      That explains why they beat Joe Q. Random, but not why they beat CIA analysts.

      • C.S. says:

        Could it be that CIA analysts, consciously or unconsciously, tailor their predictions in order to make them more palatable for their bosses?

      • Erl says:

        Possible mechanism: Super Forecasters are who you’re looking to hire when you go hunting for CIA analysts. They like making geopolitical predictions, they’re highly motivated by being right (in the absence of other rewards) and they’re willing to put in the legwork to do so (and good at said legwork). However, when you put out an “apply to the CIA” shingle, you get a lot of other folks: people with political or state department ambitions, people who read one too many James Bond novels, etc. etc.

        • Steve Sailer says:

          The CIA isn’t that great of a job: the pay is good enough for government work and the security investigation is so slow that strong job applicants often wind up going to work somewhere else rather than wait around.

          For example, physicist Greg Cochran applied to work at the CIA around 1980 or so, but the security background check took six months, so he went to work for Hughes Aircraft instead. The CIA could have used him: on 10/14/2002, Cochran publicly explained why Saddam Hussein couldn’t have an operative nuclear weapons program:

          http://www.jerrypournelle.com/archives2/archives2mail/mail227.html

          • Deiseach says:

            Correctly explaining why Hussein couldn’t have Weapons of Mass Destruction would be an extra reason not to hire him, because where a government wants to start a war or carry out some action deemed to be in its interest, it will do so – the administration doesn’t want the truth, it wants something that will support the line it is spinning: see the “sexed-up” documents* of the September Dossier, which was produced to support Tony Blair’s case for allying with the Americans to prosecute the war in Iraq, and the Matrix-Churchill affair, where a British engineering company prosecuted for supplying matériel to Iraq was shown to have done so with the knowledge and advice of the British government of the time – that of John Major, which preceded Tony Blair’s government.

            * BBC defence correspondent Andrew Gilligan filed a report for BBC Radio 4’s Today programme in which he stated that an unnamed source, a senior British official, had told him that the September Dossier had been “sexed up”, and that the intelligence agencies were concerned about some highly dubious information contained within it—specifically the claim that Saddam Hussein could deploy weapons of mass destruction within 45 minutes of an order.

          • Airgap says:

            Come on. USG was perfectly capable of ignoring all the reasons the CIA provided for why there probably weren’t WMDs. They didn’t need to prevent the CIA from coming up with those reasons in the first place.

      • Albatross says:

        I used to manage a team of financial analysts and I wanted to apply at CIA, but they don’t have a local office. The “Central” part weakens their pool.

        Also, my best analysts exhibited borderline autism spectrum behaviors and took longer to get promoted than less talented peers. They get cranky when ordered by superiors to adjust the analysis in ways they know are incorrect. Amazing speed and accuracy. But their peers are jealous and their superiors are threatened. Cassandra was right about everything and everyone hated her. The best CIA analysts are sure to be despised and marginalized.

      • Deiseach says:

        A CIA or Foreign Office analyst will be an expert on Latin America or Russia or China. They will know all there is to know from A-Z about the current administration, who’s in, who’s out, who’s making deals with which oligarch.

        But they’ll have no idea who Angela Merkel is or who’s the Prime Minister of Great Britain, because that’s not their area. A disinterested amateur who is coming at the question from a position of total ignorance and so has to start from scratch on “Where the hell is the Second Thomas Shoal and why would the Philippines care?” will probably include a lot of other sources and other data that an expert analyst would reject because it’s not relevant to their field.

        And since they’re not starting with any prejudices (“Look, I know Putin wouldn’t do that, because I read his school report from Fourth Class”) and don’t have any reputation to win or lose in the field, they can be a lot more open-minded and willing to consider ‘the unthinkable’ than the experts.

        That doesn’t mean they’ve got mysterious “Correctness” powers, it simply means they put in the work and use the brains and talents they already possess and most crucially are not wearing any blinkers because they don’t know what they’re not supposed to be looking at.

        • Steve Sailer says:

          The superforecasters Tetlock found weren’t coming from complete ignorance. They are people who have professional or hobbyist reasons for following world affairs closely for a long time.

          • Deiseach says:

            So that’s shifting us even further away from the “economists with the right view on physics” model; now we’re not talking about experts versus average joes, we’re talking about experts versus interested amateurs.

            That certainly makes it very interesting if the interested amateurs did better than average, overall, against the experts, but it still doesn’t help us towards a General Factor of Correctness that can be applied across a wide range, where the best economist can be trusted to pick the best option in global geopolitics after coming in to study the question for a while.

      • Airgap says:

        The difference between Joe and the CIA is that some of the CIA’s classified information is bullshit intentionally fed to them by their enemies to mislead them. Joe has bullshit fed to him too by the media, but media bullshit isn’t really tailored to effectively mislead reasonably smart folks like Joe, but to produce the right impression in a large proportion of folks a stddev or two dumber than him. The CIA has fairly smart and motivated people trying pretty hard to fool them, and these attempts are pitched at analyst-IQ level. An intelligence officer lives in a wilderness of mirrors, as Angleton put it.

  79. Anr-X says:

    If I were put on the spot to provide a guess (for the Good Judgment Project and such), my guess would be that this is a ‘neural networks’ thing.
    As in – when you make a computer neural network, you are trying to make the interface between ‘take in a lot of info’ and ‘output an answer’. The interface is doing a whole bunch of math things to appropriately weight and add all the info together and such. The better it is, the better your results should be, consistently.

    People, in a sense, already are neural networks. So I would expect that for those people who outperform the experts, that’s where they’re winning. (The question then might be: why aren’t these the kinds of people who are hired as experts in the first place? The most likely answer seems to me to be that the expert jobs are not actually ‘make these decisions all day and we hire the people who do it best’; they involve other things like investigating questions, dealing with people about it, etc., and thus attract employees based on that instead.)

    So I would expect that people with good generic neural networks do well in a lot of areas if the questions are formally put before them.

    However, I wouldn’t necessarily expect them to do well in ‘things they just go around believing’, since unless they have reason, they’re not necessarily likely to be doing all the ‘taking in input’ stuff like they are, presumably, in the study.

    The other problem is that I’m not sure to what extent this would be a trainable thing…

  80. Steve Sailer says:

    My general impression is that all truths are connected to other truths, so people who refuse to admit certain truths tend to intellectually hamstring themselves in other areas.

  81. multiheaded says:

    They found that the same small group of people consistently outperformed everyone else in a way incompatible with chance. These people were not necessarily very well-educated and didn’t have much domain-specific knowledge in international relations – the one profiled on NPR was a pharmacist who said she “didn’t know a lot about international affairs [and] hadn’t taken much math in school” – but they were reportedly able to outperform professional CIA analysts armed with extra classified information by as much as 30%.

    Wait, so… that thing in Consider Phlebas wasn’t a completely arbitrary excuse of a plot element to make it seem like the Culture is really really concerned about humans having more dignity than housepets? It was based in something real?

    duuuuuuude

    • Nornagest says:

      I would be astonished if Iain Banks knew about this when he was writing that. The book was released in 1987, which is a couple years after Tetlock seems to have started working on judgment, but I wouldn’t expect him to have produced any substantial results by that date. The popular press only took it up in the late 2000s.

  82. RCF says:

    Interpretations are not correct or incorrect. They’re interpretations.

    Re: first problem: obviously, the correct thing to do is to weight positions by the entropy.
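
    (Spelled out, one way to do that weighting: score each correct call by the surprisal of the answer under the consensus prior, so that echoing a near-certain consensus earns almost nothing and a correct long shot earns a lot – a sketch with invented numbers:)

        import math

        def surprisal_weight(consensus_prob_of_true_answer):
            # Bits of information in being right, given how strongly the
            # consensus already expected that answer.
            return -math.log2(consensus_prob_of_true_answer)

        # (description, consensus probability assigned to what turned out true)
        track_record = [
            ("echoing a 99.9% consensus",      0.999),
            ("a 20% long shot that came true", 0.20),
            ("a 60/40 judgment call",          0.60),
        ]
        for claim, p in track_record:
            print(f"{claim}: {surprisal_weight(p):.3f} bits of credit")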

  83. Troy says:

    GFC as Good Judgment: 6 predictions about GFC

    1. Scott’s definition of GFC is too strong. If we are looking for a “mysterious quality totally separate from all of these things,” which allows people with insufficient information to “beat the experts in those fields,” then we will be looking for a long time. Deiseach has already jumped on this. We won’t find GFC which fills in for knowledge gaps.

    2. But over the next decade, we will find such a thing as GFC. Intuitively, think of the person you know with like 5 horrible exes. That person is awesome at choosing losing prospects, like the Wharton study’s Harbingers are with products. This person has poor judgment, and I’ve known many people who quickly displayed poor judgment in one instance, and (predictably) went on to display poor judgment in many more instances. On the flip side, I’ve known people who consistently make good choices about jobs, relationships, trips, education, exercise modalities, and many sticky situations. Most people would say these people have “good judgment”. I trust such a person to move into unfamiliar areas and come out doing well.

    3. It doesn’t make much sense to demand people make accurate predictions in areas where they lack necessary information (see number 1) – but a good test is whether an expert draws better conclusions than other experts, and if that person can then make better decisions than average in “normal” human activities. In addition, if a person with above-average good judgment in everyday life goes on to acquire expertise, that person on average will do better than other experts in that domain.

    4. I think a person with good judgment in one domain is more likely to have that trait in many domains – the super forecasters will have good judgment in several other domains of life. But I bet some more deeply-rooted and emotional areas, such as religion, won’t be affected. In other words, you can have unusually good judgment without rethinking religion, and can be religious but also have unusually good judgment.

    5. I also predict that education and intelligence will be very weakly correlated with GFC, education especially so. This is because education is a traditional undertaking, as Scott discussed, and does not force a person to develop the careful thinking and alert emotions which contribute to good judgment.

    6. Last, I predict that even though education will turn out to have little to nothing to do with unusually good judgment, elite academic performers will on average possess high GFC. So most Ph.D.s will not have better judgment on average, but on average scientists with higher numbers of prestigious awards will.

  84. Steve Sailer says:

    I’d add that one skill that’s perhaps more relevant to Tetlock’s annual Good Judgment tournament than to the real world is understanding that there’s a bias in favor of estimating that something won’t happen: You have two ways of winning by saying X won’t happen — either X will never happen, or X won’t happen in the 12-month framework of the contest — and only one way of winning if you say something will happen: X both has to happen and X has to happen soon. As I wrote in 2013:

    Now that I think about it, I wouldn’t be surprised if a fair amount of competence in this tournament derives from having a sense of just how long it takes for stuff to happen. Since the game typically looks at annual time frames, so that it can determine winners and losers in a reasonable amount of time, I bet a lot of losers have a tendency to say, “Yeah, that will probably happen” without estimating how long it could take for it to happen.

    For example, say there is a question that asks if the coalition government in Britain or Germany or wherever will come undone. In the long run, the answer is surely Yes. But, will it happen within the next year? Powerful people often are pretty talented at kicking the can down the road for another year.

    Even if you read well-informed writers on a particular topic, your reading may bias you toward assuming something is going to happen soon. For example, consider the question of whether the division of the island of Cyprus will last. In the long run, perhaps not. On the other hand, the short run is now four decades old.

    If you read articles about the Cyprus situation, the authors have a natural bias to argue that this topic of their expertise is less boring than it sounds because Real Soon Now, something is going to happen, so you should pay attention to what they have to say.
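    A toy calculation (my illustration, not Sailer’s; the numbers are invented) of why “won’t happen this year” is usually the safer call even for events that are very likely to happen eventually:

    ```python
    # Toy model of the asymmetry described above: to win with "X will happen"
    # it must happen AND happen within the scoring window; "X won't happen"
    # wins in every other case. Assumes (purely for illustration) that the
    # event's waiting time is exponential with a 5-year mean and that it is
    # 90% likely to ever happen at all.
    import math

    p_ever = 0.9            # assumed probability the event ever happens
    mean_wait_years = 5.0   # assumed average wait until it happens, if it does
    window_years = 1.0      # the tournament's scoring window

    p_within_window = p_ever * (1 - math.exp(-window_years / mean_wait_years))

    print(f"P(happens within the window): {p_within_window:.2f}")       # ~0.16
    print(f"P('won't happen' scores a win): {1 - p_within_window:.2f}")  # ~0.84
    ```

    So a forecaster who says “yeah, that will probably happen” without asking “how soon?” loses most of the time, which is the point about kicking the can down the road.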

  85. Steve Sailer says:

    Another aspect is that the long range forecasting is often best done by currently unreliable individuals. A classic example is Rousseau, who had a remarkable hot streak of anticipating how people in the future would think differently than they had been doing in the first half of the 18th Century. But you really wouldn’t have wanted to trust Rousseau’s judgement about anything if you actually had to deal with him.

    Burke was another Enlightenment thinker who anticipated some of 19th Century thought. In 1790 he correctly predicted much of the next decade of the almost unprecedented French Revolution, up through military dictatorship. But that was his brilliant peak, and he started to crack under the strain after that, becoming more agitated and paranoid.

  86. Steve Sailer says:

    I would also want to distinguish between “good judgement” and, say, brilliant insight or genius innovation or whatever. Practically every town in America has at least one senior businessperson who has a long track record of making mostly good investments in local businesses and real estate. He probably didn’t come up with the biggest breakthrough moneymaking idea in the history of the town — that was probably due to somebody more manic, somebody with his own personal Reality Distortion Field — but everybody knows this fellow has been right a lot more often than he has been wrong.

  87. Darek says:

    Maybe this is obvious or stupid, but I’ll write it anyway.

    A big portion of whether our predictions are correct or not depends on how well our model of the world matches the reality. Of course, there will be luck or perhaps having, in some magical way, inside info on things, etc. Let’s dismiss these as noise (we would like to know how correct people are on average, for all sorts of questions).

    Now, the quality of our mental model comes mainly from
       1) being able to spot interactions,
       2) being able to correctly weight interactions.
    If we were to ask only well-thought-through questions, then (1) does not matter much, as people usually agree that some interaction exists (e.g. perhaps it is negligible, but it still exists). The real challenge is (2).

    I have no idea how well people do on (2), or whether some perform better than others on average, but it seems to me that it is quite similar to calibration. Given a context, the effects of some interactions are much stronger than others, and to arrive at a correct prediction we would like to know if one of them dominates the others, or maybe some two cancel each other, and so on.

    In this, I believe that (2) can be trained; in particular, perhaps experts are frequently right in their domain because they have “calibrated” their interaction-weights in the right context. In that case it seems that they should also be frequently right in other domains that have similar interaction-weights, and might be blatantly wrong if the real weights are much different (e.g., on the opposite side with respect to the weights of an average person).

    To give an example, suppose I’m going to throw a ball and I want to predict where it will fall. I have to take into account its mass, my strength, air resistance, wind, etc. We will all agree that air resistance will cause the ball to fall closer, but how does it relate to, say, wind? Moreover, even if I am an expert in predicting where a ball will fall, it might be hard to guess where a Frisbee will land.

    It means that to find the GFC we would have to ask (a broad range of) questions that would somehow represent the distribution of real interaction-weights. How to do that, I have no idea.
    And yet, there is another factor, which is how we update our interaction-weights, because it is just common sense to gain some intuition about the context/domain first (if possible) before making predictions. Maybe there is a whole meta level (and a meta-GFC?) that deals with this; I would guess that people who are able to update their interaction-weights quickly might have quite high GFC.

    To give an example, I might be bad at predicting where a Frisbee will land, but perhaps I can make a simple experiment first, see how ball-predictions differ from Frisbee-predictions and update my interaction-weights accordingly. Still, that won’t help me much if I don’t know about lift.
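    A tiny sketch of the weights picture above (my illustration; every number is invented): the ball expert and reality agree on which interactions matter, but not on how heavily each one counts, so a model calibrated on balls misses on Frisbees.

    ```python
    # Illustration only: prediction as a weighted combination of agreed-upon
    # interactions (drag, wind, lift). Point (1), which interactions exist,
    # is shared; point (2), their weights, is where predictions diverge.

    def predicted_carry(throw_strength, weights):
        """Crude model: carry = strength reduced by drag, shifted by wind and lift."""
        return (throw_strength
                - weights["air_resistance"] * throw_strength
                + weights["wind"]
                + weights["lift"])

    ball_expert_weights    = {"air_resistance": 0.10, "wind": 2.0, "lift": 0.0}
    frisbee_actual_weights = {"air_resistance": 0.30, "wind": 5.0, "lift": 8.0}

    strength = 30.0  # arbitrary units
    print(predicted_carry(strength, ball_expert_weights))     # 29.0, the expert's guess
    print(predicted_carry(strength, frisbee_actual_weights))  # 34.0, what actually happens
    ```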

    Summarizing, I think the GFC exists, but it might behave much differently from what we expect. In particular, the real interactions might be too complex for non-area-expert humans, or even humans at all, to model in any meaningful way – it might happen that no easy test will be able to pick that up. On the other hand, in some constrained setting (e.g., a narrow area where the interactions are not too complex) we could find _some_ factor of correctness, but due to the specificity of that particular setting’s interaction-weights, it might be totally useless for finding the General Factor of Correctness.

    That’s enough of my musings, perhaps you will find this relevant 😉

  88. vV_Vv says:

    We could get more interesting results by analyzing only people’s deviations from expert consensus. If you agree with the consensus about everything, you don’t get to play. If you disagree with the consensus about some things, then you get positive points when you’re right and negative points when you’re wrong.

    I don’t see why you would want to do that. If the goal is to find people who are often able to make accurate predictions, and if experts are indeed often right, then people who often make accurate predictions should often agree with experts.

    This is why Eliezer very reasonably talks about a correct contrarian cluster instead of a correct cluster in general.

    EY has a bunch of contrarian beliefs that he wants you to believe, hence it is no surprise that he tries to downplay the “agree with the experts” heuristic.

    The third problem: can we differentiate positive from negative selection?

    Sure, just do the relevant statistics.

    Note that this is also an issue for the general factor of intelligence: are all the correlations between IQ and life outcomes caused by people with severe mental retardation having very poor outcomes? Probably not, since we observe noticeably different average IQ by profession, but it was a plausible hypothesis.

    Suppose that Alice says that there’s a 90% chance the Greek economy will implode, and Bob has the same information but says there’s only an 80% chance. Here it might be tempting to say that one of either Alice or Bob is miscalibrated – either Alice is overconfident or Bob is underconfident. But suppose Alice says that there’s a 90% chance the Greek economy will implode, and Bob has the same information but says there’s only a 10% chance that it will. Now we’re more likely to interpret this in terms of them just disagreeing.

    I don’t understand what qualitative difference you are seeing here.
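    One way to make the point concrete (my sketch, not from the comment): under a proper scoring rule such as the Brier score, the 90%-vs-80% case and the 90%-vs-10% case differ only in degree, so there is nothing qualitatively new about the second one.

    ```python
    # Brier score = (forecast probability - outcome)^2, lower is better.
    # The same arithmetic handles a small disagreement (0.9 vs 0.8) and a
    # large one (0.9 vs 0.1); the large one simply puts more at stake.

    def brier(p, outcome):
        return (p - outcome) ** 2

    for alice_p, bob_p in [(0.9, 0.8), (0.9, 0.1)]:
        for outcome in (1, 0):  # 1 = the economy implodes, 0 = it doesn't
            print(f"outcome={outcome}  Alice({alice_p}): {brier(alice_p, outcome):.2f}"
                  f"  Bob({bob_p}): {brier(bob_p, outcome):.2f}")
    ```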

    If ability to evaluate evidence and come to accurate conclusions across a broad range of fields relies on some skill other than brute-forcing it with domain knowledge and IQ, some skill that looks like “rationality” broadly defined, then cultivating that skill starts to look like a pretty good idea.

    Yes, but I’m worried about stuff like this:

    If, outside of their specialist field, some particular scientist is just as susceptible as anyone else to wacky ideas, then they probably never did understand why the scientific rules work. Maybe they can parrot back a bit of Popperian falsificationism; but they don’t understand on a deep level, the algebraic level of probability theory, the causal level of cognition-as-machinery.

    Kids, don’t parrot back Popperian falsificationism, instead parrot back a bit of the algebraic level of probability theory, the causal level of cognition-as-machinery, something something Bayes theorem, something something Solomonoff induction, and you’ll earn your True Rationalist™ card which is a free pass to disregard expert knowledge and pontificate on subjects you have no expertise on whenever you feel like it.

    • MartinW says:

      I don’t see why you would want to do that. If the goal is to find people who are often able to make accurate predictions, and if experts are indeed often right, then people who often make accurate predictions should often agree with experts.

      Yes, but in that case you don’t get any new information from them. “Trust the opinions of people who trust the experts” simply reduces to “trust the experts”. If Alice always trusts the experts, and you already know what the expert consensus is on a given topic, then you know what Alice will think about that topic without even having to ask her.

      On the other hand, if Alice usually believes the experts, but sometimes she deviates from the expert consensus, and whenever she does that she invariably turns out to be right once the issue has been settled thanks to new evidence coming in, then it would make sense to pay a lot of attention the next time she takes a contrarian position.

      • vV_Vv says:

        On the other hand, if Alice usually believes the experts, but sometimes she deviates from the expert consensus, and whenever she does that she invariably turns out to be right once the issue has been settled thanks to new evidence coming in, then it would make sense to pay a lot of attention the next time she takes a contrarian position.

        Therefore, you should evaluate Alice based on how often she is right, irrespective of whether she agrees with the experts.

        • MartinW says:

          Therefore, you should evaluate Alice based on how often she is right, irrespective of whether she agrees with the experts.

          Suppose that Alice has made 100 predictions in the past, and she turned out to be correct on 95 of them. Pretty good score, right?

          Now, I look into it a little bit deeper, and it turns out that in 90 of the 95 cases where she was right, she was just agreeing with mainstream consensus, and in the 10 cases where she took a contrarian stance (or maybe there wasn’t a clear consensus among the experts in that area), she was wrong half the time.

          Alice has just made another prediction, and it disagrees with mainstream consensus. How seriously should I take her?

          There’s that old jape about how someone’s work is both true and original, but unfortunately the parts that are true are not original, and the parts that are original aren’t true.
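          Put as arithmetic (using the numbers above): the relevant figure for her new prediction is her accuracy conditional on being contrarian, not her overall accuracy.

          ```python
          # The example above: 100 past predictions, 95 correct overall,
          # but only 5 of the 10 contrarian calls were right.
          consensus_calls,  consensus_right  = 90, 90
          contrarian_calls, contrarian_right = 10, 5

          overall    = (consensus_right + contrarian_right) / (consensus_calls + contrarian_calls)
          contrarian = contrarian_right / contrarian_calls

          print(f"overall accuracy:         {overall:.0%}")     # 95%
          print(f"accuracy when contrarian: {contrarian:.0%}")  # 50%
          ```

          Which is roughly the 50% figure vV_Vv arrives at below, subject to the small-sample caveat.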

          • James Picone says:

            Picking 50% of the true contrarian ideas is actually pretty good. Most different-from-expert-consensus ideas are wrong to very wrong, even the ones that are popular enough that an expert in another field is likely to pick them up.

            That’s an effect that should probably be considered here, incidentally. Even if you’ve found someone good at picking true contrarian positions, the prior on a contrarian position being false should be high enough that that new evidence shouldn’t get you all the way to considering it true. Maybe consider that contrarian idea a candidate.

            Going the other way – finding someone who agrees with contrarian positions that you also agree with, and then assigning higher weight to other elements of their worldview – sounds like an excellent way to build a tribal identity and set of beliefs.

            Kind of hypocritical of me to take that position, though, I guess, what with being one of those engineers who thinks philosophy is mostly solved problems and silly semantic games. 😛

          • vV_Vv says:

            Assuming that Alice is a bot that has made 100,000 predictions rather than 100, so that you can make meaningful statistics over them, and assuming that success rates will remain stable, and assuming that you can accurately assess expert consensus (at least as well as Alice can), and assuming that you know nothing about the prediction other than that it disagrees with consensus, then you should conclude that Alice has a 50% probability of being right.

            This is a general consequence of the fact that you should condition on all the evidence you have. You could have as well conditioned on the prediction domain, its political content, what Alice ate for breakfast, and so on. Why single out contrarianism?

  89. I have been following your blog for a while now and your post The General Factor of Correctness was of particular interest to me. You see, I am busy on a book entitled The Smart Vote which tries to get at such a factor using IQ as a proxy. More precisely, I try to elaborate on the direction in which opinions on controversial topics shift as one moves from low to high on this factor.
    My blog is http://garthzietsman.blogspot.com/ and the posting which explains the concept of the Smart Vote is http://garthzietsman.blogspot.com/2011/10/smart-vote-concept.html
    Actually I think IQ is more than a proxy for a General Factor of Correctness. The g factor is extremely general for a start (see more in the Smart Vote Concept post or the work of Linda Gottfredson.) One could in principle construct a fairly accurate IQ test purely from belief and lifestyle items – provided you included sufficiently varied items. I’m pretty sure a GFC as you define it, i.e. correctness where you have differed from expert opinion, would have a fairly high positive correlation with IQ. We know for example that there are individual differences in the cognitive biases identified by Kahneman, Tversky, etc., that performance on these items correlates in such a way as to suggest a general factor, and that this general factor is correlated with IQ. However not all the items are correlated with IQ. So I’m inclined to think there is a dispositional or personality rationality factor that adds to the role of IQ.
    Just a couple of comments.
    Firstly IQ isn’t independent of ability to predict accurately in Tetlock and co’s subjects. They clearly say their top predictors have higher IQs than their more run-of-the-mill predictors. Their sample is definitely restricted in range with respect to both IQ and predictive ability, so the actual correlation is appreciably higher than they quote. They did say they would share data with me once they had published, so I can check how the Smart Vote does on their prediction tasks.
    Secondly even if a GFC does exist, the correlation between being right on one particular question and being right on another is likely to be low. One would need to look at performance on at least 30-40 questions before you start trusting someone’s opinion on new questions. Personally I wouldn’t trust the opinion of a single individual, but would look at the difference in the opinions of largish samples of high and low scorers on a 30-plus item correctness test – like the Smart Vote does (see the sketch at the end of this comment).
    Thirdly Bryan Caplan showed that high IQ is a better guide for thinking like an economist than education (and mentioned a similar finding for expertise on poisons) but the correlation with IQ didn’t agree with the economist-average dude divide on every single economics question. Unfortunately I don’t know whether the smart or the economists are right in those cases. Furthermore there are areas where the Smart Vote disagrees with expert opinion on almost every question e.g. theology. I would say that agreement with the Smart Vote is a good indication that there is something to an intellectual discipline.
    Fourthly if mental illness accounts for a large fraction of those that score very low on a multi-item correctness test we might see at least two major factors instead of a single general factor. I also think one could spot and exclude such subjects. On the other hand if the ability to be correct (or hyper-rational) is in some sense the same as being hyper-sane then I think the deluded will be a feature rather than a bug.
    Fifthly I do think the ability to calibrate well is relevant to correctness but is not all there is to it. I suspect calibration ability is positively correlated to the ability to avoid cognitive biases in general.
    Finally the other questions you mention – sports outcomes, industry trends, scientific findings – are every bit as interesting as the best answer to controversial questions and I believe the Smart Vote will be informative on them too. I plan to try it out. I did try on sports events once but finding any Mensa members with much interest in sports proved difficult.
    On your four questions.
    1) I think the information should shift your priors toward the Green Party slightly.
    2) The evidence is that people who got one very big counterintuitive call correct tend to be less correct in general, so probably one should be even less inclined to go dig than you were before; but I would probably raise my previous extremely low prior to very low, and compare the expected outcome to the anticipated costs.
    3) I think the information should shift one’s priors very slightly toward the Schmoeist position, because a single question is hardly a reliable guide to the General Factor of Correctness, i.e. its g loading would be low.
    4) With ten questions one is approaching something like a reliable test and a 9 to 1 ratio is a very big difference so here one should shift one’s priors toward the Schmoeist position quite a lot.
    Regards
    Garth
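    To make the multi-item idea concrete (this is my illustrative sketch, not Garth’s method; the data are simulated): with a 30-plus-item correctness test scored across many respondents, one can check whether a single factor accounts for the inter-item correlations, for instance with scikit-learn’s FactorAnalysis.

    ```python
    # Simulate respondents whose per-question correctness is partly driven by a
    # single latent trait, then check that a one-factor model recovers it.
    # Purely illustrative; real belief items would be binary and messier.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n_people, n_items = 1000, 30

    latent = rng.normal(size=n_people)              # the hypothetical "GFC"
    loadings = rng.uniform(0.3, 0.7, size=n_items)  # how strongly each item reflects it
    scores = np.outer(latent, loadings) + rng.normal(size=(n_people, n_items))

    factor = FactorAnalysis(n_components=1).fit(scores).transform(scores)[:, 0]
    print("correlation of extracted factor with the true latent trait:",
          round(abs(np.corrcoef(factor, latent)[0, 1]), 2))
    ```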

  90. Protagoras says:

    Like others above, I’m not that impressed with an ability to outperform CIA analysts; I’m pretty skeptical of the value of the CIA, and intelligence agencies generally. Employ people to lie and keep secrets, and it is pretty much guaranteed that they will lie to you about how much they’re accomplishing, and keep their failures secret. Operating in an environment like that is unlikely to improve someone’s connection to the truth.

  91. Troy says:

    Several people have mentioned that a problem with a general factor of correctness is that people who have taken the time and energy to learn about one field will generally not have taken the time and energy to learn about another. It seems, then, that perhaps what we want is to see whether people who have taken the time and energy to learn about controversial topics and are right about one such topic tend to be right about others. For example, most philosophers know many of the basic positions and arguments in a number of areas of philosophy. Does being right about, say, metaphysics make one more likely to be right about ethics?

    Of course, if this is what we’re interested in then it looks difficult to measure for reasons already discussed, because controversial questions are controversial. We’d need to find a field in which experts have opinions on a bunch of controversial topics, and then some of those questions become “settled,” and then we can check who was right on those, as Scott suggested. It doesn’t seem like there are many such fields around.

    • Steve Sailer says:

      Population genetics, behavioral genetics, and related fields are ones where there is an ongoing huge leap forward in technology, so a lot of old theories can now be tested fairly definitively, with more data pouring in all the time.

    • Nornagest says:

      A huge shakeup in taxonomy started a few years ago when cheap DNA analysis became available, so that might make a good source of controls. Taxonomy’s a contentious field, though, and also a hedgehog-heavy one, which might limit its usefulness in this context.

  92. Steve Sailer says:

    I would add that an important part of Good Judgement is the willingness and ability to perform reality checks on ideas that you like. For example, Malcolm Gladwell has promoted quite a few pretty good ideas over the years. For example, he was well ahead of the curve with the idea that football as a mass high school sport is in major long term jeopardy due to headbanging.

    On the other hand, he has also promoted a lot of clearly wrong ideas because he’s not very willing to subject his ideas to simple reality tests. Moreover, not very many outsiders were willing to do reality checks on his ideas either. Finally his reputation took a long term hit around 2009 in his exchange in the New York Times with Steven Pinker over Gladwell’s repeated contention that NFL teams are, in effect, no better than random at drafting college quarterbacks.

    But before this humiliation, only a tiny number of critics had identified Gladwell’s flaw, suggesting to me that it’s a quite common one.

    And yet, performing several reality checks on your favorite ideas seems like something people could be taught to remember to do. So I could see a simple way to improve Good Judgement overall: inculcate a self-critical attitude, an urge to catch yourself in a flaw before somebody else catches you in it.

  93. TrivialGravitas says:

    Yudkowsky seems to be continuing his incredibly aggravating habit of talking about critical thinking without ever reading the critical thinking science. Scientists are bad at other fields because you learn to think critically in just one field at a time. I can’t say I really understand why that is, but cross-domain critical thinking is really hard. Alarmism at scientists being bad at other sciences is unfounded (except to the extent that scientists become convinced they are experts at other fields, looking at you, medical science).

  94. Leo says:

    I read the CFAR link about credence. Yes, people tend to be overconfident. And people who are very rational and well calibrated in the lab seem to throw that away in their daily lives. But consider the alternative. Imagine the paralysis-inducing potential of considering all the beliefs that inform your day-to-day choices as having a 60% probability, rather than a 95% probability. Overconfidence allows us to get things done. In my work I’ve seen overconfidence compensate for actual ability more than I’m comfortable admitting.
    If anything I’d like to be more overconfident. Do or die. Go big or go home. I’m (97%) sure natural selection favours overconfidence, especially in men.
    Overconfidence outside the lab is something we should stop and be grateful for every so often.

  95. Josh says:

    I would expect a big factor in correctness is whether or not a belief is part of your personal identity. If my sense of self is based on X being true, my opinion on X being true is pretty much worthless. So I would expect that for a given individual, there would be no correlation between how right they are on beliefs that are part of their personal or cultural identity to how right they are on beliefs about things they have purely an intellectual interest in.

    So I’m bearish on any correctness prediction algorithm that doesn’t differentiate between ego-attached beliefs and non-ego-attached beliefs….

  96. thedufer says:

    > If they can beat the experts in those fields, then I start really wondering what their position on the tax rate is and who they’re going to vote for for President.

    It seems to me that what you really care about is who they think is going to win the race for president; who a generally correct person is going to vote for has no real bearing on the future, and ought to be orthogonal to who they think will win. Unless you have reason to believe there’s use in knowing who the “correct” presidential candidate is, even if that knowledge has no bearing on the election?

  97. Chuck Garvey says:

    My answer to discussion question 3 is “very little” – I still don’t know whether Schmoe is true. This provokes an interesting observation.

    I believe that there’s a nonzero Correct Contrarian Cluster in people’s private opinions. I’m thoroughly unconvinced that it has any good mapping to their professional opinions. In general, I don’t think that people (possibly excepting rationalists and philosophers) arrive at their professional views using the same habits or toolkits they use in their everyday life. Mechanical mastery and dedication can advance a person to the peak of Hayekian economics or quantum mechanics, but that’s not a guide for life.

    The Correct Contrarian Cluster is almost by definition a pattern in non-expert opinions. We’re looking for an effect that spans unrelated fields and opposes conventional wisdom, so we can’t possibly find it using the best expert views on every topic. A scientist may have a correct, cutting-edge belief because of their extreme familiarity with either the data or the detailed views regarding a topic, but our Correct Contrarians can’t do the same.

    As such, I would say that someone’s professional correctness is actually worse evidence for their location in the CCC than their personal correctness.

  98. Brian says:

    OK, my answers:

    1. Maybe move ever so slightly in favor of the Green party. Leaving aside that neither guy is exactly predicting a landslide and therefore shouldn’t radically alter my assumed belief that it’s a tossup, I could imagine there being SOME correlation between being a Young Earth Creationist and not being able to successfully and dispassionately juggle poll data.

    2. Hell yes. These are two very specific beliefs that, at the time I’m on the bus, seem nearly impossible. If it turns out that the Rambling Bus Guy nails one of them it’s hard to believe that he got there randomly, like monkeys typing until they churn out Hamlet. Sounds like he’s got the inside scoop.

    3. Not much. Where the disbelief in evolution in question 1 might make me question the pollster’s ability to correctly interpret data, being correct on one controversial issue, by itself, doesn’t seem to mean much.

    4. Now we’re getting there. It’s a really interesting question. But sure. With a big sample size (the entire economics profession) and a demonstrated ability to perform significantly better across ten separate questions, I could imagine that correlates well with success in evaluating economics. Unless there’s some reason to think that Schmoeists are typically much more interested in archaeology than anti-Schmoeists or something.

  99. Greg says:

    I think we try to do this in specific domains all the time.

    For example, when selecting which money manager to invest in, you try to tell which managers have made correct market predictions to such a degree / in such a manner that they’re likely to do so persistently in the future. Moreover, you “benchmark” the managers against some consensus-like option, like buying a broad market index ETF. You also consider survivorship bias, and a bunch of other stuff.

    Or, when choosing a doctor, at least in principle, one thing you consider is whether the doctor has made correct medical predictions to such a degree / in such a manner that they’re likely to do so persistently in the future. We probably want predictions better than the consensus-like baseline of “consult Web MD and do whatever it says.” Is there a medical factor of correctness?

    Or, more generally, supervised machine learning algorithms are looking for patterns that will replicate on blind data, better than simple baselines. Motivated by ML, you could replace “correctness factor” with the more general “baseline-beating predictive model” – and this model may just be “listen to expert X.” There are even ML techniques called “mixtures of experts”, which may expand a bit on that idea.
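    The “mixture of experts” aside can be made concrete with a weighted-majority-style sketch (mine, not Greg’s; the numbers are invented): keep a weight per expert, forecast with the weighted average, and shrink the weight of whoever turns out wrong. The baseline of “just listen to expert X” or “follow the consensus” can simply be included as one more expert.

    ```python
    # Minimal multiplicative-weights combination of expert forecasts for yes/no
    # questions. Experts who predict badly lose weight and influence over time.

    def combine(probs, weights):
        return sum(p * w for p, w in zip(probs, weights)) / sum(weights)

    def update(probs, weights, outcome, eta=1.0):
        # penalize each expert in proportion to squared error (bounded, so weights stay positive)
        return [w * (1 - eta * (p - outcome) ** 2 / 2) for p, w in zip(probs, weights)]

    weights = [1.0, 1.0, 1.0]                 # start by trusting all three experts equally
    history = [([0.9, 0.6, 0.2], 1),          # (expert forecasts, actual outcome)
               ([0.8, 0.5, 0.3], 1),
               ([0.7, 0.4, 0.9], 0)]

    for probs, outcome in history:
        print("combined forecast:", round(combine(probs, weights), 2))
        weights = update(probs, weights, outcome)

    print("final weights:", [round(w, 2) for w in weights])
    ```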

    So, do baseline-beating predictive models exist in specific domains? Sure, unless the system is literally total noise. Is it possible to find such models? That depends on the signal-to-noise ratio, and whether your model-choosing method can find signal given the amount of data available.

    Is there anything different about a general case, that is applied to all decisions rather than just financial or medical decisions? I don’t see why not. Can we find such models? In principle, sure – but the same caveats apply as when trying to find a domain-specific model, and it’s not like it’s easy to pick an actually good money manager or doctor. If you only have a few decision data points, and thousands of possible experts to choose among: well, statistical techniques won’t be of much help. But maybe you can use some magic human intuition and pick a winner!

  100. yellowish fish says:

    this is perhaps not so relevant but it is factual…I have a job where I get to do a lot of cold reading (not “auditioning from scripts” but “telling strangers what they are”) and if you are from Australia I get no points for saying “you’re from Sydney”…Perth or Brisbane are more impressive if I have to just guess

  101. Airgap says:

    1. Five Thirty Eight is down because I took it down because fuck Silver. Instead of looking for other sites, I watch Robocop again. Watching it does not change my opinion of who will win the election. This is the point.

    2. I assume that the man is a second Unabomber, and has agreed to suspend his bombing campaign if the TV station will publish his 9/11 conspiracy theory. I also head to Florida with an excavator. And several bottles of whisky.

    3. I recognize that, regardless of the relative merits or truths of the theories, anti-Schmoeism is the hard one with all the math and shit, and Schmoeist professors just have more time on their hands to familiarize themselves with archaeology, and more inclination to do so because they’re not so heavily selected for narrow mathematical prowess. There’s really no basis to shift my beliefs one way or the other. Which is good because I basically regard Schmoeist economics as being for sissies.

    4. Why do you have time to know all this other shit? Get back to work!

  102. Douglas Knight says:

    Most people do not hold a lot of contrarian beliefs, so the correlations do not tell you much about the individual. One person who holds a wide range of contrarian beliefs is Ron Maimon (Quora, Stack Exchange). It is from him that I learned about the Soviet theory of abiotic oil.

    Another thing I first heard from Maimon is that Hellenistic science was fucking awesome. This isn’t exactly contrarian. Indeed, it is the consensus of historians of science, yet it seems obscure. People criticize Aristotle’s science as if it were the culmination of Greek science, despite having heard the names Archimedes and Euclid. It seems to me that the Hellenistic Age is generally neglected, perhaps because its humanistic writing was crap (though its visual art was also awesome). I think it is also neglected in histories of the world, which I find more mysterious. Maimon goes farther, to the contrarian position that science didn’t merely slow down after the Hellenistic Age, but regressed, never to be matched by the ancient world. That Galen, Hero, and Ptolemy were not the pinnacle of ancient science, but a pale shadow of the lost work of their forebears, a position pursued by Lucio Russo.

    ━━━━━━━━━

    I don’t know if Freeman Dyson has any public contrarian views, but he has many times complained that science is too conformist. He has supported many contrarians, such as by writing the foreword to Gold’s book,* but I don’t think those are endorsements of their theories. It might be a useful exercise to compile a list of his similar endorsements, but I’m not sure what to do with it.

    * Gold’s Deep Hot Biosphere is closely related to his belief in abiotic oil, but is potentially interesting if you just believe in abiotic methane, which I think is the current consensus. Maybe even just methane elsewhere in the solar system.

  103. Douglas Knight says:

    There are two things we could try to do here. We could try to use such correlations to evaluate individuals or we could use them to evaluate hypotheses. I suspect that the latter is more useful because it gives larger samples, so that the signal can overtake the noise.

    If Don has just introduced a new hypothesis and is the only one who holds it, there is not much difference between the credibility of Don and the credibility of the claim. Don probably does not hold enough contrarian beliefs for them to contribute to his credibility, let alone his theory’s credibility (though they may be enough to discredit him). But for an older theory with lots of backers, lots of weak evidence might accumulate from the other endorsements of the backers.
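    One way to cash out “lots of weak evidence might accumulate” (my sketch; it assumes, unrealistically, that the backers’ track records are independent): each backer’s record contributes a small likelihood ratio, and the odds multiply.

    ```python
    # Illustration: many weak, assumed-independent likelihood ratios combined by
    # multiplying odds. All numbers are invented.
    import math

    prior_odds = 1 / 20                    # hypothetical prior odds on the old contrarian theory
    backer_likelihood_ratios = [1.3] * 10  # ten backers, each with a mildly good track record

    posterior_odds = prior_odds * math.prod(backer_likelihood_ratios)
    posterior_prob = posterior_odds / (1 + posterior_odds)

    print(f"posterior probability: {posterior_prob:.2f}")  # about 0.41, up from about 0.05
    ```

    With only one backer, the same arithmetic barely moves the needle, which matches the point about newly introduced hypotheses.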

  104. Austin says:

    “A general factor of correctness” makes a lot of sense when trying to talk about an oracle or black box that is known to use similar methods to predict information in all circumstances it encounters. However, people exhibit the property of being extremely good at compartmentalizing how they think about various things, and not letting their ability to think well about one subject improve their thinking on subjects that have more social importance to them.

    Consider a profession that ought to systematically attract/produce people with a high General Correctness Factor: managing hedge funds.

    To be a successful hedge fund manager somebody has to have an uncanny ability to be correct while disagreeing with the experts and the consensus opinions. Good hedge fund managers do things like decide that the SEC or the SBA have completely failed to detect fraud that the fund manager thinks is blatantly obvious in public disclosures, or predict which recent pharmaceutical developments will live up to their hype, how diplomacy will influence a country’s credit rating and what impact that change in credit rating will have on the economy, etc. There are certain hedge fund managers that pick dozens of stocks a year based on complex judgments about fields outside of any expertise they actually have.

    And if you try to figure out who you should vote for in the next election by looking for people who have demonstrated a general ability to be correct by consistently outperforming the experts at predicting what will happen in a bunch of different industries and political negotiations, and how those events will influence markets — you’ll quickly find that there is no more correlation among their political opinions than what you would expect from crude demographic information. (They tend to be socially liberal relative to the average American.) Especially surprising is the disparity among their beliefs about fiscal policy and economics, since if we assumed they all had one skill in common that they should be better at than anyone else, it would be something related to understanding finance and economics. Most of them adhere to some slightly more nuanced version of some mainstream platform (liberal, conservative, libertarian; I don’t know of any socialists in this group, but again, that’s better explained by selection bias than by anything else).

    Why? One reason is that people reason about investing decisions completely differently from how they reason about politics. Politics is a signalling game and a ritual of moral posturing. The internal oracle that people consult with respect to political decisions is, for most people, one that has a whole lot more to do with their self-image than with taking everything they know and using it to synthesize the most accurate predictions they possibly can. I would expect that some people are better at decoupling what they want to believe about the world from what the evidence tells them in general, but that this skill tends to be domain-specific. Most people are bad at predicting foreign policy, in part, because most people are rooting for some particular viewpoint on how foreign policy works to be correct. People who can somewhat follow what’s going on in the world without getting too attached to any standard posturing related to foreign affairs ought to be better at predicting foreign affairs than the “experts” in that field, because “experts” in foreign affairs tend to be people who are actually highly motivated to push a particular agenda related to foreign policy rather than people who have figured out how to think about it objectively. Having this ability with respect to foreign policy does not necessarily translate well to any other particular domain. Practically everybody stakes their identity on something. I have a blind spot with respect to technology because I am personally very ideologically invested in the Linux ideology (not to be confused with the GNU ideology). It is blatantly obvious to me that the most pragmatically-approached open source initiatives will always outperform closed source initiatives and those open source zealot things that sacrifice pragmatism for purity. Often they do. Often they don’t. In advance it always seems to me that the one that agrees with my particular approach will win.

    This is a circuitous way of saying that correct contrarian positions shouldn’t be expected to cluster globally for any person or socially for any group. When beliefs cluster socially, it’s usually an indicator of people staking their identity on those beliefs and continually signalling to each other about those beliefs. Having some amount of social bonds and signalling to people who share them is important to everybody except sociopaths and psychopaths. So pretty much everybody can be expected to have a few blind spots where they systematically underperform their normal capacity to form accurate beliefs. For some people it’s politics; for some, morality; for others, religion; for others, computers; etc.

    Trying to cluster around being super-rational or super-correct is just an in-group doing in-group things. It’s an admirable goal, but it results in the creation of rationality blindspots, not their elimination.

    (To be clear, I’m not arguing that there should not be correlations between individuals or social groups and correctness, merely that this additional correctness factor doesn’t seem like a useful thing to postulate about humans, and especially doesn’t seem like a good thing to postulate about social groups. For sure, there are some social groups that are on average more intelligent than others, and some that tend to be more well-informed, more willing to admit mistakes, update their beliefs, etc. Smart, well-informed people who update their beliefs in response to new information will be more correct than smart, well-informed people who believe in sticking to their guns (and much more correct than unintelligent, ignorant people who believe in sticking to theirs). But that sort of thing is hardly deserving of comment.)

  105. Just for the record… my responses to your questions:

    1. Yes — I think Green is SLIGHTLY more likely to win. But it’s a very small difference. I discounted the first site’s reliability slightly because of the author’s independent belief in a Young Earth.

    2. Yes — Despite the man’s crazy demeanor, Bigfoot causing 9-11 is a ridiculously improbable fact and one that no one really predicts. This person could have just been crazy, but they’ve crossed a fairly high bar in that prediction, and I give at least some weight to the chance that they also knew something about EPCOT.

    3. Curiously, this does NOT cause me to favor one theory over the other by any appreciable amount. I’m not sure why I discount this prediction so strongly, other than the fact that it appears highly unrelated to economic theories. This appears to be inconsistent with my positions on 1 and 2.

    4. In this case I absolutely favor the Schmoeists. And I want to go poll the Schmoeists on about 100 OTHER topics to see what they say.