Fix Science In Half An Hour

You’ve probably heard about the crisis of replication in psychology. The problem is that replication is an unglamorous business; researchers would much rather do the sexier work of pushing forward knowledge with new results.

So we need to make replications more glamorous.

I propose a reality TV show, Replication Lab!, where every week they try to replicate one of the most famous experiments from the past few years.

It starts with the host explaining the experiment, maybe an interview with a very distinguished elderly professor who talks about how confident he is that his results will hold up. As they construct the experimental setup, the techs chat with each other about how they’re doing, how their dates last night went, and how they’re going to avoid the problems that confounded the original study.

Suspense builds as we see the participants come in. Some human interest stories. He agreed to participate because they offered $30, which he’s going to use to buy a present that will win back his estranged daughter’s love. She joined because she’s right on the border of failing her psych class and needs the extra credit to save her dream of becoming the first person in her family to graduate college.

The experiment itself. The suspense is unbearable. We get a running commentary as everything proceeds. Oh man, look how harsh that guy is being on his Milgram Obedience Experiment, can you believe he would do that? That girl in the control condition seems to be running through her Stroop task at lightning speed – how do you think that’s going to affect our results, kindly-looking bearded scientist attached to the show?

After a tension-building commercial break, we get the results. Everyone is huddled around a computer as the statistician makes the final mouse click, and…oh no, p = .30! Total failure to replicate!

The scene cuts to the distinguished elderly professor’s face as he sees his great discovery going down the toilet. “How do you feel right now?” asks the host, and the professor sputters “I…I’m sure time will vindicate me! I know it!” and then he runs off the set, crying. Our host turns to the kindly-looking bearded scientist attached to the show. “Tell me the truth,” she says “Do you think Dr. Zuckerman’s career is ruined?” “I can’t imagine it wouldn’t be,” says the bearded scientist, shaking his head sadly.

I feel like Mythbusters has probably pretty much exhausted our cultural stock of urban legends by now and could be profitably recruited for this project. I would also accept “Welcome to Replication Lab! With your host, John Ioannidis!”


16 Responses to Fix Science In Half An Hour

  1. Pingback: Welcome To Replication Lab! | Random Nuclear Strikes

  2. suntzuanime says:

    I think the lower-hanging fruit might be to get the people who run game shows to turn them into economics experiments. These shows already exist and give away sizable cash prizes, yet economics experiments are often criticized because the amounts of money at stake are too low to really motivate the participants. I know there was a show, Golden Balls, that had a memorable treatment of the Prisoner’s Dilemma, but I’d like to see more stuff like that.

  3. DSimon says:

    Iron Scientist!

    The secret ingredient for today’s competition is: math.

  4. Bruno Coelho says:

    Jokes aside, an incentive system for replication would probably do more good than the “try to be original” attitude.

    • Eli says:

      Yeah, I think it would be much simpler to just allocate some number of journal publications and some amount of grant funding strictly to replications and null results.

  5. Jinnayah says:

    Replicating Milgram as reality TV was done in 2011. Per the NYTimes ArtsBeat weblog post (http://artsbeat.blogs.nytimes.com/2011/10/28/touch-of-evil-eli-roth-recreates-infamous-experiments-for-discovery-channel/): “Spoiler alert: the modern version wasn’t any more encouraging.”

    It was done more scientifically, and with published results, in 2007 – also funded by a television network.

    Milgram’s results seem to hold up fairly well, all things considered. A lot of psychologists aren’t so lucky even with re-doing their own experiments.

    Anything to get replication more “air time” would be great, though: “reputable” journals aren’t much interested in replications. Daryl Bem says replication of his — sensational — findings of precognition would be pretty key to finding any meaning in them whatsoever, yet the replicators have trouble getting any thanks, or any publication, for it. (Ben Goldacre: “[The next study team] submitted their negative results to the Journal of Personality and Social Psychology, which published Bem’s paper last year, and the journal rejected their paper out of hand. We never, they explained, publish studies that replicate other work.”)

  6. Jai says:

    Hypothesis: Comments per article grow exponentially with the level of meta at which the post operates.

    A post about discourse responding to a critique of the worldview underlying a previous post will generate ~500 comments; a post about how comments should work will generate ~100 comments; an object-level idea post will generate ~20 comments.
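
    A toy sketch of that scaling, in Python, assuming (purely for illustration) the five-fold jump per meta level implied by those figures; the function name, the base count of 20, and the factor of 5 are read off the examples above, not measured from anything:

        # Toy model of the hypothesis above: comments grow exponentially
        # with a post's "meta level". The base count (20) and growth
        # factor (5) are assumptions taken from the comment's examples.
        def expected_comments(meta_level: int) -> int:
            # 0 = object-level idea post, 1 = post about how comments
            # should work, 2 = post about discourse on a critique of a
            # previous post's worldview
            return 20 * 5 ** meta_level

        for level in range(3):
            print(level, expected_comments(level))  # -> 20, 100, 500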

  7. JPH says:

    Yes. More science as reality tv please.

  8. Craig Gidney says:

    “And now let’s look at the results of our definitive replication, done under exactly the same conditions (*cough*except cameras*cough*) as the original experiment!”

  9. Handle says:

    How about Replication Deathmatch? It pits glamor, prizes, and incentives against the glamor of producing sensational but non-replicable results.

    If you publish a paper with a sensational result, and some other researcher is able to show that it can’t be replicated, then the two of you have to switch jobs. The new Assistant Professor is suddenly ‘tenured’ (liable to the same loss of tenure, however), and the old hack is back to grading term papers.

    My prediction is that the chilling effect would reduce the number of non-replicable publications by 95%, so there wouldn’t be a need for more than an occasional annual special to make an example of the worst offender, with the prize of tenure going to the gladiator who demonstrated the greatest discrepancy between published result and reality.

    Yeah, there are some details to be ironed out. Go with me here, people – it’s called brainstorming.

    • Avantika says:

      It certainly would, but that may not always be a good thing. Failure to replicate does not by itself mean the original study is wrong.

      I work in a biology lab, and I sometimes fail to replicate other people’s results even after following their methods carefully, but I don’t think that means the original authors were cheating or even especially careless. It would take a lot more evidence to make me think that.

      I don’t know anything about psychology experiments, but I imagine there would be even more factors that are outside the researchers’ control.

    • Alejandro says:

      This would give the second researcher a huge incentive to cheat and somehow make sure they get a null result. The only way to rule out cheating is to have someone else design the protocol and carefully supervise the experiment. But then why give the huge reward to a researcher who has not really been in control of the research?

  10. Lydy Nickerson says:

    I would watch this with religious fervor.

  11. Anonymous says:

    Replication Lab is a bit of a mouthful. Other title ideas:

    Replicators!
    RepLab
    Sciencebusters
    Scibusters
    Metabusters
    Meta Lab
    MetaSci
    Redux
    Lablab
    Skepcheck

  12. gattsuru says:

    The problem is that replication is an unglamorous business; researchers would much rather do the sexier work of pushing forward knowledge with new results.

    … I think you’re looking at the wrong tail of the problem. Researchers already do a /lot/ of unglamorous business — also known as pretty much all of their work before post-doc research, in many cases. If you could reliably get null results or disproofs of previous studies published, that’d be quite sexy on the résumé. Andrew Wakefield may be better known than Brian Deer, but not by /that/ much. Presuming both that a large number of normal trials produce circular-binned data and that a large number of replication trials would show null results or disproofs, it’s not like it’d even be less efficient.

    I’d look more to journal publishers believing their credibility is harmed less by ignoring the matter than by publishing corrections.