Highlights From The Comments On My IRB Nightmare

Many people took My IRB Nightmare as an opportunity to share their own IRB stories. From an emergency medicine doctor, via my inbox:

Thanks for the great post about IRBs. I lived the same absurd nightmare in 2015-2016, as an attending, and it’s amazing how your experience matches my own, despite my being in Canada.

One of our residents had an idea for an extremely simple physiological study of COPD exacerbations, where she’d basically look at the patient and monitor his RR, saturation, and exhaled CO2 temporal changes during initial treatment. Just as you were, I was really naive back in 2015, and expected we wouldn’t even need a consent form, since she didn’t even have to *talk* to the patients, much less perform any intervention. Boy, was I wrong! The IRB, of course, insisted on a two-page consent form discussing risks and benefits of the intervention, and many other forms. I had to help her file over 300 pages (!) of various forms. Just as in your case, we had to abandon the study when, two years after the first contact with the IRB, they suggested hilarious “adjustments” to the study protocol “in order to mitigate possible risks”.

From baj2235 on the subreddit:

Currently working in a brand new lab, so one would think I’d have a lot to do. Instead, thus far my job has consisted of sitting in an empty room coming up with increasingly unlikely hypotheses that will probably never be tested because our IRB hasn’t approved our NOU (Notice of Use) forms. For those who don’t know, NOUs are essentially 15-page forms that say “We study these things, and we promise to be super responsible while studying them.” We have 4 currently awaiting approval, submitted in May. The reason they aren’t approved yet? The IRB hasn’t met since June, and likely won’t meet again this month because of frickin’ Harvey. Which in essence means the fine American taxpayer has essentially been paying me to sit in a room and twiddle my thumbs for the past three months because I can’t even grow E. coli without a frickin’ NOU.

From Garrett in the comments:

Oh, dear! I’ve actually been through this. I work in tech, but volunteer in EMS. As a part of wanting to advance the profession of EMS I figured I’d take on a small study. It would be a retrospective study about how well paramedics could recognize diabetic ketoacidosis (DKA) and hyperosmolar hyperglycemic state (HHS) in comparison to ER doctors. […]

I had to do the “I am not a Nazi” training as well. In order to pass that, I had to be able to recite the FDA form number used as a part of new implantable medical device investigations. I wasn’t looking at a new device. I wasn’t looking at an old device. I was going to look at pairs of medical records and go “who correctly identified the problem?” […]

It’s now ~5 years after IRB approval and, because of all of the headaches of getting the data to someone who isn’t faculty or a doctor, and who doesn’t have a $100k+ grant, I still don’t have my data. I need to send another email. I’m sure we can get an IRB extension with a few more trees sacrificed.

From Katie on Facebook:

I used to work at an fMRI research center and also had to take the Don’t Be a Nazi course!

My favorite story about the annoying IRB regulations is how they insisted on an HCG (pregnancy) test for our volunteers, despite the fact that MRI has no known adverse effect on pregnancy. So, fine, extra caution against an unknown but possible risk, sure.

But they insisted on a *blood test* done days in advance instead of the five-minute urine dipstick test that *actual doctors’ offices* would use. You know what doesn’t have risks? Peeing in a cup. And what does have risks of fainting, infection, collapsing a vein, etc.? A blood draw.

Of course, we had an extra consent form for them to sign, about the risks of the blood draw the IRB was helpfully insisting on.

From Hirsin on Hacker News:

My freshman year of college I proposed a study to our hospital’s IRB to strap small lasers to three-week-old infants in an effort to measure concentrations of a chemical in their blood. The most frustrating part was not the arcane insistence on ink and bolded study names, but the hardline insistence that it was impossible (illegal) to test the device before getting IRB approval – even on ourselves. Meaning that without any calibration or testing, our initial study would likely come back with poor results or be a dud, but we couldn’t find out until we filled out all the paperwork.

What is our country coming to when you can’t even attach lasers to babies anymore?

Some of the other stories were kind of cute. Dahud in the comments:

I’ve had exactly one interaction with an IRB – in 6th grade. My science fair project involved studying the health risks of Communion as performed in the Episcopal church. (For those unfamiliar, a priest lifts a silver chalice of port wine to your lips, you take a small sip, and the priest wipes the site with a linen cloth and rotates the chalice.)

Thing was, the science fair was being held by a Baptist University. The IRB was really not fond of the whole wine thing. They wanted me to use grape juice instead, in the Baptist fashion. I, as a minor, shouldn’t be allowed anywhere near the corrupting influence of the communion wine that I had partaken of last Sunday.

Of course, the use of communion wine was essential to the study, so we reached a compromise. I would thoroughly document all the sample collection and preparation procedures, and let someone of age carry out the experiment while I waited in the hall.

And of course James Miller is still James Miller:

Several forms I have to sign to do things at my college ask if what I will be doing will expose anyone to radiation. Although I’m an economist, this has caused me to think of experiments I could do with radiation such as secretly exposing a large number of students to radiation and seeing, years later, if it influences their income.

Along with these, a lot of other people were broadly sympathetic but thought that if I knew how to play the system a little better, or was somewhere a little more research-focused, things might have gone better for me. Virgil in the comments:

FWIW, I’m a graduate student in the Social Sciences. Our IRBs have the same rules on paper, but we get around it by using generic versions of applications with the critical info swapped out, or just ignoring them altogether. Though we don’t have to face audits, so…I’ve found that usually if you make one or two glaring errors in the application on purpose, the IRB will be happy to inform you of those and approve it when you correct them. They just want to feel powerful / like they’re making a difference, so if you oblige them they will usually let you through with no further hassle.

From Eternaltraveler in the comments:

Most of the bureaucracy you experienced is institutional and not regulatory. I have done research both in an institutional setting (turnaround time at UC Berkeley = 5 months to obtain ethics approval and countless hours sucking up to self-important bureaucrats who think it’s their sacred duty to grind potentially life-saving research to a halt over trivia they themselves know is meaningless), and as an entrepreneur and PI at a biotech startup (turnaround time for outsourced IRB = 5 days with reasonable and informed questions related to participants’ well-being), where we also do quite a bit more than ask questions. FYI, the kind of research I did at UC Berkeley that took 5 months for approval has absolutely no regulatory requirements outside of the institution.

And from PM_ME_YOUR_FRAME on the subreddit (who I might hunt down and beg to be my research advisor if I ever do anything like this again):

Amateur. What you do is you sweet-talk the clinicians into using their medical judgement to adopt the form as part of their routine clinical practice and get them to include it as part of the patient’s medical records. Later… you approach the IRB for a retrospective chart review study and get blessed with waived consent. Bonus: very likely to also get expedited review.

And this really thorough comment from friendlygrantadmit:

I’m not an expert in IRB (although that’s kind of my point–getting to that), but I think your headaches were largely institutional rather than dictated by government fiat. Let me explain…

I used to be the grant administrator for a regional university while my husband was a postdoc at the large research university 20 miles away. Aside from fiscal stuff, I was the grants office, and the grants office was me. However, there was an IRB of longstanding duration, so I never had to do much other than connect faculty whose work might involve human subjects with the IRB Chair. I think I was technically a non-voting member or something, but no one expected me to attend meetings.

This was in the process of changing when I left the university because my husband’s postdoc ended and we moved. It was a subject that generated much bitterness among the small cadre of faculty involved. Because I was on my way out, I never made it my business to worry about nascent IRB woes. My understanding was that they had difficulty getting people to serve on the IRB because it was an unpaid position, but as the university expanded, they were going to need more and different types of expertise represented on the IRB. I can’t be more specific than that without basically naming the university, at which I was very happy and with which I have no quarrel. I never heard any horror stories about our IRB, and I would have been the first point person to hear them, so I presume it was fairly easy to work with.

Anyway, the IRB auditing stuff you outline is just insane. The institutional regulations pertaining to the audits were probably what generated the mind-numbing and arcane complexity of your institution’s IRB. Add in finicky personalities and you have a recipe for endless hassle as described.

So here’s the other thing to bear in mind: almost everyone in research administration is self-trained. I think there are a few programs (probably mostly online), but it’s the sort of field that people stumble into from related fields. You learn on the job and via newsletters, conferences, and listservs. You also listen to your share of mind-numbing government webinars. But almost everyone–usually including the federal program officers, who are usually experts in their field but who aren’t necessarily experts in their own particular bureaucracy–is just winging it.

Most research admins are willing to admit the “winging it” factor among themselves. For obvious reasons, however, you want the faculty and/or researchers with whom you interact to respect your professional judgment. This was never a problem at my institution, which is probably one reason I still have a high opinion of it and its administration, but I heard plenty (PLENTY) of stories of bigshot faculty pulling rank to have the rules and regulations bent or broken in their favor because GRANT MONEY, usually with success. So of course you’re not going to confess that you don’t really have a clue what you’re doing; you’re just puzzling over these regulations like so many tea leaves and trying to make a reasonable judgment based on your status as a reasonably well-educated and fair-minded human being.

What this means in practice is almost zero uniformity in the field. Your IRB from hell story wasn’t even remotely shocking to me. Other commenters’ IRB from just-fine-ville stories are also far from shocking. Since so few people really understand what the regulations mean or how to interpret them, let alone how to protect against government bogeymen yelling at you for failing to follow them, there is a wild profusion of institutional approaches to research administration, and this includes huge variations in concern for the more fine-grained regulatory details. It is really hard to find someone to lead a grants or research administration office who has expertise in all the varied fields of compliance now required. It’s hard to find someone with the expertise in any of the particular fields, to be honest.

There is one area in which this is not so much true, and that is financial regulations. Why? Well, for one thing, they’re not all that tricky–I could read and interpret them with far greater confidence than many other regs, despite having a humanities background. The other reason is that despite their comparative transparency, they were very, very widely flouted until the government started auditing large research institutions around 15-ish years ago.

I have a short story related to that, too–basically, when my husband started grad school, we would frequently go out to dinner with his lab group and advisor. The whole tab, including my dinner and that of any other SOs and all alcoholic beverages (which can’t be paid for with grant funds aside from narrow research-related exceptions), would be charged to whichever research grant because it was a working meal. I found it mildly surprising, but I certainly wasn’t going to argue.

Then the university got audited and fined millions of dollars for violations such as these and Found Religion vis-à-vis grant expenditures.

With regards to your story, I’m guessing that part of the reason the IRB is such a big deal is that human subjects research is the main type of research, so they are really, really worried about their exposure to any IRB lapses. However, it sounds like they are fairly provincial in that they aren’t connected to what more major research institutions are doing or how they handle these issues, which is always a mistake. Even if you don’t think some other institution’s approach is going to work for you, it’s good to know about as many different approaches as you can to know that you’re not some insane outlier as your IRB seems to be. As others have noted, it also sounds like that IRB has become the fiefdom of some fairly difficult personalities.

I already know how extensive, thorough, and helpful training pertaining to IRB regs is, which is not very. I remain deeply curious about the qualifications and training of your obviously well-intentioned “auditor.” My guess is she inherited her procedures from someone else and is carefully following whatever checklist was laid down so as not to expose herself to accusations of sloppiness or lack of thoroughness … but that is only a guess.

Even though I hate hearing stories like yours–there is obviously no excuse for essentially trying to thwart any and all human subjects research the way your IRB did–I am sympathetic to the need for some regulations, and not just because of Nazis and the Tuskegee Syphilis Experiment. I’m sympathetic because lack of oversight basically gives big name researchers carte blanche to ignore regulations they find inconvenient because the institutional preference, barring opposing headwinds, will always be to keep researchers happy.

Some people thought I was being too flippant, or leaving out parts of the story. Many of them mentioned that the focus on Nazis overshadowed some genuinely horrific all-American research misconduct like the Tuskegee Syphilis Experiment. They emphasized that my personal experience doesn’t overrule all of the really important reasons IRBs exist. For example, tedwick from the subreddit:

So, I wrote out all of the ways in which Scott’s terrible IRB experience was at least in part self-imposed, and how a lot of the post was about stuff that’s pretty straightforward, but it was kind of a snarky comment. Not unlike his post, but you know, whatever. Long story short, I’ve done similar work (arranged a really simple survey looking at dietary behaviors in kids, another IRB-protected group) and had to interface with the IRB frequently. Yep, it can be annoying at times. But the reason they ask people like Scott whether they’re going to try anything funny with prisoners is because sometimes people like Scott are trying something funny with prisoners. Just because Scott swears that he’s not Mengele doesn’t mean that he’s not going to do something dumb a priori. As his experience with expedited review might indicate, sitting down with an IRB officer for maybe 30 minutes would have cleared up a lot of things on both sides.

Is there room for IRB reform? Sure! Let’s make the easy stuff easy, and let’s make sure IRB intervention is on actual substance. I’m with him on this. However, a lot of the stuff Scott is complaining about doesn’t fall into that category (e.g. “why do all the researchers have to be on the IRB!?”). I get that the post was probably cathartic for Scott to write, but there are plenty of great researchers who are able to navigate this stuff without all the drama. “Bureaucracy Bad” is a fine rallying cry and all that, but most of the stuff Scott is complaining about is not all that hard and is there for a reason.

And kyleboddy from the comments:

Nazism isn’t the reason IRBs exist. Far worse. American unethical experimentation is, and omitting it is a huge error. Massive and bureaucratic oversight exists because American scientists would stop at nothing to advance the field of science.

The Tuskegee Syphilis Experiment is the landmark case on why ethical training and IRB approval are required. You should know this. This was 100% covered in your ethical training.

I get why IRB approval sucks. My Informed Consent forms get banged all the time. But we’re talking about consent here, often with disadvantaged populations. It pays to be careful.

Last, most researchers who need speed and expedited review go through private IRB organizations now because the bureaucracy of medical/university systems is too much to handle. Our private IRB that we engage with sends back our forms within a week and their fees are reasonable. Their board meets twice per week, not once per month. The market has solved at least this particular issue.

EDIT: Private IRBs do not care about nonsensical stuff like the Principal Investigator having an advanced degree or being someone of high stature. (For example, I am a college dropout and have had multiple IRB studies approved.) Only bureaucratic, publicly-attached ones do. That’s a very reasonable complaint.

A lot of these are good points. And some of what I wrote was definitely unfair snark – I understand they’ve got to ask you whether you plan on removing anyone’s organs; if they don’t ask, how will they know? And maybe linking to Schneider’s book about eliminating the IRB system was a mistake – I just meant to show there was an existing conversation about this. I definitely didn’t mean to trivialize Tuskegee, to say that I am a radical Schneiderian, to act like my single experience damns all IRBs forever, or to claim that IRBs can’t possibly have a useful role to play. I haven’t even begun to wade into the debate between the critics and proponents of the system. The point I wanted to make was that whether or not IRBs are useful for high-risk studies, they’ve crept annoyingly far into low-risk studies – to the detriment of everyone.

Nobody expects any harm from asking your co-worker “How are you this morning?” in conversation. But if I were to turn this into a study – “Diurnal Variability In Well-Being Among Office Workers” – I would need to hire a whole team of managers just to get through the risk paperwork and the consent paperwork and the weekly reports and the team meetings. I can give a patient twice the standard dose of a dangerous medication without justifying myself to anyone. I can confine a patient involuntarily for weeks and face only the most perfunctory legal oversight. But if I want to ask them “How are you this morning?” and make a study out of it, I need to block off my calendar for the next ten years to do the relevant paperwork.

I feel like I’m protesting a police state, and people are responding “Well, you don’t want total anarchy with murder being legal, do you?” No, I don’t. I think there’s a wide range of possibilities between “police state” and “anarchy”. In the same way, I think there’s a wide range of possibilities between “science is totally unregulated” and “scientists have to complete a mountain of paperwork before they can ask someone how their day is going”.

I dare you to tell me we’re at a happy medium right now. Go on, I dare you.

I regret to say this is only getting worse. New NIH policies are increasingly trying to reclassify basic science as “clinical trials”, requiring more paperwork and oversight. For example, under the new regulations, brain scan research – the type where they ask you to think about something while you’re in an fMRI to see which parts of your brain light up – would be a “clinical trial” since it measures “a health-related biomedical or behavioral outcome”. This could require these studies to meet the same high standards as studies giving experimental drugs or new gene therapies. A Science magazine article quotes a cognitive neuroscientist:

The agency’s widening definition of clinical trials could sweep up a broad array of basic science studies, resulting in wasted resources and public confusion. “The massive amount of dysfunction and paperwork that will result from this decision boggles the mind” and will hobble basic research.

A bunch of researchers from top universities have written a petition trying to delay the changes (if you’ve got an academic affiliation, you might want to check and consider signing yourself). But it’s anyone’s guess whether they’ll succeed. If not, good luck to any brain researcher who doesn’t want to go through everything I did. They’ll need it.


183 Responses to Highlights From The Comments On My IRB Nightmare

  1. onyomi says:

    As someone professionally interested in cognitive approaches to the humanities (e.g. seeing which parts of the brain light up when you listen to song lyrics as opposed to a poetic recitation), I can say that classifying such things as “clinical trials” could almost singlehandedly kill a nascent academic field.

    • Incurian says:

      How come so many of your posts start with “as a member of group x…”? I’m not trying to be critical (really, I tend to enjoy your posts and agree with your opinions), but it’s so consistent that I thought maybe you were doing it for a reason.

      • onyomi says:

        I hadn’t really noticed it myself (that is, I didn’t make a conscious decision), but I think you’re right.

        I guess in my mind it functions similarly to Scott’s “epistemic status” warnings, in that it serves to inform a reader of my own ideological biases and areas of expertise/lack of expertise, at least as perceived by me.

        A lot of times it’s probably “as a libertarian…” which just functions to warn people of my ideological biases, or, in a few cases, of my qualification to comment on what libertarians think/critique libertarianism (because I have a lot of experience with it, and if I critique it, I’m critiquing my own “side”).

        A lot of times it’s probably “as an academic…” which serves to indicate I have some firsthand experience of what I speak.

        Other times it’s “I’m not a climate scientist/geneticist/economist, but… (here’s my amateurish take on it).” In this case it’s more of a “take my opinion on this with a grain of salt, but…”

        Of course, I have no control over how seriously anyone takes anything I write, but maybe I feel it’s more intellectually honest/potentially inspiring of confidence to flag things this way. Ideally, my imagined reader might think “I know onyomi typically admits it upfront when he’s just taking a stab in the dark, so I’ll take it more seriously when he claims to know what he’s talking about.”

  2. pipsterate says:

    I admire the habit of publishing comment highlights on the blog, even/especially the critical comments. I don’t have anything to say about this particular topic, but I like seeing this sort of post.

  3. Sniffnoy says:

    I can give a patient twice the standard dose of a dangerous medication without justifying myself to anyone. I can confine a patient involuntarily for weeks and face only the most perfunctory legal oversight. But if I want to ask them “How are you this morning?” and make a study out of it, I need to block off my calendar for the next ten years to do the relevant paperwork.

    This here is key. I’m going to repeat what I said on the previous thread — there’s a bucketing error going on here (of the sort you’ve written about before). Why should we say that giving a questionnaire for research purposes belongs in the same bucket as the infamous Syphilis Experiment, and thus properties from the latter carry over to the former although all they have in common is being research and not at all their risk profile, rather than grouping it with giving that same questionnaire for diagnostic purposes, which has the same risk profile but a different purpose? To every comment that says “But you’re talking about doing research on humans”, I’d just like to respond, “‘Research’ here is a bad category”.

    • Jiro says:

      If you put them in different buckets, someone’s got to decide what bucket it goes in. And if you just allow the researcher to do it, the honest researchers will put their harmless experiments in the harmless bucket, and the reckless researchers who don’t care about patient risk will put their dangerous experiments in the harmless bucket too.

      • Sniffnoy says:

        No, see, you’re still making the same error by focusing on “research” at all. I’m not saying “divide research up into buckets” — although, that would help; you could totally have different forms and requirements for different risk buckets. Doing things that way would still involve an IRB but… OK, look, I’m getting off the point, because my point isn’t “divide research into smaller categories”, it’s cut sideways, dammit. My point isn’t just that giving the questionnaire for research purposes doesn’t belong together with the Tuskegee Experiment, it’s that it does belong together with giving the questionnaire for diagnostic purposes! And that if we’re starting from the assumption that research needs to have its risks justified in such a way, but non-research things don’t even when they’re actually the same thing, something has gone wrong at the very start!

        • Jiro says:

          Using the questionnaire for diagnostic purposes creates different incentives than using it for experimental purposes (or, in general, than using it for their own gain).

          • 6jfvkd8lu7cc says:

            But involuntary detainment also has an incentive problem. And maybe there is a bigger problem there than with screening questionnaires.

            The point is that right now the research goes via a path where the IRB composing a list of follow-up clarifications (they do need a way to prove they have read the submission, after all) and ruling «OK, low-risk» after receiving the detailed answers is not an expected option. On the other hand, the policies about detaining people against their will and billing them for this detainment are apparently subject to less review than questionnaire-based research.

          • Jiro says:

            That can just mean that involuntary detention should be discouraged rather than that experimentation should be encouraged.

          • 6jfvkd8lu7cc says:

            When the disproportion is obviously large, I find it natural to assume that both sides could be improved.

            The rules obviously depend on «research or not» more than on invasiveness or risk; probably some research should be discouraged even more than it is now (like a human trial of a drug that has killed several dogs in animal testing) and some less, and probably some treatment approaches should also be discouraged more and some less.

          • Murphy says:

            You keep repeating that again and again. It doesn’t become a better argument from repetition.

            It’s a fully general counterargument.

            Changing anything, ever, “creates different incentives”; it’s a fully general counterargument that can oppose anything without need for sentient thought of any kind.

            Currently the incentives are lining up massively on the side of “let’s leave lots of vulnerable and poor people to suffer horribly, because it’s hard to find ways to stop that suffering even if it involves basically no risk, because research is evil-by-default”. That’s the real current situation and a common denominator amongst many of the examples. But apparently in your world that’s a good thing, because “creates different incentives” is always bad.

            Lots of existing systems involve cruddy incentives; Scott has talked before about the crappy incentive structure around whether to section patients or not. But that’s the current incentive structure screwing people over, so since it doesn’t create “different incentives” and, most importantly, isn’t related to evil-by-default science, that cruddy incentive structure is just fine and should be left alone for fear of creating different incentives. Blind conservatism! Yay!

            Sniffnoy is entirely on the mark, you’re bucketing badly, indeed mirroring a public who believe that scientists basically feast on the blood of infants.

            The buckets should be “are we performing an intervention that has the potential to harm someone vulnerable?”

            If Scott wants to lock someone up for weeks in such a manner that he has close to zero chance of getting any blowback, and his institution then gets to bill that individual their entire life savings (but financial incentives don’t count, only evil evil scientific ones), then he’s in the clear and doesn’t need to worry about it.

            In a vaguely sane world the bucket should be “is there a reasonable chance of harm to someone?” (even if the thing in question has nothing to do with research), not “might this generate new knowledge?”. Yes, that would lead to a lot of random non-science things needing more review. You’d need to choose a value/level for “harm” that doesn’t cause everything to grind to a halt. But it would be dramatically more sane and wouldn’t rely on science always being assumed evil by default.

            It’s like when the subject of drugs and alcohol comes up and the sane people say something like “OK, society has apparently decided that the acceptable bar is slightly above alcohol and tobacco, let’s not go nuts over things that are less dangerous than those”, and someone turns up with some variant on “oh, we should ignore that bar, any harm level at all justifies banning X, even if it’s safer”, because their moral system isn’t based on anything coherent; they’ve just decided X is bad always. To them it isn’t about real risk. They don’t care about the justification route to ban it. They just learned that X is evil as kids and thus X should be locked away from everyone.

            Sadly in this case, X is “the gathering of knowledge”.

          • Jiro says:

            It’s a fully general counterargument.

            Pointing out that your way of analyzing something fails to consider some things is fully general insofar as it is possible to fail to consider things for any argument, but that’s not what we normally mean by “fully general”.

            It’s as if I told you you forgot to look for evidence and you claimed that was fully general because you could have forgotten to look for evidence for anything.

            The buckets should be “are we performing an intervention that has the potential to harm someone vulnerable?”

            I suggest that the existing buckets are fine, because it’s an observation that human beings conducting experiments behave uniquely poorly. Stronger incentives are required than for humans who are doing things to benefit the patients.

          • Murphy says:

            @Jiro

            I suggest that the existing buckets are fine, because it’s an observation that human beings conducting experiments behave uniquely poorly.

            [citation needed]

            And no, not just a [these things happened while people were nominally doing science] type response.

            I.e., not playing the same game Scott plays here with cardiologists.

            https://slatestarcodex.com/2015/09/16/cardiologists-and-chinese-robbers/

            Per 100,000 researchers, are you more likely to find someone who has killed someone in their care/power through negligence, neglect, greed or malice when looking at researchers vs. if you were to take 100,000 random surgeons, 100,000 random police officers, etc.? Personally my money is on the cops or surgeons coming out worst, since both professions are far more prone to attracting a large fraction of diagnosable psychopaths, and involve far less oversight and far more power.

            I contend that it’s more likely similar to plane crashes vs car crashes.
            Plane crashes are international news, car crashes are barely even local news. So anyone who wants to make a list of examples to support “planes are uniquely unsafe” can easily find lists of hundreds of deadly crashes. Of course in reality the drive to the airport is the dangerous part of the trip.

            But movies and various other stories are full of plane crashes, so lots of people are left believing they’re “uniquely dangerous” because, don’t’cha know, they’ve seen planes coming apart for themselves with passengers being sucked out through holes in the plane etc., possibly several times a week on cable.

            (Then the survivors end up trapped on an island or in some remote location where an evil scientist uses them for evil experiments.)

            A police officer leaving someone to die in their cell after a beating is almost always local news if they’re even caught.

            Research, by contrast, is almost defined by being out in the open and published where everyone can see, so anyone trying to make a list of unethical research can copy-paste with ease.

            The USA alone apparently has over a million people working as researchers. A large fraction of them working in various areas that involve research on humans.

            http://chartsbin.com/view/1124

            If a group in the hundreds of thousands range who publish the details of what they do for all the world to see were genuinely inclined to behave “uniquely poorly” then over the course of a century we’d expect far far far more than the handful of examples that show up in various lists of unethical human research.

            Your entire thesis appears based on poorly thought out gut feelings.

          • Jiro says:

            I’m quite willing to believe that police behave uniquely poorly as well.

          • Murphy says:

            The common theme to groups you worry about seems to be nothing more than quantity of current media attention.

            Care home nurses? foster parents? adoptive parents? teachers? boarding school teachers? orphanages? prison wardens? old folks home admins? surgeons?

            Tell me when we get to one you think has fewer abuses than researchers (who are the most open about what they’re doing of all of them); then we can talk about why none of the others need to put their choices through an IRB.

          • Jiro says:

            There’s a reason why we hear about the Nazi Gestapo and Nazi experiments and not Nazi orphanages, teachers, and non-experimenting surgeons, and I don’t think it’s just media attention.

          • Murphy says:

            Your beliefs are simply factually incorrect.

            Ticking off the list: creepy Nazi indoctrination of kids, Nazi abuse/murder of the disabled in institutions, and Nazi abduction of children to place them with Aryan foster parents.

            But you forget all that because it’s not a common theme in the movies you watch.

            It’s much easier to write a story about evil scientists.

          • Jiro says:

            Nazi indoctrination was evil, but it was based on orders from above. Teachers didn’t spontaneously decide to do evil deeds in the same way that the Gestapo could spontaneously decide to beat someone up or Nazi experimenters could spontaneously decide they needed to hurt someone for an experiment.

            And we have a Tuskegee experiment, but not a Tuskegee Evil Teachers Scenario. There’s a reason for that.

          • Murphy says:

            Again, except that the Nazis did in fact organize much of their godawful research.

            And we have countless examples of teachers exploiting students both physically and financially and raping and abusing students in their care/power.

            You’re really intent on creating a reference set of 1 that contains only researchers. No matter how incoherent and nonsensical you have to make the criteria. Lay off the Frankenstein reruns.

          • Joyously says:

            Yeah, teachers rape students pretty regularly (compared to scientists doing unethical experiments).

    • actinide meta says:

      The two categories differ in an ethically vital way. When you have forcibly imprisoned someone who has committed no crime, for the supposed purpose of protecting them, the amount of harm and risk you may ethically subject them to *for your own benefit* is zero.

      Scott joked about the question on the form that asked if prisoners would be used in the study. “I’m not a Nazi!” The correct answer to the question was yes.

      • Murphy says:

        Unless the patient is on the ward because a judge has put them there due to criminal activity, the answer is no.

        If you had just had heart surgery and, while completely off your head on painkillers post op, tried to climb out the window on a physical health ward the medical staff would likely stop and guide you back to your bed. That doesn’t make you a prisoner.

        If you were thoroughly demented in an old folks home and tried to walk off into traffic the staff would likely guide you back inside. That doesn’t make you a prisoner.

        Parents can keep their kids inside sometimes. Even if the kids feel like they’re being treated like prisoners that doesn’t make them prisoners.

        Sectioned mental health patient is a very different status to prisoner even if neither may be allowed wander out the door. Though many mental health patients are allowed to wander out the door for a few hours a day even while on inpatient if it’s unlikely to get them dead.

        • actinide meta says:

          I’m happy to concede that none of the other people you hold out as examples can consent to experimentation either.

          • Murphy says:

            Which has the side effect that the most vulnerable groups end up being the ones most neglected when it comes to finding solutions to things which are harming them.

            If someone believes that they might be able to reduce the number of elderly demented people getting hurt wandering in front of cars by trialing fake bus stops with comfy seats outside carehome doors, the response should be “let’s see if we can reduce the number of people getting hurt!” not “OH MY GOD THAT’S RESEARCH ON PEOPLE WHO CANNOT CONSENT YOU EVIL MONSTER!”

            The most vulnerable populations are exactly the groups who need research the most.

            Look at 3rd world charity spending. If nobody had done a trial providing charity-funded school supplies and figured out which interventions actually weren’t doing any good to help the kids, a lot more 3rd world children would have worse lives due to worthless spending.

            The worst possible reaction however would be “OH MY GOD THAT’S RESEARCH ON PEOPLE WHO CANNOT CONSENT YOU EVIL MONSTER!”

            because it would leave the people who need the help most to rot.

            Is either truly zero risk? Of course not; the kids could get paper cuts from the charity-provided school books. Of course that’s a trivial risk, but it’s non-zero. It is, however, 100% entirely reasonable. Nobody would bat an eye if they just shipped the books and hoped to be doing good.

    • oongawa says:

      Yes! Deliberately letting someone die of syphilis is harmful and should be prevented, regardless of whether it’s for ‘research’ or not. Asking someone if they feel happy or sad is harmless and should be allowed, regardless of whether it’s for ‘research’ or not.

      It’s ironic that ‘research’ is generally considered a good and noble cause, which in the past has allowed people to rationalize horrible acts in its name, which eventually led to the current restrictions. But applying the restrictions to the noble cause instead of the horrible acts is just confused.

      (It’s funny that we *don’t* have such restrictions for less-noble causes. If your goal is just to sell stuff, experiment away!)

  4. shakeddown says:

    I had to do the James Miller training too. As a mathematician who never even did experiments.

  5. Lirio says:

    The most frustrating part was not the arcane insistence on ink and bolded study names, but the hardline insistence that it was impossible (illegal) to test the device before getting IRB approval – even on ourselves. Meaning that without any calibration or testing, our initial study would likely come back with poor results or be a dud, but we couldn’t find out until we filled out all the paperwork.

    This seems like the sort of thing that gets solved by the researchers quietly testing the devices on themselves for debugging and calibration purposes, and making no official note anywhere that this in fact happened. As far as the official study is concerned, the lasers were just magically perfectly functional when attached to the babies. That’s at least how I would approach the issue. In the face of such onerous regulations, I would expect a certain tendency to find ways to route around them as much as possible by bending rules and doing things off the books.

    In Scott’s case he could have, for example, probably gotten more people to consent to the study by giving the routine screening test first, then asking for consent to use it in the study afterward. Of course this makes it impossible to delegate it to other people, as they might notice the procedural irregularity, but otherwise it seems fairly easy to get away with.

    Maybe that’s why they make researchers watch the “Don’t be a Nazi” video. There’s no better way to not be a Nazi than to abdicate moral responsibility to a bureaucratic body and ensure all your paperwork is impeccably filled out.

    • The Nybbler says:

      Maybe that’s why they make researchers watch the “Don’t be a Nazi” video. There’s no better way to not be a Nazi than to abdicate moral responsibility to a bureaucratic body and ensure all your paperwork is impeccably filled out.

      Scott already made that joke last post. It sounds like you haven’t been properly filling out your HRMR-301 Joke Duplication Reduction Effort forms; you wouldn’t happen to be a Nazi, would you?

  6. Eddy says:

    Does anyone have more specific stories / knowledge which might help rationalize IRB behaviour, but which are more recent than Tuskegee?

    We want IRBs to identify dodgy studies (true negatives) and let through fine studies (true positives). All of these stories could basically be summarised as ‘there are far too many false negatives’ – so I’m curious to hear about true negatives (researchers who confidently believed ‘I’m not a Nazi’ and then proposed something dodgy) or false positives (where IRBs said the study was fine and it ended terribly, with standards then being raised).

    E.g. the emphasis on pen, not pencil in Scott’s story seems understandable if there was one instance where researchers changed responses made in pencil and it led to disastrous consequences, or people getting sued. Or a story where people were asked if they were happy, and one time it made someone depressed, suicidal, and they then tried to sue the researchers or IRB.
    I suspect there are many involving getting sued.

    • IvanFyodorovich says:

      My understanding is that there was something of a sea change in the 1970s, due to Tuskegee and some other cases. You can find abuses after that but they seem to be pretty rare.

      The only really bad case I can remember in the last twenty years was the death of Jesse Gelsinger in a gene therapy trial in 1999. There’s also the case of Dan Markingson, which highlights the difficulties of conducting clinical trials with the severely mentally ill. That said, two deaths in twenty years have to be weighed against all the lives that could have been saved by research that never happened because IRBs or hospital bureaucracies either scuttled studies for dumb reasons or wore frustrated doctors down to the point where they decided to quit research. I suspect IRBs have killed a lot more than two people.

      • Jiro says:

        That’s two that you’ve heard of. I doubt they’re the only two that happened. And it ignores the ethical violations that didn’t even occur because the prospect of an IRB finding out prevented them. IRBs are like other deterrent measures; it’s like saying that you don’t need to lock your car door because nobody’s tried to steal your car.

        • IvanFyodorovich says:

          I had to do some digging to find the second case (I found it on this comprehensive if poorly organized page). There might be one or two more that I haven’t found, I doubt there’s a hundred. Feel free to prove me wrong.

          I’d further make the point that even in the bad old days before Tuskegee and Willowbrook changed policies, very few people were killed by unethical experimentation in the United States. Tuskegee was terrible, but at worst its death toll ran somewhere in the tens. Contrary to popular belief the subjects weren’t given syphilis by the researchers, they just weren’t given penicillin when it was invented. You can find instances of unethical experiments with single digit death tolls (the list above has some). But the theoretical cap on lives saved by IRBs in the last twenty years is not very high, unless you imagine that we would have behaved worse than scientists in the 1950s and before.

          The point here is that each added unit of IRB strictness almost certainly kills rather than saves people at this point, and this would be true even if IRB strictness were reduced a lot.

          • Nancy Lebovitz says:

            Imposing misery also counts: MK Ultra

          • bbartlog says:

            Yeah, but I suspect the things they did for MK Ultra were already illegal and that the existence of an IRB would have been just one more minor detail that the CIA would have ignored in pursuit of their goals.

          • hyperboloid says:

            @ IvanFyodorovich

            Contrary to popular belief the subjects weren’t given syphilis by the researchers, they just weren’t given penicillin when it was invented.

            True, but the same researchers conducted an even more horrific study in Guatemala, where they did directly infect patients.

            @bbartlog
            At least some of the MK Ultra experiments were not conducted in secret CIA laboratories, but in ordinary prisons and mental health facilities.

          • ECD says:

            @ IvanFyodorovich

            I don’t have any numbers, but as Garrett M. Peterson points out below, there are additional costs to unethical experimentation.

            My grandfather for instance came back from WW2 and was completely unwilling to go to the doctor again, which radically shortened and worsened his life. It was quite clear that that was the result of some of the things he learned Nazi doctors had done. No amount of argument could convince him that no, American doctors weren’t going to experiment on the nice upper middle class Jewish lawyer, because that had certainly been what he’d believed about German doctors, once upon a time.

            If we’re speculating that additional strictness causes harm by making experimentation more difficult, I think we have to balance that against the good it does in convincing people they can go to the doctor without being experimented upon.

    • Ozy Frantz says:

      In sociological research methods class we talked about a relatively recent (= past few decades) case where an ethnography of a small town was poorly anonymized and the entire small town found out (for example) who was cheating on whom. (Unfortunately, that doesn’t really involve enough keywords to help me google it, and I might be getting some of the details wrong as it’s been five years.)

      • Murphy says:

        That’s not unique to research.

        I vaguely remember a case a few years back of a satnav company selling “anonymised” traffic data. Turns out that it’s not so anonymous when an unidentified vehicle starts and ends a journey at the same house each day. Ditto on the “why did you spend 2 hours outside that strip club” style information that could be dug out of the data.

        • Ozy Frantz says:

          The author of the original comment didn’t ask for things that were unique to research. The vast majority of possible problems with research, including ones that led to the creation of IRBs, are not unique to research (you can, for example, claim to be treating people’s syphilis when you aren’t without ever publishing a paper on the topic).

          There is kind of a difference between “it is possible to figure out that people go to a strip club by digging into the data” and “you can spend sixteen dollars on a book readily available on Amazon and find out which of your neighbors are fucking.”

          Even if it isn’t research that’s covered by an IRB, I think most things one would want anonymized traffic data for are research.

          • anonymousskimmer says:

            There is kind of a difference between “it is possible to figure out that people go to a strip club by digging into the data” and “you can spend sixteen dollars on a book readily available on Amazon and find out which of your neighbors are fucking.”

            In these networked days there is less of a difference. Often the database (i.e. the “book on Amazon”) will be free.

    • Jacob says:

      While thinking about that question, consider that any answer to your question could be followed up with “See? IRBs are failing at what they’re supposed to do”, and no answer could be followed with “See? We don’t need IRBs”. If a prevention method is working, it may look like it’s not needed.

      That said… remember (The Immortal Life of) Henrietta Lacks? Well, taking her tissue and using it for research was (and is) “ethical”, but publishing the genome most certainly is not. Yet researchers in 2013 did exactly that: http://www.nature.com/news/deal-done-over-hela-cell-line-1.13511. Imagine the full DNA sequence of your close relative being made fully public without the consent of the relative or anyone in your family.

      • The Nybbler says:

        Imagine the full DNA sequence of your close relative being made fully public without the consent of the relative or anyone in your family.

        Um? OK, now what? What negative consequences accrue to me from this?

        • Nancy Lebovitz says:

          Some people don’t want to know that they’re carrying a gene for something untreatable.

          • The Nybbler says:

            OK, still looking for the negative consequences. Having something happen I don’t want to happen is pretty weak as a “negative consequence”; might as well say that the publication itself is a “negative consequence” if I don’t want it to happen. Besides, the publication doesn’t tell me I have a gene for something untreatable; not only would I have to go digging through the genome to find it, but since it’s only a “close relative”, it only gives me probabilities, with the exception of Y-linked disorders (and identical twins).

          • anonymousskimmer says:

            and mitochondrial disorders for the female-line relatives.

        • adder says:

          What comes to mind is insurance. Insurers might be reluctant to insure you if a close relative has certain genetic characteristics.

        • Ozy Frantz says:

          Many people have an intuition that their genomes (and other sensitive medical information) should be private, and for them violating this kind of privacy is a harm.

          • Insofar as the basis for that intuition is the (rational) desire to take advantage of asymmetric information in dealing with others, in particular insurance companies, their loss is balanced by the others’ gain, making it a pecuniary externality.

          • Jiro says:

            Most people don’t care equally about all human beings, so the gain of the insurance company is irrelevant.

  7. dansimonicouldbewrong says:

    If most research results are false, then perhaps it’s a blessing that the total volume of misleading research is reduced by IRB bureaucracy.

    I’m being facetious, of course, but I think the issues are closely related. At bottom, the problem is the entire research community’s lack of accountability for the research they produce. Instead of carefully analyzing research output to determine how to optimize its quality and utility, and then applying funding accordingly, the money suppliers–that is to say, the federal government, to a first approximation–pour money in at the top, then pretty much leave it to researchers and administrators themselves to figure out where the money should go. The result is essentially a large-scale, real-world version of the TV show “Survivor”, where a collection of people decide amongst themselves whom to vote off the island and whom to allow to stay, based ostensibly on competitive performance, but in practice more on internal politics.

    An example is the battle between IRBs and researchers over the rigor of IRB review. Who wins that battle, as your commenters make clear, isn’t a matter of merit, reason or even economics, but rather of political power: researchers with pull can get the process loosened, while IRB reviewers with enough institutional backing can get the process made obscenely draconian. In other words, those whose interests are served are those who wield political power within the closed world of research. Meanwhile, the people on the outside can do nothing but pour more money in, and hope and pray that useful research results come out the other side.

    None of this will improve until the widespread mentality of, “research is good, we must have more of it, and the only way to get more of it is to pour yet more money in at the top and let the research community do as it pleases with it” changes. Unless external accountability is imposed, the research world will continue to divert its resources towards bizarre and counterproductive distractions such as the “IRBs from hell”, and we will continue to get research meeting the standard we demand–that is, no standard at all.

  8. dumky2 says:

    Those comments and your remarks make me wonder about the broader question: how do we find a “happy medium”?
    The economist in me keeps coming back to: what are the dynamics and forces involved that lead to the current outcome, and how do they compare to alternatives (for instance, based on reputation and tort)?

    • Yeah, I was thinking the same thing. The private IRBs seem good. I don’t know if they’re liable if they approve an unethical study that kills people, but if they are then it seems like competition between them would lead to the happy medium. If you’re too strict, people won’t pay to have their studies approved by you. If you’re not strict enough, you’re on the hook for the next Tuskegee. Seems like most studies should just get a cursory glance and a green stamp because most studies present no danger to anyone.

  9. aethelfrith says:

    I’ve never had to deal with an IRB, but I did once have to take the how-not-to-be-a-Nazi class. I studied theoretical space plasma physics.

    Correction: That was before I changed disciplines. I was an experimentalist at the time. Unethical treatment of electrons was definitely a danger.

    • Jacob says:

      I was a software developer at a research institute that used human subjects. They made every employee take the “class”, including me. It’s like 3 hours of videos which include some reasonably interesting history, so I didn’t mind. Plus it avoids any unnecessary delays/legal headaches if somebody does get involved. Frankly I think it’s a great idea, especially since making somebody watch a video and take a short quiz only costs their time.

      • The Nybbler says:

        On the other hand, I was a software developer at a commercial company which makes medical imaging workstations and CT scanners. No requirements for any classes or regulatory training of any sort.

        As for my time, it’s a finite and irreplaceable resource.

        • Jacob says:

          Well that sounds like a problem, since lives depend on that equipment functioning properly.

          You spend enough of your finite and irreplaceable time commenting on this blog that I recognize your username, so I’m guessing you have enough to spare to learn some medical ethics.

          • The Nybbler says:

            Well that sounds like a problem, since lives depend on that equipment functioning properly.

            Sure do, though that’s not limited to the medical field (for instance, computer-controlled industrial machinery can kill quite easily). Yet somehow despite the lack of such regulation and training, there’s been no bloodbath.

        • beleester says:

          I’m surprised you didn’t at least need HIPAA training if you were working in the medical field.

          • The Nybbler says:

            This was pre-HIPAA, though as far as I know HIPAA imposed no training requirements on those programming medical devices.

          • beleester says:

            Medical devices generally produce some sort of medical data, though, and programmers are likely to see that data while doing their jobs.

            The training is pretty simple – “This is what makes something legally PHI. Don’t look at PHI unless you need it to do your job” – but it’s not so straightforward that you don’t need training.

          • The Nybbler says:

            We didn’t usually look at data with PHI; we had test data made with non-humans (test “phantoms”, also some animals as I recall), plus depersonalized human data.

        • only costs their time

          As for my time, it’s a finite and irreplaceable resource.

          I’m wondering if the difference between those two attitudes may explain a good deal. If the unstated assumption is that people are doing nothing productive with their time anyway, a procedure that wastes time doesn’t look that bad, and may even be seen as a benefit. The same attitude sees “creating jobs” as an unambiguous good unrelated to what, if anything, those jobs produce.

          A more extreme version is what I think of as “the Devil finds work for idle hands” theory of education. Keep the kids busy with lots of homework and stuff and they won’t have time to do drugs or get pregnant.

  10. sohois says:

    Paul Niehaus’s experience sounds bad, but to be fair to the IRB, ‘giving’ covers a lot of possible outcomes. No one is going to be hurt by a bank transfer, but if you introduce physical money there’s some risk there. Papercuts, for example.

    And do we even know if it would be paper money? Coins can be pretty risky. Perhaps the subjects were to be given great sacks of coins to carry around and accrue back damage from all the heavy lifting. Not to mention that you can give someone something without passing it. How about throwing? That’s a form of giving something. If one of my coworkers asked me to give them a pen, I might chuck one across the office to them. How do we know the study didn’t plan to line up the homeless, get big handfuls of coins, and just fling them at them?

    Sounds like the right amount of caution to me

    • Murphy says:

      When reviewing the possible harm of patients being given a new drug, it’s essential to consider the velocity of said drug. If, for example, the researchers give the drug to the patient at 0.9c, it could cost the lives of everyone in the surrounding city.

      Best add a box on the form to confirm that any items to be given during the research aren’t being administered at relativistic speeds.

      Given the potential for millions of deaths from “giving” material objects depending on velocity, I’d argue not enough caution is being used.
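
      For what it’s worth, the arithmetic behind the joke checks out. Here is a minimal sketch of the relativistic kinetic energy involved; the one-gram pill mass is an invented assumption, not anything from the thread:

```python
import math

C = 299_792_458.0     # speed of light, m/s
HIROSHIMA_J = 6.3e13  # ~15 kilotons of TNT, in joules

def kinetic_energy(mass_kg: float, beta: float) -> float:
    """Relativistic kinetic energy, (gamma - 1) * m * c^2, in joules."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

# A hypothetical one-gram pill administered at 0.9c:
ke = kinetic_energy(0.001, 0.9)
print(f"{ke:.2e} J, about {ke / HIROSHIMA_J:.1f} Hiroshima yields")
# -> roughly 1.16e14 J, i.e. nearly two Hiroshima yields
```

      So a single pill at 0.9c really would arrive carrying on the order of two Hiroshimas of kinetic energy; on these numbers the proposed checkbox seems, if anything, overdue.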

  11. David W says:

    I can’t help but notice a critical assumption in the justification of IRBs. They assume that researchers will not lie to them!

    Meanwhile, I have trouble imagining someone unethical enough to murder people with syphilis who is not willing to also lie.

    If I were designing, from scratch, an organization to ensure my researchers were behaving ethically, I would want it to gather its own evidence instead of trusting the researcher. I would assume that most ethical people can figure out how to behave ethically, and most unethical people will also lie. Randomly observe the experiment while it’s ongoing, or select a handful of patients to interview after the experiment, or drop by and read through some of the experiment records. I would want the organization to be a third party, preferably composed of non-scientists. Essentially, have the FBI (or a new Federal Bureau of Scientific Ethics) randomly send an agent to wander around asking questions and observing.

    • Jiro says:

      Meanwhile, I have trouble imagining someone unethical enough to murder people with syphilis who is not willing to also lie.

      People are not rational machines. They like to convince themselves that they’re doing good, and that any violation of the rules is just a little thing that shouldn’t matter too much. They’re also not uniformly unethical–the Tuskegee experiment was done to low-status black people, and the review board is probably not made up of those.

      • John Schilling says:

        If you create an environment in which IRBs are viewed as obstructionist bureaucrats, then all of the people who want to convince themselves that they are doing good will incidentally have convinced themselves that lying to the IRB is a good thing, a little violation that shouldn’t matter too much.

    • 6jfvkd8lu7cc says:

      Ah, so that’s the reason for all this process. Make sure everyone who is lying gets overwhelmed by mere volume and accidentally admits their evil plans on the fifth batch of forms.

      On the other hand, the first round (detailed description plus a checklist of things not to do) would still be useful even when assuming that people lie and doing random inspections — many methodologies would benefit from someone asking «wait, what?»

      • Murphy says:

        Question 345, page 200, volume 4: are you planning to destroy any major population centres?

        yes, I MEAN NO!

        HA, GOT YOU!

  12. JohnWittle says:

    Hey.

    I am JohnWhale of johnwhale.tumblr.com

    You might remember my post (https://johnwhale.tumblr.com/post/137379311027/grants-aka-using-government-money-to-do-good) which got sent around the EA community, exhorting people to apply for grants. I am an evaluator of grant-funded projects, and all the projects that I evaluate are actively harmful and evil. I thought a competent person could come in and utterly dominate the industry, make everyone else look bad, make grantmakers realize they’re being scammed, and fix everything, in like a year.

    I was wrong. As part of the reaction to the post, I met up with some folk in my region, and we decided to actually try to go through the process of getting a grant, with me as adviser and man-on-the-inside. (Imagine if Scott had a subversive collaborator inside the IRB; shouldn’t it have been easy?)

    18 months later, we have quite definitively failed. And I realized something. I was only seeing the absolute far end of things, the projects that actually managed to make it all the way to the end to get evaluated. And that was heavily distorting my view of how easy it would be. Because the IRB post reminded me so strongly of my experience trying to get a grant into the hands of some actually competent people…

    I still have absolutely no idea how one would actually get a grant. It amazes me that there are actually some projects that come out the other end for me to evaluate.

    (Also, Jack-Rustier, if you’re out there; sorry I got you up to the point of almost being able to actually apply, then abandoned you. But the sudden appearance of a bunch of rationalists in my own region was a far better prospect, since it gave me actual influence over whether they would be accepted.)

    • Nabil ad Dajjal says:

      I just got a relatively small NIH grant accepted, which netted me a small bonus, and I’m about to take my institution’s training program for grant writing.

      Based on my very limited experience, it seems likely that the problem you had was a lack of institutional affiliation. I can’t even imagine getting a grant approved as a random guy.

      All grant applications go to our oddly named grant office, which checks them over and makes the final “go / no go” decision of whether to submit them as is or send them back with a list of recommended changes. Since the institution takes 60% of all grant awards as facilities and administration costs, they have a strong incentive to make sure that we get every grant we apply for.

      • JohnWittle says:

        We had institutional sponsorship, in the form of a church-based community organization.

        Trust me, I know the process inside and out. I just had no idea how stupid/apathetic/unreasoning the people involved were.

        Congrats on your grant; it sounds like you had a team of people who all already knew what they were doing and had gotten grants before. Our whole thing was that it needed to be empirically determined whether it was even possible to deduce or otherwise learn this process.

        We learned that it was not. Much like Scott did.

        As far as my advice for others went, I had been sort of anticipating that EA Global would serve as the institutional support, which is why I targeted the post at the effective altruism community.

        • Friendlygrantadmin says:

          So what I mostly do now, and what I did before being a research administrator, is write grant proposals. What led you to the conclusion that all the proposals you were getting were actively evil and harmful? Sincerely curious.

          I am aware of many dysfunctional things in the nonprofit world, and there are many organizations to which I would never donate money. (I don’t usually consult for them because I’m currently in a position to be choosy about what work I take on.) However, I can’t imagine you weren’t getting any proposals for helpful things unless the foundation/organization/proposal guidelines were very peculiar. I’m sure you got plenty of proposals that didn’t align with whatever guidelines you have or had, but that’s because many small organizations are clueless about grant-seeking.

          In re: a church-based community organization being your sponsor, you probably found out the hard way that many funders exclude applications from religious organizations.

          • Douglas Knight says:

            His conclusion is here and here (see also commentary here)

          • Douglas Knight says:

            unless the foundation/organization/proposal guidelines were very peculiar

            The problem is that there are secret guidelines. You yourself indicate this:

            probably found out the hard way that many funders exclude applications from religious organizations.

          • Friendlygrantadmin says:

            I’m kind of confused by the context of the links. There are terrible grant writers out there, but I can honestly say I’ve never encountered anything so perplexing or stupid as that described. There *are* some ethically dubious “evaluation” firms out there that pitch themselves as grant writers to school districts and the like in exchange for being named the evaluator for very inflated fees. Neither I nor other reputable grant writers approve of this practice, but it arose largely as an unintended side effect of funders’ demands for accountability.

            There aren’t usually “hidden” guidelines, but people often don’t look in the right place to determine their eligibility. If all else fails, you can pull a funder’s 990 from the Foundation Center to see if it accepts unsolicited proposals and what the guidelines/exclusions are.

          • JohnWittle says:

            @Friendlygrantadmin

            Maybe it’s only education that is this way?

            The linked LessWrong post was less about my problems with the world of grants, and more about my problems with the world of educational grants specifically. GearUp continues to slog along with all sorts of statistics being reported to the DoE, when I know for a fact that it is literally impossible for any of the statistics to be known, because I have root access to the GearUp database, and it is full of crap like students with a race of 1, or entire states’ worth of data missing, or suspiciously, implausibly round numbers…

            It’s a fact that at present, the efficacy of GearUp is unknowable, because the actual people implementing the various projects don’t even know they’re supposed to report data back to the higher-ups. Billions and billions of dollars are getting spent. The grant comes with very specific rules on what the money can be spent on, and yet those rules are openly flouted. Why, the task I’m assigned right now is: we just got an expense report from a school system that invoiced GearUp to pay for busing the high school lacrosse team to a nearby city for a game, and I have to sort it out and figure out if it was malice or just ignorance. I talked to the GearUp coordinator for the school involved, and they said their principal told them to try to use the funds to pay for as many different things as possible. Since the town the lacrosse game was held in had a university, and some of the students on the team were in the top track, they reported it as a ‘college visit’, and made sure to park the buses within line of sight of the university. This is, like, absolutely typical, and it’s just what I’m working on today.

            And that’s nowhere near the worst of it. Hell, the only reason my state’s GearUp program has any accountability at all is because the firm I work for exists, so there’s *somebody* in the grant ecosystem who actually knows what is going on. Places like South Dakota were less lucky; there, the GearUp manager embezzled millions of dollars in funds over just a few years. When they applied for the grant, their application was literally copy-pasted from Maine’s application. They did not even bother to change the names of the school districts, so that there were a bunch of Maine schools discussed in the South Dakota grant application. Nobody even noticed, not for almost 5 years, not until they started looking into the embezzlement scandal. (The GearUp coordinator ended up killing himself and his family rather than face the music, see: http://www.argusleader.com/story/news/politics/2017/03/15/witness-westerhuis-paid-bonuses-mid-central-employees-aiii-funds/99184876/)

            I would bet at 4:1 odds that a similar scandal is occurring in at least half of the states with GearUp money, but that nobody has found out yet. Hell, the only reason the South Dakota scandal was discovered was that the funds were supposed to help the Native Americans specifically, and the NAs were politically aware enough to notice something was going wrong and had a loud enough voice that people took notice.

            And GearUp is just one of many national educational grants currently ongoing. They’re all this bad.

            And that’s just the national programs! The programs run by nonprofits average about the same amount of badness, but with much higher variance (though, in my experience, not enough variance to actually push any of them up into the ‘net-positive outcomes’ zone).

            Now, I don’t do nearly as much with non-education grants. But my occasional run-in with the odd DOHHS or NSF or DoD grant tells me they’re pretty much exactly as bad. Hell, the one military grant I worked on… *checking the date on the NDA*, yeah we’re good. The one DoD grant I worked on was supposed to get more Americans into cybersecurity, since the military is hesitant to outsource critical security systems and is extremely concerned about our education system’s inability to teach network security.

            The grant program they ended up approving was supposed to take learning-disabled middle school students and use Scratch to teach them programming. Learning-disabled, you ask? Well, somebody watched Rain Man, and so they knew that mentally retarded folk were extremely good with numbers and stuff. That was the basis of the grant.

            Oh, but, it turned out they could get more money from the VA as well, if they included disabled veterans. So the actual project ended up being a bunch of grizzled ’Nam vets missing limbs, put into the same room with a bunch of literally-incapable-of-communicating mentally disabled children. Then they grabbed a secretary to do the actual teaching. She didn’t know anything about programming; she could barely remember how to email an attachment.

            And that was the project that the DoD awarded the grant to.

            So, if your argument is that maybe non-education grants are different… I’m skeptical.

          • Friendlygrantadmin says:

            So I struggle with how to respond to this, because education grants are actually among my specialties. I’m just going to say my experience in the field has been very different from yours. I’m not denying incompetence and graft exist, but they’re not the whole picture; the malfeasance I’ve seen has definitely been pretty minor in the grand scheme of things. Perhaps I’ve been unusually lucky in my employers, clients, and contacts, but I doubt it. I don’t think all the federal education grant programs are worthwhile, but I thought some good ones were scrapped and/or greatly pared back in the federal budget crunch of 2010/2011. My impression has always been that the less-competitive pots of money are more problematic for obvious reasons.

          • JohnWittle says:

            Really?

            Can you point me in the direction of an effective grant-funded project?

            I’m not exactly defying you, I just… well, I’d like to eliminate the space of hypotheses that involve you yourself being too incompetent to notice how ineffective grant-funded projects are, before I begin changing my worldview. Because we constantly meet new grantmaker admins who have worked in the field for decades without realizing that all of the data that ever crossed their desk was made up by teachers who couldn’t be bothered to actually get the real scores out of the filing cabinet, and so instead just reported whatever they remembered the scores as being.

            Or administrators who hear about our latest training on the difference between goals and strategies (if you’re tutoring kids in reading, then your goal should be “increase the reading scores by x%”, not “serve x kids”), and think we must be scammers because holy crap, who could possibly need that pointed out to them – then they audit one of the trainings and are stunned by the number of trainees who break down in tears with some statement along the lines of “oh my god, I’ve been trying to understand this stuff for decades, and only now does it finally make sense.”

            In other words, my impression is that the vast majority of people in the system have absolutely no idea how bad the system is, and I think it’s more likely that you are in that category than that I just happen to have never seen an effective grant-funded program by chance.

          • Friendlygrantadmin says:

            I mean … I know teachers need lots of hand-holding and effective program administration is paramount? I *have* encountered administrators (and plenty of them) who didn’t understand that the goal would not be “to get more computers” but “help children acquire 21st Century technology skills as demonstrated by XYZ.”

            On a related note, I am not a big fan of “educational technology,” which is a far bigger boondoggle than most grant programs.

            Very early on in my career, it became clear to me that precisely because so few educators really understand grants, it was paramount to make my proposals clear, simple, and easy to follow, with built-in timelines and very clear evaluation procedures whether these were required by the funder or not. One of the first major federal grant awards I won was a huge headache to administer because I didn’t really understand program evaluation and threw in every form of evaluation under the sun to make sure I was covering all the bases … but you know, we all learned from it, and in the future, I avoided imposing anything so onerous on people who are already overburdened with paperwork requirements. I also learned the value of working with a good external evaluator as much as possible while writing a proposal.

            I guess I’m more optimistic because I’ve worked with people who have been willing to learn from these and other mistakes and make good-faith efforts to understand their grant-funded programs in order to more effectively implement them. I’m also wondering if your knowledge of funders is slightly out of date. Government program officers I’ve encountered are *very* aware of how things are on the ground and desperate to provide any help they can so that the programs they run (on which their jobs usually depend) are done right and don’t make them look stupid. Of course, the legislation on which grant programs are based is usually outside of program officers’ control (and … not always helpful).

            U.S. Department of Education grant programs of which I generally approve include the Magnet Schools Assistance Program (because I am pro-magnet and anti-charter and have seen quite a lot of the different ways magnets vs. charters impact communities), the Elementary and Secondary Schools Counseling program, all of the iterations of the Fulbright-Hays program, and the now-defunct Recreational Programs program, which provided recreational opportunities for individuals with disabilities.

            I wrote a successful Recreational Programs proposal over a decade ago now; it was a three-year grant, but I know the organization in question is still doing many of the things they began doing through the grant to help children with severe special needs have more opportunities for meaningful and inclusive recreation. The funding went away, but the partnerships and capacity haven’t. Of course, the population in question is those with severe special needs, so it would be very difficult to rigorously quantify substantial positive outcomes or cost effectiveness. I therefore doubt you’ll be persuaded it was worthwhile.

            I’m pretty sure I made clear that I don’t think all federal education grant funding is worthwhile. (For instance, I am SO GLAD the passthrough E2T2 program is defunct–I used to review for that one on a state level, and it was kind of hilarious to see how people would tie themselves in knots trying to pretend they were going after the money for some reason other than “we want the free computers.”) I’m not going to list all the programs I disapprove of–I suspect the list would be somewhat longer than the list of programs I’m enthusiastic about. However, once funding has been appropriated, someone is going to get it. There’s no shame in pursuing such funding, especially if you know that the organization with which you are working will make at least a good faith effort to implement the program as described.

            I doubt this will be worth much to you, but I have glancing knowledge of a GEAR UP program in an urban district in the Northeast; it’s being administered by someone with whom I had a great deal of professional interaction and of whom I have a very high opinion. I don’t doubt that the problems you’re describing are real, but I also very, very seriously doubt the program my acquaintance is running leaves participants worse off than before. Overall, GEAR UP funding may well be an inefficient or even counterproductive strategy for attempting to help its target population, but since I’ve never written for GEAR UP and gone into its nitty-gritty details, I reserve judgment.

          • keranih says:

            I am *not* a grant writer (among my many blessings), but my impression from exposure to various NGO/non-profit grants is that measuring inputs is far more common than measuring outputs, which is in turn far more common than measuring effects. Among other issues, in many cases the base rate is unknown, and as running an observation trial to figure out the base rate would, by definition, not be changing anything or “helping” anyone, it’s a difficult sell.

            Another issue is the failure to hold “all else” constant, instead submitting to the impulse to “try everything” to improve a clearly suboptimal situation.

            Another: actual effects are actually downstream quite a ways, and according to the data at hand, only tenuously connected to the current program, which has cool outputs like “jobs for program evaluators” and “funding for attractive additional equipment.”

            At the same time, the inputs are arguably “helping” – older people get hot meals, that’s good, right? The question about how much that helps vs. *some other* help – currently not tried and not funded – doesn’t get asked.

            Tenth and last – confounders and sample manipulation happen in ways that frequently can’t be controlled by the study. For instance: a study wants to see if adding a particular textbook to the teaching program for students designated as “at risk” improves their end-of-year performance vs. the general population. If you’ve got 500 in the general population and 100 in the at-risk group, and you swap 5 bright-but-ornery kids out of general for 5 slow-but-behaved kids out of at-risk… you’ve just effected a 10% change in your at-risk population. And that change could have happened for ever so many *legit* reasons.

          • Friendlygrantadmin says:

            I have never worked for an NGO and don’t take NGO work as a freelancer–my experience is exclusively with domestic and domestic-serving organizations. While I’m broadly aware of the critiques of NGOs, I can’t speak to them as anything but an observer and won’t attempt to do so here.

            Domestic funders do *not* emphasize inputs over outputs and have not since circa 2000 or earlier. They don’t even emphasize outputs–they look for specific outcomes. This has led to lots of spurious evaluations since, as you correctly note, it’s not really possible to attribute specific effects to grant programs aside from the large federal programs such as Head Start, for which independent studies can at least be attempted. I would prefer that funders provide oversight but place less emphasis on program evaluation than most currently do. It’s actually one of my pet peeves that many foundations’ guidelines seem to expect a nonprofit to close the achievement gap or end poverty in their community with a $50k grant. That’s … not realistic.

            One of the reasons grants became the preferred funding mechanism was so different nonprofits could experiment with different solutions to community problems. Some of these problems are frankly intractable, but that doesn’t mean no improvements are possible, and that doesn’t mean we should permit the maximum of human suffering until we’ve worked out a foolproof formula for allocating charitable resources in the most absolutely efficient way.

            I’m all for minimizing waste and maximizing impact, but since perfection is impossible, I’d just as soon my grandma get that free meal instead of eating cat food.

            tl;dr: don’t make the perfect the enemy of the good.

  13. VirgilKurkjian says:

    Yeah, and on the OTHER hand, somehow we also get the worst of both worlds.

    I know a “famous” tenured psychologist at an Ivy League who was discovered re-using hypodermic needles between participants (yes, in a psych study!!!). And of course, the university covered it all up — the professor has kept their position and you would have a hard time finding any mention of it anywhere.

    Since they don’t stop crap like this, but do stop/impede people like Scott, I’m not sure if there’s much of a role for IRBs at all.

    • Friendlygrantadmin says:

      I think the complexity of IRB rules actually makes this sort of abuse more likely since it means so few people understand them.

      Having said that, the financial regs are really not arcane–I can still remember them in detail, which is more than I can say for the IRB regs–but I still knew of one horror story of an ongoing, deliberate evasion of them that was sanctioned by senior administrators. It was at a smaller institution (not mine–I really can’t say enough good things about my former employer), and they’re less likely to be audited.

      While I’m all for smarter and more streamlined regulations, I don’t know what the alternative to bureaucracy is when you have a field rife with deliberate malfeasance such as that you’ve described.

  14. jayarava says:

    From time to time I dip into studies of the Buddhist monastic legal corpus known as the Vinaya or Discipline. It mainly consists of rules and the stories behind each rule. No rules were ever made arbitrarily. Rules were only made to forbid monks from doing things that were being done. The censured acts ranged from murder to acting like a lay person, but there’s always a story.

    I suspect that the rules you are struggling to comply with are the same. That behind each incomprehensible rule is a story about someone who did whatever it is the IRB is forbidding you from doing.

    I’ve read enough pre-review board studies to have some idea of the Nazi-like things that were done to people in the name of science, especially in the USA. And this carried on well after the world knew *exactly* what Nazis were like. So yeah, you do have to prove that you are not going to act like a Nazi, because sadly you are standing on the shoulders of Nazis or people who behaved no differently from them. And doing so in the most litigious society on the planet.

    Having bipolar disorder sounds bad enough as it is, without having to participate in badly run and badly designed studies. The risk of doing harm is quite high.

    But it does sound like you have poor guidance and support to help you navigate the process. And I sympathise with you on that basis. Applying for grants and permissions ought to be about half of any post-graduate university course these days. Attracting grant money is a key performance indicator and thus you ought to be formally taught how to do it.

    BTW, almost every survey I’ve ever participated in had obvious design flaws.

    • Nancy Lebovitz says:

      Good for the Buddhist monastic legal system! Do any other legal systems have a policy of including the reasons laws are adopted?

      • Jaskologist says:

        The differing opinions attached to Supreme Court decisions are like this. I think it would be great for laws, too, but I suppose that would touch off a whole other round of negotiations.

      • Protagoras says:

        Plato’s Laws is for other reasons my least favorite of his works, but the hypothetical constitution he sketches out there does require that every law include a justification for why it should be a law.

  15. Radu Floricica says:

    I’ve seen familiar attitudes in a political group recently, and I’m thinking maybe there is some common fallacy at work. Something to do with people moving from a frontier mentality to a post scarcity one, which comes with a solid dose of risk-avoidance. And since people don’t use their system 2 to compute risks and costs in alternatives, they end up in a perpetual “doing something risky is bad” mindset – or even worse, “doing something bad is bad”.

    Case in point, extra regulation reduces immediate risk, with the long-term cost of less science. When comparing alternatives it’s more or less obvious that the downsides are small and the benefits larger, but most people don’t make the comparison. They just fall into one of two camps: cost-oriented (bad is bad) and benefits-oriented (this is cool). And after they make the snap judgement they use confirmation bias to justify it, going to rather funny lengths.

  16. zorbathut says:

    My sort-of-tangential IRB story:

    I was chatting with some people online and mentioned a small experiment I wanted to do. Someone said they were surprised I could get that through an IRB, because it seemed obviously harmful to the subjects, and that we’d need a pretty serious disclosure form before starting. I said, no, we’re not using an IRB, that would be crazy, and also, no disclosure form, that would also be crazy. They said this meant the experiment was illegal; I said that you only needed to bother with the IRB process if you were getting government funding (and that we’d *love* government funding but it ain’t ever gonna happen). They said that they hoped nobody else was exploiting the rules this way. I laughed and said that *thousands* of people were exploiting the rules like this, *every day*, and I was surprised they weren’t aware of it.

    The experiment?

    We wanted to do A/B testing on the color of the “Donate Now!” button on our nonprofit’s website. I thought green would be best, but our director thought it wouldn’t stand out enough.

    We weren’t reimbursing our, uh, “test subjects”, and we were in fact trying to get as much money out of them as possible (this being the entire point of the Donate Now! button). We certainly weren’t posting disclosure forms anywhere.

    The person I was talking to thought this qualified as a scientific experiment on human subjects, and we should have to go through all the rigamarole of an IRB in order to carry it out. I can’t entirely say he was wrong; I mean, it is an experiment, and on human subjects.

    But companies do experiments like this all the time. Dozens of them. Thousands of them. If you use Google, you’ve probably been involved in an experiment block at least once in the last month, and you didn’t know it then and you’ll never know it going forward.
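
    For concreteness, here is roughly all such a test amounts to: a minimal sketch of the standard two-proportion z-test a site might run on its click counts (the visitor and donation numbers below are invented):

```python
import math

def two_proportion_z(successes_a: int, n_a: int,
                     successes_b: int, n_b: int) -> float:
    """Two-proportion z-statistic: does variant B convert at a different rate than A?"""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented numbers: 10,000 visitors see each button color;
# green gets 230 donations, the other color gets 300.
z = two_proportion_z(230, 10_000, 300, 10_000)
print(f"z = {z:.2f}")  # z ≈ 3.08; |z| > 1.96 means significant at the 5% level
```

    Counting clicks in two groups and comparing two fractions is the entirety of the “experiment on human subjects” in question.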

    So . . . does this qualify? Should this qualify? Should government-funded organizations have to go through an IRB to do A/B testing on their website? If not, then why not? But if so, isn’t this going to hamstring the effectiveness of any government-funded organization with a significant online presence?

    Because it seems to me that if this *would* be covered, then we’re not enforcing it properly, because I guarantee government-funded agencies do a lot of A/B testing. (Like, for example, any political campaign that gets government funding.) And if it *wouldn’t* be covered, then a whole lot of other stuff – like Scott’s study – also shouldn’t be covered.

    But this kinda feels like a situation where the various branches aren’t talking to each other, and so many people aren’t aware that this fundamental inconsistency exists.

    Maybe I’m wrong.

    • VirgilKurkjian says:

      A lot of this discussion reminds me of one of the last articles from The Last Psychiatrist — I think we should consider IRB as Fetish.

    • Derrill Watson says:

      As others have pointed out above, this is the problem with defining what counts as “research” or a “clinical trial”. Everyone is doing “research” constantly – I’ll try this new thing and see if I get a different result – I’ll keep doing what I’ve been doing and see what happens. What usually brings up the IRB in my field is if I’m going to write it down and publish it somewhere. I can change around the structure of the classes I teach any time I want, but if I want to publish the impact of a change in the classroom, I need to go to the board and have someone confirm to my reading audience that giving people video study guides instead of paper isn’t unethical and (more relevantly) that I’m going to be excessively careful with my students’ reputations since I’m writing about their test results. A/B testing on your own website? I wouldn’t think you need IRB. If you want to publish it, you need an IRB stamp of approval.

      • Though, all of those academic fields that tend to a postmodernist style, like the various Critical Something Studies fields, seem exempt from this, and can write whole papers based on “autoethnography” or whatever they call it, basically anecdotal stuff from their own lived experience.

    • Murphy says:

      Sounds almost exactly like the Facebook drama a while back, where Facebook tried highlighting different types of stories to random users and gauged their reaction.

      But they made one crucial mistake: they made it science by publishing a paper on it. And science is always evil by default.

      Of course there were lots of people clamouring that everyone involved should be jailed, because almost by definition the change would have been slightly negative for some people.

      Of course, if the company had just made the change based on a senior exec ejaculating at a wall and deciding based on whether it hit yes or no, it would be 100% ethically acceptable whether users went in the slightly positive or slightly negative direction.

      • youzicha says:

        Indeed. And while the federal law only applies to funding recipients, the state of Maryland has a similar law that applies to anyone who does research on Maryland residents. After the Facebook discussion started, people pointed out that OkCupid had also posted a series of blog posts about, e.g., whether people are more likely to date people of their own race, which would also probably be illegal in Maryland. (Notably, OkCupid has stopped making the posts.)

        Again, the trigger that makes it illegal is publishing the results, not carrying out the experiments themselves. Here’s some discussion from a news article:

        But was what Facebook and OkCupid did research? The two law professors argue that yes, it was.

        “Both Maryland law and federal law define research as a systematic investigation designed to develop or contribute to generalizable knowledge,” says Henry. Because Facebook published the results of its study in a scientific journal, the Proceedings of the National Academy of Sciences, and because OkCupid shared the results of its study online, the two companies clearly intended for their findings to be taken generally.

        […]

        Facebook insists that the testing it was doing originated as product testing. And, indeed, “consumer acceptance study” is a specifically exempt kind of research under both federal and Maryland law.

        But, in an email, Henry writes:

        “The Facebook deception study is categorically different from corporate optimization. It was not about product testing or maximizing business-oriented results for Facebook. It involved deceptively manipulating people’s emotions for the purpose of testing a scientific hypothesis about emotional contagion, the results of which were ultimately published in a peer-reviewed scientific journal.”

        Grimmelmann said that this difference—between corporate testing and academic research—has been debated before.

        “Websites are not the only entities that do both research and not research,” he told me by phone. “The line has been litigated, written about, and thought hard about. It’s not as though this problem has never been considered before.”

        Even if a hospital changed internal procedures to waste fewer drugs, he added, it wouldn’t constitute research because their aims would be all internal. It’s publishing the research and making it generalizable that triggers the state law.

        • switchnode says:

          (Notably, OkCupid has stopped making the posts.)

          FWIW, I don’t think this is causal. Although there was an OkTrends post about the study after its publication in 2014 (and one last review post shortly afterwards), the previous post was three years old at that time.

          It seems more likely to me that this had to do with OkCupid’s 2011 acquisition by Match.com (probably for personnel rather than policy reasons—the blog’s author, Christian Rudder, subsequently “assumed day-to-day control of OkCupid”, and published a book shortly afterwards—although it is amusing to note that a post about the customer-unfavorable economics of paid dating sites was preemptively removed by another founder).

          There seems to have been a reboot of the blog, including a “data” category, in 2016, although it is very slight in comparison.

  17. Jacob says:

    I didn’t realize “private IRB” was something that existed. Isn’t/shouldn’t there be a concern that they’ll just rubber stamp everything? They certainly have a market incentive to do so.

  18. pistachi0n says:

    I had to do the “don’t be a Nazi” training to extract DNA from fecal samples from anonymized patients from another country that were collected before I even signed on to the project.

    What was more irritating to me was the amount of safety training I had to go through. I understand that there are risks to working in a lab, especially with human pathogens. But I had to go through several sessions of safety training where they essentially told me not to mouth pipette anything–it never would have occurred to me to mouth pipette if it hadn’t been for all the safety training telling us not to do it! But they didn’t actually cover any common ways people in the 21st century get sick or hurt from screwing up in the lab.

    I’m not exaggerating. I went to the basic lab safety training, where they told us to wear gloves and not to mouth pipette, and then to chemical safety training, where they told us it was especially important not to mouth pipette dangerous chemicals, and then to biosafety training, where they told us that if we must mouth pipette, please, for the love of God, mouth pipette something other than human waste, and then when it turned out that some of my samples were from HIV positive patients I had to go to blood borne pathogens training where they told us not to mouth pipette blood borne pathogens.

    Lab safety training drinking game: Take a shot every time they tell you not to mouth pipette. Down the whole bottle when you get in trouble for drinking in the lab.

    Anyway, I’m not saying safety isn’t important, but all of my trainings could have been easily condensed into something a quarter of the length, and they could have stood to focus less on mouth pipetting and more on waste disposal and proper sterilization.

    I also had to go to a two hour long radiation safety training, despite not working with anything radioactive. There’s not even anything radioactive in my building! But it’s required for anyone in a research lab at my institution because there’s another building somewhere else on campus that works with radioactive materials. Mouth pipetting didn’t come up in that one, though.

    • Montfort says:

      No one ever warned me about mouth pipetting in chemistry class (because I wasn’t doing research, I suppose), and consequently I never would’ve dreamed of doing it either.

      I guess all the warnings are left over from the days when people were switching over from mouth pipettes to more modern ones, but it’s possible the warnings are useful to foreign students. A Stack Exchange user links to a paper that claims some 28% of techs surveyed from Pakistani hospitals have mouth-pipetted something. Not sure if it’s a regular practice or just occasional, though.

      • pistachi0n says:

        That surprises me, both that they didn’t warn you in chemistry class and that the practice is still alive. I remember the first time I was warned about mouth pipetting was in a chemistry lab class when I was a first-year undergrad.

        Mechanical pipettors are designed to measure accurately and they’re also inexpensive (I mean, as far as lab equipment goes). It makes more sense if there are still people in the world who mouth pipette, and my university does have a lot of grad students from other countries. I figured it wasn’t something anybody had done for at least a hundred years.

        I suggested a mouth pipetting accuracy contest (with water, of course) when we were trying to brainstorm fun activities for the visiting potential grad students and the powers that be were Not Amused.

        • Eric Rall says:

          The big thing I never understood, and still don’t understand, is how you get any degree of accuracy with mouth pipetting: the mechanics of pipetting by mouth seem to require holding the pipette at very nearly the worst possible angle for reading the volume of liquid in it.

        • beleester says:

          You don’t even need a fancy mechanical pipetter to do it without your mouth. In AP Chem we used a special pipette bulb – a rubber bulb with some valves attached. Dirt cheap and pretty simple to use (one valve to suck, one to release).

          (Although I have used micropipetters where you can set the exact volume and they’re pretty cool.)

    • Edward Scizorhands says:

      People complain that a supermajority of OSHA training is about ladders, but ladders used to be 80% of workplace injuries, so it makes sense. (I made up that 80% number.)

      • bbartlog says:

        Falls are definitely a big deal in terms of workplace injuries. But a lot of them involve things like poorly designed stairs, loading docks with unprotected drops, and walkways with inadequate railings – not just ladders. Source: I spent a few months complying with OSHA regulations on elevated platform design back in the 1990s.

    • Murphy says:

      “Mouth pipetting didn’t come up in that one, though”

      That probably means it’s OK; you should mouth pipette some radioactive materials, they probably taste fantastic! If it was dangerous they would have included it in the safety training.

    • moscanarius says:

      Ha! We were taught to safely (?) mouth pipette samples in microbiology lab classes, less than ten years ago. But everyone was aware that this was a huge anachronism and that we would not be doing it in any serious research laboratory.

  19. Salentino says:

    Scott, I think I may have found a security bug on your site. If I go to my Dashboard i.e. this page https://slatestarcodex.com/wp-admin/index.php I see a list of recent comments. Doing a mouseover of the number of comments reveals the email address of the most recent commenter in the link e.g. https://slatestarcodex.com/wp-admin/edit-comments.php?s=SOMEONES_EMAIL%40gmail.com&comment_status=approved

    I’m guessing this is not a desirable feature.

    • Edward Scizorhands says:

      It’s not easy to find but he has an email address.

    • Aapje says:

      @Salentino

      It is actually unethical to publish a severe security bug like this publicly right away, instead of first contacting the website (owner) and giving them some time to fix the bug. There are commonly agreed rules for this (usually called “responsible disclosure”).

      Also, you can change your email under ‘profile,’ which I suggest that people do if they want to keep their real email private.

  20. MT says:

    I can give a patient twice the standard dose of a dangerous medication without justifying myself to anyone. I can confine a patient involuntarily for weeks and face only the most perfunctory legal oversight. But if I want to ask them “How are you this morning?” and make a study out of it, I need to block off my calendar for the next ten years to do the relevant paperwork.

    You are overlooking (or ignoring) a very important distinction here. You have the power to give drugs and confine people because you are trusted to be acting in the best interests of your patients based on your knowledge and expertise, not just messing around for the hell of it to see what happens. That trust is vested in you not because you’re a swell guy (which you would also be in a scientist hat) but rather because of the trust we have in the medical profession.

    You have powers that are so strong and potentially abusable as a doctor, yet you can’t do something so simple as a scientist. If only you just stretched your inherited trusting relationship to ask a few questions that aren’t medically necessary, or give drugs that aren’t actually needed, etc. – hell, you could easily do that if you wanted, and other doctors already do it! If anything, the amount of trust we have that the doctor is doing the right thing is a bit bizarre.

    Not to say that the paperwork isn’t bad, or that it’s justified; it’s just that this ‘I am a doctor, I am a GOD within these hospital walls, get out of my way’ line of logic doesn’t really help you here. Though it is a good stereotype.

    • Edward Scizorhands says:

      I believe he was pointing out the wide dichotomy between “I can do all these dangerous things with no oversight” versus “I can’t do these harmless things because of excessive oversight.” Not arguing that he should be treated as a God for research.

  21. suntzuanime says:

    If you think peeing in a cup doesn’t have risks, I feel like you haven’t really gotten into the true IRB spirit.

    • Murphy says:

      As pistachi0n’s post reveals, there’s a significant risk that the patients will grab a pipette and attempt to mouth pipette other patients’ samples. As such, all patients should probably be required to attend a safety training course where they’re trained not to mouth pipette human urine samples.

  22. The Nybbler says:

    I feel like I’m protesting a police state, and people are responding “Well, you don’t want total anarchy with murder being legal, do you?”

    Welcome to Libertarianism, Scott. Glad you could make it.

    • Sonata Green says:

      2012:

      [Y]ou never run into Stalinists at parties. […] If I did, I guess I’d try to convince them not to be so statist, but the issue’s never come up.

      2017:

      I dare you to tell me we’re at a happy medium right now.

  23. konshtok says:

    there are ever-increasing numbers of college-educated young women with no real skills
    the regulatory industry is the only place they can work at (in?)
    this is not going to get better

    if you don’t need money and/or academic credit you should just ignore the IRB

  24. keranih says:

    I think that given how many times the Tuskegee Syphilis experiment gets brought up, and how often it is mischaracterized – for instance:

    Nazism isn’t the reason IRBs exist. Far worse. American unethical experimentation is, and omitting it is a huge error.

    – that more attention should be paid to just what went wrong with the experiment. If Tuskegee Syphilis is used to justify certain preventive measures, then those measures should be those which would prevent a repeat of the experiment.

    The TSE did *not* deliberately infect anyone. The TSE cannot be taken as an object lesson in “don’t deliberately harm patients” or “we have to be careful in order to prevent experimenters from deliberately harming patients.” Yet through the rumor mill and, imo, deliberate disinformation and rabble-rousing, the idea that the government deliberately made people sick keeps circulating. The WP article Scott linked to quotes a 1999 survey that showed eighty percent of African American men believed that the experiment deliberately infected people. The implications both for disease prevention awareness (like, “I don’t need to wear a condom, people get syphilis by deliberate infection, not by sex”) and for trust in health recommendations are likely very large.

    The primary ethical and professional flaws of the experiment were:

    1) Failure to adequately explain the diagnosis to the enrollees, and to explain the ramifications of the diagnosis.
    2) Failure to educate the enrollees on the option of effective treatment if such became available in the future.

    The failure to adequately explain the diagnosis is difficult to defend against, unless one has a signed and witnessed statement by the patient that they understand the diagnosis. I do not doubt that the researchers failed to do what is now considered adequate explanation – but at the same time, I don’t necessarily think that even what we now consider “adequate explanation” would have changed the course of history that much. The profound information incompetence among average college-exposed people today is surprising – among the poorly educated, often illiterate rural black population of that time, even more so.

    Take as an example the attitudes and behaviors described in this LA Times article. And in this one from the NYT. Or this one from the Atlantic. Indiscriminate sex spreads disease. This has been known for quite some time. It’s still a behavior that people do. (That the 2016 Atlantic article goes so far as to describe the AIDS crisis as “over” is…well, that’s the other sort of issue.)

    That other sort of issue is the unfactual tales that get circulated in less-educated areas – such as the attitudes described in Rebecca Skloot’s book The Immortal Life of Henrietta Lacks (Reader’s guide pdf here.) A doctor took samples of cervical cancer cells – now known to be caused by HPV, as most cervical cancers are – and eventually the doctor discovered that this sample of cells would self-replicate in a way that extremely few human cells will. This set of cells – HeLa – became the foundation of cell-line tissue samples which allowed for testing of drugs and chemicals on human tissue without the side effects of actually applying the drugs to humans.

    In Skloot’s book, she describes the times and process of getting the cells from Lacks, the eventual widespread use of HeLa cells in medicine, and the discovery of the use of the cells by Lacks’s family. Skloot does a fairly decent job of separating out cause, effect, intent, and modern ethics from time-dependent ethics, and the book is well worth the read. However, Skloot leaves unsaid the likely source of Lacks’s infection with syphilis and the HPV that killed her – the husband who was, decades later, trying to sue Johns Hopkins for a cut of the profits from selling HeLa cells. Skloot also fails to examine the irrational attitudes of Lacks’s family, who, upon seeing the vials of cell tissue, said “I thought they had her (Lacks) here – or, you know, an arm or a leg.” And this was after years of explanation of what Lacks’s actual legacy was – a clump of grey tissue.

    This sort of blatant lack of comprehension of basic science is not limited to poorly educated African Americans, although the rumor-mongering is probably worse there than other American subcultures. Pick a group – anti GMO activists, anti-nuke activists, anti-government militia, anti-vaccine types, flat earth creationists – and you’ll find a lump or twelve of beliefs unhinged from reality.

    A solid attempt by the TSE researchers to explain that for this patient, their “bad blood” was related to a specific sexually transmitted disease with (and I think this is especially important) potentially deadly effects for their sexual partners and not-yet-born children would have been the right thing to do. That over 90% of the enrollees had gotten treatment elsewhere by 1963 indicates that this might not have fallen entirely on deaf ears. By the same token, it’s not at all clear that the researchers’ failure to inform the patients had that much of a limiting effect on their ability to get treatment.

    The second ethical failure of the trial was to not inform the participants of the new effective treatment once it became known. For this, I can’t find any excuse. I can’t hold the providers responsible for failing to *provide* the treatment free of charge – but I can and do hold them responsible for not attempting to explain to the participants that such treatment existed.

    Circling back around to Henrietta Lacks – part of the unethical activity there was releasing PII and medical records to a reporter without the family’s knowledge or consent. Given the sexually-transmitted nature of the disease that killed Lacks, one can see the distress this could cause, on top of the ethical issue of privacy. On the other hand – the dead are acknowledged to have no claim against slander, isn’t that so? And the issue of wealthy families being able to hush up news reporters in a way that the poor cannot is another age-old problem – as is the way that the follies of dead rich people make better press and better sales than the follies of dead poor people. In this light, given that Lacks’s records were released deliberately, it makes little sense to put such emphasis as Scott describes on stripping personal data from records in use, when what is needed is more emphasis on not releasing information on purpose.

    My point in all this is not that IRBs are necessarily evil, or even that the Tuskegee Syphilis Experiment was not a major stain on the research profession. Instead, I think we should take what lessons are to be found from the experience, and not invent other, graver evils to use as justification for rules that hamper the advancement of knowledge. And on top of that, in the cause of the advancement of knowledge, we should vigorously reject false tales of misconduct, if only because the allegations are used to justify more obnoxious rules. Tales of witches hexing milk cows only lead to burning alleged witches, not to more plentiful milk.

    • Yes, Tuskegee did cause a quantifiable reduction in life expectancy because it caused black men to distrust doctors:

      “The researchers estimate that life expectancy at age 45 for black men fell by up to 1.4 years in direct response to the 1972 disclosure of the Tuskegee study.”

      I didn’t realize there was so much misunderstanding of the real nature of the study, though. Maybe it was sensationalism that caused the most damage?

    • Nancy Lebovitz says:

      ”PHS researchers attempted to prevent these men from getting treatment, thus depriving them of chances for a cure. A PHS representative was quoted at the time saying: “So far, we are keeping the known positive patients from getting treatment.”[20] Despite this, 96% of the 90 original test subjects reexamined in 1963 had received either arsenical or penicillin treatments from another health provider.[21]”

      You’re right that the men weren’t deliberately infected. However, there are a lot of bad details to the experiment.

      • keranih says:

        Nancy, if your google-fu brings up the actual sources for that contemporary quote, I would really appreciate it. The Wikipedia article is lousy with dead links.

        Plus – “At the time” – in reference to an experiment that ran for forty years? How dreadfully specific.

        (Please note I am not saying the quote is a lie, but that it is unverifiable with the dead links.)

        • Nancy Lebovitz says:

          https://books.google.com/books?id=yTpse3iEMA8C&pg=PA61&lpg=PA61&dq=So+far,+we+are+keeping+the+known+positive+patients+from+getting+treatment&source=bl&ots=Ib_nIJaIAp&sig=VCvWdjvYE57QhqhNn47yFCXQSXU&hl=en&sa=X&ved=0ahUKEwjNpbnMlIbWAhXK6iYKHRNNAhwQ6AEIKDAA#v=onepage&q=So%20far%2C%20we%20are%20keeping%20the%20known%20positive%20patients%20from%20getting%20treatment&f=false

          My google-fu has only turned up that an ebook would cost $7, which I’m dithering about. The author of the book was the men’s lawyer.

          Meanwhile, Eunice Rivers was a black nurse who was in contact with the men – her plausible opinion was that being part of the experiment was the only way they could get medical care at all.

          Anyone want to weigh in on the ethics of her choice?

          • Jiro says:

            her plausible opinion was that being part of the experiment was the only way they could get medical care at all.

            If that’s true, then this sounds like a case where Copenhagen ethics is a good idea. (Unless any of the rationalists who oppose Copenhagen ethics claim that this behavior is ethical.)

          • keranih says:

            Thanks for that link – the bit I was able to read did give a name (Dr. Murray Smith) and a date – 1942. This was significantly before the use of penicillin (per Wikipedia, in 1942 there was enough penicillin available to treat 10 patients in the whole USA). The treatment discussed would have been some of the treatments tested early in the experiment (i.e., mercury and arsenic) and found to be ineffective.

            I don’t think it’s at all correct to use this quote – as the main WP article does – to indicate that the experimenters were, at this time, attempting to keep the patients from effective treatment. Likewise, from the linked book it is not clear whether the indications that the patients were being excluded from public health activities against syphilis refer to pre- or post-penicillin efforts.

            Regarding Rivers – I think that she – along with the other researchers, both Caucasian and African American – had cause to see herself as not the same sort of person as the illiterate farm workers who made up much of the study population. I also think she was motivated to find a lack of harm, just as the lawyer for the study enrollees was motivated to find harm.

            Having said that…I also think she was right. Both the linked book (where a doctor speaks of identifying and treating glaucoma among the participants) and my own experience in developing nations (and modern American poor) is that getting poor people in to see a doctor regularly is hard, and getting older men in to see the doc is like shifting cold molasses uphill. In part it’s because of money, and the lack of local docs that comes from living in a place where the locals don’t have money to pay for docs. (The study enrollees were transported to the big city and the big hospital for regular checkups – this was a huge event for them, and they surely wouldn’t have wasted the funds on an extravagance like that, even if they had the money.)

            I myself would like to run the numbers, with the data we have, on the likely impact of treatment for the enrollees when it became available. Late syphilis is more difficult to manage than when the disease is caught earlier. I don’t know whether the data shows that the enrollees who did not have syphilis had better life outcomes than those not infected in the general population, nor whether those who did have syphilis prior to the introduction of penicillin were worse off than those in the general population with the disease. That would help decide if the results supported Rivers’ choice in hindsight.

            I think it’s important to remember that we can’t compare the outcomes to the expected outcomes of today, but to the expected outcomes for that population at that time.

          • Nancy Lebovitz says:

            Mass production of penicillin was in place in 1945. The Tuskegee study started in 1932, so I think the men were marginal for late-stage syphilis.

          • Nancy Lebovitz says:

            Considering the large loss of trust that resulted from the Tuskegee experiment, it might have been a good time to rely on deontology and not lie to people.

          • keranih says:

            Nancy, I’m not following the deontology comment.

            Given that this cohort of men was already infected for varying periods in 1932, it’s not at all clear to me that treatment of the previously infected would have been a simple matter by 1947. Treatment of the control group who became infected after 1947 is another matter, but I don’t know how many of the 201 original controls became infected after that time.

            I think it’s easy to decry the pop culture version of the study, but the ethics of the actual study were more complex, and we are better served by understanding the actual arguments and failures, so that we are forearmed against *those* dangers, and not the cartoon-villain evils which populate the pop culture version.

          • Mary says:

            I believe the comment means that a deontologist who flatly refused to lie to people under any circumstances would have done more good, even by consequentialist standards, than the experiment did.

    • hyperboloid says:

      Okay I’m calling it, we have now officially reached peak Slate Star Codex, someone is defending the Tuskegee experiment.

      No, the US public health service did not intentionally infect patients in Macon County with syphilis (though they did do exactly that in Guatemala at the same time), but they did everything short of it.

      The Tuskegee experiment began in 1932, when PHS partnered with the Tuskegee Institute and the Rosenwald Fund to do a clinical study on untreated syphilis patients. Originally the plan was to study the subjects for between six and nine months, and then provide treatment with the limited means available at the time. This plan involved ethical transgressions (though not the grotesque ones that the study would later become known for), on account of the fact that the subjects were not going to be informed of the treatment schedule. The idea was that they would simply be told that they had an opportunity to receive special free treatment. They would then show up for medical exams; for the first few months they would be given placebos, and only once enough data had been collected would they receive any real treatment. The doctors involved justified this on the not entirely unreasonable grounds that the patients would be receiving treatment that they would otherwise not have access to.

      Early on, the program encountered a serious setback when, struggling under the strain of the Great Depression, the Rosenwald Fund backed out, leaving the doctors without funds to purchase the medication that would ultimately be used to treat the patients. The PHS doctors decided to continue anyway, essentially offering the men of Macon County completely fraudulent medical treatments so as to gain access to them as test subjects. As the years went on, the study deviated further and further from any acceptable standard of ethical medical practice. During World War II, 250 of the subjects registered for the draft, and upon induction were immediately diagnosed with syphilis by army doctors and offered treatment; the PHS staff intervened and succeeded in preventing many of the men from receiving any medical care. By the late nineteen-forties penicillin had become the treatment of choice, and unlike the earlier toxic Salvarsan and mercury-based treatments, it was highly effective. A cure had been found. Nevertheless, even as PHS was administering free courses of penicillin to syphilis patients around the country as part of a national push to control the disease, the men in the study were never treated. Of the 399 who started the study, 74 lived to its conclusion; of those who had died in the intervening decades, 129 had died of syphilis or syphilis-related complications. In addition, 40 wives of test subjects had been infected, and 19 children had been born with congenital syphilis.

      Tuskegee was not just malpractice, it was negligent homicide.

    • Noumenon72 says:

      The LA Times article says 15 in 100,000 people have syphilis. That information makes me more comfortable with indiscriminate sex than I was previously.

      • On the other hand, those willing to engage in casual sex with you are probably more likely than the random person to have syphilis, due to engaging in it with others. How much more likely I have no idea.

        • keranih says:

          For the USA, I would not be as concerned about syphilis as about other STDs. (Recent CDC summary) Syphilis is one of the three big bacterial STDs, as opposed to herpes, HPV, and HIV.

          Factors to consider should include the severity of the disease – herpes, for instance, very rarely progresses to systemic or fatal disease. Also to be considered is the sex of the person – HPV is more significant in females, as are several other diseases, while syphilis and gonorrhea are more commonly seen in men. Some diseases compound the risk of other diseases – gonorrhea and HIV, for example. Additionally, some diseases are far more responsive to treatment and/or vaccination – HPV has a fairly effective vaccine, but while chlamydia, syphilis, and gonorrhea can all be treated with antibiotics, over one third of gonorrhea isolates are resistant to one or more drugs.

          Finally, consider the prevalence rate in the subpopulation in which the person chooses sexual partners. Men who have sex with men have STD rates far greater than other subpopulations. African American men are found to be infected more often than men of other ethnicities. Among women, the rates are highest among sex workers, especially sex workers who use drugs. (CDC stats)

      • John Schilling says:

        The LA Times also says that 153 in 100,000 people have gonorrhea, which unlike syphilis now comes in largely drug-resistant strains. Is there a reason why you are only concerned with syphilis, which is unusually easy to prevent and cure as STDs go?

  25. onyomi says:

    It’s interesting to contrast the standards to which the government holds scientists and researchers to prevent them becoming Nazis with the standards to which they hold the police to prevent them becoming Nazis. (Note that I’m not saying I’m sure the police are under-regulated or under-accountable, though I strongly suspect they are; I’m just saying the difference itself is instructive with respect to how governments treat their “bosses” (citizens) as opposed to their enforcers (“public servants”).)

    • tlwest says:

      If no scientist is ever able to do their job again because of insane paperwork, who is going to complain? A few scientists. Except in the abstract, there is no loss as far as the public is concerned. Having even one experiment turn bad makes people feel a little bad, so that outweighs a perceived benefit of essentially zero.

      If no policeman is ever able to do their job again because of insane paperwork, who is going to complain? Pretty much every single citizen. Having a bunch of policemen turn out to be Nazis is an acceptable cost given the benefit.

      Given the incentives to the university bureaucrats (job loss if it goes south, no gain if it works), I’m amazed they ever approve any trials at all. At least in a bribery culture, you can give the bureaucrats *some* incentive to approve your work :-).

  26. madrocketsci says:

    One more pixel in the picture that bureaucracy is Evil. Period. There is nothing that it does not ruin eventually. There is no way to harness it. There is no way to contain it. Whatever useful function it seems like it might possibly just this once serve in the beginning is always betrayed in the end amid the wreckage of every other value. It is the progressive substitution of dead mechanical procedure for intelligent attention and thought. Once the damage begins, it is only a matter of time.

    Bureaucracy has ruined medicine. Bureaucracy has ruined the space program. Bureaucracy doesn’t kill civilizations (or subsets thereof), but it locks them in a paralytic coma which only total destruction by an outside competitor, or total collapse, can end.

    It might not be the only evil in the world, but it is evil.

  27. Lirio says:

    If bureaucracy is evil, then it is a necessary evil, since I do not see how you could possibly run modern civilization without it.

  28. Alan Crowe says:

    One bit of history that has always puzzled me is the origins of the Stalinist purges in the 1930s. I understand that once they get going, purges can be self-sustaining. If you torture people into confessing that they are White Russian saboteurs, you’ve got some confessions to feed your paranoia. If you continue, the people that you question next will accuse others to deflect attention from themselves. It snowballs. But how did it start?

    Then I picture new soviet officials, elevated from peasant backgrounds. They have never read Kafka. Gordon Tullock is 40 years in the future. They have experience of the arbitrary rule of the old Russian aristocracy, but high hopes for well intentioned bureaucracy; it is the shiny new thing.

    Then they encounter shit like the IRB. For some, it is a learning experience. Running a bureaucracy is hard. Let’s start thinking about incentives. For others, it is just plain baffling. For those of a paranoid turn of mind, it just has to be covert sabotage by deep-cover White-Russian sleeper-agents. And that is the seed from which the purges grow.

    Well! That is a bit of a stretch. Still, bad bureaucracy does fail really hard – so hard that it is hard to understand what is going on. I claim that these failures pose two challenges for sociology. The first challenge is: what is causing this, and how can we fix it? The second challenge is: these failures mess with people’s heads, because they are so hard to understand; what knock-on effects do we expect?

  29. paranoidaltoid says:

    I can give a patient twice the standard dose of a dangerous medication without justifying myself to anyone. I can confine a patient involuntarily for weeks and face only the most perfunctory legal oversight. But if I want to ask them “How are you this morning?” and make a study out of it, I need to block off my calendar for the next ten years to do the relevant paperwork.

    This is the point IRB apologists seem to be missing. If we should have an IRB with this kind of power for research, we should definitely have one for all the daily non-research activities of doctors. That is where most of the abuse is going to happen.

    Yet we don’t have anywhere near that level of regulation. My guess is that preventing doctors from doing their jobs would have obvious, direct negative consequences, while preventing doctors from doing research has indirect, distant negative consequences. Requiring a mountain of paperwork to be filed every time a doctor wants to ask a patient “How are you?” would be immediately recognized as terrible and not accepted. Yet we’ve accepted such regulations for doctors who want to do research.

  30. MB says:

    My intuition is that the current consensus in medicine, some fields of biology, and social science is largely manufactured. Even if the effects are real, their size is on average much smaller than has been widely touted. Some effects depend on almost inconceivably complicated factors and interactions.
    Hence the importance of making research in these fields as hard and slow as possible. The main researchers know each other and share the same basic philosophy and ideas; they might even trust each other to some extent. Any heterodox thinkers will have been smoked out by graduate school. No young researcher is given a lab of his or her own until people are sure he or she gets along well with others and will do nothing to upset the cart (is the impression I’m getting). Money (in the millions) and prestige are at stake.
    As the current “replication crisis” has shown, it’s probably all too easy to set up an experiment slightly differently (or even the exact same experiment in a less careful manner or in a more careful manner or in slightly different circumstances) and arrive at very different results. If sufficiently many experiments are conducted (N > 1), it’s almost unavoidable that something like this should happen eventually. Then even a careful and good-faith researcher, who has done almost everything right, will be forced to waste time to defend prior results against attack from a scientific inferior, from the ignorant masses, or from an ill-wishing adversary.
    I am sure that even the most honest researchers feel protective toward their own results and hostile to those who try to overturn them. This goes double for dishonest researchers, who know exactly what’s up.
    Hence it’s impossible to arrive at a consensus just by using the scientific method. Scientists’ best hope for consensus is to prevent the wrong people, the inattentive, or simply too many people from conducting scientific experiments in the first place.
    In addition to making science close to impossible to conduct in the US for the little people, sufficiently well-connected researchers in the life sciences can also set up their experiments in third-world countries, dispense with the cumbersome US rules, and only report the results if they are favorable to their hypotheses.
    They understand that they got where they are by also wielding power and influence, not strictly by following the scientific method, and are certainly not about to step aside now.

  31. actinide meta says:

    My first reaction to your post was sympathy and horror at the Kafkaesque bureaucracy. I’m not any kind of fan of bureaucracy. But on reflection, I’m afraid I think that your study, despite a good cause and the best of intentions, was unethical, that fifty times as much bureaucracy wouldn’t be sufficient to protect patients, and that, sad as it is, there is probably no ethical way to study committed patients in a mental hospital.

    First, risks. Although you are a very smart and thoughtful guy and had trouble coming up with anything to put in the risks section, your experiment actually did have risks to your patients that, in my opinion, are significant:

    * It risked their privacy, because you might have done a worse job protecting this information than the existing patient information system. This was definitely exacerbated by the IRB’s “help”: what you called “encryption” sounds like it was a policy naively designed to decrease the risk that, in sharing data with other researchers or people involved in the study, you leak personally identified data. That is, they thought that if you were to share the “data” half of your data with someone who is helping you with your regression, but didn’t share the “subject information” half, your patients’ privacy wouldn’t be compromised. They were wrong. Just knowing that someone was in your hospital on a particular day is almost enough information to identify someone uniquely; just a few bits more information extracted from the survey itself would do it (see the sketch after this list). There is no way this “anonymization” would meet an appropriate differential privacy standard. And even if you designed a protocol that had no privacy risks, and had it reviewed by someone competent to do that (i.e. clearly not your IRB), there’s a nontrivial chance that someone on your team would violate it. That is a risk of harm.

    * As you eloquently said yourself when discussing the problems with getting consent from patients, asking patients to consent to being experimented on might make them kind of paranoid. Some of them already thought you were experimenting on them! It might decrease their trust in you. It might make them suspect that other parts of their treatment are actually for the benefit of experimenters and not them. Even if these concerns are irrational, presumably patients in a mental hospital might not all be rational, and their state of mind is what is at issue.

    * Apparently there is a risk that patients will stab themselves with the pen or pencil they need to sign the consent form.

    * In light of these very non-obvious risks, I think it’s reasonable to imagine that there might be more, unknown ones. And frankly, I expect that the typical researcher trying to do a study under these circumstances will be significantly less thoughtful, and significantly less ethical, than you, and might fail to think of or disclose even more serious risks.
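
    (To make the re-identification arithmetic in the first bullet concrete, here is a minimal Python sketch. The ward size and epsilon are invented purely for illustration, and the Laplace mechanism shown is the textbook way to meet a differential privacy standard for a released count – not anything from Scott’s actual protocol.)

        import math
        import numpy as np

        # Re-identification arithmetic: knowing the admission day shrinks the
        # candidate pool, and each yes/no survey answer leaks about one more bit.
        admissions_per_day = 10  # hypothetical ward size, for illustration only
        bits_to_single_out = math.log2(admissions_per_day)  # ~3.3 bits
        # So an admission date plus three or four binary answers from the
        # "anonymized" survey row can be enough to single out one patient.

        def dp_count(true_count: int, epsilon: float) -> float:
            """Release a count under epsilon-differential privacy.
            A count has sensitivity 1 (adding or removing one person changes it
            by at most 1), so Laplace noise of scale 1/epsilon suffices."""
            return true_count + np.random.laplace(0.0, 1.0 / epsilon)

        print(bits_to_single_out)         # 3.32...
        print(dp_count(42, epsilon=0.5))  # deliberately noisy, e.g. 39.8

    The point of the noise is that no single patient’s presence or absence detectably changes the released number – which is exactly the property that handing out the “data” half of a spreadsheet does not have.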

    And this is pretty much the safest study I can imagine anyone doing, so clearly there is no such thing as a study without significant risks to patients.

    Second, consent. I don’t think that someone committed to a mental hospital can consent to anything. They are completely in the power of the staff of the hospital. They may reasonably (or unreasonably!) fear retaliation if they don’t agree with something the staff clearly wants them to do. And they might very well be incompetent to consent as well. For both of these reasons, you can’t get their consent for a study for the same reason you can’t get their consent to have sex with you. The consent forms are a cargo cult ritual; if it is not ethical to do the study without consent it is even less ethical to do it after coercing someone to sign a consent form. Or two consent forms, as the case may be.

    Third, conflict of interest. As someone with power over involuntarily (or so-called “voluntarily”) committed patients, you have a really extraordinary responsibility to act only in their interest. Doing anything, no matter how safe or well-intentioned, that has any other goal than treating that individual patient, creates a conflict of interest that I think is inherently unethical. No matter how good a person you are, at some point the patient’s needs and your needs could come into conflict and your decisions might be swayed. I don’t think anyone other than the patient has a right to OK this conflict of interest, and as discussed they can’t. And even the potential for this type of conflict of interest could make someone somewhere more reluctant to seek treatment, which is yet another source of harm.

    Now, maybe you are thinking that if I think this stuff is unethical, I’d be REALLY horrified to hear about all the other stuff that went on at your hospital. And that’s probably true, but it’s neither here nor there with respect to this particular issue.

    So how could you possibly do even passive data collection on mental inpatients ethically? I think you would have to go find people who are in their right minds and not in a hospital, and try to get them to agree prospectively that, if they are ever committed, certain types of studies may be done on them in accordance with certain rules. You would have to come up with a scheme by which the answer to this question is blinded, so that you can have a computer extract the data from the patient information system for those people who have previously consented, but make it impossible for even malicious staff to learn anything about whether or not a patient consented, to a differential privacy standard. You have to get that system peer-reviewed by real privacy and security experts, AND you have to be able to convince every ordinary person that the system works (otherwise they may fear that, if ever committed, there will be retaliation against them for NOT consenting, and be coerced into consenting). Obviously this would be absurdly difficult and expensive if it’s even possible at all. And this approach is even more challenging for studies that actually affect patients’ treatment. I don’t think you can blind the study and protect the patients from retaliation at the same time.
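
    (A minimal sketch of that extraction step, under loud assumptions: the records, field names, and consent registry below are all hypothetical, and a real system would also need audited code, a privacy budget, and expert review. The idea is only that the released statistic is noised enough that even staff who see the output cannot confidently infer any single patient’s consent bit.)

        import numpy as np

        def release_count(records, consented_ids, predicate, epsilon):
            """Count consenting patients who match a study criterion, then add
            Laplace noise. Flipping one patient's consent bit changes the true
            count by at most 1 (sensitivity 1), so noise of scale 1/epsilon
            makes the output epsilon-differentially-private w.r.t. consent."""
            true_count = sum(1 for r in records
                             if r["id"] in consented_ids and predicate(r))
            return true_count + np.random.laplace(0.0, 1.0 / epsilon)

        # Toy usage with fabricated data:
        records = [{"id": 1, "score": 14}, {"id": 2, "score": 7},
                   {"id": 3, "score": 21}]
        consented_ids = {1, 3}  # hypothetical prior-consent registry
        print(release_count(records, consented_ids,
                            lambda r: r["score"] >= 10, epsilon=0.5))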

    So I’m afraid I think the right thing to do is stop all research of this type. “Fortunately” that should be much easier than reducing bureaucracy 🙁

    • Murphy says:

      Net result of your approach: it’s entirely 100% impossible to weed out ineffective or even actively harmful interventions because it effectively bans all research involving mental health patients.

      Of course that’s no skin off your nose; you get to feel ethically squeaky clean. What should it matter to you if someone who isn’t you is harmed because nobody was allowed to show that the treatments or tests being routinely used on them don’t work?

      You have a kind of monstrous definition of “the right thing to do”. It’s derived from some squeaky-clean first principles, but it’s the same kind of reductio ad absurdum that tells us that doctors should murder people for their organs and distribute those organs to sick kids who need transplants.

      When you get an absurd conclusion like either that or yours, it’s a sign that the reasoning process followed is not one suitable for dealing with real people in the real world, and that you’ve followed principles off into the badlands without reference to what they actually mean for the people they affect.

      • actinide meta says:

        I have loved ones who have found or might find themselves in a mental hospital. In that very unfortunate event I would like them to go to a hospital where the only concern of the staff is the well being of their current patients, not research. If you asked me to consent in advance to such research on me I would refuse, and I bet the majority of subjects would refuse if they were not under duress. I don’t need any long chain of reasoning from first principles to reach this conclusion, and I don’t think it’s absurd. Nor does it make it impossible to learn anything about mental health treatment. You might not be able to get the very best kind of evidence in the exact population you want, and that’s a real shame. But we don’t have an RCT for parachutes either, and the reason is that there is no ethical way to do it.

        I think research is very valuable and I made an attempt to come up with a way to gain consent for this kind of research. Basically the equivalent of organ donor status. But I fear it is really hard and expensive. Another strategy could be to have research done only at special research hospitals, and you can only be committed to such a hospital based on prior consent. I think that mitigates the risk of retaliation because if you don’t consent you don’t wind up in the hands of the researchers. You probably have to pay people to consent.

        You are the one arguing on utilitarian grounds that doctors can harvest their patients’ (organs/data) for others’ benefit, not me. I want real, uncoerced consent before people make sacrifices for the common good.

        • Jiro says:

          I would add something.

          Imagine a situation where there are alternative treatments X and Y and a lot of doctors support each side.

          If you believe that X is the best treatment, it’s ethical to give the patient X.

          If you believe Y is the best treatment, it’s ethical to give the patient Y.

          Suppose you believe that X is the best treatment, and you decide to give the patient treatment Y anyway because you don’t like him very much and you don’t think he deserves the best treatment. Is that ethical? After all, it would be okay for another doctor to give him Y, so surely you can do it, right?

          For the same reason that it would be unethical to give the patient Y (even though another doctor could ethically do it), it would be unethical to give the patient a randomized choice between X or Y.

          • Edward Scizorhands says:

            What if they don’t know whether X or Y is better but believe they are both good?

          • actinide meta says:

            I’m involved in funding a study that has more or less this ethical dilemma, with a staggering number of lives at stake. X is the standard of care, and a lot of patients don’t survive. Y is not a priori likely to be extremely harmful, and might be lifesaving. The doctors that believe in Y can’t randomize, because giving X would be (from their perspective) murder. Other doctors (orders of magnitude more) want high grade evidence for Y before switching from X. So a study can be designed by Y advocates but basically has to be done by people right on the fence. And it has to have adaptive design and stopping conditions designed in so you can analyze it if and when the results start to pull them off the fence. And the patients are very sick, so consent is probably coming from family.

            There are no easy answers, even when the purely utilitarian calculation isn’t close.

          • @actinide meta:

            One solution is to randomly assign patients to doctors, some of whom believe in X and some in Y. It isn’t perfect, because the doctors might differ in other ways, but it solves the ethical problem.

          • Jiro says:

            David: The point is that this answers the question “if it’s okay to do X and okay to do Y, why isn’t it okay to have trials where you randomly choose X or Y?”

            It’s okay to do X or Y conditional on your belief that X or Y is the best available treatment for the patient. Unless you believe that X and Y are exactly equal, you can’t ethically choose randomly between them.

          • actinide meta says:

            @DavidFriedman I thought of randomly assigning people to doctors (in fact, I suggested it in the case of the study I referred to above, but there are too many practical institutional challenges to make it work).

            I have another idea along the same lines, but both more powerful and more carefully ethical (at the cost of more complexity). Since this comment thread is aging I will probably bring it up in an open thread at some point.

          • Murphy says:

            @noted, extreme arrogance can make anything you do ethical. As long as you believe in yourself and everything you do then you can do no wrong.

        • Murphy says:

          I’ve had loved ones in hospital. When they’re in hospital, or if I were in hospital, I might very well want the hospital administrator to say “fuck everyone else, I’m dedicating our resources to this person”.

          I wouldn’t want the hospital to have any student doctors or nurses at all, why would I want to be the person they learn on? Let them practice on someone else. All doctors in my hospital should be consultants!

          I’d want that hospital to have all the answers without spending any resources on getting those answers; someone else in some other country should do that.

          If I’m little Timmy, or little Timmy is my family, I want nobody to ever make any taboo tradeoffs. Unlimited resources for Timmy!

          But that would be a shitty hospital administrator, because a competent one needs a touch of rule utilitarianism in their decision making. What I might hypothetically want for me is a fairly shitty guide to what the hospital should actually do.

          A more sane guide might be “what would the most sane and sensible version of me prefer the system to be if I didn’t get to know in advance who I’m going to be in the scenario?” Which yields much more sane answers. Taboo tradeoffs are made: little Timmy dies, but the children’s unit has enough funding to stay open. Reasonable low-risk research is done promptly, and everyone gets subjected to less harmful procedures and more helpful ones. Nobody murders me for my organs, but the worry of me getting a fleck of dust in my eye or a papercut doesn’t utterly paralyse the system for finding out whether the treatments being used are actually harmful to me or others.

          When you follow any single framework – be it utilitarianism, deontology, etc. – without applying any kind of critical thinking, you end up in the land of evil-absurdity.

          If you follow utilitarianism too far, you end up with doctors murdering people for their organs.

          If you follow deontology into absurdity, then you end up knowingly harming millions for fear that a patient might get a paper-cut.

          Sane and non-evil people almost always follow a mix of ethical systems.

          You’ve followed one into absurdity.

          If there are 2 treatments and you’re 55% sure that one of them is better, you don’t get an ethical pass on leaving that 45% sitting there – particularly if it’s an open question and you know damned well that there’s a 45% chance that you may be severely harming all your patients for your entire career.

          But the flavour of deontology that says that ignorance is a moral shield, and that arrogance in your decisions makes anything you do ethical, will tell you that that’s perfectly OK if you follow it into the lands of absurdity.

  32. Virbie says:

    Is there anyone familiar with the IRB’s functioning – or even someone who actually works for an IRB – who can attempt to even slightly justify the other side of this? It seems like the IRB functionaries have a decent amount of discretion here, and it’s hard for me to look at the enthusiastic Kafka re-enactment they’re conducting without finding the individuals involved utterly despicable.

    Obviously, I might be mistaken if I’m misunderstanding the incentives involved in the situation and this is a classic case of Moloch, but in that event, I’d like to know more about what these incentives are. I’m trying really hard to be charitable and imagine any CYA scenario in which some of the rules mentioned in Scott’s earlier posts were relaxed and it came back to bite the auditor in the ass, and I’m finding it really difficult to do so. It’s easy enough to think that they’re just shitty people or “useful idiots” who are incapable of understanding the damage they’re doing to the world, but besides being facile, that’s a little too dark and cynical even for me.

    • Douglas Knight says:

      Several people in these comments have given cost-benefit analyses under which IRBs win. You probably read those comments and failed to understand what they were doing because they had different values than you and you could not believe they were really claiming those points as benefits. For example, IRBs exist in psychology and not just medicine because Zimbardo didn’t want people refuting his fraud. But he wasn’t as brazen in admitting his plans as the people in these comments.

      There is a simple explanation of Scott’s experience, which is that his hospital doesn’t do research, so there’s no pressure for the IRB to function. Large numbers of people claimed that their research university IRBs work much better. But research university IRBs are slow and petty compared to drug company IRBs, which requires some explanation. And I’ve heard a lot of stories of research university IRBs being not just slow and petty but crazy, which is a big mystery. In particular, I’ve heard this claimed not (just) about bureaucrats, who might have skewed incentives, but professors.

  33. xmd says:

    Your story is interesting, but illustrates a stereotype which unfortunately is based in truth: the average M.D., while intellectually of the finest order, has absolutely *shit* for training in some areas (some of those areas include *statistics*, *research/methodology*, and, unfortunately in your case, *psychology*).

    On the one hand, I agree: your story includes clear “Catch-22” like absurdities that no sentient being should be subjected to. That sucks.

    On the other hand: your story seems to talk about this world of mysterious regulations on this thing called “research” which has caused you trouble. The framing kind of makes you look like a rube, I’m sorry to say.

    A lot of MDs who want to do research find themselves unprepared, and end up going back to get more training (such as an M.Ph.). Have you considered that you might want to up your game a bit?

    I have a Ph.D. in a Clinical field, and have worked with many MDs and hold them in very high esteem.

    None of us can do without the Ph.D. statisticians (the Mentats, to use the Dune analogy).

    • The Nybbler says:

      Getting a Ph.D. or an M.Ph. isn’t going to help with the IRB process, nor is bringing in statistics Ph.D.s. The person required is called an “expediter”, one whose skill is in navigating bureaucracies (also telling the appropriate lies in a deniable fashion, and often enough greasing the appropriate palms).

      • xmd says:

        Sure, politics matters. But you seem to be claiming that lack of specialist training / knowledge in the subject…doesn’t matter?

        Does not compute.

        The IRB is a gunfight.

        Don’t come with your B team. And if you do, and fail, it’s clear where the blame lies.

        • The Nybbler says:

          I’m saying that additional knowledge of the subject matter would not have helped Scott’s issues with the IRB.

          • xmd says:

            one by one, I sought out the laziest attendings in the hospital and asked “Hey, would you like to have your name on a study as Principal Investigator for free while I do all the actual work?” Yet one by one, all of the doctors refused, as if I was offering them some kind of plague basket full of vermin. It was the weirdest thing.

            Oh yes, indeed – he brought the A team, just like a person educated in the subject matter would.

            I stand corrected.

    • Sam Reuben says:

      So, are you saying that Scott’s problems were due to a lack of good fundamentals in statistics? If so, please quote a passage which indicates that.

      Otherwise, are you saying that his problems were caused by an inability to navigate the bureaucracy? If so, are you saying that this ability to navigate the bureaucracy is clearly linked to one’s ability to perform good research?

      If not, then are you criticizing Scott for not being able to do research, or not being able to play the research-approval game? If the latter, then is the research-approval game important enough for the kind of study that Scott wanted to do that he shouldn’t complain that he had to play it?

      The latter point is what Scott’s essay was about, by the way. He was saying that he didn’t think he should have to be good at gaming the system in order to do a simple study.

  34. pontifex says:

    Clearly, we need to extend the IRB system to Computer Science, to prevent the Robopocalypse.

    Remember, there is no future but what you choose. And what you note down in 50 pages of paperwork (black or blue pen only, please.)

    • 6jfvkd8lu7cc says:

      Well, in some parts of the computer crafts there is already a tradition of publishing criminally illegal research under relatively stable nicknames… And I guess ransomware would allow corporations a way to indirectly finance such research with plausible deniability.

  35. ayegill says:

    Although I’m an economist, this has caused me to think of experiments I could do with radiation, such as secretly exposing a large number of students to radiation and seeing, years later, if it influences their income.

    “Exposed to dangerous levels of radiation during a freak accident in an economics lab, Peter Parker acquired the abilities of an economist. Able to model complex systems with a massive margin of error, he fights unsound fiscal policy on the streets of Washington. He is… Econ-man.”

  36. abnovi says:

    Thanks for linking to the “reclassifying everything as clinical trials” petition! My cognitive neuroscience lab (studying what people’s brains look like when they watch Sherlock) has had multiple lab meetings where we’ve concluded we are completely screwed if basic science is categorized as clinical trials. Btw, this is at Princeton, where every lab has a research assistant in charge of IRB paperwork, every department has a research studies administrator (or two, or three), and the university has multiple IRB members who don’t make ethical decisions – they just file paperwork.

    Also: one way I’ve dealt with the IRB in the past is to… Kind of ignore it? They said we were paying subjects too much. We asked if they would shut our study down if we continued paying them at our current rates. They said they wouldn’t shut us down, they just didn’t like it. So we kept on going.

    • Murphy says:

      It makes you wonder what the world would look like if IRBs were applied to everything equally.

      “No, you can’t increase the janitor’s salary; if you do, members of vulnerable groups might be tempted to apply for the janitorial position, which involves working with solvents. Compensation needs to be kept at the current token levels so that only rich people who want to be janitors for the love of cleaning will apply”

      • Jiro says:

        It is ethically permissible to not give the janitor the best salary you can using the available resources.

        It is not ethically permissible to do the same with patient treatments.

        • random832 says:

          The fact that you don’t actually know which option is best is the entire point of doing an experiment. (And having them fill out the questionnaire isn’t actually a change to patient treatment if he isn’t using it to formally diagnose anyone he wasn’t already diagnosing anyway.)

        • keranih says:

          It is not ethically permissible to do the same with patient treatments.

          …I think this stands on the slippery slope of declaring medicine as entirely outside the market, which I understand some people believe that it is, but I hold that it is entirely ethical to provide a level of quality medicine commiserate with proffered pay, and so, yes, it is ethically permissible to offer variable levels of quality treatments to a patient, so long as they understand what they are signing up for.

          • Nancy Lebovitz says:

            “provide a level of quality medicine commiserate with proffered pay”

            I’m sure you meant commensurate and that was a typo, but it’s a great typo.

        • Jiro says:

          The amount that the patient is charged falls under “available resources”. You have to not deliberately give the patient a treatment that is substandard for the amount paid. (And yes, consent can change this.)

          Deliberately giving a janitor pay that is substandard for the amount of work he does is fine (but will probably result in him getting a job elsewhere).

        • John Schilling says:

          The amount that the patient is charged falls under “available resources”. You have to not deliberately give the patient a treatment that is substandard for the amount paid.

          When are patients ever charged for their treatment, as opposed to e.g. insurance companies or national health services?

          I think you are letting “available resources” do all the heavy lifting in your argument, and you need to pin that down in more detail if you want to persuade anyone. Yes, in the narrow case of a cash customer, you have to give them what they paid for. That’s not some special principle of medical ethics; that’s the old common-law rule against fraud. But even there, you can offer them a discount to (maybe, if they wind up in the control group) accept less than the best available treatment.

          And in the case at hand, there’s no treatment involved at all, just a diagnostic. There’s no need to withhold the standard/best diagnostic in order to also evaluate a second diagnostic, and no such thing was proposed in this case.

        • Murphy says:

          Not all research subjects are patients, but all human research means IRBs – because one case is put on a pedestal and one is not.

          Nobody will ever challenge you for paying your janitors too well and the very idea would be considered absurd.

          However, make it “research” by keeping track of how many square meters of floor someone cleans per day when you pay them more, and publish the results?

          Now you’re an evil scientist who deserves jail for running experiments on vulnerable humans, and to avoid that you need someone standing over your shoulder declaring that it’s wrong to pay them more, because it might be bad for them.

          Let’s subject everyone and everything to IRBs!

          If preventing harm to vulnerable people is important when dealing with researchers because of “incentives”, then it should be no less important when dealing with other groups with potentially perverse incentives – when banks loan money and offer credit cards, or when fast food chains try to make their latest unhealthy offering even more addictive while targeting their advertising at the children of poor people.

          Let’s bring on the IRBs to stand over their shoulders, and the auditors to review the risk of harm to which they’re subjecting millions of vulnerable people.

  37. Sam Reuben says:

    As a small suggestion, I believe the best way to express the point encapsulated in the IRB post would be:

    There are many, many activities and practices out there right now which are easier to implement than they are to study, insofar as their regulation is concerned.

    This is a clear absurdity. It should never be easier to get permission to do something than it is to study doing it, because the study is supposed to show that it’s worth doing (not to mention safe). If something is dangerous enough that it needs significant regulatory oversight before it can be studied, then it should take that same regulatory oversight before it can be implemented normally, and if it’s safe enough that it can be implemented with no regulatory oversight, the oversight required for a study should be minimal. There’s a real issue in that, even in the cleanest and neatest IRB setups, it’s harder to get permission to study the usefulness of a set of forms than it is to get permission to just hand the forms out. This is purely by virtue of serious IRB intervention being needed even for these kinds of studies.

    The solution is clearly not to eliminate IRBs, because they’re still clearly worthwhile for the things which it’s hard to get permission to implement, seeing as the studies involved can clear or bar the way for implementation based on the results. There should probably be adjustments as to how they interact with easy-implementation subjects, though, because the way it works now is just going to push low-risk fields further and further away from rigorous scientific study. After all, why deal with an IRB when you can just hand out the forms, take down notes yourself, and spread the news of the results by word of mouth? This kind of obstruction is just going to make IRBs less and less relevant to these fields.

    • Douglas Knight says:

      That’s one point, but it’s not the only point. It’s the canary in the coalmine. If the IRB is crazy in the easy cases, what reason is there to expect it to be useful in the hard cases?

      • Jiro says:

        Perhaps it’s “crazy in the easy cases” because it has a fixed difficulty that doesn’t scale with the size of the case? It may actually be appropriate for larger studies, even if they are also hard cases.

        • Douglas Knight says:

          If the problem were the amount of paperwork, you could imagine that this is a fixed amount, appropriate to the hard cases, and that the only way to avoid it in the easy cases is to completely excise it.

          But that isn’t the problem. When I said “crazy” I didn’t mean a metaphorical “crazy amount of work.” I meant that they get the wrong answer. If they cannot evaluate the safety of safe studies, can they evaluate the safety of dangerous studies? If the IRB encourages people to list fake dangers for safe studies, it probably encourages them to list fake dangers even when real ones exist.

        • Edward Scizorhands says:

          I doubt this is the case here, but bike-shedding can explain why simple things are hard while complicated things are easy.