My IRB Nightmare

September 2014

There’s a screening test for bipolar disorder. You ask patients a bunch of things like “Do you ever feel really happy, then really sad?” If they say ‘yes’ to enough of these questions, you start to worry.

Some psychiatrists love this test. I hate it. Patients will say “Yes, that absolutely describes me!” and someone will diagnose them with bipolar disorder. Then if you ask what they meant, they’ll say something like “Once my local football team made it to the Super Bowl and I was really happy, but then they lost and I was really sad.” I don’t even want to tell you how many people get diagnosed as bipolar because of stuff like this.

There was a study that supposedly proved this test worked. But parts of it confused me, and it was done on a totally different population, so its results might not generalize to hospital inpatients. Also, it said in big letters THIS IS JUST A SCREENING TEST, IT IS NOT INTENDED FOR DIAGNOSIS, and everyone was using it for diagnosis.

So I complained to some sympathetic doctors and professors, and they asked “Why not do a study?”

Why not do a study? Why not join the great tradition of scientists, going back to Galileo and Newton, and make my mark on the world? Why not replace my griping about bipolar screening with an experiment about bipolar screening, an experiment done to the highest standards of the empirical tradition, one that would throw the entire weight of the scientific establishment behind my complaint? I’d been writing about science for so long, even doing my own informal experiments, why not move on to join the big leagues?

For (it would turn out) a whole host of excellent reasons that I was about to learn.

A spring in my step, I journeyed to my hospital’s Research Department, hidden in a corner office just outside the orthopaedic ward. It was locked, as always. After enough knocking, a lady finally opened the door and motioned for me to sit down at a paperwork-filled desk.

“I want to do a study,” I said.

She looked skeptical. “Have you done the Pre-Study Training?”

I had to admit I hadn’t, so off I went. The training was several hours of videos about how the Nazis had done unethical human experiments. Then after World War II, everybody met up and decided to only do ethical human experiments from then on. And the most important part of being ethical was to have all experiments monitored by an Institutional Review Board (IRB) made of important people who could check whether experiments were ethical or not. I dutifully parroted all this back on the post-test (“Blindly trusting authority to make our ethical decisions for us is the best way to separate ourselves from the Nazis!”) and received my Study Investigator Certification.

I went back to the corner office, Study Investigator Certification in hand.

“I want to do a study,” I said.

The lady still looked skeptical. “Do you have a Principal Investigator?”

Mere resident doctors weren’t allowed to do studies on their own. They would probably screw up and start building concentration camps or something. They needed an attending (high-ranking doctor) to sign on as Principal Investigator before the IRB would deign to hear their case.

I knew exactly how to handle this: one by one, I sought out the laziest attendings in the hospital and asked “Hey, would you like to have your name on a study as Principal Investigator for free while I do all the actual work?” Yet one by one, all of the doctors refused, as if I was offering them some kind of plague basket full of vermin. It was the weirdest thing.

Finally, there was only one doctor left – Dr. W, the hardest-working attending I knew, the one who out of some weird masochistic impulse took on every single project anyone asked of him and micromanaged it to perfection, the one who every psychiatrist in the whole hospital (including himself) had diagnosed with obsessive-compulsive personality disorder.

“Sure, Scott,” he told me. “I’d be happy to serve as your Principal Investigator.”

A feeling of dread in my stomach, I walked back to the tiny corner office.

“I want to do a study,” I said.

The lady still looked skeptical. “Have you completed the New Study Application?” She gestured to one of the stacks of paperwork filling the room.

It started with a section on my research question. Next was a section on my proposed methodology. A section on possible safety risks. A section on recruitment. A section on consent. A section on…wow. Surely this can’t all be the New Study Application? Maybe I accidentally picked up the Found A New Hospital Application?

I asked the lady who worked in the tiny corner office whether, since I was just going to be asking bipolar people whether they ever felt happy and then sad, I could maybe get the short version of the New Study Application.

She told me that was the short version.

“But it’s twenty-two pages!”

“You haven’t done any studies before, have you?”

Rather than confess my naivete, I started filling out the twenty-two pages of paperwork. It started by asking about our study design, which was simple: by happy coincidence, I was assigned to Dr. W’s inpatient team for the next three months. When we got patients, I would give them the bipolar screening exam and record the results. Then Dr. W would conduct a full clinical interview and formally assess them. We’d compare notes and see how often the screening test results matched Dr. W’s expert diagnosis. We usually got about twenty new patients a week; if half of them were willing and able to join our study, we should be able to gather about a hundred data points over the next three months. It was going to be easy-peasy.

That was the first ten pages or so of the Application. The rest was increasingly bizarre questions such as “Will any organs be removed from participants during this study?” (Look, I promise, I’m not a Nazi).

And: “Will prisoners be used in the study?” (COME ON, I ALREADY SAID I WASN’T A NAZI).

And: “What will you do if a participant dies during this research?” (If somebody dies while I’m asking them whether they sometimes feel happy and then sad, I really can’t even promise so much as “not freaking out”, let alone any sort of dignified research procedure).

And more questions, all along the same lines. I double-dog swore to give everybody really, really good consent forms. I tried my best to write a list of the risks participants were taking upon themselves (mostly getting paper cuts on the consent forms). I argued that these compared favorably to the benefits (maybe doctors will stop giving people strong psychiatric medications just because their football team made the Super Bowl).

When I was done, I went back to the corner office and submitted everything to the Institutional Review Board. Then I sat back and hoped for the best. Like an idiot.

October 2014

The big day arrived. The IRB debated the merits of my study, examined the risks, and…sent me a letter pointing out several irregularities in my consent forms.

IRREGULARITY #1: Consent forms traditionally included the name of the study in big letters where the patient could see it before signing. Mine didn’t. Why not?

Well, because in questionnaire-based psychological research, you never tell the patient what you’re looking for before they fill out the questionnaire. That’s like Methods 101. The name of my study was “Validity Of A Screening Instrument For Bipolar Disorder”. Tell the patient it’s a study about bipolar disorder, and the jig is up.

The IRB listened patiently to my explanation, then told me that this was not a legitimate reason not to put the name of the study in big letters on the consent form. Putting the name of the study on the consent form was important. You know who else didn’t put the name of the study on his consent forms? Hitler.

IRREGULARITY #2: Consent forms traditionally included a paragraph about the possible risks of the study and a justification for why we believed that the benefits were worth the risks. Everyone else included a paragraph about this on their consent forms, and read it to their patients before getting their consent. We didn’t have one. Why not?

Well, for one thing, because all we were doing was asking them whether they felt happy and then sad sometimes. This is the sort of thing that goes on every day in a psychiatric hospital. Heck, the other psychiatrists were using this same screening test, except for real, and they never had to worry about whether it had risks. In the grand scheme of things, this just wasn’t a very risky procedure.

Also, psychiatric patients are sometimes…how can I put this nicely?…a little paranoid. Sometimes you can offer them breakfast and they’ll accuse you of trying to poison them. I had no illusions that I would get every single patient to consent to this study, but I felt like I could at least avoid handing them a paper saying “BY THE WAY, THIS STUDY IS FULL OF RISKS”.

The IRB listened patiently to my explanation, then told me that this was not a legitimate reason not to have a paragraph about risks. We should figure out some risks, then write a paragraph explaining how those were definitely the risks and we took them very seriously. The other psychiatrists who used this test every day didn’t have to do that because they weren’t running a study.

IRREGULARITY #3: Signatures are traditionally in pen. But we said our patients would sign in pencil. Why?

Well, because psychiatric patients aren’t allowed to have pens in case they stab themselves with them. I don’t get why stabbing yourself with a pencil is any less of a problem, but the rules are the rules. We asked the hospital administration for a one-time exemption, to let our patients have pens just long enough to sign the consent form. Hospital administration said absolutely not, and they didn’t care if this sabotaged our entire study, it was pencil or nothing.

The IRB listened patiently to all this, then said that it had to be in pen. You know who else had people sign consent forms in pencil…?

I’m definitely not saying that these were the only three issues the IRB sprung on Dr. W and me. I’m saying these are a representative sample. I’m saying I spent several weeks relaying increasingly annoyed emails and memos from myself to Dr. W to the IRB to the lady in the corner office to the IRB again. I began to come home later in the evening. My relationships suffered. I started having dreams about being attacked by giant consent forms filled out in pencil.

I was about ready to give up at this point, but Dr. W insisted on combing through various regulations and talking to various people, until he discovered some arcane rule that certain very safe studies with practically no risk were allowed to use an “expedited consent form”, which was a lot like a normal consent form but didn’t need to have things like the name of the study on it. Faced with someone even more obsessive and bureaucratic than they were, the IRB backed down and gave us preliminary permission to start our study.

The next morning, screening questionnaire in hand, I showed up at the hospital and hoped for the best. Like an idiot.

November 2014

Things progressed slowly. It turns out a lot of psychiatric inpatients are either depressed, agitated, violent, or out of touch with reality, and none of these are really conducive to wanting to participate in studies. A few of them already delusionally thought we were doing experiments on them, and got confused when we suddenly asked them to consent. Several of them made it clear that they hated us and wanted to thwart us in any way possible. After a week, I only had three data points, instead of the ten I’d been banking on.

“Data points” makes it sound abstract. It wasn’t. I had hoped to put the results in the patients’ easily accessible online chart, the same place everyone else put the results of the exact same bipolar screening test when they did it for real. They would put it in a section marked TEST RESULTS, which was there precisely so there would be a secure place to put test results, and where everybody’s secure test results were kept.

The IRB would have none of this. Study data are Confidential and need to be kept Secure. Never mind that all the patients’ other secure test results were on the online chart. Never mind that the online chart contains all sorts of stuff about the patients’ diagnoses, medications, hopes and fears, and even (remember, this is a psych hospital) secret fetishes and sexual perversions. Study data needed to be encrypted, then kept in a Study Binder in a locked drawer in a locked room that nobody except the study investigators had access to.

The first problem was that nobody wanted to give us a locked room that nobody except us had access to. There was a sort of All Purpose Psychiatry Paperwork room, but the janitors went in to clean it out every so often, and apparently this made it unacceptable. Hospitals aren’t exactly drowning in spare rooms that not even janitors can get into. Finally Dr. W grudgingly agreed to keep the study binder in his office. This frequently meant I couldn’t access any of the study material, because Dr. W was having important meetings that couldn’t be interrupted by a resident barging into his office to rummage in his locked cabinets.

But whatever. The bigger problem was the encryption. There was a very specific way we had to do it. We would have a Results Log, which said things like “Patient 1 got a score of 11.5 on the test”. And then we’d have a Secret Patient Log, which would say things like “Patient 1 = Bob Johnson from Oakburg.” That way nobody could steal our results and figure out that Bob was sometimes happy, then sad.

(meanwhile, all of Bob’s actual diagnoses, sexual fetishes, etc. were in the easily-accessible secure online chart that we were banned from using)

And then – I swear this is true – we had to keep the Results Log and the Secret Patient Log right next to each other in the study binder in the locked drawer in the locked room.

I wasn’t sure I was understanding this part right, so I asked Dr. W whether it made sense, to him, that we put a lot of effort into writing our results in code, and then put the key to the code in the same place as the enciphered text. He cheerfully agreed this made no sense, but said we had to do it or else our study would fail an audit and get shut down.
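
(A minimal sketch in Python, with invented patient names and scores, of what the two-log scheme is supposed to buy you, and why keeping both logs in the same drawer buys you nothing:)

    # Illustrative only: invented names and scores, not actual study data.
    results_log = {"Patient 1": 11.5, "Patient 2": 7.0}   # scores under code names
    secret_patient_log = {"Patient 1": "Bob Johnson",     # the re-identification key
                          "Patient 2": "Alice Smith"}

    # Stored separately, a leak of results_log alone exposes scores but no names.
    # Stored together, a single leak exposes both at once:
    deanonymized = {secret_patient_log[pid]: score
                    for pid, score in results_log.items()}
    print(deanonymized)   # {'Bob Johnson': 11.5, 'Alice Smith': 7.0}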

January 2015

I’d planned to get a hundred data points in three months. Thanks to constant bureaucratic hurdles, plus patients being less cooperative than I expected, I had about twenty-five. Now I was finishing my rotation on Dr. W’s team and going to a clinic far away. What now?

A bunch of newbies were going to be working with Dr. W for the next three months. I hunted them down and threatened and begged them until one of them agreed to keep giving patients the bipolar screening test in exchange for being named as a co-author. Disaster averted, I thought. Like an idiot.

Somehow news of this arrangement reached the lady in the corner office, who asked whether the new investigator had completed her Pre-Study Training. I protested that she wasn’t designing the study, she wasn’t conducting any analyses, all she was doing was asking her patients the same questions that she would be asking them anyway as part of her job for the next three months. The only difference was that she was recording them and giving them to me.

The lady in the corner office wasn’t impressed. You know who else hadn’t thought his lackeys needed to take courses in research ethics?

So the poor newbie took a course on how Nazis were bad. Now she could help with the study, right?

Wrong. We needed to submit a New Investigator Form to the IRB and wait for their approval.

Two and a half months later, the IRB returned their response: Newbie was good to go. She collected data for the remaining two weeks of her rotation with Dr. W before being sent off to another clinic just like I was.

July 2015

Dr. W and I planned ahead. We had figured out which newbies would be coming in to work for Dr. W three months ahead of time, and gotten them through the don’t-be-a-Nazi course and the IRB approval process just in time for them to start their rotation. Success!

Unfortunately, we received another communication from the IRB. Apparently we were allowed to use the expedited consent form to get consent for our study, but not to get consent to access protected health information. That one required a whole different consent form, list-of-risks and all. We were right back where we’d started from.

I made my case to the Board. My case was: we’re not looking at any protected health information, f@#k you.

The Board answered that we were accessing the patient’s final diagnosis. It said right there in the protocol: we were giving them the screening test, then comparing it to the patient’s final diagnosis. “Psychiatric diagnosis” sure sounds like protected health information.

I said no, you don’t understand, we’re the psychiatrists. Dr. W is the one making the final diagnosis. When I’m on Dr. W’s team, I’m in the room when he does the diagnostic interview, half the time I’m the one who types the final diagnosis into the chart. These are our patients.

The Board said this didn’t matter. We, as the patient’s doctors, would make the diagnosis and write it down on the chart. But we (as study investigators) needed a full signed consent form before we were allowed to access the diagnosis we had just made.

I said wait, you’re telling us we have to do this whole bureaucratic rigamarole with all of these uncooperative patients before we’re allowed to see something we wrote ourselves?

The Board said yes, exactly.

I don’t remember this part very well, except that I think I half-heartedly trained whichever poor newbie we were using that month in how to take a Protected Health Information Consent on special Protected Health Information Consent Forms, and she nodded her head and said she understood. I think I had kind of clocked out at this point. I was going off to work all the way over in a different town for a year, and I was just sort of desperately hoping that Dr. W and various newbies would take care of things on their own and then in a year when I came back to the hospital I would have a beautiful pile of well-sorted data to analyze. Surely trained doctors would be able to ask simple questions from a screening exam on their own without supervision, I thought. Like an idiot.

July 2016

I returned to my base hospital after a year doing outpatient work in another town. I felt energized, well-rested, and optimistic that the bipolar screening study I had founded so long ago had been prospering in my absence.

Obviously nothing remotely resembling this had happened. Dr. W had vaguely hoped that I was taking care of it. I had vaguely hoped that Dr. W was taking care of it. The various newbies whom we had strategically enlisted had either forgotten about it, half-heartedly screened one or two patients before getting bored, or else mixed up the growing pile of consent forms and releases and logs so thoroughly that we would have to throw out all their work. It had been a year and a half since the study had started, and we had 40 good data points.

The good news was that I was back in town and I could go back to screening patients myself again. Also, we had some particularly enthusiastic newbies who seemed really interested in helping out and getting things right. Over the next three months, our sample size shot up, first to 50, then to 60, finally to 70. Our goal of 100 was almost in sight. The worst was finally behind me, I hoped. Like an idiot.

November 2016

I got an email saying our study was going to be audited.

It was nothing personal. Some higher-ups in the nationwide hospital system had decided to audit every study in our hospital. We were to gather all our records, submit them to the auditor, and hope for the best.

Dr. W, who was obsessive-compulsive at the best of times, became unbearable. We got into late-night fights over the number of dividers in the study binder. We hunted down every piece of paper that had ever been associated with anyone involved in the study in any way, and almost came to blows over how to organize it. I started working really late. My girlfriend began to doubt I actually existed.

The worst part was all the stuff the newbies had done. Some of them would have the consent sheets numbered in the upper left-hand corner instead of the upper right-hand corner. Others would have written the patient’s name down on the Results Log instead of the Secret Patient Log right next to it. One even wrote something in green pen on a formal study document. It was hopeless. Finally we just decided to throw away all their data and pretend it had never existed.

With that decision made, our work actually started to look pretty good. As bad as it was working for an obsessive-compulsive boss in an insane bureaucracy, at least it had the advantage that – when nitpicking push came to ridiculous shove – you were going to be super-ready to be audited. I hoped. Like an idiot.

December 2016

The auditor found twenty-seven infractions.

She was very apologetic about it. She said that was actually a pretty good number of infractions for a study this size, that we were actually doing pretty well compared to a lot of the studies she’d seen. She said she absolutely wasn’t going to shut us down, she wasn’t even going to censure us. She just wanted us to make twenty-seven changes to our study and get IRB approval for each of them.

I kept the audit report as a souvenir. I have it in front of me now. Here’s an example infraction:

The data and safety monitoring plan consists of ‘the Principal Investigator will randomly check data integrity’. This is a prospective study with a vulnerable group (mental illness, likely to have diminished capacity, likely to be low income) and, as such, would warrant a more rigorous monitoring plan than what is stated above. In addition to the above, a more adequate plan for this study would also include review of the protocol at regular intervals, on-going checking of any participant complaints or difficulties with the study, monitoring that the approved data variables are the only ones being collected, regular study team meetings to discuss progress and any deviations or unexpected problems. Team meetings help to assure participant protections, adherence to the protocol. Having an adequate monitoring plan is a federal requirement for the approval of a study. See Regulation 45 CFR 46.111 Criteria For IRB Approval Of Research. IRB Policy: PI Qualifications And Responsibility In Conducting Research. Please revise the protocol via a protocol revision request form. Recommend that periodic meetings with the research team occur and be documented.

Among my favorite other infractions:

1. The protocol said we would stop giving the screening exam to patients if they became violent, but failed to rigorously define “violent”.

2. We still weren’t educating our patients enough about “Alternatives To Participating In This Study”. The auditor agreed that the only alternative was “not participating in this study”, but said that we had to tell every patient that, then document that we’d done so.

3. The consent forms were still getting signed in pencil. We are never going to live this one down. If I live to be a hundred, representatives from the IRB are going to break into my deathbed room and shout “YOU LET PEOPLE SIGN CONSENT FORMS IN PENCIL, HOW CAN YOU JUSTIFY THAT?!”

4. The woman in the corner office who kept insisting everybody take the Pre-Study Training…hadn’t taken the Pre-Study Training, and was therefore unqualified to be our liaison with the IRB. I swear I am not making this up.

Faced with submitting twenty-seven new pieces of paperwork to correct our twenty-seven infractions, Dr. W and I gave up. We shredded the patient data and the Secret Patient Log. We told all the newbies they could give up and go home. We submitted the Project Closure Form to the woman in the corner office (who as far as I know still hasn’t completed her Pre-Study Training). We told the IRB that they had won, fair and square; we surrendered unconditionally.

They didn’t seem the least bit surprised.

August 2017

I’ve been sitting on this story for a year. I thought it was unwise to publish it while I worked for the hospital in question. I still think it’s a great hospital, that it delivers top-notch care, that it has amazing doctors, that it has a really good residency program, and even that the Research Department did everything it could to help me given the legal and regulatory constraints. I don’t want this to reflect badly on them in any way. I just thought it was wise to wait a year.

During that year, Dr. W and I worked together on two less ambitious studies, carefully designed not to require any contact with the IRB. One was a case report, the other used publicly available data.

They won 1st and 2nd prize at a regional research competition. I got some nice certificates for my wall and a little prize money. I went on to present one of them at the national meeting of the American Psychiatric Association, a friend helped me write it up formally, and it was recently accepted for publication by a medium-tier journal.

I say this not to boast, but to protest that I’m not as much of a loser as my story probably makes me sound. I’m capable of doing research, I think I have something to contribute to Science. I still think the bipolar screening test is inappropriate for inpatient diagnosis, and I still think that patients are being harmed by people’s reliance on it. I still think somebody should look into it and publish the results.

I’m just saying it’s not going to be me. I am done with research. People keep asking me “You seem really into science, why don’t you become a researcher?” Well…

I feel like a study that realistically could have been done by one person in a couple of hours got dragged out into hundreds of hours of paperwork hell for an entire team of miserable doctors. I think its scientific integrity was screwed up by stupid requirements like the one about breaking blinding, and the patients involved were put through unnecessary trouble by being forced to sign endless consent forms screaming to them about nonexistent risks.

I feel like I was dragged almost to the point of needing to be in a psychiatric hospital myself, while my colleagues who just used the bipolar screening test – without making the mistake of trying to check if it works – continue to do so without anybody questioning them or giving them the slightest bit of aggravation.

I feel like some scientists do amazingly crappy studies that couldn’t possibly prove anything, but get away with it because they have a well-funded team of clerks and secretaries who handle the paperwork for them. And that I, who was trying to do everything right, got ground down with so many pointless security-theater-style regulations that I’m never going to be able to do the research I would need to show they’re wrong.

In the past year or so, I’ve been gratified to learn some other people are thinking along the same lines. Somebody linked me to The Censor’s Hand, a book by a law/medicine professor at the University of Michigan. A summary from a review:

Schneider opens by trying to tally the benefits of IRB review. “Surprisingly,” he writes, a careful review of the literature suggests that “research is not especially dangerous. Some biomedical research can be risky, but much of it requires no physical contact with patients and most contact cannot cause serious injury. Ill patients are, if anything, safer in than out of research.” As for social-science research, “its risks are trivial compared with daily risks like going online or on a date.”

Since the upsides of IRB review are likely to be modest, Schneider argues, it’s critical to ask hard questions about the system’s costs. And those costs are serious. To a lawyer’s eyes, IRBs are strangely unaccountable. They don’t have to offer reasons for their decisions, their decisions can’t be appealed, and they’re barely supervised at the federal level. That lack of accountability, combined with the gauzy ethical principles that govern IRB deliberations, is a recipe for capriciousness. Indeed, in Schneider’s estimation, IRBs wield coercive government power—the power to censor university research—without providing due process of law.

And they’re not shy about wielding that power. Over time, IRB review has grown more and more intrusive. Not only do IRBs waste thousands of researcher hours on paperwork and elaborate consent forms that most study participants will never understand. Of greater concern, they also superintend research methods to minimize perceived risks. Yet IRB members often aren’t experts in the fields they oversee. Indeed, some know little or nothing about research methods at all.

IRBs thus delay, distort, and stifle research, especially research on vulnerable subgroups that may benefit most from it. It’s hard to be precise about those costs, but they’re high: after canvassing the research, Schneider concludes that “IRB regulation annually costs thousands of lives that could have been saved, unmeasurable suffering that could have been softened, and uncountable social ills that could have been ameliorated.”

This view seems to be growing more popular lately, and has gotten support from high-profile academics like Richard Nisbett and Steven Pinker.

And there’s been some recent reform, maybe. The federal Office for Human Research Protections made a vague statement that perhaps studies that obviously aren’t going to hurt anybody might not need the full IRB treatment. There’s still a lot of debate about how this will be enforced and whether it’s going to lead to any real-life changes. But I’m glad people are starting to think more about these things.

(I’m also glad people are starting to agree that getting rid of a little oversight for the lowest-risk studies is a good compromise, and that we don’t have to start with anything more radical.)

I sometimes worry that people misunderstand the case against bureaucracy. People imagine it’s Big Business complaining about the regulations preventing them from steamrolling over everyone else. That hasn’t been my experience. Big Business – heck, Big Anything – loves bureaucracy. They can hire a team of clerks and secretaries and middle managers to fill out all the necessary forms, and the rest of the company can be on their merry way. It’s everyone else who suffers. The amateurs, the entrepreneurs, the hobbyists, the people doing something as a labor of love. Wal-Mart is going to keep selling groceries no matter how much paperwork and inspections it takes; the poor immigrant family with the backyard vegetable garden might not.

Bureaucracy in science does the same thing: it limits the field to big institutional actors with vested interests. No amount of hassle is going to prevent the Pfizer-Merck-Novartis Corporation from doing whatever study will raise their bottom line. But enough hassle will prevent a random psychiatrist at a small community hospital from pursuing his pet theory about bipolar diagnosis. The more hurdles we put up, the more the scientific conversation skews in favor of Pfizer-Merck-Novartis. And the less likely we are to hear little stuff, dissenting voices, and things that don’t make anybody any money.

I’m not just talking about IRBs here. I could write a book about this. There are so many privacy and confidentiality restrictions around the most harmless of datasets that research teams won’t share data with one another (let alone with unaffiliated citizen scientists) lest they break some arcane regulation or other. Closed access journals require people to pay thousands of dollars in subscription fees before they’re allowed to read the scientific literature; open-access journals just shift the burden by requiring scientists to pay thousands of dollars to publish their research. Big research institutions have whole departments to deal with these kinds of problems; unaffiliated people who just want to look into things on their own are out of luck.

And this is happening at the same time we’re becoming increasingly aware of the shortcomings of big-name research. Half of psychology studies fail replication; my own field of psychiatry is even worse. And citizen-scientists and science bloggers are playing a big part in debunking bad research: here I’m thinking especially of statistics bloggers like Andrew Gelman and Daniel Lakens, but there are all sorts of people in this category. And both Gelman and Lakens are PhDs with institutional affiliations – “citizen science” doesn’t mean random cavemen who don’t understand the field – but they’re both operating outside their day job, trying to contribute a few hours per project instead of a few years. I know many more people like them – smart, highly-qualified, but maybe not going to hire a team of paper-pushers and spend thousands of dollars in fees in order to say what they have to say. Even now these people are doing great work – but I can’t help but feel like more is possible.

IRB overreach is a small part of the problem. But it’s the part which sunk my bipolar study, a study I really cared about. I’m excited that there’s finally more of a national conversation about this kind of thing, and hopeful that further changes will make scientific efforts easier and more rewarding for the next generation of doctors.


333 Responses to My IRB Nightmare

  1. Randy M says:

    I’m definitely not saying that these were the only three issues the IRB sprung on Dr. W and me. I’m saying these are a representative sample. I’m saying I spent several weeks relaying increasingly annoyed emails and memos from myself to Dr. W to the IRB to the lady in the corner office to the IRB again. I began to come home later in the evening. My relationships suffered. I started having dreams about being attacked by giant consent forms filled out in pencil.

    So, it was a riskier study than you thought?
    (Love these kinds of posts, by the way)

    • VivaLaPanda says:

      To start your study you have to fill out a Psychological Risks to Researching Individuals form that indicates you are aware of the harm possibly incurred by interacting with the IRB and that you accept all psychological liability incurred by you or your associates during the process of wading through the hellish morass created by the IRB. Please sign in pen only.

  2. wfro says:

    Tiny correction: despite being a professor at both University of Michigan Medical School and University of Michigan Law School, Carl Schneider appears to only have a JD and not both an MD and a JD.

    • Scott Alexander says:

      Thanks, you’re right. I saw he was a Professor of Internal Medicine and assumed that meant he was a doctor. I’m still not sure how you get that job without being one, but whatever. Corrected.

      • Matt says:

        Based on your story, it appears that doctors need quite a bit of instruction in law.

        I’m just an engineer who volunteers for a local cave rescue organization. As part of that, I took a two-weekend Medical First Responder course (CPR and first aid). About a third of the material, in my estimation, was legal. HIPAA, Good Samaritan, patient abandonment, etc.

        After we ‘learned’ how to measure someone’s blood pressure, the instructor told the class that we (about 30 of us) could try it out, on him. Not on each other, because of HIPAA, he said. Rather than form a queue of 30 people to take turns taking his blood pressure, about 3 people did it and the rest of us did not.

        • Schmendrick says:

          Law student here…health law is a massive and terrifying field. If you ever want to send a healthcare lawyer into fits, just mention ERISA…

          • Scott Alexander says:

            I had to take a Psychiatry And The Law class. I told my attending I didn’t understand ERISA, and he brushed it off with “nobody understands ERISA”.

          • sighthndman says:

            @ Scott Alexander:

            1. Fiduciary liability (and a separate trust). [1]
            2. Employer control (and they write the documents). [2]
            3. Ha ha. I can recover the assets, and I don’t “owe” you anything.

            Hey! Whatever happened to employee loyalty? I don’t understand.

            [1] But there are rules.
            [2] Ditto.

        • Chris Hibbert says:

          At Google, I am a member of the Earthquake Response team and the Building Evacuation team. (Otherwise, just an ordinary software developer.) They give us medical training as first responders, and we were allowed to measure each other’s blood pressure, as well as learning to check for breathing and perfusion and practicing on one another.

          The training includes triage, first aid, evacuation, and structural evaluation (for collapsed and damaged buildings). It’s fun, and I hope I never have to use it. OTOH, I live and work in California, so the big one is coming.

          • Kir says:

            This actually relates to Scott’s point about increased regulations not hurting the massive corporations as much as it hurts the little guy.

            At Google, a legal team has already gone over their responsibilities in the event of an injury at a training activity with a fine-tooth comb. Your health insurance is already “settled”, and someone is probably giving them a discount for having this training/structure.

            None of this applies to Matt’s local community organization.

      • spandrel says:

        It happens. I’m a Professor of Cardiology at an Ivy league medical school, though my PhD is in mathematical physics. Never took a clinical course, nor ever saw a patient.

  3. jw says:

    We will know the Bureaucracy has been perfected when no one will be able to get permission to do anything….. we’re almost there…. 😉

    • Antistotle says:

      We will know the Bureaucracy has been perfected when they can give you the form you are required to fill out before tearing the bureaucracy down. And an ISO 9001 process for the workflow for having that document denied (because, as you note, nothing gets *approved*).

    • Error says:

      8.2.5

      PERMISSION_GRUDGINGLY_GRANTED

      THIS RESPONSE IS NOT SUPPORTED IN THIS VERSION OF THE PROTOCOL. The Working Group for the Bureaucratic Protocol agreed that it is highly unlikely that a REVIEWER will ever use this response when PERMISSION_DENIED is available. It is only included as an explanation to implementors who do not fully understand how bureaucracy works.

      In the unlikely event that a REVIEWER sends a PERMISSION_GRUDGINGLY_GRANTED response as a nonstandard extension, it MAY later respond to requests from the same client with PERMISSION_REVOKED, whereupon the client MUST terminate the project session, and MAY cry.

      (with apologies to RFC 2795)

  4. Jacob says:

    Does any person on an IRB have any incentive to approve a single research project in their entire lives? You know who got actual research published? Nazis.

  5. Anon. says:

    So what’s going on exactly? Do IRBs simply attract a particular kind of person, or do they face some sort of incentives that make them act this way?

    • Charles F says:

      If it is a matter of the kind of people they attract, could somebody who doesn’t want to do research themselves but does like navigating regulations help research happen by working for an IRB and being more lenient/helpful? Or would that just lead to the next level up making everything even harder?

    • jdaviestx says:

      As a computer guy, this sounds a lot like a case where somebody was dealing with some unwieldy paperwork and decided to computerize it to save time and money. And nothing sets you down the “one size fits all” path like hiring some computer people who understand computers very well but don’t actually understand the business process at all (and who you sure as hell aren’t going to pay to spend time learning it because computer people are expensive) to “automate” a process that wasn’t very well thought through in the first place. And once the computer programs are in place, the people running them have no choice but to play by the arbitrary rules set up by the computer programmers who half understood what the “subject matter experts” half explained in three 30 minute “kickoff” meetings.

      • Schmendrick says:

        Where’s James C. Scott when you need him? This has “intelligibility problem” written all over it.

      • “one size fits all”

        Refusing pencil signatures is pretty standard, because of the possibility that they can be erased and overwritten.

        • CatCube says:

          On that one, I’d more blame the hospital administrator side who wouldn’t grant a variance to the “no pens” rule. You’re going to tell me that it’s not possible to implement controls to make sure that the one lone pen used for signing the paper isn’t left with the patient?

        • random832 says:

          Refusing pencil signatures is pretty standard, because of the possibility that they can be erased and overwritten.

          For what purpose? It makes more sense to refuse pencil for the *rest* of the form, if there are any portions that need to be filled out, but the signature is the one part it shouldn’t matter for – the person who has custody of the form has no incentive to erase a valid signature, and no more ability to forge a new one than to do so on an already blank form.

        • vV_Vv says:

          Non-erasable (well, difficult to erase) pencils exist.

          • po8crg says:

            They’re called copying pencils, and you can buy a box on Amazon.

            I’d have been very tempted to just drop a few bucks of my own money, get copying pencils, and see if the IRB would approve them.

    • Andrew says:

      Probably a little of column A, a little of column B.

      I imagine you’ve got the standard disincentive for doing anything that the FDA and other regulatory agencies face – anything you approve can backfire, anything you deny just quietly dies with no repercussions for you.

    • nikitaborisov says:

      I’ve often heard it said that the IRB’s primary incentive is to protect the institution, whether from a lawsuit that might result from a study or from an audit that results in 27 demerits for a risk-free study.

      That said, being at a large research university, my experience with IRBs (for studies that were likewise minimal risk) has been a bit more positive; there was some bureaucracy, but it wasn’t anywhere near as painful as what Scott describes, and the IRB did offer helpful suggestions. Given that research is a big part of our mission (and that we get a ton of funding for it), I would guess our IRB has more balanced incentives than a small Midwestern hospital’s. (The fastest I ever got an approval was when federal funding for a project was recommended contingent on IRB approval.)

    • ManyCookies says:

      It seems like risk/liability aversion first and foremost. I’m reminded of CatCube’s post from a few Open Threads ago: “Well, the field of medicine did some really really bad experiments from lack of ethics, we have to be super sure we don’t repeat that ever again”. And then the boards and regulators had 100 years of bad medical experiments they had to be super sure we never repeated ever again.

    • . says:

      [jettisons Heinlein’s law with reckless – but dashing – abandon] There could be an incentive to prevent doctors from wasting time by doing research, when they could be talking to patients and more directly improving their metrics.

    • Walter says:

      Goons like that tend to be mostly motivated by “it’s my job.”

      Not in an evil way or anything, just in a ‘there are a lot of hours in a day’ kind of way. Like, say you are an HR person. Your job is to make sure that the lads in the labs jump through all the hoops. You have 4 hours of meetings a day. The rest of the time just kind of…is there. So if anyone is dumb enough to communicate with you, or make eye contact, or whatever, you are going to make absolutely sure they follow everything you can possibly throw at them.

      • TeMPOraL says:

        This goes contrary to my experience. I believe most people wouldn’t subject you to unnecessary hell just because of boredom – after all, there’s plenty of cats on the Internet to fill the downtime of office work. I believe you get the book thrown at you when:

        – they strongly dislike you, or your project,
        – they have something to gain (like a raise or promotion) from making you jump through extra hoops, or
        – your project puts them at risk and they need to cover their arses.

        • Murphy says:

          I think you’re oversimplifying.

          There’s a certain personality type who wants to “contribute”.

          Think little middle managers who add pointless requirements and changes to projects; they’re “contributing”. When it comes time for their annual review they can point to all the projects they’ve “contributed” to. Even if literally the only thing they’ve done all year is subtly slow down or make slightly worse everything they’ve touched, they’ve ended up with an impressively long list.

          They honestly convince themselves they’re helping. People don’t like to believe they’re useless or that their organization would be better off without them.

          So they work hard, genuinely they work hard to do everything in their job description to prove to themselves and others that they’re “contributing”.

          And if their list of tasks includes “make sure the researcher has listed the risks of the study”, they’re going to damned well do that even if there are no realistic risks, even if they have to invent risks. Because otherwise they wouldn’t be “contributing” and wouldn’t be proving their worth.

          It’s not just IRBs; you get the same pattern with many middle managers (also known as the useless fraction of the population).

          Reviews to establish the “environmental impact” of a new file server, where the sandwiches eaten during the meeting probably have a larger impact than the server ever will, or “contributions” of IT security demands from people who only have a hazy idea of what encryption is.

          Turkeys don’t vote for Christmas and middle managers find ways to look busy and feel like they’re worth something in any organization.

          I think Walter isn’t so far from the truth: encounter one of these beasts who’s been searching for things to “contribute” to and worrying about their own value, and you’ll quickly find yourself landed with a long list of useless “contributions”.

          • Walter says:

            I tend to agree with Murphy. Re: Temporal’s objection, there is a personality type who doesn’t just surf the net at work. Like, divide the world into strivers and slackers. This is a striver in a slacker job.

    • NickK says:

      This could be Pournelle’s Iron Law Of Bureaucracy:

      …in any bureaucratic organization there will be two kinds of people: those who work to further the actual goals of the organization, and those who work for the organization itself. Examples in education would be teachers who work and sacrifice to teach children, vs. union representatives who work to protect any teacher including the most incompetent. The Iron Law states that in all cases, the second type of person will always gain control of the organization, and will always write the rules under which the organization functions.

    • vV_Vv says:

      do they face some sort of incentives that make them act this way?

      I think so. They have to justify their jobs: if they rarely find any violations, then somebody may start to question whether their job is really needed, while if they find 27 violations in a risk-free study, which they can then tally into annual reports, they can argue that they are doing a very important job preventing all these Dr. Mengeles from abusing their subjects. In fact, they can argue that they have so much work that they need to hire more staff, who will then need somebody senior to train and manage them, and thus the organization entropically expands to fill all the available space, like a gas in a container.

      The walls of the container are the research institutions which eventually need to get work done and have the financial power and political clout to apply enough pressure to contain the bureaucratic expansion, while two random doctors at a community hospital who want to play the science game get blown away like dust in the wind.

      • sighthndman says:

        This is in fact why you shouldn’t give your work to more than two or three people to review. Everyone who reviews your work must find something to correct in order to justify their review. (They’ll even correct your use of the singular “they” even though they know it’s a political choice you made and that you’re going to stick to it.) It’s a rare person indeed who can say “I wouldn’t do it that way but your way is good too”.

  6. alwhite says:

    This is why we can’t have nice things…

  7. fishchisel says:

    Don’t you think it’s a problem in our society, how difficult it is to argue for taking more risks?

    I’m in construction. Our industry is more and more expensive, and more and more bureaucratic, every year. I want to argue, “Hey, maybe it’s fine if the occasional labourer impales himself on some rebar and dies. Is it really worth spending (literally) half our time carrying out risk assessments and safety audits?” But saying such a thing to anyone senior would get me fired. Contracts are won based on safety records and no one is willing to take on any risk.

    I’m particularly interested in the use of subcontractors. Subbies tend to be smaller companies (perhaps 50 employees). They’re often owned and run by a single owner who has come up through the ranks and is unlikely to have gone to university, or even completed high school. They are much more casual about procedure (one of my jobs is to hassle them about this), but they’re also *much* faster for it. I suspect that without the subcontracting tradition in our industry (the subbies take on all the risk, and get all the work done) we wouldn’t be able to build anything at all.

    • Randy M says:

      My last job had a motto: “Nothing we do is worth getting hurt for.” On the one hand, I agree; it was just a paycheck to me, I didn’t want to risk my life for the shareholders, and they didn’t want to pay my insurance. On the other hand, some day I’d like to do something worth getting hurt for.

      • TeMPOraL says:

        At least they were honest about it. I wonder how many other jobs this motto could apply to, and if people realize the implications.

        I’m in software, and most of the job openings I see are squarely in the “nothing we do is really worth even the time we spend on it” group. It’s all just shifting money around or monetizing people’s attention.

        I too, one of these days, would like to do something worth getting hurt for.

        • Nancy Lebovitz says:

          There’s a C.S. Lewis essay that I can’t find easily which was about the difficulty of finding real work, work that actually made the world better, so the problem has been around for at least most of a century.

          One of my friends had the lower standard of writing programs that people actually use, though I think he wouldn’t have chosen to program for something that was actually evil. In any case, he eventually got a job at Google that met his standard.

          • Aron Wall says:

            “Good Work and Good Works”, found in The World’s Last Night and Other Essays.

            (Although, if you want nearly all of his non-academic essays, you’re better off buying C. S. Lewis Essay Collection and Other Short Pieces.)

            From the essay:

            I often see a hoarding which bears a notice to the effect that thousands look at this space and your firm ought to hire it for an advertisement of its wares. Consider by how many stages this is separated from ‘making that which is good.’ A carpenter has made this hoarding; that, in itself, has no use. Printers and paper-makers have worked to produce the notice—worthless until someone hires the space—worthless to him until he pastes on it another notice, still worthless to him unless it persuades someone else to buy his goods; which themselves may well be ugly, useless, and pernicious luxuries that no mortal would have bought unless the advertisement, by its sexy or snobbish incantations, had conjured up in him a factitious desire for them. At every stage of the process, work is being done whose sole value lies in the money it brings. Such would seem to be the inevitable result of a society which depends predominantly on buying and selling. In a rational world, things would be made because they were wanted; in the actual world, wants have to be created in order that people may receive money for making the things.

          • Nancy Lebovitz says:

            Aron Wall:

            Thank you.

            The World’s Last Night search on [good work]

            The title essay is interesting in regards to game theory.

            Here’s the passage I was thinking of:

            “And of course we shall keep our eyes skinned for any chance of escape. If we have any “choice of a career” (but has one man in a thousand any such thing?) we shall be after the sane jobs like greyhounds and stick there like limpets. We shall try, if we get the chance, to earn our living by doing well what would be worth doing even if we had not our living to earn. A considerable mortification of our avarice may be necessary. It is usually the insane jobs that lead to big money; they are often also the least laborious.”

      • Murphy says:

        I think there’s a certain number of micromorts per month I’m willing to just accept, such that things which make my job less pleasant or more frustrating for the sake of reducing that micromort count further just aren’t worth it.

        Much like background radiation.

        I don’t want plutonium in my drinking water, but I’m also not going to start wearing lead-lined coats to protect myself from natural background radiation.

        There’s a certain reasonable level of risk such that investing in reducing it further yields dramatically diminishing returns for everyone involved and actually makes employees more miserable.

        I’m willing to trade a small risk of death for a reduction in misery and frustration.

        When I worked in a factory I loved that the machines had good safety sensors and everything was well marked, such that there was no risk of me losing body parts. On the other hand, it actively made my life worse to be surrounded by “don’t be an april fool, hold the hand rail” posters next to the 3 steps outside the door, and being chided if you didn’t keep one hand on the railings at all times when walking up steps, and other endless trivialities.

        • shar says:

          Having never heard the term “micromort” before, I thought you were saying there were a certain number of “little deaths” you were willing to accept per month.

      • drachefly says:

        Nice. I’ve done many unsafe things, and I have been injured, but I have never been injured doing an unsafe thing.

      • onyomi says:

        Awesome. And another point in favor of not trying to offload personal accountability to an abstract system.

      • Noumenon72 says:

        Once he realized safety was third, and received the benefit of knowing he had to be super careful, what could he do? He “grabbed onto things everywhere he went” and acted cautious. First, that’s slow and probably ineffective without knowing all the risks. Second, as soon as I realized I was responsible for my own safety, I would want to design a guardrail in the dangerous spot, or learn from statistics about which conditions lead to man overboard… and for everyone else, safety would be back to being other people’s responsibility, except without the institutional norms surrounding it.

        In my opinion as a ten-year factory employee, if you have to be careful, there’s an accident waiting to happen. Personal responsibility is only good for catching things that slip through the Swiss cheese model.

        • The Nybbler says:

          In my opinion as a ten-year factory employee, if you have to be careful, there’s an accident waiting to happen.

          If you have to be careful _all the time_, there’s an accident waiting to happen. If your workers are running both ways across narrow catwalks with no guardrail above chemical tanks, you’re going to be producing a Joker a week, at least. On the other hand, if there’s one cut you have to make with your saw that’s a hell of a lot easier when you remove the guard, being careful might be the right answer.

          This is more applicable to a small workshop than a factory; the nature of mass production means that any routine need for care will be effectively “all the time”.

    • MoebiusStreet says:

      Interesting thought. Do you think this ties into the “Cost Disease” thing?

      FWIW, I too notice that as a society we’ve been really loading up on the risk aversion thing. As common today as saying “have a nice day” is “drive safe” or “have a safe trip”. This infuriates me. I want somebody to wish that I have a rich and fulfilling experience, not one in which I’m just able to successfully navigate the sharp corners of the world.

      • analytic_wheelbarrow says:

        Yep, the whole “have a safe flight back to ” thing drives me crazy. And of course, they *should* be saying “have a safe ride to the airport”… the flight itself is ridiculously safe.

      • sighthndman says:

        Safety, security, and (arguably) beauty are indeed far more prevalent than they ever have been. Equality, of course, is not, since for most of human history the great mass of humans was subject to the same population-leveling effects, starvation and disease, both having the same root economic causes. That’s pretty equal.

    • onyomi says:

      Sometimes I wonder if third world growth rates aren’t because they have more low-hanging fruit, but because they haven’t yet been able to afford to build up the massive structure of “making sure nothing ever goes wrong” systems and officials. The Empire State Building was completed in 410 days at a cost of 41 million dollars (600 million in 2017 dollars). We can’t even install a new drain in my city in less than 410 days.

      • rlms says:

        Five people died in building the Empire State Building. If we take the figure of 3400 workers as accurate, that gives the likelihood of death for a random worker as 0.15%. In comparison, the probability of a random American soldier dying in Iraq was 0.3%. Anyone with a dogmatic objection to regulation should talk to someone who worked in construction decades ago (assuming the pace of regulation where you are is similar to that of the UK). Doing so might not persuade you that current amounts of regulation are worth it (I don’t think it persuades me), but it will help you understand what problems regulation is trying to prevent.
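
        A quick back-of-the-envelope check of that arithmetic, as a Python sketch; the only inputs are the figures cited above:

        # Check of the fatality rates cited in the comment above.
        deaths, workers = 5, 3400  # Empire State Building figures as cited
        esb_rate = deaths / workers
        iraq_rate = 0.003  # the 0.3% figure cited for a soldier in Iraq

        print(f"Empire State Building: {esb_rate:.2%} per worker")  # ~0.15%
        print(f"Fraction of the Iraq risk: {esb_rate / iraq_rate:.2f}")  # ~0.49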

        • onyomi says:

          I would not go so far as to say 5 deaths is an “acceptable” number, as I’d guess, offhand, that slightly better safety procedures of some kind could probably have prevented some, if not all, of them. However, I’m also not of the “if even one life is saved it is worth infinity dollars” school of thought, because money spent represents resources that could have been devoted elsewhere. That is, if the amount of extra money necessary in compliance costs and growth foregone is enough that it could have paid for 1000 cancer treatments or brought fresh water to 100,000 third worlders, then it isn’t obvious to me it was worth it.

          • Jiro says:

            if the amount of extra money necessary in compliance costs and growth foregone is enough that it could have paid for 1000 cancer treatments or brought fresh water to 100,000 third worlders, then it isn’t obvious to me it was worth it.

            Allowing people to die and compensating by having the money pay for cancer treatments or fresh water for the third world would be an ethical offset. And we just had an article about those.

          • rlms says:

            I don’t subscribe to the “lives have infinite value” school either. The way I think a lot of people look at it is to consider what an acceptable level of risk is. There aren’t any situations where the only acceptable level of risk is zero, but it seems to me that the risk to an Empire State builder should be a lot less than half that of a deployed soldier.

          • onyomi says:

            @Jiro

            How is it an ethical offset to choose between risking the deaths of 5 people or 1000 people? It’s not even a trolley problem, because the trolley doesn’t start off on one track or the other (though I guess you could consider the current regulatory regime the “default”).

          • Jiro says:

            Deciding that it is okay to do things which cause loss of life, on the grounds that you can then use the savings to save other lives, is an ethical offset. Just because the second number is larger doesn’t mean it’s not an ethical offset.

          • onyomi says:

            @Jiro

            By your definition, any moral decision with a downside of any kind counts as an “ethical offset,” because you’re choosing one option over another. That doesn’t mean the positives of the better option are “offsetting” the negatives of the worse option.

            “Ethical offsetting” as described in the last post was intentionally doing something unethical but then doing something else good to make up for it. For example, eating one animal, but donating money to reduce animal cruelty. The problem with this is that there’s a third, even better option, assuming you think killing animals is bad: not eating the animal and donating money to reduce animal cruelty. The problem is precisely that there’s no need to make a choice between two less-than-perfect alternatives, which is what makes it different from e.g. a trolley problem.

            In the Empire State construction example there is no third option because we’re talking about a limited pool of resources. Every extra resource spent on construction safety is a resource unavailable for e.g. cancer treatment. Which doesn’t mean nothing should be spent on construction safety, only that there is a point past which more resources devoted to construction safety would have been better spent elsewhere.

          • Jiro says:

            “Ethical offsetting” as described in the last post was intentionally doing something unethical but then doing something else good to make up for it.

            People who use ethical offsets claim that the action becomes ethical as a result of the offset.

            So the definition is really “intentionally doing something that would be unethical if it wasn’t for the offset”.

            Clearly having more people die than necessary, when there’s *no* savings as a result, is unethical. Likewise, having more people die than necessary, and pocketing the savings, would be unethical.

            So you must believe that the savings, and the fact that the savings can be used for cancer treatments, changes it from unethical to ethical. (Furthermore, it would be pointless to even argue this if you didn’t think the cancer treatments change it from unethical to ethical.)

            So it’s unethical without the cancer treatments and ethical with them. That fits the definition of ethical offsets.

          • Incurian says:

            In comparison, the probability of a random American soldier dying in Iraq was 0.3%… There aren’t any situations where the only acceptable level of risk is zero, but it seems to me that the risk to an Empire State builder should be a lot less than half that of a deployed soldier.

            Iraq was pretty safe for US soldiers compared to most of the other wars we’ve been in, and so isn’t a fair comparison point if you’re trying to say “construction is nearly as dangerous as war.”

            It would probably be fairer to say something like, “Wow, the Iraq war was relatively safe [by the standards of American wars], it wasn’t much more dangerous than construction!”

          • rlms says:

            @Incurian
            My understanding is that it was safe in the sense that modern wars with a professional army, modern technology, and presumably modern “health-and-safety guidelines” to reduce danger are safer than e.g. Vietnam, but not safe in comparison to other modern wars (if Wikipedia is accurate, building the Empire State Building was 7 times as risky as the Gulf War).

            I’m definitely not trying to say that construction is almost as dangerous as war nowadays; I like to think all the crazy regulations actually have some use! Page 5 here suggests I’m right about the UK at least: the annual work-related-death rate for construction workers is 0.0018%. That seems like a reasonable level to me.
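
            Unpacking those numbers the same way, as a Python sketch (the Gulf War rate below is implied by the “7 times” claim rather than stated directly; the rest are the figures quoted in this thread):

            # Derived comparisons from the figures quoted in this thread.
            esb_rate = 5 / 3400  # ~0.147%, from upthread
            gulf_rate = esb_rate / 7  # implied by "7 times as risky"
            uk_construction = 0.0018 / 100  # annual rate cited from the report

            print(f"Implied Gulf War risk: {gulf_rate:.3%}")  # ~0.021%
            print(f"ESB vs modern UK construction: "
                  f"{esb_rate / uk_construction:.0f}x")  # ~82x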

          • Incurian says:

            I guess it depends what wars you want to compare it to. Good point about comparing construction industries across countries. I wonder how their price and timeliness compare. If it’s favorable to the UK, I wonder what the cause is.

          • onyomi says:

            @Jiro

            So it’s unethical without the cancer treatments and ethical with them. That fits the definition of ethical offsets.

            No, my point does not depend on those who built the Empire State Building spending the money they saved on cancer treatments.

            It would be an ethical offset if someone intentionally cut corners on accepted safety standards of the time and pocketed the savings, but then donated an equal sum to charity, somehow calculating that he was “even,” morally speaking. That’s not what I’m talking about. I agree that that sort of moral calculation doesn’t work. Doing a good thing doesn’t “cancel out” doing a bad thing.

            Risk taking is not, by itself, unethical. Taking undue risks with others’ lives, especially if they don’t understand the risk they’re taking, is. So whoever was in charge of safety on the Empire State Building might have acted unethically, but the simple fact that five people died does not mean he did. Maybe that was the usual number of deaths at the time for a project of that scale. Maybe all usual procedures were followed and they were very unlucky.

            If the builders did not act unethically in the building, then spending their profits on caviar and yachts does not make their previous actions unethical, just as spending it on charity does not “offset” them if they did act unethically. That doesn’t mean, however, that spending maximal resources on safety was the ethical path. Because doing that (or, especially, mandating that) has costs to society which may or may not be greater than the gains.

            The point is that resources are fungible. Extra money mandated on construction safety is resources that can’t be spent elsewhere. Growth foregone because construction takes longer than it needs to is growth foregone. Making society poorer is an ethical cost that has to be taken into account (which is not to say all money spent on safety makes society poorer; after all, some productivity was lost with the lives of those five men).

            This is also not to say that individuals can’t engage in ethical offsetting but “society” can. It’s just to say that mandating maximal risk avoidance in any given case is not automatically the most ethical path, nor is the fact that someone died prima facie evidence that someone acted unethically.

          • Jiro says:

            Extra money mandated on construction safety is resources that can’t be spent elsewhere… Making society poorer is an ethical cost that has to be taken into account

            I would normally understand the phrase “take into account” to mean that the result of some analysis can change as a result of the thing which is being taken into account.

            But you seem to be denying that.

            If you don’t think that spending the resources on other things affects whether the original action is moral, what does it even mean to say that you’ve “taken [it] into account”?

    • marcus says:

      I hear you loud and clear. I know what it’s like to choose between a modest risk increase and a large increase in cost: “yes, I don’t have the exact piece recommended by the scaffolding company, but this substitute is just as good. Besides, if I don’t do this now it will turn an hour-long job into a two-week job, and I’m not willing to pack up, wait a week for a part to ship, reset my equipment, and cost myself hours and hours of extra time. Yes, there is a modest chance I’ll fall to my death because of my decision, but Christ, I’m just a lowly grunt anyway.”

      And other times it’s like: you want me to do what without a harness? No thanks. Idk. I guess in my mind it comes down to: are we allowing individuals to make a choice about risk without coercion? And how much or how little should we allow the marketplace to sort it out? Should someone be fired for not performing a task with a 5% risk of fatality, or should they be protected because 5% is way too fucking high?

      Edit: I guess a lot of what I’m saying is that not only is it inefficient for someone else to choose an acceptable level of risk for me, but it’s also questionable to think someone else can quantify that risk with greater accuracy than me. A lot of what we’re complaining about is guidelines turning into rules and the countless exceptions they miss.

    • aureamediocrit says:

      The way I see it, it’s not so much that we have a trade-off between risk and efficiency; it’s that we have already realized all the large gains in safety, while every new regulation has an infinite ceiling for time consumed. We’ll approach but never reach zero risk, and time consumed will always go up. Worse, new rules are often added after an incident but don’t address what originally caused the incident: a knee-jerk reaction to be seen doing something.
      My personal experience with this was with electrical safety in the navy. The navy used to have its own set of regulations that, while not as stringent as OSHA standards, had kept sailors safe; there had been no fatalities for decades. Then a supervisor got himself killed while supervising a pretty standard maintenance item. To be clear, he broke existing rules that he should have known and that would have kept him alive.
      The navy’s response was to decide that we needed our rules to be just like OSHA’s.
      Nothing about the new rules would have prevented the supervisor from killing himself, but the change did incur a huge monetary and time cost: time spent retraining everybody in the new rules, increased manpower and time doing maintenance, and money spent updating publications and purchasing new equipment.
      I have several examples like this, and I imagine things like this happen all the time in other industries. Nobody is willing to accept that you can’t remove the human factor from the equation, or, as a crusty old chief was fond of saying, “You can’t fix stupid.”

      • sighthndman says:

        I like that. I also notice that, par for the course, the response to breaking rules is not to discuss reasons for the rules, not to discuss or even mandate stricter enforcement, but to introduce more rules.

  8. Nicholas Wagner says:

    Is this post in response to that Thiel-funded herpes vaccine study that tried to do an end run around the FDA? Link

    • Scott Alexander says:

      Not directly, though probably reading that reminded me on some level that this subject existed and that I hadn’t written it up yet.

      My main point is that it’s stupid to have this level of scrutiny for a study with zero risk. The herpes study does have risks, so it’s a totally different question whether it needs scrutiny or not.

      • Witness says:

        As described, I think your study had net negative risk. It would have been amusing if you had put that in your “warning”.

        • Scott Alexander says:

          Doesn’t every study whose benefits outweigh its costs have net negative risk in expectation?

          • drachefly says:

            No, because the risk is the question about “what if it goes wrong”, so you can’t average in the cases where it worked. Here, the only difference is more screening, which may or may not be particularly helpful.

      • tmk says:

        Posted now, it is going to be taken as a commentary on that case. Especially since you quite often post things as commentary on a recent event without mentioning the event. It starts to feel like motte-and-bailey arguing, even if that was not the intention.

  9. TheWackademic says:

    I wonder whether or not your negative experience results in part from trying to do a study at a non-research focused hospital? For instance, I’m surprised that everyone didn’t have to take a “research ethics” class at the start of your program, and that your hospital didn’t have an office set up for storing confidential study information. My experience has been that this process is much more streamlined at major research universities, so maybe one axis to consider here is how the IRB process in particular hurts low or medium-resource institutions.

    • vV_Vv says:

      And I suppose that an established research institution is able to put pressure on the IRB any time they are being particularly asinine with their nitpicking.

      At a non-research institution, your management and co-workers are going to see you as a weird wannabe scientist who is looking for trouble, so they aren’t going to provide any help (e.g. no pens allowed, most attending doctors refusing to be PIs), and if the IRB shuts you down, they’ll probably think: f*ck you, you deserved it.

    • Toby Bartels says:

      That game was written by Douglas Adams (Hitchhiker’s Guide to the Galaxy, etc), by the way.

      • Sniffnoy says:

          Going by this article, almost none of the game was actually due to Douglas Adams, other than the general idea and the blood pressure meter. The puzzles and structure of the game were by various non-Douglas-Adams people, and the prose was mostly by Michael Bywater.

        (Comment edited after rereading the article)

        • Toby Bartels says:

            Interesting. I’m quite fond of that game, so maybe I’d better check out this Bywater guy, whom I haven’t otherwise heard of.

          (If you tell me that Adams didn’t write any of the first Dirk Gently book besides the original Doctor Who treatment, then I’ll have to re-evaluate my entire opinion of Adams.)

        • qwints says:

          Fascinating article.

  10. VirgilKurkjian says:

    Bizarre.

    FWIW, I’m a graduate student in the Social Sciences. Our IRBs have the same rules on paper, but we get around them by using generic versions of applications with the critical info swapped out, or by just ignoring them altogether. Though we don’t have to face audits, so…

    I’ve found that usually if you make one or two glaring errors in the application on purpose, the IRB will be happy to inform you of those and approve it when you correct them. They just want to feel powerful / like they’re making a difference, so if you oblige them they will usually let you through with no further hassle.

    I’m sorry this soured you on research. IRBs can vary widely; if you end up under a different one, you should consider trying again.

    • Murphy says:

      https://rachelbythebay.com/w/2013/06/05/duck/

      One of the animators for Battle Chess tacked a duck onto the queen sprite animations as sacrificial producer bait. Sure enough, the only change the project manager requested was “lose the duck”.

      • sinxoveretothex says:

        I feel like the comparison with the dog at the end is really weak. Dogs pee on stuff to mark territory, they wouldn’t pee on all trees if those were in a tight row, for example. The two situations don’t seem related at all.

        Or perhaps I’m just experiencing producer syndrome and telling her to lose the “duck” of her article…

  11. dahud says:

    I’ve had exactly one interaction with an IRB – in 6th grade. My science fair project involved studying the health risks of Communion as performed in the Episcopal church. (For those unfamiliar, a priest lifts a silver chalice of port wine to your lips, you take a small sip, and the priest wipes the site with a linen cloth and rotates the chalice.)

    Thing was, the science fair was being held by a Baptist University. The IRB was really not fond of the whole wine thing. They wanted me to use grape juice instead, in the Baptist fashion. I, as a minor, shouldn’t be allowed anywhere near the corrupting influence of the communion wine that I had partaken of last Sunday.

    Of course, the use of communion wine was essential to the study, so we reached a compromise. I would thoroughly document all the sample collection and preparation procedures, and let someone of age carry out the experiment while I waited in the hall. I didn’t mind, really. Less work for me in the end.

    (In case you were wondering: at least from a bacterial perspective, communion was perfectly safe. Between the alcohol, the silver chalice, and the cleaning procedures in the ceremony, the change in bacterial growth between the before and after samples was almost nil, and only barely above that of the control dish that stood open for the length of the ceremony.)

    • Douglas Knight says:

      Normally students are completely exempt from IRB. No journal publication, no problem.

      • dahud says:

        The IRB was a condition of this particular science fair. It was really more of an IRB-Lite, to show the kids a bit more of how science was done. Plus, it never hurts to catch that kid who wants to glue his cat to the ceiling fan or something.

  12. doubleunplussed says:

    When things are too hard to do legitimately in other contexts, you get black markets and other similar things. Too hard to access media you’d happily pay for? Piracy happens.

    I wonder if there is a place for anonymous research. You just do it without telling anyone and publish it anonymously. You take the precautions you deem necessary without being compelled by an ethics body and you document this in the publication, which would be published online in a torrenty or blockchainy way so that it could be accessed even though it officially violated ethics laws.

    I suspect there wouldn’t be much research done this way since it would not allow people to take credit for their work.

    I mean, what would happen if you just took this data, analysed it, and published a blog post here about it? How illegal would that be? I suppose you’d want to be completely anonymous, though.

    • Aapje says:

      You could make it available through Tor and call it Dark Science (as it is on the dark web).

      • The Big Red Scary says:

        I see. I missed this comment and suggested something similar below, but with the decidedly less cool name “medical study underground”.

    • Chrysophylax says:

      I suspect there wouldn’t be much research done this way since it would not allow people to take credit for their work.

      You couldn’t take credit for it in the eyes of organisations that follow the rules. But they aren’t the only people who care.

      A publication can be cryptographically signed in ways that let you prove that you are the person who signed it. (The standard procedure is that the signature is a combination of a message and some secret information, a “private key”. You can use publicly-available information, the “public key”, to verify the validity of the signature, proving it was made with the corresponding private key. You can then use the public key to encrypt a second message in a way that can only be decrypted with the private key. If Fred can read your message, Fred is very probably the author of the signed message.)
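
      A minimal sketch of that sign-then-prove procedure in Python, using the third-party cryptography package; the key size, messages, and nonce are hypothetical, and this is an illustration of the idea rather than a vetted protocol:

      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding

      # The anonymous author's keypair; only the public half is published.
      private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      public_key = private_key.public_key()

      paper = b"Anonymous study, v1"  # hypothetical publication text

      # Sign the paper: anyone holding the public key can verify that the
      # keyholder signed it, without learning who the keyholder is.
      pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH)
      signature = private_key.sign(paper, pss, hashes.SHA256())
      public_key.verify(signature, paper, pss, hashes.SHA256())  # raises if forged

      # Claiming credit later: a verifier encrypts a challenge to the public
      # key; only the private-key holder (the author) can decrypt it.
      oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                          algorithm=hashes.SHA256(), label=None)
      challenge = public_key.encrypt(b"nonce-1234", oaep)
      assert private_key.decrypt(challenge, oaep) == b"nonce-1234"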

      This lets you claim credit with people who are willing to play the game. You now have only three major problems: getting a big enough network of secret researchers to make it worthwhile; identifying other members of the network; and verifying the quality of anonymous work.

      Growing the network is tricky. There are a lot of people who’d like to be rid of the bureaucracy, but convincing them that it’s worth spending time on secret research is another matter. I think an important first step (if not a whole solution) is to show that you have a really good platform. It might help to have some other good justification for it – for example, avoiding the censorship of oppressive regimes.

      Identifying other members is fairly easy. Essentially, you have to use a procedure whereby nobody finds out who they’re talking to until everybody can blackmail everyone else. As a concrete example, two people with secret research can swap pass-phrases online, then use them in conversation in person. That’s a pretty solid way of showing that both sides are playing the game. It can then be verified by swapping messages based on things said in person.

      (Italian academics do something very much like this: they acquire reputations for being incompetent. This lets them make corrupt bargains because neither side can get a job any other way. This is also why criminals often have obvious tattoos: it makes it harder to get a legal job.)

      I can think of two ways to handle peer review. One is public comments with voting, letting people get good reputations and giving them an incentive to read other people’s stuff. The other is some kind of gas system, whereby you need to do a certain amount of reviewing to be allowed to publish more papers. You could also use both components.
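
      As a toy illustration of the “gas” variant (the class name and the three-reviews-per-publication price are made up for the sketch):

      class ReviewLedger:
          """Toy 'gas' accounting: publishing costs credits earned by reviewing."""

          PUBLISH_COST = 3  # hypothetical price: three reviews buy one paper

          def __init__(self):
              self.credits = {}  # pseudonymous key fingerprint -> review credits

          def record_review(self, reviewer: str) -> None:
              self.credits[reviewer] = self.credits.get(reviewer, 0) + 1

          def try_publish(self, author: str) -> bool:
              # Publication is allowed only once enough reviews are banked.
              if self.credits.get(author, 0) >= self.PUBLISH_COST:
                  self.credits[author] -= self.PUBLISH_COST
                  return True
              return False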

      • Essentially, you have to use a procedure whereby nobody finds out who they’re talking to until everybody can blackmail everyone else.

        Alternatively, nobody ever finds out the realspace identity of the person he is talking to. The reputation is linked to an online identity proved via digital signature.

        • doubleunplussed says:

          The downside I had in mind is that you couldn’t use your ‘dark science’ reputation to advance your ‘real world’ career. Maybe you could if there were ‘dark scientists’ in the hiring committees and you could prove your identity to them using digital signatures etc, but it would still necessitate people learning each others’ real world identities. Otherwise ‘dark science’ would have a funding problem! People might do science for the love of it, but it still costs money and they still need to eat.

          • 6jfvkd8lu7cc says:

            Well, if you have a dark-side recommendation letter from a respectable-in-the-dark person that evaluates your research favourably without going into details, proving this is a letter to you without revealing the corresponding identity is technically feasible.

            Of course, an attempt to use such a letter will disclose the fact of dark-side participation (at least to other dark-side members), but not detailed identity.

            We can only hope the situation doesn’t get bad enough for this approach to make sense…

    • benquo says:

      That seems like the obviously ethical thing to do in this case, unless the punishment would be extreme. Note what’s going on here, if we track the actual facts and not the label “Ethics”, which bears no resemblance to the contents. Doctors are doing a thing that in expectation harms the patients. Scott wanted to compile evidence to blow the whistle on them, and a board of officials gave him a prohibitively expensive set of requirements. So he tried to meet these requirements, and when he couldn’t, he gave up, intimidated into silence.

      Publishing this account is a good step towards the right outcome, and maybe compiling a lot of records like this before doing any whistleblowing you’re likely to be punished for is the best strategy here.

      • 6jfvkd8lu7cc says:

        I remember (maybe falsely) that during the Renaissance there were some books that were written, prohibited by censorship, stolen from the authors’ homes, and then published, or at least copied and disseminated. And then there were various allegations.

        Given that both malware and Wikileaks currently exist, a cyberpunk outcome would be for the stolen-and-published scenario to be literally true and to actually happen without the doctors’ knowledge.

      • carvenvisage says:

        Scott wanted to compile evidence to blow the whistle on them

        That is unnecessarily adversarial. The idea is to update the information people work from, not to crucify them for lacking independent judgement. If you want to turn out doctors en masse, it can’t be a hanging offence not to be an innovator.

    • nestorr says:

      I was thinking in terms of parallel construction, like law enforcement does. The NSA told us this guy’s a drug dealer, so let’s tail him until he gives us an excuse and we can “coincidentally” stop him for a broken taillight or whatever.

      Just do the study with the patients you have, and then copy the forms yourself, with pens and dotted i’s and crossed t’s.

  13. MawBTS says:

    I felt energized, well-rested, and optimistic that the bipolar screening study I had founded so long ago had been prospering in my absence. Obviously nothing remotely resembling this had happened.

    Wow, that must have been disheartening to come back to after your successful campaign invading Poland.

  14. apollocarmb says:

    do any of your colleagues know about your blog?

  15. jdaviestx says:

    Careful there, Scott – you might turn into a Libertarian!

  16. Markus Ramikin says:

    It is the considered opinion of a good friend of mine and myself (how’s that for authority!)…

    …that your Dr. W is a goddamned hero. And I’m sure you’ve told them that already. But if you somehow haven’t gotten around to it, you should let them know the Internets say so.

    Also, your writing grows ever more engaging.

  17. John Schilling says:

    You know who else is going to have working, scientifically proven psychological superweapons?

  18. Nabil ad Dajjal says:

    I’ve never done clinical research but if the difficulty of getting human primary cells is anything to go by it must be hellish.

    My lab is part of [big research hospital], literally right across the street, and is affiliated with [second big research hospital] a few blocks uptown. There are surgeons taking tissue and serum we could use out of patients and throwing it away literally every single day. Instead we pay top dollar to buy small quantities of them from a company because then we don’t have to do as much paperwork.

    I’m so glad to be working mostly with invertebrates and cell lines. Even mouse training was a huge pain in the ass; I shudder to think about human research.

    • shar says:

      I work in a biotech, and if I need any blood components (whole, plasma, serum, LeukoPak) I just fax the local blood center and they overnight it the next day. The only hard part is using a fax machine.

      In grad school I could go to the university center, ask them to draw my own blood and then just walk out with it. Think we just had to pay them for the bag.

      I guess blood components are probably the lowest hanging fruit since there are already vast networks drawing and processing blood from willing donors all day. Other cell types, or even blood components from patients with specific conditions (as opposed to whoever walks into the donor center that day) must be a qualitatively bigger pain in the ass.

      • Nabil ad Dajjal says:

        Blood normally isn’t that bad. I know malaria PIs who draw their own blood, and while getting umbilical cord blood is a lot harder you can still get a liter or so if you’re willing to drive down to the core.

        At the risk of giving away too much identifiable information, we need serum from cancer patients and muscle tissue from healthy adults. Both have been very challenging in the past and the lab has found a lot of workarounds rather than deal with the hassle.

        So yeah, I’d say that your last paragraph describes my experience pretty well.

  19. moscanarius says:

    I often have the impression that the people who write and enforce these rules must be living in an alternate world that is fully made of paper and ink, paper and ink, paper and ink – and nothing else. They don’t seem to have any clue about how the physical world actually works. How can someone actually think it is important to overwarn people about the risks of filling out a questionnaire that is already in use in medical practice? Or care so much about the privacy of data that, realistically, no one is going to steal?

    • Chrysophylax says:

      1. Fear of lawyers. The US is halfway towards being a post-litigious society: the cost and risk of court cases are so great that most people and organisations can’t afford to use the courts.

      2. Externalities. The bureaucrats bear considerable risk if they use their common sense and don’t capture the social benefits of being sensible about rules. (And they can’t coordinate to have every bureaucrat simultaneously start using common sense.)

      3. Lost purposes. Large organisations, particularly old ones with many layers of hierarchy, forget why they’re doing things. They keep doing work and following rules for no good reason, because the people who do the work aren’t the people who decide what should be done and nobody is asking whether it’s worthwhile. (Especially since they might realise that the whole department ought to be fired.)

      4. Hierarchy and politics. Punishing a junior scapegoat is very tempting for managers and saves the organisation a lot of embarrassment and the work of improving.

      5. Status and self-interest. Being a Big Important Person who tells other people they’ve done things wrong feels good. Being a Wise Guardian of Ethics who has Important Solemn Discussions feels good. Expanding your turf and becoming Even More Important feels good. Being paid feels fantastic, particularly when you don’t have to work very hard (and the best of all monopoly profits is a quiet life).

      6. Scar tissue. Every time something goes wrong, there’s a chance that someone will make a Rule so that it Never Happens Again. (This is sometimes sensible.) There is a very much lower chance that somebody will make a good, clear, proportionate rule, after giving due thought to costs and alternatives, and provide a clear explanation of why the rule was made and guidance on when to ignore it.

      7. Blindness to opportunity costs. People are very bad at noticing costs they don’t pay themselves, particularly when those costs are not immediately obvious and concrete. Think about people spending hours to save small amounts of money. In particular, think about managers who won’t spend small amounts of money to save large amounts of staff time, because salaried staff are a sunk cost and their time has already been bought. If they had to pay the staff in cash at the end of every day, they’d be much more interested in efficiency.

      • moscanarius says:

        Agreed. The decision chain has been lengthened to the point that the rulemakers have no clue about what the purpose of their regulations is, and the rule enforcers only care about not irritating their superiors and keeping their cushy jobs.

        • Douglas Knight says:

          That would make sense if the problem were bureaucrats. And maybe that’s the case at Scott’s hospital. But the typical IRB at the typical research university comprises research professors, exactly the people encumbered by it.

          • shar says:

            Yes, I’ve interacted with IRBs for animal studies at a research university and it was no hassle at all. Basically just a search-and-replace job on an old application template. The Board members were research professors running similar studies and they were happy to rubber-stamp anything that wasn’t obviously idiotic or cruel.

            I’m sure human trial IRBs are qualitatively harder to get past, but I suspect that the fact that Scott’s hospital had no institutional interest in research is what tipped this review process from “annoying” into “impossible”.

      • Schmendrick says:

        At least with regard to your first reason, most states and the federal government explicitly set up the procedural rules for civil suits in such a manner as to encourage as many cases as humanly possible to settle without going to trial. After all, trials are messy, complicated, uncertain things, and even risky for the judges, who might have their findings overruled by an appellate court. Neither the litigants nor the justice system want to risk everything going sideways because a judge’s clerks missed an important precedent or one juror woke up on the wrong side of the bed. Furthermore, court systems (especially at the state level) are usually hilariously underfunded and understaffed, which adds yet another incentive for everyone involved to want suits to go away as quickly as possible.

      • Surge says:

        Really good post!

    • TeMPOraL says:

      Oh, I think they do have a clue – it’s just that, in their world, unlike in physical reality, colour also matters. The data taken by regular doctors as part of their practice is not of the “Research” colour, so it doesn’t need paperwork. The same data taken as part of a study has the “Research” colour, so the paperwork is needed.

      I direct you to the classic “What Colour are your bits?”. It explains what’s going on in that world, and why it may not make much sense to people dealing with physical reality on a daily basis.

    • Ketil says:

      I often have the impression that the people who write and enforce these rules must be living in an alternate world

      A danger here is when people are making rules that apply to others – that’s one kind of alternate world. Rules should be a trade-off between costs and benefits, but when rule makers don’t see any of the costs, they make suboptimal choices. A similar thing happens when the purchasers of a service or product are not paying for it themselves (e.g. health costs under public or private insurance schemes, which is why the US system, with more private insurance, isn’t necessarily doing better than European countries with “socialized” medicine: they’re both third-party systems).

    • carvenvisage says:

      I often have the impression that the people who write and enforce these rules must be living in an alternate world

      Isn’t that just what happens when you have a job, let alone a career? Almost literally, accounting is a different world from sales, and ethics is more outlandish than either, so if your impression is correct I don’t think it would be so unintuitive. Trying to merge all the worlds strikes me as an insane challenge compared to establishing sufficient feedback from each to the shared one. It’s easier to obliterate one of these worlds and make a new one than to have everyone living in recognisably the same slice of reality.

      Or, TL;DR: air traffic control maybe has to be a different world from being a fighter pilot. It’s hard enough to inhabit one.

  20. CthulhuChild says:

    I can’t help but think this has pretty direct bearing on the Cost Disease. Indeed, the question of Cost Disease can be answered more generally as: “institutions are risk averse because repeated minor success is expected, not rewarded, while noteworthy fuckups get burned into institutional memory after everyone involved gets sacked. When no force dismantles the institution completely, things get worse over time.”

    I work in the military, and there are similarly insane bureaucratic processes for acquiring new technology or making modifications to equipment that would increase its effectiveness. In the former case the concern is corruption; in the latter, it’s a fear of follow-on effects that have not been formally validated (think of the chain of events that led to the Challenger explosion).

    This is all well and good, but while the cost of a fuckup is obvious, the opportunity cost is not captured, and is indeed impossible to quantify.

    What I’ve seen happen in my line of work is the following: the equipment just gets modified off the record. The supreme irony is that this significantly increases the likelihood of unexpected follow-on effects, but because the process of formally requesting a change is so onerous, there’s an incentive to skip that step.

    But in the case of Scott’s study, there is literally zero risk. The screening and subsequent interview are clinical tools that would be applied to the patients in any case. What I’m wondering is what’s to stop you from just DOING the damned study, then pulling the data out of their medical files after the fact? You would of course need to request permission to gather patient data, but since Scott managed to pull patient data for his other study, that seems like a trivial challenge, and you don’t need permission to execute a study to systematically ensure that the necessary patient data ends up on file.

    Or am I missing something obvious?

    • Nabil ad Dajjal says:

      The obvious thing you’re missing is that studies need to be published in order for people to read them*, and reputable journals won’t publish human research without IRB approval.

      That’s the issue Rational Vaccines is facing right now. They can’t publish their IRB-less trial because no peer-reviewed journal will touch it. And until it’s published, nobody will look at it.

      *An amusing example: one postdoc in my lab spent two and a half years unknowingly recreating a model system which had been made years earlier. He had just never known about it because it had been published in eLife rather than a high-impact journal.

      • The Nybbler says:

        The obvious thing you’re missing is that studies need to be published in order for people to read them*, and reputable journals won’t publish human research without IRB approval.

        Obvious solution: disreputable journals. (Breitbart Research, if you will.) Certainly, with open-access journals being founded nowadays, someone (perhaps Thiel) might consider starting one without an IRB requirement, perhaps even nominally based in a country of convenience.

        • Nabil ad Dajjal says:

          I sort of answered this below, but you’re running headlong into network effects if you try this.

          Breitbart Research would already be starting with one hand tied behind its back because it’s new. Add in the stigma of actively flouting the standards of the field, and it’s quickly going to become a dumpster where people throw their least publishable research.

          Disrupting the journal industry is something I could see someone floating as a pitch but I have no idea how one would do it.

          • Aapje says:

            You’d need to be pretty strict elsewhere, I guess.

            Perhaps by excellent peer review. I would say that having a Nobel prize winner or some other giant in the field participate in that might work.

          • ghi says:

            To solve this problem I recommend stepping back and asking two questions: why do people want to publish studies, and why do people want to read them? If the answers are “to gain the status of a published researcher” and “to cite in my own study that helps me gain status as a published researcher”, well, my only advice is to stop worshiping Ra.

            If the answer to the second question is “because I want to use the information to improve my own or someone else’s health or psychological state”, then you should be more concerned with the accuracy of the information than whether it’s Official Science ™. In fact there are thriving industries, i.e., alternative medicine and the self-help movement, that do just that.

          • Nabil ad Dajjal says:

            @ghi,

            I was OK with Moloch and Azathoth but I resist the move to create a demonology of strained metaphors.

            Anyway, the issue is that even if your motives are 100% pure the incentives still dissuade you from publishing in or reading sketchy journals.

            You want your research to be seen, right? Otherwise you’d just burn your lab notebook after every study. So you need to put it somewhere where people are going to know to look.

            You also want to read high-quality research. Otherwise you’d just randomly read the first page of Google results every day. So you need to know where to look to find high-quality research.

            The misuse of impact factors in academic hiring and grant approval is a serious problem that needs to be solved. It’s making this problem much worse. But it’s likely that the focus on top journals would still exist to a lesser degree even without it.

          • ghi says:

            I was OK with Moloch and Azathoth but I resist the move to create a demonology of strained metaphors.

            I’m not 100% sold on Ra either, but this metaphor seemed to fit too perfectly here.

            You want your research to be seen, right? Otherwise you’d just burn your lab notebook after every study. So you need to put it somewhere where people are going to know to look.

            People read alternative medicine and self-help books/websites all the time.

            You also want to read high-quality research. Otherwise you’d just randomly read the first page of Google results every day. So you need to know where to look to find high-quality research.

            Except the whole problem is that the medical journals aren’t publishing high quality research.

      • Chrysophylax says:

        I don’t understand. Can’t you get IRB approval to pull the data from patient records?

      • gwern says:

        He had just never known about it because it had been published on eLife and not a high-impact journal.

        Discovery is always a problem… In my own recent catnip research, I’ve run into two instances of this: since starting back in 2015, I had been under the impression that the only genetics research done on catnip was over 50 years old (indicating catnip response in cats is controlled by a single autosomal dominant Mendelian gene variant), and I was pondering how hard it would be to find some pedigrees and do a followup myself for a more precise estimate of how common the gene variant in question is, and do a power analysis for a project like 100 Cats to hopefully find the genetic variant. I had done a very thorough Google Scholar search, a Pubmed search, followed all citations in everything relevant, jailbroken all the papers and PhD theses, read the textbooks in Google Books & IA, bought and scanned old books, asked people for leads, posted my results online as I went, and so on and so forth, and I thought I had done a good job canvassing the entire English research literature on catnip and compiled the sum total reported data on around 100 cats.

        Then I did a random google search on cat genetics in general a few weeks ago, and stumbled across not one but two genetics studies which did pedigree studies and GWASes on not 20 or 30 cats but 200 and 300, which proved that catnip isn’t even Mendelian, it’s just polygenic like everything else, subject to considerable measurement error undermining all the previous estimates, and the GWAS had already been done and failed to find any hits (further proving it’s polygenic)! So, why were they totally absent from the literature and everywhere? Well, the first one (Villani 2011) is ‘merely’ a master’s thesis and who could ever trust such a thing, gosh? And the second one (Lyons 2013) exists only as an abstract in a report to the funder; no reason is given for why it hasn’t been published but I have a pretty good idea why… Oy vey.

        • Nabil ad Dajjal says:

          Yeah I feel you.

          I had a figurative heart attack recently because I stumbled over a decade-old publication in a journal I had never heard of which nearly duplicated my thesis project.

          I had been puzzled that nobody had used this fairly old technique to solve one of the big unsolved problems in my field. Everything needed to do my project was available in 2010, and people had known about the problem at least since the 70s based on histology. I swear to God I had spent weeks looking for any sign that another researcher had ever tried this before starting.

          Luckily for me, the study was from before sequencing was sophisticated enough to give a clear answer and the authors weren’t ambitious enough to ever follow it up. Now I have their dataset to play with in addition to my own work.

      • nikitaborisov says:

        Performing a study without getting IRB permission would also likely be sufficient cause to be fired from your current job, and would create difficulties getting hired elsewhere. You also would be exposed to liability should any patients involved decide to file suit, and your hospital would not assist in your defense.

        • Nope. Just find a location to do research that isn’t a research institution, and it’s perfectly kosher to publish the data you collect. (That’s how Facebook publishes experiments they do on users.)

    • nikitaborisov says:

      If I understand correctly, the bipolar questionnaire was not part of existing procedure, whereas the diagnosis was, so you couldn’t do it with retrospective data alone.

  21. mobile says:

    We will make a libertarian out of you yet, Scott.

  22. Chrysophylax says:

    You’re a hero.

    I’m just saying it’s not going to be me. I am done with research.

    In an important sense, it is going to be you.

    Functional Decision Theory says that you should talk about choosing your decision process, not your actions. A consequence of this is that you should make decisions as though you were also choosing the decisions of everyone who reasons like you do (and the predictions of everyone who understands how you think).

    How many young researchers would keep struggling if they thought they were alone, that nobody else was fighting to do the job right? Very few, I think. So you share in the credit for every good research project that gets past the bureaucrats.

    You also get a massive amount of credit for writing blog posts about broken systems. You’re not just preaching to your readers, you’re also preaching to the readers of every other blog that fights for a saner world. And I, for one, think you do an excellent job.

    • Scott Alexander says:

      > You’re a hero.

      Giving up on something when it gets annoying and then writing a blog post complaining about it is the best kind of heroism. I feel like it’s what Hercules would have done too, had he known it was an option.

  23. johan_larson says:

    That’s quite an experience you went through. I would have been tempted to write up a deliberately horrific research proposal to send to the IRB, just to mess with them. Take one part The Island of Doctor Moreau, one part tentacle hentai, and one part Krusty the Clown; blenderize until smooth; serve chilled in a tall glass.

  24. Null Hypothesis says:

    Faced with someone even more obsessive and bureaucratic than they were, the IRB backed down and gave us preliminary permission to start our study.

    It seems that an obvious social good would be to breed about 0.1% of the population to be hyper-competent, motivated, OCD professionals, and prohibit them from ever working for any bureaucracy. The rest should take care of itself.

    I know deliberately afflicting people with mental diseases is bad (or is that only if it’s part of a study? – the training was unclear) but the more I read the story the more I really think a greater good could be served here.

    On a less Nazi-istic note, Scott’s complaint with regulation hurting the little scientist is something I’ve been saying for a while on a broader scale. We have a significant degree of crony capitalism in America. But crony capitalism doesn’t take the form of businesses paying bribes and getting sweet-heart deals.

    It’s largely the regulatory and tax codes alone that accomplish this. Big companies get behind new taxes and new regulations because they have sufficiently high volumes to absorb the cost of the accountants and lawyers that let them dodge restrictions and taxes. The little guys can’t afford to suffer them, and can’t afford to dodge them, so they just go away. Hell, the large companies often help write the new regulations, or hire the politicians who wrote them as consultants at absurd salaries once they’re out of office. This isn’t accidental.

    The best analogy I can think of is one of chemotherapy. A deliberate poisoning of the marketplace, in the hopes that the smaller, disruptive, dynamic, aberrant organisms will get killed off before the large businesses do. That hurts the big businesses too – so they appear to be innocent – but they take a hit to their profit margin in exchange for maintaining their volumes and suppressing disruptive competition before it can get off the ground.

    The response to this theory is often vocal agreement, followed by: “Well, then, we need to regulate the businesses harder to stop this from happening!”, and I just kind of sigh and stop talking at that point.

    • Scott Alexander says:

      Do you want Path from Xenocide? This is how you end up with Path from Xenocide. ಠ_ಠ

      • Null Hypothesis says:

        Would you believe me if I told you what I wanted was someone to reference Xenocide?

      • aureamediocrit says:

        Oh come on, it wasn’t like the whole planet was afflicted. Just, you know, one very tortured, AI-hunting little girl.

  25. James Miller says:

    Several forms I have to sign to do things at my college ask if what I will be doing will expose anyone to radiation. Although I’m an economist, this has caused me to think of experiments I could do with radiation such as secretly exposing a large number of students to radiation and seeing, years later, if it influences their income. Scott, would you mind doing the paperwork on this to get me approval?

    • andrewflicker says:

      I’ve always been fond of just taking things overly literally on forms like this. “Yes, I’ll be lighting the office in which the survey is taken, so survey-takers will be exposed to significant visible and infrared electromagnetic radiation.”

      Then again, I’m also the guy that red-lined a bunch in our boilerplate HR forms because taken literally they meant that if I happened to be near an uninvited guest that slipped on water near our office gym that I’d be held liable. Doesn’t have to be my guest, doesn’t have to be my water, I don’t even have to have seen the water or the guest, etc.

    • Null Hypothesis says:

      You could provide them a box lunch including a banana and a glass vial of polonium.

  26. arancaytar says:

    I feel like a study that realistically could have been done by one person in a couple of hours got dragged out into hundreds of hours of paperwork hell for an entire team of miserable doctors.

    Hm…

    https://slatestarcodex.com/2017/02/09/considerations-on-cost-disease/

  27. Aapje says:

    “What will you do if a participant dies during this research?”

    Scott, do you remember your answer?

    Mine would be ‘Feel bad,’ but I expect that the IRB wouldn’t like that one.

  28. Garrett says:

    Oh, dear! I’ve actually been through this. I work in tech, but volunteer in EMS. As a part of wanting to advance the profession of EMS I figured I’d take on a small study. It would be a retrospective study about how well paramedics could recognize diabetic ketoacidosis (DKA) and Hyperosmolar hyperglycemic state (HHS) in comparison to ER doctors. (The idea being that if an ER doctor can’t catch it, it’s just not reasonable for a paramedic in the field with worse conditions and equipment to be able to do so).

    Retrospective in that it would merely involve looking at narrowly-selected existing blanked-out medical records, normalizing them (because doctors don’t seem to care what text box they type in) and then doing a basic comparison.

    I had to do the “I am not a Nazi” training as well. In order to pass that, I had to be able to recite the FDA form number used as a part of new implantable medical device investigations. I wasn’t looking at a new device. I wasn’t looking at an old device. I was going to look at pairs of medical records and go “who correctly identified the problem?”

    To make matters more interesting, it was easier to get by the IRB if the records were completely blinded so that we didn’t have any identifiers which could link them back to the original people. At the same time, the IRB application has a section where they want us to specify what we are going to do if we identify some major health risk to the “participants”. This is stupid, because by this point in time (months/years later) the people involved have already seen doctors and been treated. And because we have no way of knowing who they are any more!

    Part of the “I am not a Nazi” training emphasizes how important it is to have a diverse set of participants in a study. It’s unethical to only study poor, illiterate black men. But it’s also unethical to study children unless absolutely necessary. So in order to make everybody happy, we had to explicitly exclude the records of children. Because they might be harmed if their records are used to improve care with no way to connect them to who they are personally.

    It’s now ~5 years after IRB approval, and because of all of the headaches of getting the data to someone who isn’t faculty or a doctor, and who doesn’t have a $100k+ grant, I still don’t have my data. I need to send another email. I’m sure we can get an IRB extension with a few more trees sacrificed.

  29. Null Hypothesis says:

    If so he missed out on anonymizing Dr. W as “Dr. Kevin Casey.”

  30. SamChevre says:

    There’s a great post by Jacob Levy on this general phenomenon at Bleeding Heart Libertarians (which site is not in general even vaguely libertarian in any useful fashion), roughly contemporaneous with the events in the OP.

    An Argument about Regulation

    Time that we should be spending researching or teaching is instead spent asking for permission to do so, by humbly seeking to prove ourselves innocent of all sorts of potential malfeasance. No, I didn’t buy a glass of wine with that grant money. No, I haven’t given an in-class exam during the two weeks before finals. No, my study of Plato does not involve potential harm to human subjects or laboratory animals.

  31. Inty says:

    Hahaha! Reading this is refreshing. I’ve complained about research ethics procedures for a long time, and usually when I raise my concerns people say things like ‘Personally I’m happy with the balance of safety/efficiency’ or ‘You sound like Dr Krieger from Archer again.’

    For the current study we’ve been working on, we’ve had to submit five amendments, usually for basic things like ‘We want to give participants more money’, and the number of times they’re thrown back really helps build up frontal bone strength.

  32. J says:

    According to Wikipedia, “IRBs are governed by Title 45 Code of Federal Regulations Part 46. These regulations define the rules and responsibilities for institutional review, which is required for all research that receives support, directly or indirectly, from the United States federal government.”

    Does that mean it’d be possible to set up a research organization of some sort that didn’t take federal funds and thus could avoid all the IRB nonsense? Now that you’re in private practice, could you run a study like that on your own?

    • Scott Alexander says:

      I’m not sure, but I think the main barrier is getting things published. I’ve heard some people say that journals ask if you’ve run your research by an IRB before they’ll accept it.

      • lightvector says:

        I think I’m not understanding things yet.

        If they are the main barrier, why is there not a strong incentive for the creation of journals that don’t require IRB approval for publication, given that they would be able to attract all of the valuable research that would never see the light of day otherwise?

        Is it that journals themselves are also under some sort of regulatory requirement? Or is there some force that has so far made it impossible for a journal not requiring IRB approval to become at least a little reputable?

        • Nabil ad Dajjal says:

          Or is there some force that has so far made it impossible for a journal not requiring IRB approval to become at least a little reputable?

          Network effects.

          Grants and promotions in academia are based on the number of publications weighted by the impact factor of the journals. Impact factor is a measurement of how often research from that journal is cited in other journals.

          So if you’re an academic researcher you are never going to submit a manuscript to F#&! IRBs Monthly unless it’s already been rejected from Proceedings of the National Academy of Nature Cell Science. And since you know that F#&! IRBs Monthly is a journal of last resort, there’s little to gain from reading it.

          The whole system of publication in science is hopelessly messed up. Periodically people suggest reforms, but nobody agrees on the best way forward.
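
          For reference, the standard two-year definition works out to:

          IF(Y) = (citations received in year Y by items the journal published in years Y−1 and Y−2) / (number of citable items it published in Y−1 and Y−2)

          so the score depends entirely on how often other journals cite a journal’s recent papers.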

        • JulieK says:

          This sounds strikingly similar to the problem of people spending tens of thousands of dollars to go to university, not so much for any particular knowledge they gain there, but because no employer will look at them if they don’t have a degree from a reputable institution.

        • nikitaborisov says:

          If you wanted to enable IRB-free research, you would need both institutions that don’t take federal funding to carry out the research and journals that accept research from such institutions. Even if you overcame the inertia of getting enough people on board to establish a functioning research and peer review ecosystem, you’d have trouble avoiding getting branded as the Journal of Unethical Research and getting ostracized from the rest of the scientific community.

      • deciusbrutus says:

        “an IRB”? Does it have to be one subject to US jurisdiction?

        Because I’m imagining a transnational IRB that actually makes sense.

      • You CAN do such research, and plenty of places will publish it. You just need to be doing it in a facility that is for-profit and/or not government funded.

        If you’re looking to publish in big-name, high-impact journals, it’s harder. But if you’re doing good research, and you’re willing to promote it a bit and send it to people who are working in the area after it’s published, you can submit to PLOS ONE and it will get cited anyway.

  33. mdv1959 says:

    I love this, one of your best IMHO.
    By the way, I’m guessing this should be conducive instead of conductive
    “to be and none of these are really conductive to wanting to participate in studies”

  34. jimrandomh says:

    The woman in the corner office who kept insisting everybody take the Pre-Study Training…hadn’t taken the Pre-Study Training, and was therefore unqualified to be our liaison with the IRB.

    Alternate hypothesis: that woman was either incompetent and unusually obstructionist, expecting a bribe, or personally hostile towards you in particular. The auditor’s nitpicks weren’t supposed to be enforced; the form wasn’t meant to be taken too seriously; everyone is expected to skirt the rules when the rules are stupid. That would explain how research sometimes gets done.

    • Scott Alexander says:

      The woman seemed nice and helpful in general. It was just that her job was to enforce the rules.

      • Ketil says:

        In my experience, bureaucrats come in two flavors: those who help you get things done according to the rules, and those who offer no help, but will find you afterwards and chastise you for doing it wrong.

      • Anonymousse says:

        The way you described your interactions, I got the sense she was apathetic and unwilling to help. I would expect a “nice and helpful” person to not only direct me to the items one at a time (pre-study training, getting a PI, pointing me to forms), but also to try to anticipate what I might need in the future, i.e., to apply her knowledge to my situation.

        The inefficient and cumbersome routine you describe reflects poorly on her and paints a picture of incompetence or neglect.

      • shar says:

        If corner office lady had never taken the required training course, wasn’t she actually unqualified to do… her one job? Does that mean every other study she worked on was technically in violation? Were there any other studies that were successfully completed at [hospital]?

  35. deciusbrutus says:

    What are the barriers to asking a doctor for anonymized information about the results of screening tests and final diagnoses of patients? Do the patients have to consent to a retrospective study?

    • The Big Red Scary says:

      I was wondering about this below. More specifically, I am wondering if there are barriers to doctors sharing aggregate information online.

  36. MostlyCredibleHulk says:

    I am reading this, and I can’t help but think: this is Scott doing an obviously Good Thing (figuring out how to heal people better). The bureaucracy is implementing something that looks like an obviously Good Idea (protecting privacy, obtaining consent, not being a Nazi, that sort of thing). Nobody is interested in making the whole thing fail, and nobody went to any special effort to obstruct it. Nobody’s career, financial interests, pride, political opinions, religious tenets, etc. are on the line. Nobody expects bribes or financial profit from it. No activist groups full of people with immutable convictions and financed by shady billionaires are involved.
    Now let’s add all of that in, along with a bureaucracy that is also often corrupt, lazy, and vindictive, and try to do something that may also be good, but less blatantly obviously so, under rules made out of considerations that are sometimes less obvious than “not being a Nazi is a good thing”. Then we get the normal process of a citizen trying to do something while interacting with the State.
    Now, after we have properly imagined how pleasant such a process can be, and how many people would give up before doing a something-that-might-turn-out-to-be-a-good-thing: please explain to me how it is that so many people so actively support having more and more of this, and having more and more people subjected to such scenarios. Everywhere, on every corner, in every aspect of our lives.

    • deciusbrutus says:

      You know who else had government on every corner, in every aspect of life?

      Sparta. The Roman Empire. Feudal lords (“Journeyman” was originally the term for a tradesman so skilled that they were /permitted/ to travel!)

      • Not according to Wikipedia:

        Journeymen were paid each day. The word “journey” is derived from journée “day” in French.

        What’s the basis for your version?

  37. Eponymous says:

    This was absolutely hilarious. And then inexpressibly depressing.

    At least I don’t have to fill out any forms to torture my publicly-available data.

    • dodrian says:

      I too found this funny, and then disheartening.

      I hear that big mood swings in a short period of time are a good indicator of bipolar disorder. If only there were some study showing how frequently this correlated with a clinical diagnosis.

  38. kyleboddy says:

    Nazism isn’t the reason IRBs exist. Far worse. American unethical experimentation is, and omitting it is a huge error. Massive and bureaucratic oversight exists because American scientists would stop at nothing to advance the field of science.

    The Tuskegee Syphilis Experiment is the landmark case for why ethical training and IRB approval are required. You should know this. This was 100% covered in your ethical training.

    https://en.wikipedia.org/wiki/Tuskegee_syphilis_experiment

    I get why IRB approval sucks. My Informed Consent forms get banged all the time. But we’re talking about consent here, often with disadvantaged populations. It pays to be careful.

    Last, most researchers who need speed and expedited review go through private IRB organizations now because the bureaucracy of medical/university systems is too much to handle. Our private IRB that we engage with sends back our forms within a week and their fees are reasonable. Their board meets twice per week, not once per month. The market has solved at least this particular issue.

    EDIT: Private IRBs do not care about nonsensical stuff like the Principal Investigator having an advanced degree or being someone of high stature. (For example, I am a college dropout and have had multiple IRB studies approved.)

    Only bureaucratic, publicly-attached ones do. That’s a very reasonable complaint.

    • The Nybbler says:

      Massive and bureaucratic oversight exists because American scientists would stop at nothing to advance the field of science.

      The Tuskegee Syphilis Experiment was done by the Federal Government itself. Why should every other researcher suffer, and the pace of advancement slow to a crawl, because the government acted like a government?

      It pays to be careful.

      Has the amount of damage prevented by the IRB system outweighed the amount of damage caused by it?

      • kyleboddy says:

        Why should every other researcher suffer, and the pace of advancement slow to a crawl, because the government acted like a government?

        Punishing the whole because the government acted the way it did is something governments do best. I think your question answers the overarching question quite neatly.

        To make it clear: I don’t like the IRB process. But I understand why it exists. And the private marketplace has mostly solved all the stupidity that exists in the process. Would it be nice if it were dismantled? Probably. But that’s not very likely. We have the second-best thing currently, and at least our IRB provider is a pleasure to deal with.

      • deciusbrutus says:

        The way regulations work is that something bad happens (like leaving a bunch of people untreated for syphilis), then in the review someone asks “What would have stopped this?”, to which an answer is given: “Maybe if the scientists had some basic ethical principles”.

        Then a New Policy is implemented on the general case of the specific example. “Therefore now every experiment must adhere to all of the basic ethical principles that we have identified! Fill out the approved paperwork, citizen!”

        I’ve seen this happen in ‘real time’ (several months) in air traffic. After a situation with converging, nonintersecting runways, where an aircraft missed a landing and, while going around, got too close to an aircraft departing the other runway, new rules came out that technically prohibited all landings and departures (because, read literally, every aircraft conflicted with /itself/ and had to be clear of the runway’s /infinitely/ extended centerline before crossing the threshold), but with a bit of judicious “That’s absurd, don’t read the text literally” effectively prohibited only arrivals and departures on converging runways.

        • The Nybbler says:

          but with a bit of judicious “That’s absurd, don’t read the text literally” effectively prohibited only arrivals and departures on converging runways

          Which is the worst of all worlds, because now if some accident happens you can blame the poor sod who ignored the literal rule. But if some accident happens because some well-connected fool ignored the intended rule, it can be pointed out that the rule was absurd and widely ignored and he gets away scot-free. You get the drawbacks of a complex system of rules and the drawbacks of anarchy.

      • ECD says:

        I’ll say that issues with human experimentation are a lot broader than Tuskegee and didn’t always involve the government. For a not-great source, see the wiki page. My favorite on there is Wendell Johnson’s: let’s experiment on orphans to see if we can induce speech impediments by belittling their speaking abilities. Turns out, we can.

        I’m very sympathetic to the argument that the government needs to do better on small projects, but human experimentation makes lots of people paranoid and unethically performed experiments do harm well beyond the people actually involved.

      • MostlyCredibleHulk says:

        The Tuskegee Syphilis Experiment was done by the Federal Government itself. Why should every other researcher suffer

        That happens more than people think. A lot of regulations stem from violations that were enforced by the government (e.g., a lot of anti-discrimination laws are reversals of earlier discriminatory laws, regulations, or actions by the same government; not all, but a significant amount). In essence, we make the fox guard the henhouse based on the fact that the fox ate all the chickens last week.

        Has the amount of damage prevented by the IRB system outweighed the amount of damage caused by it?

        Was there ever any research on how much a system like the IRB costs in opportunity costs? I’m not sure it’s even possible, but I am pretty sure nobody did it before designing the IRBs. Somehow, that experiment didn’t require IRB review. See Yudkowsky on the same topic: https://www.facebook.com/yudkowsky/posts/10155654175799228

  39. onyomi says:

    I think this is another piece in the puzzle of my “just let people use their best judgment, damnit” philosophy. Beyond a very basic, error-catching, conflict-of-interest-avoiding level, trying to substitute objective systems for subjective, individual judgment seems almost always to make things worse. In the end, someone has to exercise discretion, and putting that person several levels of bureaucracy away from the matter makes things worse.

    In theory, such procedures are about holding people accountable for the power they wield. In practice, they are about everyone covering their ass. The bureaucratic enterprise will not be complete so long as the buck stops anywhere. Ideally we’d want to create some sort of perpetual motion buck transport system.

    The same thing is happening in academia. What are professors supposed to do? Educate the next generation, engage in meaningful and important research, contribute to the community, make the world a better place. But how do we know they’re making the world a better place, especially when it comes time to offer tenure or promotion? Well, we could look at the student evals, but everyone knows students give high marks to attractive professors with low grading standards. We could ask one of his colleagues to evaluate the quality of his research, but they’re buddies and besides, we can only afford one expert in x at this university, so who’s qualified to evaluate it? Better send it out for review by external experts… who will take a year to complete the review because they’re too busy filling out forms proving that they are making the world a better place, applying for grants, figuring out how to score more “making the world a better place” points on their next project, and so on.

    • Nick says:

      A few years ago the Higher Learning Commission mandated that my university gather more data, especially student course evaluations, and that we tie the curriculum of a given class more explicitly to the learning goals of the program, the department, and the university. That all sounds good in theory, especially the latter, but in practice it meant syllabi ballooned to three times their size, full of identical boilerplate which professors were required to inform us about, and the student evaluations simply generated enormous amounts of useless data at the end of each semester. Drives me crazy, and I was only a student!

      • Toby Bartels says:

        At the community college where I teach, we have this sort of boilerplate (GELOs) in our syllabuses, but we don’t expect that the students actually read it. As for the evaluations, they’re not tied to the GELOs, and I do read them.

  40. Ted Levy MD says:

    So Scott got a research idea, and was happy.
    Then he had to work with the hospital IRB because he had a research idea, and was sad…

  41. Eric Zhang says:

    God, this was painful. I think…I feel like I need to read an Ayn Rand book or something, and maybe donate all my money to Patri Friedman.

  42. Baeraad says:

    Heh!

    I’m sorry to hear about your Kafka-esque nightmare. Still, I’m not convinced that the moral is “bureaucracy is bad” so much as it is “a system built to handle very complex studies will produce very bizarre results when trying to handle a very simple study, and it’s hard to do anything about that because it’s tricky to define what constitutes a ‘very simple’ study.” Though it also sounds like they’re trying to do exactly that now, so that’s good.

    The general libertarian rant I am afraid I can’t sign off on. The last thing I want to do is give more freedom to “the amateurs, the entrepreneurs, the hobbyists, the people doing something as a labor of love.” Those are the exact group of people in the world I trust the least! Yes, even less than the monolithic, soulless corporate profit machines – for all that I’m no fan of those, either.

    • Baeraad says:

      In fact, having thought about it some more (for long enough that I can’t just edit it into my original post), I think I will go one step further and say that it sounds to me like your mistake was thinking that you could just piggy-back your study onto an existing procedure – just add a little tweak to get a great deal more output from a trivially small increase in input. And that put you at odds with a system that assumes that every study will be a huge, looming thing-unto-itself which will have a ton of moving parts that require careful monitoring every step of the way. If your study had fit that description, all the paperwork would have made a lot more sense. The reason why the situation grew so bizarre was that the study itself was so small and simple as to be crushed beneath the bulk of the control mechanism.

      Well, like I said, I sympathise with your frustration, and I’m glad that there are wheels in motion to create allowances for smaller studies. But I also see very clearly why not everyone who thinks he’s got a bright idea can be allowed to run in and make a few tweaks. The system favours huge, lumbering mega-projects because huge, lumbering mega-projects are easier to keep track of than a myriad of people all doing their own really clever things that they’re sure can’t cause any harm. And the thing is, I do in fact think that any scientific profession has a lot of aspiring Mengele-esque mad scientists just chomping at the bit to do this really cool thing they’ve thought of and that they’re sure will do a lot more good than harm if only those stuffed suits in the review boards would let them try it – and while I do feel bad for the people with genuinely harmless ideas who have them smothered because the system isn’t granular enough to distinguish them from the mad scientist crowd, I do think it’s perfectly justified to take extremely careful precautions against the latter.

    • The last thing I want to do is give more freedom to “the amateurs, the entrepreneurs, the hobbyists, the people doing something as a labor of love.” Those are the exact group of people in the world I trust the least!

      The proposal isn’t to make it legal for those people to kill or rob, just to do research. How much harm do you expect someone with limited resources and constrained by ordinary criminal law to do in the process of trying to learn things?

      And the thing is, I do in fact think that any scientific profession has a lot of aspiring Mengele-esque mad scientists just chomping at the bit to do this really cool thing

      Both Mengele’s activities and the Tuskegee experiment depended on the researcher being backed by a government willing to authorize such things. Do you think a Nazi IRB would have blocked Mengele?

      • Nick says:

        The proposal isn’t to make it legal for those people to kill or rob, just to do research. How much harm do you expect someone with limited resources and constrained by ordinary criminal law to do in the process of trying to learn things?

        It’s also worth mentioning that making the process easier isn’t necessarily going to open the floodgates, just improve the situation for the marginal researcher. And the marginal researcher is more likely to be like Gelman or Lakens than a complete amateur.

      • JPNunez says:

        How much money do you think replicating the Stanford Prison experiment would take? A creative fellow could botch it for almost nothing. Or botch the Milgram experiment. Or try to replicate some study about verbal abuse for cents on the insult.

        Money helps with being unethical, but lacking it is certainly not a barrier.

        I am OK with keeping the amateurs out. Scott’s experience is a price I am willing to pay for this, and the fault here seems to lie with the particular IRB, not with the system itself. The rules could be improved, but they are hardly unnecessary.

    • Jack V says:

      But “run a study with minimal risk based primarily on data which is already available” is quite a common use case. There SHOULD be a way of doing it. And yes, it needs to have risk assessments in case there’s something the doctor misses; that’s quite a good idea, but “no risks” should be a common answer.

      For that matter, I remember (maybe in the UK?) hearing about emergency room admissions. Maybe from Scott, I can’t remember. Sometimes someone comes in, unconscious, and some hospitals’ procedure is to do procedure A. In other hospitals it’s NOT to do procedure A. Both of these clearly have good reasons behind them, and both are commonly used. But it turns out that getting informed consent means you can’t randomise which one people get without asking them, to find out which is actually better. That’s a bit of an edge case, but one that really matters. And yes, there ARE risks, and maybe just getting a doctor to sign off “I think either of these procedures is acceptable” isn’t enough; it needs some sort of panel review asking “is this definitely ethical even without people’s consent?” But what I heard was, it was too hard to alter the regulations to make that possible.

      • Murphy says:

        It was a section in Ben Goldacre’s book Bad Pharma.

        Head trauma: steroids or no steroids.

        The trial was eventually done, and it was the mother of all nightmares to get done, and it turned out that half the doctors out there treating that kind of condition had been seriously harming/maiming their patients through ignorance.

        But in our society, ignorance is an indestructible moral shield. Maiming people through ignorance is 100% ethically acceptable as long as you maintain your ignorance, even if you maintain it willfully; there is not considered to be any moral imperative to dispel that ignorance, and gathering data is assumed evil by default because “evil scientists”.

        Hence, while it’s perfectly moral to maim hundreds of people in ignorance, it’s immoral to admit ignorance and slightly adjust your procedure so that you can gather data and eventually stop unintentionally maiming people.

      • Douglas Knight says:

        It is from Goldacre (thanks, Murphy).

        This is why we need to do randomised trials wherever there is genuine uncertainty as to which drug is best for patients: because if we want to make a fair comparison of two different treatments, we need to be sure that the people getting them are absolutely identical. But randomly assigning real-world patients to receive one of two different treatments, even when you have no idea which is best, attracts all kinds of worried attention.
        This is best illustrated by a bizarre paradox which currently exists in the regulation of everyday medical practice. When there is no evidence to guide treatment decisions, out of two available options, a doctor can choose either one arbitrarily, on a whim. When you do this there are no special safeguards, beyond the frankly rather low bar set by the GMC for all medical work. If, however, you decide to randomly assign your patients to one treatment or another, in the same situation, where nobody has any idea which treatment is best, then suddenly a world of red tape emerges. The doctor who tries to generate new knowledge, improve treatments and reduce suffering, at no extra risk to the patient, is subject to an infinitely greater level of regulatory scrutiny and oversight; but above all, that doctor is also subject to a mountain of paperwork, which slows the process to the point that research simply isn’t practical, and so patients suffer, through the absence of evidence.
        The harm done by these disproportionate delays and obstructions is well illustrated by two trials, both conducted in A&E departments in the UK. For many years it was common to treat patients who’d had a head injury with a steroid injection. This made perfect sense in principle: after a head injury, your brain swells up, and since the skull is a box with a fixed volume, any swelling in there will crush the brain. Steroids are known to reduce swelling, and this is why we inject them into knees, and so on: so giving them to people with head injuries should, in theory, prevent the brain from being crushed. Some doctors gave steroids on the basis of this belief, and some didn’t. Nobody knew who was right. People on both sides were pretty convinced that the people on the other side were dangerously mad.
        The CRASH trial was designed to resolve this uncertainty: patients with serious head injury would be randomised, while still unconscious, to receive either steroids or no-steroids, and the researchers would follow them up to see how they got on. This created huge battles with ethics committees, which didn’t like the idea of randomising patients who were unconscious, even though they were being randomly assigned to two treatments, both in widespread use throughout the UK, where we had no idea whatsoever which was better. Nobody was losing out by being in the trial, but the patients of the future were being harmed with every day this trial was delayed.

        But this wasn’t the only harm done. Many trial centres insisted on delaying treatment, in order to get written consent to participation in the trial from a relative of the unconscious patient. This written consent would not have been necessary to receive steroids, if you happened to be treated by a doctor who was a believer in them; nor would you have needed written consent to not receive steroids from a doctor who wasn’t a believer. It was only an issue because patients were being randomised to one treatment or the other, and ethics committees choose to introduce greater barriers when that happens, even though the treatments patients are randomised to are the exact ones they would have got anyway. In the treatment centres where the local regulators insisted on family consent to randomisation, it delayed treatment with steroids by 1.2 hours on average. This delay, to my mind, is disproportionate and unnecessary: but as it happened, in this case, it did no harm, because steroids don’t save lives (in fact, as we now know, they kill people).

        • Jiro says:

          Allowing people to do things for non-research purposes creates different incentives than allowing them to do the same things for research purposes, so this is not obviously bad. The results of the change in incentives are unseen, while the obviously okay experiments that are prevented are seen, so seen versus unseen bias comes in.

          Furthermore, it’s actually pretty hard to write rules that allow all the obviously okay experiments and nothing else. (Of course, once a case turns up, you can say that that case is obviously okay and then write a rule specifically listing that one case, but that doesn’t help you write the rule ahead of time.)

  43. HeelBearCub says:

    So Scott, did you ask at the beginning of the process “What are all of the steps I will need to go through in order to be able to perform psychological experiments on mentally ill, indigent wards of the state?” (Make no mistake, that is what you were doing. No matter that the likelihood of harm in this case was small or non-existent. Not accepting your word for this is a Schelling fence.)

    In any case, it sounds like you never did. (Or, perhaps, you are deliberately amping up the Kafkaesque because it makes the story more compelling and that is one of the “obvious” jokes?)

    • Sniffnoy says:

      I think this is bucketing things wrong. As Scott points out — the exact same things he wanted to do were already being widely done elsewhere to no ill effect. Why does it make sense to group it with other more risky experiments — things that share the intent of gaining generally useful knowledge but don’t remotely share the same risk profile — rather than things that share the same risk profile but with only the intent of gaining information about that particular patient? Why is that a relevant way of grouping things? Why should we conclude that we need that sort of consent to administer a questionnaire for experimental purposes, but that there are no substantial issues of consent (while dealing with mentally ill, indigent, wards of the state!) when it’s for diagnostic purposes instead? You’re basically making an argument by category and I don’t think it works; what the category you name has in common, and what’s actually relevant to the question at hand, are not in fact the same.

      • HeelBearCub says:

        You are missing my point. You need to imagine the least convenient world, not the one where Scott Alexander, whom we all know and love, is attempting to write papers that test a hypothesis. The rules are there for everyone. And if they are at an institution that does not do much of this, we should not expect them to be optimized.

        If Scott went in assuming he could test his hypothesis without jumping through hoops at all, that seems a failing on his part.

        If Scott did assume he would have to jump through hoops and didn’t assemble a detailed plan for this at the beginning, that also seems a failing on his part.

        If he asked at the beginning and was given false information, it seems he would not have said “The woman seemed nice and helpful in general. It was just that her job was to enforce the rules.”

        • Nancy Lebovitz says:

          On the other hand, Dr. W didn’t seem to know how bad it would get, and he’s familiar with the system.

          It’s conceivable that Scott ran into some unusual snags.

          • HeelBearCub says:

            On the other hand, Dr. W didn’t seem to know how bad it would get, and he’s familiar with the system.

            Was he?

            The quote is:

            Dr. W, the hardest-working attending I knew, the one who out of some weird masochistic impulse took on every single project anyone asked of him and micromanaged it to perfection, the one who every psychiatrist in the whole hospital (including himself) had diagnosed with obsessive-compulsive personality disorder.

            It says he “took on every single project”, not that he had ever done a study before.

            I suspect that this hospital did not regularly, or at all, conduct studies with their mentally ill patients.

    • 6jfvkd8lu7cc says:

      No, it was not what he was doing. «Experiments» implies changing the actual handling of the patients; here he would be OK with doing everything exactly as before and then post-processing the data.

      • HeelBearCub says:

        What do you call a series of tests designed to allow confirmation of a hypothesis?

        • 6jfvkd8lu7cc says:

          If they do not include any change in operation — observation and analysis.

        • Winter Shaker says:

          I think we can reasonably be expected to distinguish ‘experiment’ in the ‘carry out manipulations on human subjects, collect the data, and then carry out manipulations on the data’ sense from ‘experiment’ in the ‘carry out manipulations on data which was already going to be collected anyway’ sense.

        • gbdub says:

          What do you call MLK Jr? A criminal?

          That Scott should have anticipated the hoops does not make them less stupid. The problem is a system that treats “mine existing data we’re collecting anyway for useful information” exactly the same as pumping people full of experimental chemicals, not Scott’s failure to predict the logic of it doing so.

        • HeelBearCub says:

          @6jfvkd8lu7cc, @Winter Shaker, @gbdub:
          Remember, we are talking about a Schelling Fence here.

          Yes, I understand that the emotional valence of the word “experiment” is different in lay terms than in the technical definition. But we aren’t actually talking about the emotional valence.

          I design an experiment to test a hypothesis about a population under the care of my hospital. The Schelling Fence is that a 3rd party (known as an IRB) needs to review that experimental design and that it should comply with both internal and external guidelines.

          The fact that the IRB and the guidelines did not make it easy for him to perform this particular experiment says very little about whether this Schelling Fence should be torn down.

          • Murphy says:

            Yes, a Schelling fence built primarily by people primed all their lives, by basically 90% of TV shows, movies, and books featuring scientists, to expect the terms “evil” and “scientist” to be paired.

            There are similar Schelling fences that basically make it exceptionally hard to even try finding cheaper treatments affordable for poor people in the 3rd world. Similarly well-intentioned, similarly constructed without sentient thought. They’re then maintained by amoral actors who own the rights to more expensive treatments and as such have a vested interest in keeping things the way they are.

            When a Schelling fence appears to be harming the people it’s supposed to protect, as in Scott’s example, that actually is a good argument for tearing it down and rethinking where the fence should be. Everyone here knows why the fence is in place but can still see that it’s in a stupid place. That’s exactly when a Schelling fence should be torn down.

        • MostlyCredibleHulk says:

          That sounds like reenacting the motte-and-bailey post (posts?).

    • MostlyCredibleHulk says:

      That makes it sound as if asking a person “did you feel happy and then sad” is some kind of Mengele-style “psychological experiment”. Attaching a vaguely nefarious label to something that you already admit cannot harm anybody (and then attaching the weasel words “small or”: really, what “small” harm can be done by asking a person if they ever felt happy and then sad that is not already done a thousandfold in any hospital?), and then judging it based only on the emotional charge of that vaguely nefarious description, is exactly how it becomes a Kafkaesque nightmare.

      • HeelBearCub says:

        No, it makes it sound like the hoops are there for a reason, and if Scott can’t successfully elucidate the reason (which he has not), then Chesterton says that the Schelling fence cannot be torn down.

        To some extent what we have here is the typical mind fallacy. “I would never abuse a system that allowed this kind of experiment without IRB review, therefore no one would.”

        Also, his experience at this one hospital is not generalizable. Mostly what we can deduce is that there was no profit to the hospital in conducting experiments, at least on this population, so they had not set up the rules to incentivize this.

        • MostlyCredibleHulk says:

          No, it makes it sound like the hoops are there for a reason, and if Scott can’t successfully elucidate the reason (which he has not), then Chesterton says that the Schelling fence cannot be torn down.

          Oh, everybody knows the reason. At least the original reason: not being a Nazi (not literally a Nazi, but behaving in that way), all that sort of thing. The problem is that the regulation, as is its way in the 99.9999% of cases where it is allowed to grow unrestricted, has long outgrown the initial reason and has metastasized into a self-sustaining system of rules that everybody agrees do not make sense but nobody can change, because the only alternative is being OK with being a Nazi!

          To some extent what we have here is the typical mind fallacy. “I would never abuse a system that allowed this kind of experiment without IRB review, therefore no one would.”

          No, the claim here is that the rules applied to this case (and, by extension, many similar cases) are clearly ridiculous: they do not make any sense, and they do not prevent any harm but only hinder useful things. It may be that in some other cases, like when the experimenters actually want to remove people’s organs and pump them full of a new experimental virus to see what happens, such rules would be appropriate. But since the system cannot tell the difference, that is exactly the problem. Putting people in jail may make sense in some cases, e.g. when they are violent criminals. But if the system is unable to distinguish between the case of a violent criminal and the case of a regular citizen, and puts everybody in jail just in case, that system is broken. One may even say such a system is evil, even though the initial goal was very noble.

          Also, his experience at this one hospital is not generalizable.

          Why not? What makes you think this hospital is radically different from any other?

          • HeelBearCub says:

            Why not? What makes you think this hospital is radically different from any other?

            I’ll quote myself from above:

            I suspect that this hospital did not regularly, or at all, conduct studies with their mentally ill patients.

          • MostlyCredibleHulk says:

            Why do you suspect that? Any specific data pointing to it?

          • HeelBearCub says:

            The fact that he did not seek out anyone to publish with who had already published papers on his ward, or even hospital, for one.

        • beleester says:

          “I would never abuse a system that allowed this kind of experiment without IRB review, therefore no one would.”

          Reminder that “this kind of experiment” is asking people questions that the hospital asks as a matter of daily operations, so “a system that allowed this kind of experiment” doesn’t seem like it would be more abusable than allowing patients to be in the hospital in the first place.

          Also, a lot of stuff Scott ran into was plain old inconsistency. It’s one thing to say “The fence should be here, and even if you don’t like it at least we all agree where it is.” It’s quite another thing to say “Well, one person says the fence should be here, and another person says the fence should be here, and a third person says it’s over there, and all of those are Sacrosanct Schelling Fences that can’t be moved without a good reason.” Clearly one of those fences can’t be right, because other people have made it a standard practice to break them and not gone tumbling down the slippery slope.

          For instance, the hospital doesn’t allow patients to use pens, but I bet they still had to sign paperwork sometimes. If signatures in pencil are good enough for the hospital, why are they not good enough for the IRB?

          Similarly, Scott is authorized to view the patient’s diagnosis as a doctor, but he’s not allowed to view that same diagnosis, which he wrote, in his role as study investigator. It seems pretty obvious that you could fix this contradiction by appending “…unless the investigator and the doctor are the same person” to the rules about not viewing PHI, and it seems obvious that doing so would not allow additional violations of patient privacy.

          • HeelBearCub says:

            @beleester:
            “this kind of experiment” isn’t doing any work in that sentence. Focusing on it is a red herring. “Abusing a system that allows” is the core of that sentence.

            Remember, Scott’s proposal is to somehow determine that a study doesn’t need IRB review. Someone will need to be trusted to make that decision. Who would have done that in this situation? Scott? The attending? The nice lady who doesn’t seem to know how the process works to begin with? Does that sound like a good idea in the general case?

            And we also have a number of posters who say that they are working with IRB processes that are fairly painless. Meaning that this is not a problem with IRBs in general, but with the IRB process at this one hospital (perhaps only for this one ward).

          • random832 says:

            Remember, Scott’s proposal is to somehow determine that a study doesn’t need IRB review. Someone will need to be trusted to make that decision. Who would have done that in this situation?

            The IRB should have determined it.

            Unpack your phrase “doesn’t need IRB review” to “doesn’t need [each specific rule that it ran afoul of] to be enforced” and this is not as contradictory as it sounds at first.

            The point isn’t that the IRB should have been completely separate from the study. The point is that for every “The IRB listened patiently to my explanation, then told me that this was not a legitimate reason” in the story, it was in fact a legitimate reason, the rules should have declared it to be a legitimate reason, and the IRB should have accepted it. The fact that there is no incentive structure for the IRB (or whatever rulemaking body whose rules they enforce) to make rules that provide for legitimate reasons for exceptions, and/or that they have no authority to grant individual exceptions based on their own judgement, is part of the problem.

    • moscanarius says:

      How would asking for all the bureaucratic steps before starting make said bureaucratic steps less nonsensical and less of a hindrance to research?

      As far as I understood, most of Scott’s complaint is about the stupidity of the regulations, not about their lack of transparency to outsiders. His annoyance at finding out there are always more regulatory steps is small compared to his annoyance at the irrationality of the rules.

      • HeelBearCub says:

        Have you ever done an actual project before?

        The statement “Failure to plan is a plan to fail” is an aphorism for a reason.

        • baconbacon says:

          So is “apologizing is easier than asking permission”.

          • HeelBearCub says:

            You are correct.

            Correctly identifying which rules are hard and which rules are soft, and the actual expected punishment for violating a rule, is a life skill. For some people, especially people on the autism spectrum, this is more difficult.

            This is especially hard in an area where one has no experience.

          • The Nybbler says:

            “You’re autistic if you can’t examine a system, figure out which rules are serious and which rules are widely ignored, and act accordingly” is a sick burn but not a true one.

          • HeelBearCub says:

            @The Nybbler:
            No sick burn was intended, nor do I think it is even present.

            If you read it that way, I think this says more about you than it does about me or my statement.

          • carvenvisage says:

            Correctly identifying which rules are hard and which rules are soft, and the actual expected punishment for violating a rule, is a life skill. For some people, especially people on the autism spectrum, this is more difficult.

            Which happens when your rules suck. Perhaps because you got someone very proud of not being ‘on the autism spectrum’ and their ‘soft skills’ to write them.

            No one denies that crappy rules give you an opportunity to exercise your social savvy; that’s just obviously a negative and incidental thing, not a conclusively positive one. If before embarking on a project you had to complete a high jump proportional to its scope, then people, especially short people with weak legs, would have more difficulty, and having no trouble with the process would show strength, and perhaps even virtue, but that in no way means the process is good. It’s the other way around: often a particularly bad process requires particular virtues or strengths to navigate. (And people become attached to it for that reason.)

          • HeelBearCub says:

            @carvenvisage:
            What speed do you go when you drive?

            Do you think this is because the rules about speed limits “suck”?

        • moscanarius says:

          Have you ever done an actual project before?

          Yes I have. Have you? If yes, do you agree 100% with every single rule you ever had to comply with during the project’s execution?

          Your snarky retort is not an answer to what I asked. How would knowing every tiny bit of the regulations in advance have made the regulations less nonsensical? Scott is not (and for that matter, neither are the commenters) complaining only about the seemingly unpredictable rules that jumped on him all the time; he is complaining that the rules themselves were bad. Had he had thorough guidance through the ruleswamp, we would still get a text complaining about the stupidity of many of the rules (but with many thanks to the bureaucrats that helped him navigate them).

          I don’t deny that better planning could have made the whole experience less painful and more fruitful, but it would not have changed the problems with the regulations themselves. I don’t think anyone here is against having ANY rules for research; we just think that some of the rules we have are an unnecessary burden: they are many, they are difficult to follow, they have no logical justification, and we see no benefit from their application. You may disagree on this, but can you at least realise that the issue is not just the unhelpfulness of the lady in the corner?

  44. jhertzlinger says:

    Was Golachab on the IRB?

  45. acbabis says:

    Well, because psychiatric patients aren’t allowed to have pens in case they stab themselves with them. I don’t get why stabbing yourself with a pencil is any less of a problem, but the rules are the rules. We asked the hospital administration for a one-time exemption, to let our patients have pens just long enough to sign the consent form. Hospital administration said absolutely not, and they didn’t care if this sabotaged our entire study, it was pencil or nothing.

    Couldn’t you have given them calligraphy brushes? Or quills?

    • JulieK says:

      Felt-tip markers? Crayons?

    • Loris says:

      I feel like Scott missed a trick here.
      The thing to do would have been to make the risks section all about “supervised use of a pen” and how this risk would be managed, then put “other risks: none” below it.

    • MostlyCredibleHulk says:

      If The List Of Approved Writing Devices says “pens”, then it’s pens. Or you can fill out the 25-page List Amendment Form, which must cite five peer-reviewed studies showing that the newly proposed device is safe to use in every circumstance imaginable, including Arctic pole stations and the immediate vicinity of active volcanoes.

  46. Eternaltraveler says:

    Most of the bureaucracy you experienced is institutional, not regulatory. I have done research both in an institutional setting (turnaround time at UC Berkeley = 5 months to obtain ethics approval, plus countless hours sucking up to self-important bureaucrats who think it’s their sacred duty to grind potentially life-saving research to a halt over trivia they themselves know is meaningless), and as an entrepreneur and PI at a biotech startup (turnaround time for an outsourced IRB = 5 days, with reasonable and informed questions related to participants’ well-being), where we also do quite a bit more than ask questions. FYI, the kind of research I did at UC Berkeley that took 5 months for approval has absolutely no regulatory requirements outside of the institutional setting.

    Your former institution should fire its IRB and hire a new one, and keep firing until they end up with an IRB that reads the regulations and doesn’t make up its own (so should every university). That won’t happen, obviously. Ethics boards are just one more example of how (almost) all large institutions drown themselves in useless bureaucrats for no good reason.

  47. dacimpielitat says:

    “The IRB listened patiently to my explanation, then told me that this was not a legitimate reason not to have […].”
    prolly not supposed to be funny, but by the time I read this line for the third time I was almost laughing in tears.

  48. John Nerst says:

    Two weeks ago there was that post about EA and I couldn’t help thinking about the description of Derek Parfit bursting into tears at the thought of suffering.

    I found it strange, as in, suffering is bad and terrible and we should work towards reducing it, but the existence of suffering is part of nature and not some great crime or injustice. It’s something to tackle in a calm, methodical way in order to achieve the best results. We’re making advances as it is, so at least things are moving in the right direction. There’s no good reason to panic or spend your life in a state of perpetual emotional emergency.

    The descriptions of EA people in general seem to indicate that many felt the same way: suffering (in the abstract) is terrible not just by judgement but viscerally, and eradicating it is the moral imperative. I find it hard to identify, because while I agree that suffering is bad (I mean, it’s suffering, it’s almost a synonym for “bad”) I don’t feel that particular burning passion (I’m suspicious of passion in general, as it prevents you from soberly examining your thought process and taking conflicting values seriously…) that makes someone go full Comet King.

    With that in mind, this article told me something about myself. While I struggle to see the existence of suffering as a catastrophic crime, the existence of the kind of suffocating bureaucracy and systemic stupidity that this gives example of does induce full-on existential horror in me. As I read, I got a lump in my throat, I felt my heart rate go up and my muscles tense in ways that reading about violence, injustice and death typically doesn’t.

    It doesn’t appear to be because of the harm it does, the very real costs and consequences of institutional stupidity. It’s the insanity itself that’s a moral crime, not its consequences. This is a perversion, a disease, something revolting and disgusting (words that I, in the past, thought people were being melodramatic and reflexively intolerant when they used in a moral sense). That’s what my limbic system seems to think, anyway, and I don’t appear to have much say in the matter.

    Apparently I feel this way about institutional stupidity, the same way I assume many EAs feel about suffering, feminists feel about patriarchy, vegans about meat eating, communists about capitalism, some conservatives about homosexuality, and anarchists about power itself.

    It kind of makes me feel like a bad person. Apparently I’m not as motivated by helping people as I should be, and what truly has the capability to drive me is horror and disgust at things that offend my sensibilities. What I don’t understand is what exactly determines what we feel this way towards and what we don’t. I certainly didn’t choose, and I remember feeling this way since early childhood; there doesn’t appear to be some important pivotal experience behind it.

    Am I not very good because I can’t steer my feeling of moral outrage towards worthy issues? Or are “good people” simply those who happen to be wired in such a way that injustice, suffering, etc. is what offends their sensibilities? There is something important about politics, ideology and morality lurking here.

    • Jack V says:

      In my limited experience, some people feel injustice passionately, viscerally, and some people don’t. Concentrate on acting good and don’t worry about it.

      If you work on a particular problem you will probably develop more feelings about it. But it seems to me, some EA really do feel a gut sense of everyone’s suffering, and others came to that conclusion from an intellectual standpoint. Everyone has some issues they’re incensed about, and some they turn up, donate, volunteer for X hours, and then feel they’ve done their bit and go home.

      It’s difficult, because there may be something of a bell curve. Most people need a certain amount of outrage to initiate action. But you don’t have that if you haven’t thought about it — and you don’t have that if you’ve spent too long fighting it and have burned out. Doctors see people die a lot and have to go on doing their job, even being compassionate, when most aren’t as involved in the 1000th patient as the first one, but it works more or less anyway.

      • John Nerst says:

        I can’t say I worry about it, particularly; it’s more that I’m interested in the mechanism (and a little annoyed at feeling moral outrage at things I can’t justify being angrier about than other things that don’t outrage me) and in what it says about disagreements. I don’t think these reactions are the result of intentional cultivation; they don’t appear to be for me, at least.

  49. Jack V says:

    Silly question, but I know what people mean colloquially by OCD, roughly “really obsessive about small details”. And I know a little about the actual medical condition (not exactly what was genetically engineered in Xenocide, but more like that than the colloquial version).

    And I know there’s been a trend of asking people to avoid the colloquial meaning as unhelpful to people who actually have the medical condition. I assumed psychiatrists would use the clinical definition, but your story sounds more like Dr. W exhibited the colloquial one. Is that just how psychiatrists talk, or is the distinction more complicated than that?

    (I realise there can be an overlap but that didn’t sound like what you were talking about.)

  50. The Big Red Scary says:

    The ideal, long-term solution to the problem described above is reform of the system.

    But there must be some short-term solutions which would be better than nothing. For example, a medical study underground.

    Surely conscientious doctors try to learn from their experiences, and very conscientious doctors probably try to do this in a systematic way. Are there regulations preventing doctors from privately analyzing data and anonymously sharing the conclusions (but not the data) on a forum for doctors? Members of the forum could be ranked as on Stack Exchange, and doctors reading it could decide for themselves whether it is legal or prudent to take the experience of posters into account. Perhaps such a forum already exists?

    Now suppose even such sharing of aggregated information is forbidden. Someone might do it anyway, reasoning that the benefit outweighs the cost. (Is it really so different from sharing experiences with colleagues face to face?) In this case, it might seem hard to collaborate on a study in the way described in the post, since each investigator could fear being outed by the other. But some trust could be built in by investigator X encrypting a contract to investigator Y using Y’s public key, signed with X’s private key, and vice versa. Then if investigator X tried to defect, investigator Y could prove that X was complicit, and vice versa.
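    (A minimal sketch of the signing half of that scheme, in Python with the third-party “cryptography” package; the contract text and key handling are purely illustrative, and the encrypt-to-the-counterparty step would be layered on top.)

      # Each investigator signs the same contract; either party can later prove
      # the other was complicit by producing the contract plus both signatures.
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      contract = b"X and Y jointly agree to run and report this study."

      key_x = Ed25519PrivateKey.generate()  # X's private key, kept secret by X
      key_y = Ed25519PrivateKey.generate()  # Y's private key, kept secret by Y
      sig_x = key_x.sign(contract)
      sig_y = key_y.sign(contract)

      # Anyone holding the public keys can check that both parties signed:
      for public_key, sig in [(key_x.public_key(), sig_x), (key_y.public_key(), sig_y)]:
          try:
              public_key.verify(sig, contract)  # raises InvalidSignature if forged
          except InvalidSignature:
              raise SystemExit("signature check failed")
      print("both signatures verify: mutual commitment established")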

    • The Big Red Scary says:

      The content of my comment above, it turns out, had already been discussed upthread, under a comment of doubleunplussed’s.

  51. Sniffnoy says:

    Taking my comment in reply to HeelBearCub above and considering what we can actually conclude from it–

    I think one thing that would really help here is a general rule that if it’s ethical to do something (where “something” might include “not having people sign consent forms”) for non-research purposes, then in those same circumstances it’s ethical to do that same thing for research purposes. Since in this case people were already giving this questionnaire without consent forms for screening or diagnostic purposes, and presumably that’s considered OK (nobody’s stopping them), this rule would apply to this case.

    …OK, the above is not quite true. There’s one problem with it: Publishing means, well, publishing, and there’s no guarantee in general that what’s published won’t reveal something confidential about the participants (some things can easily be anonymized, but not everything; and just because something easily can be anonymized, doesn’t mean that was actually done). Given this, it would make sense to get people’s consent even in cases like the above, not because of what might be involved in the study, but because of the risk that something revealing about them might be published inadvertently.

    …but seeing as the IRB looks over the protocol, it would make sense for the IRB to judge whether things are being anonymized sufficiently, and if so, say that such consent forms are not needed in such a case. Which would allow a case like this (pretty damn anonymous) to go without consent forms at all.

  52. Christian Kleineidam says:

    If one good thing comes out of the Trump administration it might be deregulation. The FDA has already made a few very good moves, from deregulating e-cigarettes, to granting the MAPS MDMA trial Breakthrough Therapy designation, to allowing 23andMe to provide genetic risk information for certain conditions.

    It might be worthwhile to think more clearly about what kind of policy tools could be used by the present administration to reduce regulation in this case.

  53. Oleg S. says:

    Removing such pointless bureaucratic hurdles looks like a good cause for a non-profit. Does anyone know of an organization or fund within the EA community devoted to this cause? Or any fund for improving decision-making in government in general?

    • John Greer says:

      Seconding this. If there isn’t one, it seems like a good idea for, say, 80000 Hours or the Open Philanthropy Project to post an outline of the area so someone can consider working on it.

  54. 6jfvkd8lu7cc says:

    So, because of massive institutionally-sanctioned abuse of a screening test as if it were a final diagnostic test, many hospitals prescribe powerful drugs with unpleasant side effects to patients who don’t need these drugs in the first place?

    Where are the negligence lawsuits, and the ambulance-chasing lawyers to bring them, when they could actually improve the procedures…

  55. Michael Arc says:

    So we keep using crazy Nazi racial theories created by people like Asperger, but now we whitewash them by pretending the traits described aren’t simply the traits of the groups they wanted to oppress?

  56. Murphy says:

    Science is considered evil by default.

    I’ve seen the same phenomenon you did where XYZ is perfectly fine and allowed if you are doing it routinely, fine as long as you’re only doing it as part of a drunken bet, fine as long as you’re doing it because you feel like it. But if it’s recorded to gather knowledge then it’s evil by default.

    If a doctor or group of doctors honestly doesn’t know whether drug A or drug B is more effective for treating something, and the research literature doesn’t provide an answer, they are entirely free to make the choice based on whim, astrology, or even a roll of a die each time they have to choose between the two. They are entirely free to use whatever whim they feel like, with zero repercussions.

    But if they attempt to record the outcomes to work out which treatment is actually better, then suddenly they’re evil monsters doing unethical research. It doesn’t even matter if they never publish; they’re generating new knowledge, and as such they’re monsters. It doesn’t matter if it changes literally nothing about how their patients are treated: once it’s science, it’s evil by default.

    If it’s done with the intent of learning something, then it is considered, by default, evil, and you’re probably going to kill us all with giant mutant ants, or your questionnaire forms are going to grow legs and slit our throats with papercuts.

    The “official” reasons are about things like Nazi experiments and syphilis, but that kind of thing typically doesn’t so totally warp law, regulation and public perception for so long.

    Personally, I blame how science is almost always treated in fiction. It warps how the public/voters see science in general.

    Scientists are always evil monsters intent on doing evil for the sake of “science!”, and there’s basically never any mention of the costs of not learning things; ignorance is glorified and serves as an impregnable moral shield.

    http://tvtropes.org/pmwiki/pmwiki.php/Main/ScienceIsBad

    Most writers are not scientists. Whether it is because they perceive science as cold and emotionless, or because they just disliked science and embraced literature after failing math in high school, luddism is an awfully common philosophy in the arts community.

    The typical theme is that some sort of advanced scientific research has Gone Horribly Wrong, creating a monster, causing an impending natural disaster and/or a massive government cover-up. The heroes typically discover the side-effects of the research and investigate, discover what’s going on, and try to stop it.

    • Nancy Lebovitz says:

      I think you’re on to something there, but it’s a broader issue.

      There’s a major cultural trait of blaming deliberate action if it goes wrong, and I don’t think it’s just about science.

    • Nabil ad Dajjal says:

      I think the objection comes from the lay perception of scientific research as frivolous. From that perspective, scientific knowledge is static, so it’s hard to weigh the value of any given experiment against risk to humans.

      I’m sure this bias has a real name but I think about it in terms of derivatives.

      There’s the Encyclopedic view of science, f(science), where we have a big warehouse of scientific knowledge and a scientist’s job is to pull out the right fact for the right situation. That’s what you usually see scientists do on TV. Most laymen seem to have this view.

      Then there’s the Translational view of science, f'(science), where scientific knowledge increases at a constant rate and a scientist’s job is to make sure that the next discovery is something profitable and not random trivia. That’s what scientists in obscure fields are often mocked for, that they’re researching amphibian wound healing instead of the next cancer cure. Most businessmen seem to have this view.

      Finally there’s the Consilience view of science, f”(science), where scientific knowledge on any two subjects ultimately feeds back into one another and the scientist’s job is to accelerate that process. That’s how good scientists view their own work.

      If you’re operating on the first derivative then you can justify some human research, mostly clinical trials, but not basic research. If you’re not even on the first derivative then even that seems suspect: you have the information, just use it to help people already!
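      (One way to cash out that metaphor, writing K(t) for the stock of scientific knowledge at time t – this formalization is my own gloss, not the commenter’s:

        Encyclopedic:   K(t) ~ constant      (a fixed stock you retrieve facts from)
        Translational:  dK/dt ~ c            (steady accrual; the job is steering its direction)
        Consilience:    d^2K/dt^2 > 0        (fields feed back into each other, so growth compounds))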

    • Squirrel of Doom says:

      > personally I blame how science is almost always treated in fiction

      My hunch is there is an underlying cause to both the fictional treatment and the public suspicion of scientists. I think it’s much easier for fiction to follow than to create such trends.

      I wonder how much Hiroshima added to worldwide science-phobia?

      • HeelBearCub says:

        Maybe some. But fear of the power of knowledge is pretty old, memetically. I really don’t think Mary Shelley invented it, but Frankenstein (subtitled “The Modern Prometheus”) stands as an early example.

      • bbartlog says:

        Maybe. But I think the fictional ‘mad scientist’ archetype, along with the less dramatic negative portrayals that are still in the same vein, is actually descended in unbroken line from fictional characters that predate anything resembling modern science. I mean, you could say that fiction is just following the trend of human hubris, but I think we want a word other than ‘trend’ at that point.

    • MostlyCredibleHulk says:

      I don’t think it’s considered more evil by default than opening a grocery store or building a house is considered evil by default. You open a store, you have to get permits, comply with the ADA, Prop 65, NLRB rules, EEOC rules, and probably a hundred other regulations. If you want to build a house, you again have to get permits, comply with zoning laws, account for environmental effects (God have mercy on your soul if your prospected site is the habitat of some rare species of mouse), submit an architectural plan, etc. etc. I don’t think it’s because our culture views grocery stores and houses as inherently evil. I think it’s because our culture views regulations as essentially zero-cost, or very low cost compared to the dangers of “something going wrong”, which is perceived as totally unacceptable and having to be prevented no matter the cost.

    • Jiro says:

      I’ve seen the same phenomenon you did where XYZ is perfectly fine and allowed if you are doing it routinely, fine as long as you’re only doing it as part of a drunken bet, fine as long as you’re doing it because you feel like it. But if it’s recorded to gather knowledge then it’s evil by default.

      It’s much harder to allow when the purpose is gathering knowledge, because making it easy to do for knowledge-gathering creates more bad incentives than making it easy to do as part of a drunken bet. Just because in the current situation no harm is being done by making it easy doesn’t change this.

      That’s one of the seeming paradoxes of incentives: something can be positive when applied to the balance that exists now, yet if it had existed all along, you probably would have had a different balance in the first place, and a worse one.

      • Murphy says:

        That sounds like a fully general counterargument.

        We can’t allow the peasants to travel without their lords permission, sure it seems like a good idea right now but that would change the incentives and we can’t predict the results. We can’t allow women to vote, sure it seems like a good idea right now but that would change the incentives and we can’t predict the results. We can’t do away with slavery, sure it seems like a good idea right now but that would change the incentives and we can’t predict the results.

        You can throw it at everything and it sticks just as well.

  57. JulieK says:

    Wait wait, this actually happened? The email version of this post that I read lacked the “epistemic status” header, and at first I thought it was another one of your first-person fiction pieces.

  58. fortaleza84 says:

    I doubt that Scott’s experiences are universal, i.e. I think that if he had received a nice juicy grant to do this research, the IRB would have been a lot more accommodating and a lot less hostile.

    I am an attorney, and this reminds me of an experience I had as a junior associate at a big law firm. I wanted to do some pro bono work to get some hands-on experience; the firm’s conflicts committee was very suspicious and hostile about approving me to take on this work. Which was pretty annoying at the time, but in hindsight I can understand it: there is little upside for the firm, since there is no money involved and it was not big splashy pro bono work like representing prisoners at Guantanamo Bay. On the other hand, whenever a law firm attorney walks into court, there is always some chance it will blow up in a way that embarrasses the firm.

    So too here: from a cost/benefit point of view, doing this research was not in the hospital’s interest – and thus, I think, the hostility.

    Edit: In fact, I would not be surprised at all if there were some “moral offset” going on here, i.e. the IRB regularly handles revenue-generating research for which it is overly lax, and being overly strict with Scott’s research helped the IRB members feel better about themselves.

  59. bean says:

    Did you ever consider invoking Godwin’s Law on the IRB?

  60. baconbacon says:

    Two days ago you included this statement in your post:

    So the state replaces this moral rule with the legal rule “don’t have sex with anyone below age 18”. Everyone knows this rule doesn’t perfectly capture reality – there’s no significant difference between 17.99-year-olds and 18.01-year-olds. It’s a useful hack that waters down the moral rule in order to make it more implementable.

    And then you write this piece.

    Rigid laws aren’t a hack that smooths implementation, they are a shield against accountability. If a person dies in police custody, the first thing that happens is a check of whether all of the procedures were followed. If they weren’t, the person who broke one gets reprimanded in some way; if they were, the rules get reprimanded instead, and a commission is formed to re-evaluate and rewrite them. The goal is not zero deaths in police custody; it is to deflect any blame for deaths away from the system.

    Bureaucracy is the idea that rules are what matters. It doesn’t matter who wrote them or whether they were written well; all that matters is that they exist and must therefore be enforced. For those trying to figure out how this situation occurs, it is simply selection. Two bureaucracies are set up to approve studies. One tries to follow the spirit of the rules, prevent harm, etc., and grants waivers when it can to grease the wheels. The other sticks to every rule and only grants waivers when the rules specifically say it should.

    A few years pass and there is a hearing to compare the effectiveness of the two approaches. The first group claims a great success: they aren’t backlogged and many more studies have been approved. The second group is hopelessly backlogged with all the additional paperwork they demand and has approved a tenth as many studies. After the first day of hearings, everyone in the first group is sure they will receive funding.

    On day two, the sticklers demand an audit of all studies passed by each group. It is found that the first group has approved studies with literally hundreds of (mostly meaningless) violations on average, that some of its waivers are questionable, and that every single major negative news story about abuse or incorrect bookkeeping or whatever came from a study it approved. The sticklers yell, “Aha! All the success of the corner-cutters is a mirage! We could approve as many studies if we only had the funding! Give us their funding and we will pass many more studies AND we won’t have any of the abuses that leaked out” – and lo, the sticklers were granted funding.

    • Jiro says:

      Rigid laws aren’t a hack that smooths implementation, they are a shield against accountability.

      Rigid laws aren’t a hack that smooths implementation, they are a shield against selective and arbitrary enforcement.

  61. spandrel says:

    I submit protocols to IRBs at several institutions on a regular basis, and while these boards can be frustrating at times, at research institutions at least they are willing to engage with what you are trying to do if you have shown due diligence in engaging with their concerns. That means writing the protocol to address explicitly the concerns the IRB might raise. In the case of Scott’s proposed study, for example, there is language and precedent that can be used to frame the study question so that an IRB will nod and say: okay, you are doing a validation study and have made a convincing case that there is minimal risk, so minimal or no consent is needed. Scott doesn’t say exactly what was in his protocol, but submitting one that suggests you have not considered the risks is generally a sure way to run afoul of the IRB; I don’t think this is a bad default position on their part.

    • baconbacon says:

      It is a terrible default position because, as Scott notes, it basically drives the little guys out. “Don’t worry, we will hold your hand through two years of paperwork and explain what we want from you” doesn’t bother an organization that is pumping out numerous studies and has staff who can learn the rules and apply that knowledge to future projects. For a doctor who wants to start a line of investigation, and who will only find the time to squeeze in a study every couple of years, it is a non-starter.

      • spandrel says:

        I’m saying if the little guy (or the big guy) sends the IRB a protocol which lays out the potential risks and what they will be doing to mitigate them, the IRB will be much more open to endorsing the protocol. If the little guy or the big guy submits a protocol which suggests they haven’t even thought about the risks, it raises red flags. This is just my experience, so it may not be generalizable, but as far as biases go, I don’t fault the IRB for being skeptical of someone who wants to do a trial and hasn’t bothered to write a few sentences indicating that they thought through what could go wrong and how likely/unlikely it is. Simply googling the definition of IRB indicates that the primary purpose is to protect human subjects; it doesn’t take a roomful of technocrats to realize that the most important part of the protocol to them is the section on risk to human subjects.

  62. Friendlygrantadmin says:

    I’m not an expert in IRB (although that’s kind of my point–getting to that), but I think your headaches were largely institutional rather than dictated by government fiat. Let me explain …

    I used to be the grant administrator for a regional university while my husband was a postdoc at the large research university <20 miles away. Aside from fiscal stuff, I was the grants office, and the grants office was me. However, there was a longstanding IRB, so I never had to do much other than connect faculty whose work might involve human subjects with the IRB Chair. I think I was technically a non-voting member or something, but no one expected me to attend meetings.

    This was in the process of changing when I left the university because my husband's postdoc ended and we moved. It was a subject that generated much bitterness among the small cadre of faculty involved. Because I was on my way out, I never made it my business to worry about nascent IRB woes. My understanding was that they had difficulty getting people to serve on the IRB because it was an unpaid position, but that as the university expanded, they were going to need more and different types of expertise represented on the IRB. I can't be more specific than that without basically naming the university, at which I was very happy and with which I have no quarrel. I never heard any horror stories about our IRB, and I would have been the first point person to hear them, so I presume it was fairly easy to work with.

    Anyway, the IRB auditing stuff you outline is just insane. The institutional regulations pertaining to the audits were probably what generated the mind-numbing and arcane complexity of your institution’s IRB. Add in finicky personalities and you have a recipe for endless hassle as described.

    So here's the other thing to bear in mind: almost everyone in research administration is self-trained. I think there are a few programs (probably mostly online), but it's the sort of field that people stumble into from related fields. You learn on the job and via newsletters, conferences, and listservs. You also listen to your share of mind-numbing government webinars. But almost everyone–usually including the federal program officers, who are usually experts in their field but who aren't necessarily experts in their own particular bureaucracy–is just winging it.

    Most research admins are willing to admit the "winging it" factor among themselves. For obvious reasons, however, you want the faculty and/or researchers with whom you interact to respect your professional judgment. This was never a problem at my institution, which is probably one reason I still have a high opinion of it and its administration, but I heard plenty (PLENTY) of stories of bigshot faculty pulling rank to have the rules and regulations bent or broken in their favor because GRANT MONEY, usually with success. So of course you're not going to confess that you don't really have a clue what you're doing; you're just puzzling over these regulations like so many tea leaves and trying to make a reasonable judgment based on your status as a reasonably well-educated and fair-minded human being.

    What this means in practice is almost zero uniformity in the field. Your IRB-from-hell story wasn't even remotely shocking to me. Other commenters' IRB-from-just-fine-ville stories are also far from shocking. Since so few people really understand what the regulations mean or how to interpret them, let alone how to protect against government bogeymen yelling at you for failing to follow them, there is a wild profusion of institutional approaches to research administration, and this includes huge variations in concern for the more fine-grained regulatory details. It is really hard to find someone to lead a grants or research administration office who has expertise in all the varied fields of compliance now required. It's hard to find someone with expertise in any one of those fields, to be honest.

    There is one area in which this is not so much true, and that is financial regulations. Why? Well, for one thing, they're not all that tricky–I could read and interpret them with far greater confidence than many other regs, despite having a humanities background. The other reason is that despite their comparative transparency, they were very, very widely flouted until the government started auditing large research institutions around 15-ish years ago.

    I have a short story related to that, too–basically, when my husband started grad school, we would frequently go out to dinner with his lab group and advisor. The whole tab, including my dinner and that of any other SOs and all alcoholic beverages (which can't be paid for with grant funds aside from narrow research-related exceptions), would be charged to whichever research grant because it was a working meal. I found it mildly surprising, but I certainly wasn't going to argue.

    Then the university got audited and fined millions of dollars for violations such as these and Found Religion vis-à-vis grant expenditures.

    With regards to your story, I'm guessing that part of the reason the IRB is such a big deal is that human subjects research is the main type of research there, so they are really, really worried about their exposure to any IRB lapses. However, it sounds like they are fairly provincial, in that they aren't connected to what more major research institutions are doing or how they handle these issues, which is always a mistake. Even if you don't think some other institution's approach is going to work for you, it's good to know about as many different approaches as you can, if only to know whether you're some insane outlier, as your IRB seems to be. As others have noted, it also sounds like that IRB has become the fiefdom of some fairly difficult personalities.

    I already know how extensive, thorough, and helpful training pertaining to IRB regs is, which is not very. I remain deeply curious about the qualifications and training of your obviously well-intentioned "auditor." My guess is she inherited her procedures from someone else and is carefully following whatever checklist was laid down so as not to expose herself to accusations of sloppiness or lack of thoroughness … but that is only a guess.

    Even though I hate hearing stories like yours–there is obviously no excuse for essentially trying to thwart any and all human subjects research the way your IRB did–I am sympathetic to the need for some regulations, and not just because of the Nazis and the Tuskegee syphilis experiment. I'm sympathetic because lack of oversight basically gives big-name researchers carte blanche to ignore regulations they find inconvenient, because the institutional preference, barring opposing headwinds, will always be to keep researchers happy.

    • Nancy Lebovitz says:

      Thank you for laying this out.

      I believe that even when people try to standardize, there’s still a lot of local variation. (Weights and measures seem to be an exception.)

      • Friendlygrantadmin says:

        You have no idea re: local variation. The flipside is that when you find your institution needs some new policy, it’s a good idea to google other institutions’ policies. I had to “help” (accomplish) the establishment of a committee related to a type of research in biology. Again, my background: 100% humanities.

        I basically did lots of googling and downloading the rules for other universities’ committees until I found one that I thought did a good job complying with the relevant regulations without being unnecessarily arcane. I called the head of that university’s grants office and asked if she’d mind if I blatantly plagiarized that document, which of course she did not because it’s a flattering request. The NIH approved the committee and its regs on the first try, so I patted myself on the back for my diligent googling. It would have taken me a very, very long time to create the document from scratch because I don’t understand the research in question and have only the vaguest sense of the safety concerns that made the government require said committee to oversee it–and I’m sure nothing I drafted would have been as good as the one I copied. Because of my humanities background, it was important to me to secure permission before copying, but I suspect most people in the field don’t bother–none of this stuff is really creative work in the sense that it is or should be protected by copyright law.

        tl;dr, lots of local variation but also lots of stealing from colleagues!

  63. John Greer says:

    For anyone interested, Ikiru (1952) is a classic film depicting the frustration and absurdity of bureaucracy. Directed by Akira Kurosawa of Seven Samurai fame.

  64. Hyzenthlay says:

    I sometimes worry that people misunderstand the case against bureaucracy. People imagine it’s Big Business complaining about the regulations preventing them from steamrolling over everyone else. That hasn’t been my experience. Big Business – heck, Big Anything – loves bureaucracy. They can hire a team of clerks and secretaries and middle managers to fill out all the necessary forms, and the rest of the company can be on their merry way. It’s everyone else who suffers.

    Just wanted to echo this because I feel like it’s an important point that doesn’t get made often enough.

  65. One more thing we can blame Hitler for: medical bureaucracy. Thanks Hitler!

    Big Business – heck, Big Anything – loves bureaucracy.

    This is the key insight that took me from a right-libertarian perspective to a (non-Catholic) distributist-leaning one in favor of subsidiarity as a principle. Economies of scale may make big organizations more efficient in terms of cost, but once they are at that size, they tend to travel up the other side of the U curve as they accrue more bureaucracy and enter the zone of diseconomies of scale. I think pragmatically organizations should be as small as they can be while reaching the unit-cost minimum, and that over time we should leverage technology specifically to weaken economies of scale, so that the minimum unit cost occurs at a smaller scale.
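    (A textbook way to picture that U curve, with all symbols hypothetical: take average unit cost AC(q) = F/q + m + b*q, where F is fixed overhead spread over output q, m is marginal production cost, and b*q is coordination/bureaucracy drag that grows with size. Setting dAC/dq = -F/q^2 + b = 0 gives the cost-minimizing scale q* = sqrt(F/b), so technology that shrinks the overhead term F pulls q* down – exactly the lever described above.)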

    In the meantime, some monopolies have to be accepted and regulated according to natural monopoly theory, but it certainly looks like we could wield anti-monopoly law more than we already do, especially in cases where markets that functioned just a few years ago with more sellers have now been consolidated by buyouts. This is the case with banks to a great degree. A lot of the consolidation over the last 10 years in banking and retail is reversible. Scale and consolidation are tempting for both the businesses involved and for consumers. It really might cost less if Amazon bought out all other retailers, but only in the short run; you can rest assured that size corrupts and that a single price-maker has no market discipline, and can fleece people with no alternatives. This tendency is what draws organizations up the other side of the U-shaped average-unit-cost-versus-scale graph. More important than a free market in an abstract sense is a competitive market guarded against the twin evils of over-nationalization and rampant private monopoly.

    • Bugmaster says:

      Agreed; this is the primary reason why I don’t subscribe to libertarianism/anarcho-capitalism. Without regulation, there’s nothing to prevent monopolies from forming and summarily destroying all competition, leading to a much more stagnant market than the one we have now.

    • @Bugmaster

      Generally speaking, I’m arguing for as low regulation in terms of what firms have to follow as possible, but very strong anti-monopoly law. I think libertarians are right about regulation when we’re talking about micro-management. Free markets are good, but they don’t stay free (occasionally the free market must be refreshed with the blood of monopolies). Government shouldn’t be used to control economic activity directly, generally speaking, but should be used as a tool to trim the weeds and ensure that markets have many sellers, so that the price mechanism can function well.

      I don’t think leftists have it right either (but the important thing is that I’ve found a way to feel superior to both!)

      • John Schilling says:

        But you seem to be making that argument out of your distaste for the bureaucracy and inefficiency that inevitably comes from monopolistic scale. While sharing your distaste on that front, why do we need the anti-monopoly law (the enforcement of which will itself be the worst sort of monopoly), when we can just watch and point and laugh as the monopolies accrue bureaucracy and become hopelessly uncompetitive to the point where new entrants can steal all their markets?

        • [Thing] says:

          Economies of scale aren’t the only reason firms attain monopoly power, so diseconomies of scale aren’t necessarily going to cost them monopoly status within any particular timeframe. Firms can fend off that fate by means of anti-competitive practices like exclusive contracts, predatory pricing, cartels, etc. And then there are natural monopolies due to network effects or whatever.

  66. ProntoTheArcherist says:

    I remember reading something from Venkat Rao over at ribbonfarm about how bureaucracies exist to dissipate risk away from any one individual actor in an organization. As organizations grow and carry more risk, individuals within them would be paralyzed if all of that risk fell on their shoulders every time they had to make a decision, and therefore the organization would stall at some point/size. So we have forms.

    This IRB story is viscerally frustrating just to read, so I assume going through it would take years off my life. Maybe adding too many lawyers creates a runaway bureaucracy by introducing a proxy value for risk (lawsuit damages) that is so far in excess of, and so divorced from, participant/patient outcomes that it eventually just breaks the system.

  67. thedirtyscreech says:

    I understand this post isn’t really about your struggles on the one study, nor how to get around it, but…

    Since it’s just a screening test and not a diagnosis, is there anything stopping you from screening all incoming patients (whether you suspect bipolar or not) prior to when you perform a more in-depth diagnosis so that it’s all in the normal chart? Then you could later attempt to get consent to release the survey and diagnosis to a study (anonymized, of course). That route would seem to bypass almost the entire reach of the IRB, storage requirements, pencil vs. pen, pre-announcing to the patient what the screening test is for, etc.

    • TheViper says:

      This would have been the practical approach. (Source: former IRB officer.)

      An experienced researcher, or research coordinator, would have suggested this: administer the screening tool, and perform the in-depth diagnosis, as part of clinical management. Then, when you have enough records, apply to IRB for a retrospective chart review, which waives consent and HIPAA authorization.

  68. Buckyballas says:

    Does anyone else think this was weirdly one-sided for Scott? I mean, the experience sounds horrible and frustrating, but going full Nazi straw man seems uncharitable. This is the guy who taught me all about seeing the other side, Chesterton’s fence, and all that. Or am I off base?

  69. [Thing] says:

    Following a chain of links from the OP led me to this tweet,

    As an IRB member, I’m concerned about leaving decisions to researchers. That’s what led to creating IRBs.

    which I initially took for an inspired witticism riffing on the same theme as Scott’s running gag about Nazis. On closer inspection, it appears she meant it seriously, but I prefer my interpretation. 🙂

  70. alchemy29 says:

    While I empathize with your struggle immensely, I disagree with your conclusion. I think bureaucracy levels the playing field by making sure that everyone plays by the same rules.

    Have you thought about what the world would be like without bureaucracy? You could certainly have done your study in an afternoon and then written it up, but instead of your IRB nitpicking it to death, that role would fall into the hands of powerful players who don’t like your conclusions. And unlike IRB scientists, they wouldn’t tell you how to fix it; they would hire eminent scientists and statisticians to convince people that you’re an idiot who doesn’t know how to do science, and slaughter your reputation*. This happens already to some extent (it even happens to tenured professors), but having third-party adjudicators plays a valuable role in mediating scientific disputes and certifying that research meets some minimum standard. It’s possible that there is too much bureaucracy, or the wrong kind, but having few standards would not help small-time researchers.

    *Probably not in this case, but maybe if the research was high impact and potentially disruptive to some party.

    • The Nybbler says:

      I think bureaucracy levels the playing field by making sure that everyone plays by the same rules.

      Rarely. Bureaucrats play favorites. Sometimes for a sack of cash left on the table, sometimes just to people they are personally friendly with, and everything in between.

      Have you thought about what the world would be like without bureaucracy?

      https://www.youtube.com/watch?v=m2VxpTMAbas

      • alchemy29 says:

        It was never going to be completely level. I feel like you didn’t address anything I said.

        Edit: I don’t know if you’re serious, but a world without lawyers is one in which guilt is determined by public opinion and/or cash. Would you really want to live in that world?

        • keranih says:

          I don’t understand – do you mean a world different from now, when we pay lawyers tons of cash to influence legal and popular opinion to determine things?

          • alchemy29 says:

              I suspected this response. Bureaucracy is the result of having rules and laws (organizational and at the national level) that are actually enforced. What we call bureaucracy is mostly the result of documenting that all of the appropriate laws and rules are followed. Rules, when we don’t understand them, are a pain in the ass – but the alternative to having rules is not getting your way all the time. Instead, the alternative is that only those with influence or money get what they want.

              For example – in a country with food safety laws, you have to follow all of the rules otherwise the health inspector will shut you down. In a country where those laws aren’t enforced – you merely have to pay the health inspector an appropriate bribe. No annoying checklists, no employee food safety training, you don’t have to get rid of the cockroaches in the kitchen. Bureaucracy is blissfully absent.

              To give a personal example – in a country with driving laws, you actually have to learn those laws and then pass a driving test. In my home country, there is no such annoyance – you merely have to pay the examiner an appropriate bribe and bam, you’ve got your driving license. If you didn’t know that, and thought the driving test was legitimate, well tough luck. This generalizes to many other aspects of life.

            Now in a country with laws and rules, do they get bent in favor of those with power? Of course – but the less corrupt a country or organization, the less this happens. In the US you can sue a large corporation if you have a legitimate grievance against them and win. This is impossible in most (all?) developing countries. Celebrities have actually gone to jail in the US. That is unheard of where I am from.

              So to answer your question – assuming you live in a developed country – yes, a world extremely different from the one you are used to.

          • ghi says:

            The problem is that “rule of law” has an even worse failure mode, sometimes called “anarcho-tyranny”, where people still meticulously follow the rules whether or not the rules serve their function.

            For example – in a country with food safety laws, you have to follow all of the rules otherwise the health inspector will shut you down. In a country where those laws aren’t enforced – you merely have to pay the health inspector an appropriate bribe. No annoying checklists, no employee food safety training, you don’t have to get rid of the cockroaches in the kitchen.

            In an anarcho-tyranny, the health inspectors will shut you down for a minor error on page 17 of form 12B, but will permit the restaurant across the street to operate even though there are cockroaches in its kitchen, because it has its paperwork in order.

    • John Schilling says:

      And unlike IRB scientists, they wouldn’t tell you how to fix it; they would hire eminent scientists and statisticians to convince people that you’re an idiot who doesn’t know how to do science, and slaughter your reputation*.

      How does the existence of bureaucracy prevent or even mitigate this? I’m not seeing it.

      • alchemy29 says:

        In that respect journals are more important than IRBs*, and I was sloppy in distinguishing them. Jumping through the hoops, as it were, and gaining third-party approval gives you legitimacy. It also gives you a way of revising your work so that it is more resistant to criticism. If you get a Nature or NEJM publication, it lends credence to your work so that it cannot be as easily dismissed. If journals didn’t exist, big players would still circulate their papers and simply ignore people like Scott. And should his work gain too much attention, it would be trivial to discredit it.

        *I object more to Scott’s putting down of journals than IRBs.

        • ghi says:

          What do you mean by “legitimacy” and “discredit”, and how do those terms relate to things like “truth” and “Bayesian evidence”?

          • alchemy29 says:

            That’s a bizarre question – legitimacy is a commonly used word. If you publish a paper in NEJM then people are more likely to take it seriously than if you didn’t. Clear enough?

          • ghi says:

            That’s a bizarre question

            But an important one, which you would do well to actually think about. Specifically how does “legitimacy” relate to truth and Bayesian evidence?

            legitimacy is a commonly used word. If you publish a paper in NEJM then people are more likely to take it seriously than if you didn’t.

            Why is that the case?

            What you call “legitimacy” is associated with the NEJM because historically there was a correlation with truth. If that correlation starts disappearing (as appears to be happening), the NEJM will and should lose its “legitimacy”. Further, the growing popularity of alternative medicine suggests that this is in fact happening.

  71. onyomi says:

    This may sound far afield, but I wonder if “loser-pays” would help at all with this. It seems to me a lot of the ass-covering is a result of fear of litigation. People would fear frivolous litigation less if people were penalized more for engaging in it.

    On the other hand, I think most European countries have loser-pays and are not known for having more efficient bureaucracies than us? But are they at least less litigious?

    • Linvega says:

      Given that America is basically known as “the country of lawyers”, where you can get sued because someone is stupid enough to spill burning hot coffee onto his own lap, I’d say yes, definitely.

      Also, if I remember Scott’s article and the responses to it correctly, most northern European countries don’t struggle as much with cost disease as America does, especially in health care and education. So in a sense they’re more efficient.

      Japan also, AFAIK, has far fewer lawsuits and in general doesn’t seem to struggle as much with cost disease either.

      I think it’s very likely that the justice system is related to the cost disease, though possibly more in health care than in education. We live in a highly litigious society, and it drives the cost of everything up because it forces everyone to prepare for every possibility, no matter how ridiculous.

      • ManyCookies says:

        you can get sued because someone is stupid enough to spill burning hot coffee onto his own lap

        That particular case wasn’t obviously frivolous. McDonalds was serving coffee substantially hotter than necessary, the plaintiff suffered serious burns because of this increased temperature, and McDonalds had received similar burn reports before and took no action.

        • rlms says:

          Additionally, people talking about this case often reference the millions of dollars the jury wanted to award in damages, but the parties eventually settled for a few hundred thousand. McDonald’s refused a pre-trial offer of $90,000 (their counter-offer was $800); Liebeck’s medical expenses were around $13,000.

        • shenanigans24 says:

            I’ve seen this retort so many times that I would say the dominant narrative is now that the suit wasn’t frivolous. Yet I see little evidence for this revision.

            The coffee was brewed at the recommended temperature for brewing coffee, and was not hotter than coffee anywhere else. McDonalds coffee is still hot enough to burn. There is absolutely no standard coffee temperature that is mandated safe. It comes off the pot hot, then cools depending on how long it has been off. Nothing about McDonalds coffee was uniquely different from anyone else’s.

            If the coffee simply contains the same hazard coffee has always had, then McDonalds is hardly negligent. That would be like saying a brick maker is responsible for someone dropping a brick on their foot because “well, their bricks are heavy.”

          The lawsuit was frivolous.

          • The Nybbler says:

            Yet I see little evidence for this revision.

            Just good propaganda from the Association of Trial Lawyers of America.

          • Glen Raphael says:

            @shenanigans24:

            If the coffee simply contains the same hazard coffee has always had, then McDonalds is hardly negligent.

            Right. Alas, people are bad at large numbers and are either not utilitarians or are bad utilitarians.

            Suppose McDonald’s annually sells 50 million cups of coffee and by making the cups sturdy and hard-to-open and putting “warning: hot” text on the outside of the cup they’ve already reduced the risk of serious burns to literally less than one-in-a-million. That means tens of millions of customers enjoy a nice hot cup of coffee – many seeking it out because it’s so hot – while mere dozens of exceedingly unlucky people get bad burns. The pain of the few is real, but so is the satisfaction of the millions more.
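            (A back-of-the-envelope check of those hypothetical figures, as a Python snippet – both numbers are the assumptions stated above, not real data:)

              cups_per_year = 50_000_000   # assumed annual coffee sales
              burn_risk_per_cup = 1e-6     # “less than one-in-a-million” serious-burn risk
              print(cups_per_year * burn_risk_per_cup)  # 50.0 – an upper bound consistent with “mere dozens”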

            If we effectively make it illegal for a company that “has received similar burn reports before” to keep selling coffee, all that does is effectively outlaw any single large firm selling a *lot* of coffee. We’d have to go back to coffee being sold by tiny mom & pop operations. That wouldn’t reduce the number of customers being scalded – in fact, there’d probably be MORE customers scalded in total – but it would reduce the number scalded in connection with any one particular firm. Make the firms a hundred times smaller and it becomes unlikely that any specific firm would see more than a few such incidents.

            I think the problem comes from applying our intuition about tiny firms at an understandable scale (which would be expected to experience few or no one-in-a-million bad outcomes) to unimaginably vast firms which statistically would be expected to experience many even if they are really careful about it.

            (It’s similar to the Foxconn suicide issue – we imagine that a dozen suicides in a single year is a bad suicide rate because we can’t quite grok a single company having as many employees in a single location as the entire population of Wyoming.)

          • random832 says:

            Suppose McDonald’s annually sells 50 million cups of coffee and by making the cups sturdy and hard-to-open and putting “warning: hot” text on the outside of the cup they’ve already reduced the risk of serious burns to literally less than one-in-a-million. That means tens of millions of customers enjoy a nice hot cup of coffee – many seeking it out because it’s so hot – while mere dozens of exceedingly unlucky people get bad burns. The pain of the few is real, but so is the satisfaction of the millions more.

            And they can’t spare a penny out of the profits from each of the fifty million customers who doesn’t get burned, to pay for $500,000 in medical costs for the ones who do?

            We’re not talking about making it illegal. We’re talking about requiring them to pay. It’s not at all clear why that shouldn’t be strict liability.

          • Glen Raphael says:

            @random832:

            And they can’t spare a penny out of the profits from each of the fifty million customers who doesn’t get burned, to pay for $500,000 in medical costs for the ones who do?

            Let’s suppose that they can’t spare it. Then what?

            We’re not talking about making it illegal. We’re talking about requiring them to pay. It’s not at all clear why that shouldn’t be strict liability.

            Do you have any argument why it should be strict liability?

            One reason it shouldn’t is that if the customer is the lowest-cost accident avoider, you’ll get fewer accidents the more responsibility the customer bears for the result of, say, pulling the top off while squeezing the cup between their thighs. Another reason is that McDonalds can only capture the producer surplus from the transaction, while there is also a consumer surplus that McDonald’s can’t capture… which means strict liability might result in inefficiently few enjoyable hot cups of coffee being served.

            What underlying principle are you applying here? Would you apply strict liability to individual mom & pop coffee shops too?

            Would you also apply it to companies that sell knives and saws? Razors? Motorcycles? If I buy a motorcycle and immediately get in an accident, does the Suzuki dealer have to pay all my medical bills? If not, how is buying a cup of coffee and immediately having an accident with it different?

          • ManyCookies says:

            @Glen

            The plaintiff’s suit was about relative unsafety – that is, that MD’s hot cup of coffee was significantly less safe than what a consumer would expect at other venues. Taking that as true, the motorcycle analogy would be Toyota having some small but potentially disastrous engine flaw compared to other brands. Assuming the flaw could be definitively linked to accidents (good luck!), would Toyota be liable based on the relative unexpected non-safety of their product? Should they?

            E: Gotcha.

          • Glen Raphael says:

            @ManyCookies:
            I was responding to random832’s claim that strict liability should be the standard, in which case you wouldn’t have to demonstrate fault at all so it wouldn’t matter whether their coffee was less safe than anyone else’s. In that case, my analogy should apply as originally stated.

          • random832 says:

            My point is that accidents happen. It’s more efficient for a shop (even a “mom-and-pop” one) to carry insurance and pass the costs of their premiums on to their customers, than for each individual customer to buy (or not buy, and then they’re screwed) ‘hot coffee insurance’.

            Let’s suppose that they can’t spare it. Then what?

            It’s literally a penny. Let’s not.

          • po8crg says:

            I wonder if cases like this would get resolved differently if the US had universal health care.

            After all, the total medical bill I would get from the NHS for several months in hospital and skin grafts after a third-degree burn would be £0. I bet Liebeck’s was a lot more than that.

            Should McDonald’s pay that? Perhaps not, but someone has to, and if Liebeck can’t or isn’t insured, then who is going to?

            And if she is insured, then her insurance company would be negligent not to get her to sue.

            I wonder how many cases are funded by a health insurer on one side and a negligence insurer on the other – each trying to avoid having to pay a bill that, ultimately, someone is going to have to pay.

    • po8crg says:

      The UK is less litigious than the US and has loser-pays for most cases.

      We also have a pretty strict frivolous litigation standard which makes it much easier to get rid of crap cases without settling – which also means that many more cases are determined by the court (which means having a winner rather than an NDA).

      We’re not perfect: the government has invented a new system of semi-voluntary regulation which is awful (if you’re compliant with the regulations and get sued, then it’s loser-pays; if you’re not compliant, then you pay win-or-lose, though the frivolousness rule gets applied first).

  72. j2kun says:

    Thanks for writing this. I come here for your essays about life in the medical and psychiatric world, where your expertise lies, because I feel you provide a fresh perspective (for me) into that world and how it informs your views. This is a great example of that.

  73. Glen Raphael says:

    we had to keep the Results Log and the Secret Patient Log right next to each other in the study binder in the locked drawer in the locked room.

    […] so I asked Dr. W whether it made sense, to him, that we put a lot of effort writing our results in code, and then put the key to the code in the same place as the enciphered text. He cheerfully agreed this made no sense, but said we had to do it or else our study would fail an audit and get shut down.

    This restriction – the having two books part – actually makes perfect sense to me. It’s not really there to prevent someone from nefariously stealing the un-blinded data. Rather, it’s to discourage you from sharing the un-blinded data.

    If you do this study, eventually you’ll want to show your analysis to somebody else. Maybe you want somebody with extra math cred to check your statistical analysis, or some colleague you mentioned the study to wants to take a look, either to give you feedback or because they hope to do a similar study for which your work is relevant. If you already have TWO books, Book A with the analysis and Book B with the decryption/conversion, you can easily copy a few pages of Book A, or let them glance at parts of Book A under your supervision, with basically no chance they’ll accidentally learn exactly who the patients are. Because you’ve already done the blinding stage, there’s no extra trouble in blinding the data for this purpose; in fact, it would take extra trouble to un-blind the data before sharing it.

    Whereas if you didn’t keep two books from day one – if you just used patient names in all the records and only blinded things at the stage where you’re ready to write the journal article – then in any such scenario you might be inclined to just let them see the raw non-blind data (“okay, but you have to pinky-swear not to show this to anyone ever!”), thereby multiplying the chance that unblinded data accidentally gets out.

    (It may be true that in your study it really doesn’t matter if the data gets out, but in the general case if we accept the premise that the release of personally-identifiable medical information is bad, keeping two books seems like a huge improvement with respect to reducing the risk of that bad thing happening. Even if the books are normally kept side-by-side in the same room.)
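    (A minimal sketch of that two-book scheme in Python; the field names and code format are hypothetical illustrations, not the study’s actual procedure.)

      import secrets

      secret_patient_log = {}  # “Book B”: code -> identity, stays in the locked drawer
      results_log = {}         # “Book A”: code -> data, safe to show a statistician

      def enroll(patient_name):
          """Assign a fresh random code, recording the mapping only in Book B."""
          while True:
              code = f"P{secrets.randbelow(10**6):06d}"
              if code not in secret_patient_log:  # avoid rare code collisions
                  secret_patient_log[code] = patient_name
                  return code

      code = enroll("Jane Doe")
      results_log[code] = {"screening_score": 7, "structured_diagnosis": "negative"}

      # Book A alone identifies nobody; re-identification requires Book B as well,
      # so pages of Book A can be shared with a colleague with no extra blinding step.
      print(results_log)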

    • Douglas Knight says:

      Yeah, Scott should have focused a little less on the object level and more on the meta level. But he did say a lot about the meta level: the failure of the training materials and IRB to offer such explanation; the disagreement between the IRB and the auditor over whether the IRB had the authority and/or adequate reasons for the few exemptions that it did grant; the overall effect.

  74. Friendlygrantadmin says:

    I don’t know if this has crossed your radar, but this sort of thing is the reason we still need IRBs. That does not mean we need IRBs such as the one you encountered – just that Nazi Germany did not provide the world’s complete-for-all-time supply of unethical people interested in conducting research on their fellow human beings.

    • Toby Bartels says:

      So what is this supposed to show?

      This discussion could really use what your comment promised: an example of the bad things that can happen when people don’t follow the rules for oversight. But all that’s reported here is that they didn’t follow the rules for oversight; it doesn’t demonstrate any bad consequences of that.

    • shenanigans24 says:

    I don’t see why; the entire case the article is making is that there wasn’t oversight. They are not saying people were harmed. You can’t prove oversight is necessary by pointing at someone without oversight who appears to be perfectly safe and saying “see, that proves we need oversight.”

  75. Richard Kennaway says:

    By what process are these regulations made? Is there any process (cue hysterical laughter) for determining, after they have been made, whether they were a good idea?

  76. privatehelpme says:

    Without giving too much away: my current institution requires that you go through the IRB process for research using secondary, de-identified data in the public domain that you used in the past but are no longer analyzing, so long as the articles are still unpublished – even if you had IRB approval from the previous institutions where you actually did the work.

    That’s right: when I joined my current institution, I had to retroactively apply for IRB approval for some two dozen projects that I had previously completed under other institutions’ IRBs and which were either sitting at journals or languishing as terminal working papers.

    I didn’t do it. I figure that one IRB approval should be good enough for anyone, so long as no new data are being gathered or even old data re-analyzed. And in our IRB system, you can see everyone else’s history of IRB proposals, and I noticed that no one else did it either – proving that if you make the rules onerous enough, researchers will rebel and not do it at all.

    Currently, I am thinking of quitting my job just because of the IRB. There is good research to be done, and I want to do it, not be sidelined.

  77. AddictionMyth says:

    I hate to break it to you – but ‘bipolar’ is just alternating meth and booze. Yes, really.

  78. Bram Cohen says:

    Aren’t IRBs the reason we don’t have studies of flu vaccines? There have been several articles lamenting that we can’t do studies to find out whether flu vaccines are effective, because it’s assumed that flu vaccines are so effective that it would be unethical not to give them to study participants. Sounds like an IRB thing. It also reflects the flawed assumption that the value of a study’s output is zero. Even most serious vegans are okay with animal trials of potential chemotherapy treatments: sure, harm is done in the study, but the potential value of the study makes it worth it. Such calculations don’t seem to factor into IRB decisions at all: if the potential harm is greater than zero, it’s unethical.
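    To make the implicit calculation concrete, here’s a toy version in Python; the numbers are entirely made up and only illustrate the shape of the trade-off.

        # Hypothetical utilities, in arbitrary units.
        expected_harm = 10       # e.g. a few extra flu cases among unvaccinated participants
        expected_benefit = 1000  # value of actually knowing whether the vaccine works

        net_value = expected_benefit - expected_harm
        print(net_value)  # 990: clearly worth doing on a cost-benefit view

        # The decision rule being criticized reduces to this check alone:
        unethical = expected_harm > 0  # True, regardless of expected_benefit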

  79. R Flaum says:

    The bit about not rigorously defining “violent” actually seems like a reasonable point to me.

  80. packersfan1984 says:

    http://www.overcomingbias.com/2011/03/against-irbs.html

    Robin Hanson theorizes generally about IRBs in this Overcoming Bias post from March 2011. Scott’s story seemed like a perfect real-world example. Hanson thinks IRBs are mostly “concern signaling leading to over-reaction and over-regulation.”

  81. Kevin J. Black, M.D. says:

    My experience at a major research university is that most of the IRB hassle is actually people trying to do it right. I know that is not universal. But still, sometimes you have to (figuratively) knock them upside the head with a retort like “Seriously? You want me to call them to ask their permission to get their phone number?”

    For your study, you’re right, it should not have been quite that hard. As you found out eventually, “An IRB may use the expedited review procedure to [review studies that] involve no more than minimal risk.” 21 CFR 56.110(b). “Minimal risk means that the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests.” 21 CFR 56.102(i).

    As to the privacy risk, you probably could have done something like this: “questionnaire responses will be labeled with the hospital number, and after the attending physician’s diagnosis is recorded at the bottom of the page, the hospital number will be cut off and shredded.” Then no PHI is maintained in the long run. Similar regulations allow patients not to have to complete a consent form, since their signature on the informed-consent document would itself be the primary (confidentiality) risk of the study. Rather, the regulations allow, in cases like this, for implied consent: you write at the top of the study page that they are not required to answer these questions; then, if they answered the questions, one can infer that they were OK with doing it. (It’s a little more detailed than that, but not much.)

    The fact that many of the people on the IRB are not researchers is intentional. The point is not to have the fox guard the henhouse. Rather, ordinary folks, and people from disadvantaged populations, should have input in order to limit excessive researcher enthusiasm.

    Your point about research being more regulated than clinical care is widely felt. I think it was in an NEJM letter that a doctor wrote, “I can give a new drug to all my patients with no supervision. But I need an IRB to give it to half of my patients.”

    And unfortunately, the commenter who began his/her reply with “Amateur” has a point, i.e. that there is often a legal way around the hassle that is nevertheless far from ideal.

  82. robotpliers says:

    So I shared this link with my wife, who, in a past life, did a lot of graduate research work in cognitive psychology and had to deal with IRBs. Her immediate response was “they were fucking with him.” One or more people on that board didn’t want to do their jobs or had a backlog of studies or something, but they didn’t want to send out a blunt “reject” on the project and risk appeals, further scrutiny, etc. So instead, they set up a series of ridiculous hoops for you to jump through in the hope that you’d just go away quietly. The requests to place the name of the study on the consent form, to have a fully locked room (instead of, say, a cabinet), and the whole pencil/pen thing are pretty obviously designed to be difficult to accommodate (especially given where you work and what you do) and to make you go away.