In this month’s American Journal of Psychiatry: The Efficacy of Cognitive-Behavioral Therapy and Psychodynamic Therapy in the Outpatient Treatment of Major Depression: A Randomized Clinical Trial. It’s got more than just a catchy title. It also demonstrates that…
Wait. Before we go further, a moment of preaching.
Skepticism and metaskepticism seem to be two largely separate skills.
That is, the ability to debunk the claim “X is true” does not generalize to the ability to debunk the claim “X has been debunked”.
I have this problem myself.
I was taught the following foundation myth of my field: in the beginning, psychiatry was a confused amalgam of Freud and Jung and Adler and anyone else who could afford an armchair to speculate in. People would say things like that neurosis was caused by wanting to have sex with your mother, or by secretly wanting a penis, or goodness only knows what else. Then someone had the bright idea that beliefs ought to be based on evidence! Study after study proved the psychoanalysts’ bizarre castles were built on air, and the Freudians were banished to the outer darkness. Their niche was filled by newer scientific psychotherapies with a robust evidence base, such as cognitive behavioral therapy and [mumble]. And thus was the empire forged.
Now normally when I hear something this convenient, I might be tempted to make sure that there were actual studies this was based on. In this case, I dropped the ball. The Heroic Foundation Myth isn’t a claim, I must have told myself. It’s a debunking. To be skeptical of the work of fellow debunkers would be a violation of professional courtesy!
The AJP article above is interesting because as far as I know it’s the largest study ever to compare Freudian and cognitive-behavioral therapies. It examined both psychodynamic therapy (a streamlined, shorter-term version of Freudian psychoanalysis) and cognitive behavioral therapy on 341 depressed patients. It found – using a statistical test called noninferiority, which I don’t entirely understand – that CBT was no better than psychoanalysis. In fact, although the study wasn’t designed to demonstrate this, just by eyeballing it, it looks like psychoanalysis did nonsignificantly better. The journal’s editorial does a good job putting the result in context.
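(For the similarly confused: a noninferiority test, roughly, doesn’t ask whether the new treatment beats the reference treatment. You pre-specify a margin of difference you’d consider clinically negligible, and you declare noninferiority if even the pessimistic end of the confidence interval for “how much worse is the new treatment” stays inside that margin. Here is a minimal sketch with invented numbers, not the trial’s actual data.)

```python
# Minimal sketch of a noninferiority check. All numbers are invented
# for illustration; this is not the trial's actual analysis.
import math

# Hypothetical post-treatment depression scores (lower = better)
mean_ref, sd_ref, n_ref = 14.0, 8.0, 170   # reference arm (say, CBT)
mean_new, sd_new, n_new = 13.5, 8.0, 171   # comparison arm (say, psychodynamic)
margin = 3.0  # biggest deficit we'd still call clinically negligible

diff = mean_new - mean_ref                             # positive = new arm did worse
se = math.sqrt(sd_ref**2 / n_ref + sd_new**2 / n_new)  # standard error of the difference
upper_95 = diff + 1.96 * se                            # upper confidence bound on the deficit

print(f"difference = {diff:.2f}, upper bound = {upper_95:.2f}, margin = {margin}")
if upper_95 < margin:
    print("Noninferior: even the pessimistic end of the interval is inside the margin.")
else:
    print("Cannot claim noninferiority.")
```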
This follows on the heels of several other studies and meta-analyses finding no significant difference between the two therapies, including another in depression, yet another in depression, still another in depression, one in generalized anxiety disorder, and one in general. This study by meta-analysis celebrity John Ioannidis also seems to incidentally find no difference between psychodynamics and CBT, although that wasn’t quite what it was intended to study and it’s probably underpowered to detect a difference.
(Other analyses do show a difference, for example Tolin et al, but the studies they draw from tend to be much smaller than this latest one and in any case are starting to look increasingly lonely.)
Suppose we accept the conclusion in this and many other articles that psychodynamic therapy is equivalent to cognitive-behavioral therapy. Do we have to accept that Freud was right after all?
Well, one man’s modus ponens is another man’s modus tollens. The other possible conclusion is that cognitive-behavioral therapy doesn’t really work either.
If parapsychology is the control group for science, Freudian psychodynamics really ought to be the control group for psychotherapy. Although I know some really intelligent people who take it seriously, to me it seems so outlandish, such a shot-in-the-dark in a low-base-rate-of-success environment, that we can dismiss it out of hand and take any methodology that approves of it to be more to the shame of the methodology than to the credit of the therapy.
But what about the evidence base for cognitive behavioral therapy over placebo? Or, for that matter, the evidence base for psychoanalysis over placebo?
Part of the problem may be what exactly is used as placebo psychotherapy. In many studies, it’s just getting random people to talk to patients. This makes intuitive sense as a placebo therapy, but it seems vulnerable to unblinding – people usually have some expectation of what psychotherapy is like, and undirected conversation about problems might not match it. Or if the placebo therapists are not professionals, they may be less confident in talking to people about their mental health problems, more awkward, less charismatic, or otherwise not the sort of people who would make it in the therapy profession. So now a lot of people are coalescing around the idea that all therapy studies done against these kinds of placebo therapy are fundamentally flawed.
Studies that compare what are called “bona fide psychotherapies” – two therapies both done by real therapists with real training – tend to have a lot more trouble finding differences. This has led to what is called the Dodo Bird Verdict, after an obscure Alice in Wonderland reference I feel vaguely bad for not getting: that psychotherapies work by having a charismatic, caring person listen to your problems and then do ritualistic psychotherapy-sounding things to you, but not by any of the exercises or theories of the specific therapy itself.
Then the question becomes: if the Dodo Bird Verdict and the active placebo problem and so on are equally true of all psychotherapies and all psychotherapy studies, how did everyone become convinced that cognitive behavioral therapy passed the evidence test and psychoanalysis failed it?
And the answer is the CBT people did studies and the psychoanalysts didn’t.
That’s it. It may be, it probably is, that any study would have come back positive. But only the cognitive behavioral people bothered to perform any. And by the time the situation was rectified and the psychoanalysts had (positive) studies of their own to hold up, “everyone knew” that CBT was evidence-based and psychoanalysis wasn’t.
This seems like another case of doctors not understanding that there are two different types of “no evidence”.
I should qualify this sweeping condemnation. I believe a few very basic therapies that address specific symptoms in very simple ways will work. For example, exposure therapy – where you treat someone’s fear of snakes by throwing snakes at them until they realize it’s harmless – is extremely and undeniably effective. Some versions of CBT for anxiety and DBT for borderline also seem to just be basic coping skills about getting some distance from your emotions. I think it’s likely that these have some small effects (I know a study above found no effect for CBT on anxiety, but it was by a notorious partisan of psychoanalysis and I will temporarily defy the data).
But anything more complicated than that, anything based on an overarching theory of How The Mind Works, and I intuitively side with the Dodo Bird Verdict. And I think the evidence backs me up.
EDIT: Do not stop going to psychotherapy after reading this post! All psychotherapies, including placebo psychotherapies, are much better than nothing at all (kinda like how all psychiatric medications, including placebo medications, are much better than nothing at all).
How does desensitization therapy (getting people closer and closer to snakes, while giving them time to calm themselves, until snakes are no longer a big deal for them) compare to exposure therapy?
Exposure therapy is an overarching term covering both desensitization (gingerly moving snakes slightly closer each session) and flooding (throwing lots of snakes at them all at once and shocking the system). I don’t know much about the relative efficacy of the two methods, although there is good evidence suggesting that flooding is way more fun to watch.
For example, exposure therapy – where you treat someone’s fear of snakes by throwing snakes at them until they realize it’s harmless – is extremely and undeniably effective.
Thank God, I’m not afraid of snakes, but if I were, and if you decided to ‘cure’ me by upending a barrel of snakes over my head, I’d be out that door so fast you’d never see me again – but not before I had broken a chair over your head.
Also, telling someone “snakes are harmless” isn’t true. It’s fine in Ireland, where we have no native reptiles (apart from the common lizard, and in my five decades of life I’ve never seen one), or in a therapy situation where you’re using carefully selected non-poisonous snakes and not using large constricting snakes and (it is to be hoped) monitoring the encounter, but in a situation such as, for example, somebody who is going to live in India, you don’t want them to be in a state of mind that has them going “Pretty snakies won’t hurt me, my therapy told me so!” and picking up a krait to kiss and cuddle.
Exposure therapy shouldn’t give you false beliefs about snakes’ safety, just the ability to not be paralyzingly terrified by stimuli that don’t justify it.
Nick, someone empties a bucket of snakes/spiders/cockroaches/rats/sporks (depending on what scares me) over me, they’d better hope I’m paralyzingly terrified, otherwise they’re looking at a busted nose 🙂
I understand about phobias going out of control to the point where someone is afraid of walking down the street because SNAKES! even if there aren’t any snakes in the region, or the snakes there are aren’t poisonous, or you don’t get poisonous snakes living in the cities. So yes, helping someone over that is a good thing, but I don’t see how a gigantic scare all at once is any good – fine, dump a bucket of snakes over Joe, and he (once he climbs down from the ceiling) realises he didn’t get killed by a snake – but if he’s living somewhere where there are poisonous snakes who get into houses regularly and there are stories in the paper every week about someone being rushed to hospital for antivenin treatment, it may be less a phobia and more prudent fear.
So a less drastic method of dealing with Joe’s phobia may be warranted?
After a couple of years of being depressed, I’ve recently started doing cognitive-behavioral therapy. For the last few months I’ve been feeling exceptionally good, i.e., what has been a series of waxing and waning depressive episodes has passed, but I thought it would be good to do the therapy anyway to prevent relapse. I guess I’ll update in the direction of being more skeptical of methods from the social sciences that even the smartest people I know endorse if I haven’t reviewed how the studies were conducted. However, I’m still concerned about my ability to cope emotionally in the future. My exit from depression was characterized by forming a new relationship, finding competence in a new work environment, being outside and exercising more, having a healthier lifestyle, being more social, etc. That’s not surprising.
So what do I do now? Do I just keep being active and doing things that boost my ego and self-esteem? Do I keep a gratitude journal because that’s a better method of staying out of depression? Do I ignore this data and act as if my CBT sessions are still science-backed awesomeness so I get the effects of “having a charismatic, caring person listen to [my] problems and then do ritualistic psychotherapy-sounding things to you”? Should I be asking for information about mental health in blog comments at all?
All psychotherapies, including placebo psychotherapies, are vastly better than nothing at all. Just like all psychiatric medications, including placebo medications, are vastly better than nothing at all. Stick with therapy.
Scott: thank you. Thanks go to Deiseach and Vassar for their clarifications and suggestions as well. Therapy has been and is going well, and I expect it to continue to do so. Note that I haven’t had a depressive episode for several months now, and at the time I made the comment I was succumbing to a (somewhat irrational?) fear that my current course of therapy would be unable to prevent a relapse. However, I’m feeling great these days.
Sure, stick with therapy. Saying X is just as effective as Y isn’t the same thing as saying X isn’t effective at all.
I think that different therapies probably work better for different people, same as you would imagine aspirin, paracetamol and ibuprofen should technically all work the same, but some people find one more effective for pain relief than the other.
Gratitude journals seem pretty low risk and empirical.
I keep hearing good things about Ketamine too, though I Am Not A Doctor.
“However, when they had been running half an hour or so, and were quite dry again, the Dodo suddenly called out ‘The race is over!’ and they all crowded round it, panting, and asking, ‘But who has won?’
“This question the Dodo could not answer without a great deal of thought, and it sat for a long time with one finger pressed upon its forehead (the position in which you usually see Shakespeare, in the pictures of him), while the rest waited in silence. At last the Dodo said, ‘EVERYBODY has won, and all must have prizes.’ ”
full chapter here:
http://en.wikisource.org/wiki/Alice%27s_Adventures_in_Wonderland/Chapter_3
First of all, I was told that studies showed no difference between talk therapies and placebo therapy, except CBT, which was better than placebo therapy. Are you telling me that this claim quoted nonexistent studies? (Which is rather different from studies being overturned by larger evidence!)
What’s wrong with unblinding? If CBT beats a placebo and psychotherapy doesn’t, isn’t that a meaningful difference regardless of blinding? When you complain about unblinding, I can’t tell if you’re worried about false positives or false negatives. Also, if untrained talk therapy is established to be better than nothing, it’s a real treatment that should be considered for prescription and using it as a control is comparing two real treatments. Conceivably, it could win on cost-effectiveness grounds, even if it loses in absolute terms.
Your link for “for psychoanalysis over placebo” doesn’t seem to mention placebos (PS – here’s the link to the real publication, not a mere mention of a privately circulated manuscript)
All of these treatments are pretty far removed from their theories. In none of these cases is the efficacy going to have much effect on my judgement of the related theory. In particular, I had been working under the belief that Freud was right about theory and wrong about practice.
I second Douglas Knight’s question. This is what I remember hearing as well – that CBT alone was better than placebo therapy.
Freud could even have been right about practice, but wrong about industrial psychology and procedure/guild/curriculum design.
There are so many conflicting studies here that I’m having trouble fully sorting out the information but I think this study might help reconcile things. It says that real therapy is only slightly better than well-designed placebo, but much better than poorly designed placebo (where “well-designed” means that placebo therapists were equally prestigious, equally well-trained, got just as much time with the patient, and allowed the patient to talk about their problems in a way that seemed consistent with real psychotherapy).
This is consistent both with my claim above that the reason therapies do better than placebo is because of poor placebos, and with the claim that therapies don’t always do better than placebos.
By unblinding, I mean that all psychotherapies – CBT and psychodynamic alike – have at least some studies showing they do better than placebo, but that these may be because patients feel better (and so get more placebo effect) if they think they’re in the real arm rather than the placebo arm.
Since CBT was originally the only group that did placebo-controlled studies, unblinding allowed them to get an early lead in looking “evidence-based”.
Since CBT was originally the only group that did placebo-controlled studies
It is frustrating that you keep repeating this claim without acknowledging that other people make contrary claims.
So… after a hundred years, the best anyone can come up with are placebos? That’s not very encouraging to someone whose depression seems pretty resistant to false hope, like, say, me.
Don’t underestimate placebos!
I don’t think it makes sense to consider it a placebo. It might just be that the good part is having a compassionate person listen to your problems and provide you an outside perspective on how to solve them, and if we tried Yelling At People For Being Terrible Behavioral Therapy it wouldn’t work.
Huh. I could swear I heard this quite some time ago? And then again at intervals over the years? Maybe I’m a prophet, but I definitely already believed this before it was, apparently, discovered.
Whoops, meant to post that at the bottom…
Bingo. The Dodo Bird Verdict doesn’t justify applying the word “placebo” to CBT. Just because something is easy and obvious doesn’t make it placebo.
I’ve also heard that talking to untrained people is as effective as talking to professional therapists… and that keeping a diary is also just as effective. I have no idea where I actually heard this, though.
LessWrong, of course.
Too lazy to google on talk therapy being effective regardless of what practice the practitioner uses, but here are some citations from a Peter Hurford post on the diary thing.
11: Lyubomirsky, Sonja and Chris Tkach. 2003. “The Consequences of Dysphoric Rumination” in Costas Papageorgiou and Adrian Wells (Eds.). Depressive Rumination: Nature, Theory and Treatment, 21-41. Chichester, England: John Wiley & Sons.
12: Spera, Stephanie P., Eric D. Buhrfeind, and James W. Pennebaker. 1994. “Expressive Writing and Coping with Job Loss”. Academy of Management Journal 3, 722–733.
13: Lepore, Stephen J. and Joshua Morrison Smyth (Eds.) 2002. The Writing Cure: How Expressive Writing Promotes Health and Emotional Well-Being. Washington, DC: American Psychological Association.
I think it depends what you mean by “untrained people”. The study I heard on this used a control of “college professors who were rated as approachable and good at conversation by students”.
These people don’t know psychotherapy, but they have two ingredients necessary for a good placebo effect – charisma and prestige. If you were to just grab someone off the street and make them a psychotherapist, it might not work as well.
“two ingredients necessary for a good placebo effect”
Also ingredients that make one a good listener? It seems plausible that having a high status person listen to one’s problems would do more for one’s mental health than having a low or medium status person listen.
Snakes on a Plane 2: Therapeutic Boogaloo.
People who have done studies deserve to be taken more seriously than people who haven’t. Lack of evidence of efficacy is evidence of lack of efficacy, although in some cases (we’ve done a lot of studies that didn’t show anything… more research is needed! or we did a lot of stuff but… decided not to publish, because the results weren’t interesting) it is stronger than in others (we haven’t done any studies because it just didn’t occur to us that it might be important).
People who have done studies deserve to be taken more seriously than people who haven’t. Lack of evidence of efficacy is evidence of lack of efficacy [….]
It may be evidence of lack of caring about efficacy. But some lack of evidence is just evidence of lack of studies.
One would hope that lack of caring about efficacy was evidence of lack of efficacy. Unfortunately in this field maybe it isn’t.
I wonder if the difference is that psychodynamic therapy works better for some people, and CBT works better for other people, and when they randomly assign people the two groups end up cancelling each other out and you get no significant difference. Not sure how you’d test that though.
I also kind of suspect that might be what’s going on.
To test it, make people take a giant battery of various personality and mental illness questionnaires before starting therapy, and then when it comes time to run the statistics you can try correlating therapy and outcomes with various personality/illness measures. Then do follow-up studies for any variables that came up looking sufficiently significant (hey, I never said it would be an easy or fast test…)
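Concretely, something like the sketch below (simulated data, hypothetical variable names) is the shape of that analysis; the therapy-by-trait interaction term is what would pick up “different therapies working better for different people.”

```python
# Sketch of the moderator analysis described above, run on simulated data.
# Variable names ("rumination", "improvement") are hypothetical stand-ins
# for whatever questionnaires the real study would use.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "therapy": rng.choice(["CBT", "PDT"], size=n),    # randomized treatment arm
    "rumination": rng.normal(0, 1, size=n),           # baseline personality measure
})
# Simulate outcomes where the trait determines which therapy works better
noise = rng.normal(0, 1, size=n)
df["improvement"] = noise + np.where(df["therapy"] == "CBT", 1, -1) * 0.3 * df["rumination"]

# A significant therapy:rumination coefficient is the signal that the two
# therapies differ depending on the patient, even if their averages are equal.
model = smf.ols("improvement ~ therapy * rumination", data=df).fit()
print(model.summary().tables[1])
```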
I think there’s a confusing point of terminology here. The treatments being called “placebo” aren’t.
“Placebo” normally means something *known* to have no intrinsic effect. Compared to a pill that affects neurochemistry, a sugar pill is definitely placebo. But here, untrained talk therapy is being called placebo, even though any kind of talk therapy might have an effect on psychological problems and is different from what the patient normally experiences in their life. Then when you say that some talk therapy is or is not “better than placebo”, it’s quite confusing.
If a study was designed to test the actual *theory* behind e.g. psychodynamic theory (PDT), it could have a real placebo, as follows: assume the contrary of the major propositions of PDT, and build a talk therapy around that, but keep all elements unconstrained by theory unchanged. So there would still be a charismatic, caring, experienced person talking to you in a nice setting about your problems, but they would say different things.
I’d like to see a long-term study done comparing various forms of therapy with the Sacrament of Penance and Reconciliation. If the Dodo Bird Verdict is accurate and the real benefit comes from having a charismatic, caring person listen to your problems and then do ritualistic things to you, then I’d expect regular confessions to a trained priest to fit the bill just as well. Furthermore, to the extent that the ritual and associated expectations produce a significant share of the benefit, I’d expect the relative benefit of confession vs therapy to correlate strongly with the patient’s relative regard for Catholicism vs scientific medicine.
It’s a plausible hypothesis, but it sounds to me like it would be tough to identify Catholics who are willing to have their rate of confession dictated by psychological experimenters. Especially if you also want them to be depressed. A comparison of confession through the grille vs in person might be easier to do, but with less expectation of a difference.
If you could find both priests and subjects who are agreeable, you could try doing the experiments with non-Catholic test subjects, or at least with the anecdotally very large pool of people who identify as Catholic but do not actively practice.
I wonder how much of this lack of differential efficacy is the Simpson’s paradox. Suppose, for simplicity, that for each type of talk therapy there are charismatic and uncharismatic practitioners, and within each category CBT > Freud > placebo (for example), but the effect disappears when both sets of data are pooled into one. Maybe there are more charismatic people inclined to provide amateur help, skewing the stats after pooling, or something. Maybe it’s easier to appear convincing if you talk in terms of Freudian ideas. I wonder how one would test for something like that.
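As a toy illustration of the pooling worry (all counts invented): within each charisma level CBT beats Freud, but if the Freudian arm happens to contain more of the charismatic practitioners, the pooled comparison flips.

```python
# Invented counts showing how pooling across charisma levels can reverse
# a within-group difference (Simpson's paradox).
groups = {
    # (therapy, charisma): (improved, total)
    ("CBT",   "charismatic"):   (14, 20),
    ("Freud", "charismatic"):   (48, 80),
    ("CBT",   "uncharismatic"): (32, 80),
    ("Freud", "uncharismatic"): (6, 20),
}

# Within each charisma level, CBT wins (70% vs 60%, and 40% vs 30%)...
for charisma in ("charismatic", "uncharismatic"):
    for therapy in ("CBT", "Freud"):
        improved, total = groups[(therapy, charisma)]
        print(f"{therapy:5} / {charisma:13}: {improved / total:.0%}")

# ...but pooled, Freud wins (54% vs 46%), purely because the Freudian arm
# has more charismatic practitioners in it.
for therapy in ("CBT", "Freud"):
    improved = sum(v[0] for k, v in groups.items() if k[0] == therapy)
    total = sum(v[1] for k, v in groups.items() if k[0] == therapy)
    print(f"{therapy:5} pooled: {improved / total:.0%}")
```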
Scott
Are you at all familiar with the work of Rudolf Allers? I found his book “Self Improvement” to be very insightful.
http://www.rudolfallers.info/figari.html
“Do not stop going to psychotherapy after reading this post!”
You mean, so we can be locked up against our will based on implicit association tests?
Hey, here’s a great idea: Let’s violate some more human rights and send the victims a bill! I’ll hand it to you, your kind has developed some impressive parasitic strategies, complete with propaganda and all!
Unstated assumptions, passive aggressiveness, no actual argument, loaded word choice. I’m going to assume that further dialogue would not be productive.
Do you have statistics on what proportion of involuntary commitments result from IATs and other things not controlled by the patient? I had the impression that (with the exception of actual psychosis) they were nearly always the result of the patient telling the therapist “Here’s my suicide plan” or “I’m going to stab my roommate tomorrow” or something like that.
I have 50% confidence that psychotherapy leads to fewer people being locked up against their will. Yes, sometimes psychotherapists commit their patients to hospitals, but other times they help control mental illnesses that would otherwise reach the point where someone else would commit the person to a hospital. In the end it probably balances out even if your utility function is a hospital-commitment minimizer.
I’m not a “hospital-commitment minimizer”. I reject your implication that it is a valid state function to lock criminally innocent, cognitively functional (not perception- or thought-impaired) people up against their will merely to protect them from their own choices, or that we need some process other than voluntary communication that makes them right-thinking pro-lifers.
Are you a troll, or do you seriously think pointing out that “they help control mental illnesses that would otherwise reach the point where someone else would commit the person to a hospital” is the same as “it is a valid state function to lock criminally innocent, cognitively functional (not perception- or thought-impaired) people up against their will merely to protect them from their own choices”?
Judging by your other comments here, I’m leaning towards “engaging in bad faith, but honestly”.
It is really not that hard to avoid getting involuntarily committed to a mental hospital. I’ve talked about suicidality and wanting to die in therapy extensively and the only time I’ve been committed involuntarily was when I tried to kill myself. (Which is really something you ought to expect.) If you don’t currently have a plan to commit suicide, they are probably not going to commit you. If you do have a plan, and have a strong preference against being involuntarily committed, you can pretend you don’t. Admittedly, lying to therapists is Bad, but if it’s a choice between lying and not getting psychological care…
Note that I’m borderline, white, upper-middle-class, and generally read as female, and I’m not super-confident that this advice generalizes to other groups.
So how much does this apply to psychiatric medication? I mean, it’s easier to blind anti-depressants, I guess, but last I’d heard, for anyone but the severely depressed we can’t see any improvement. So should the depressed rationalist take pills as intentional placebos? (Does a placebo effect show up if you know it’s a placebo?)
There were some re-analyses of those studies that suggested there were also benefits for moderately depressed people.
Placebo effect does show up if you know it’s a placebo, but I don’t know if it’s as strong as a deceitful placebo effect. Also, the studies that found the placebo effect even when the patients knew involved prestigious researchers handing out very official-looking pills and explaining that, because of the placebo effect, the pills were expected to treat the patient’s condition. That’s probably a different case than you swallowing a packet of sugar and saying “Yeah, my depression’s going to go away now”.
Oh, and “apply to psychiatric medication” probably shouldn’t be confused with “apply to antidepressants”. There are some really obviously effective antipsychotics, antimanics, anxiolytics, et cetera. Depression’s just a hard nut to crack.
Haven’t you just substituted in “charisma” for “psychoanalysis”? Why not just say “lucky”? How about “aware of and able to perceive and act on things that most people can’t”? Doesn’t it seem likely that practice and discourse about roughly those things might lead to such awareness? How does this differ from the claim that “trained experts in the use of telescopes are able to make predictions about what other trained experts in the use of telescopes will report, and that these predictions are more accurate than the predictions of lay telescope users, and that they are equally able to do this whether they characterize the planets as physical spheroids or mythological deities.” How does that differ from the findings of “Expert Political Judgment” by Tetlock?
How do placebo psychotherapies compare to placebo psychiatric medications? How does a combination of the two compare to either?
By “charisma”, I mean simple things like they are attractive, they smile a lot, they speak with a deep voice, and they have lots of diplomas on their walls.
Taking your off-the-cuff examples literally, that suggests that stage actors would make a good control group against which to measure the performance of psychotherapists.
I’ve written some in other places about the crusade against group selection, which was based on results from mathematical models that did not model group selection. Lately I’ve been looking at Lyme disease, where much of the “consensus” that chronic Lyme does not exist or cannot be treated by antibiotics turns out to be based on authors simply lying about what other studies concluded.
For instance, the CDC’s website http://www.cdc.gov/lyme/postLDS cites these three studies as having proved that antibiotic therapy leads to no improvement in chronic Lyme:
Klempner MS, Hu LT, Evans J, Schmid CH, Johnson GM, Trevino RP, Norton D, Levy L, Wall D, McCall J, Kosinski M, Weinstein A. Two controlled trials of antibiotic treatment in patients with persistent symptoms and a history of Lyme disease. New Eng. J. Med. 345:85-92, 2001.
Krupp LB, Hyman LG, Grimson R, Coyle PK, Melville P, Ahnn S, Dattwyler R, Chandler B. Study and treatment of post Lyme disease (STOP-LD): a randomized double masked clinical trial. Neurology. 2003 Jun 24;60(12):1923-30.
Fallon BA, Keilp JG, Corbera KM, Petkova E, Britton CB, Dwyer E, Slavov I, Cheng J, Dobkin J, Nelson DR, Sackeim HA. A randomized, placebo-controlled trial of repeated IV antibiotic therapy for Lyme encephalopathy. Neurology. 2008 Mar 25;70(13):992-1003. Epub 2007 Oct 10.
The first, Klempner 2001, excluded patients who tested positive for Lyme via PCR from the study, so their results don’t apply to patients with Lyme.
The second, Krupp 2003, concluded: “Patients assigned to ceftriaxone showed improvement in disabling fatigue compared to the placebo group (rate ratio, 3.5; 95% CI, 1.50 to 8.03; p = 0.001). No beneficial treatment effect was observed for cognitive function or the laboratory measure of persistent infection. Four patients, three of whom were on placebo, had adverse events associated with treatment, which required hospitalization. Conclusions: Ceftriaxone therapy in patients with PLS with severe fatigue was associated with an improvement in fatigue but not with cognitive function or an experimental laboratory measure of infection in this study.”
The third, Fallon 2008, concluded: “Across six cognitive domains, a significant treatment-by-time interaction favored the antibiotic-treated group at week 12. The improvement was generalized (not specific to domain) and moderate in magnitude, but it was not sustained to week 24.” The fact that the improvement was not sustained to week 24 is what is quoted in other papers, but they never mention that the antibiotic treatment was stopped at week 12. The reversion to baseline showed that the antibiotic therapy worked, not (as it is cited as proving) that it didn’t work. The conclusion goes on to say, “On secondary outcome, patients with more severe fatigue, pain, and impaired physical functioning who received antibiotics were improved at week 12, and this was sustained to week 24 for pain and physical functioning.” This half of the conclusion is never cited.
There is another study that is cited as having shown that antibiotics do not help patients with chronic Lyme, but the patients in the study did not have or claim to have chronic Lyme. There is another study that is cited as having proven that immunological tests are 100% sensitive in detecting Lyme, but the Lyme patients used in that study were selected prior to the study for having positive immunological tests.
I could also point to Halperin’s article dismissing the Connecticut attorney general’s findings against him, which said derisively that the attorney general had claimed Halperin’s committee violated interstate commerce laws, when in fact what the attorney general’s office had said was that Halperin “held a bias regarding the existence of chronic Lyme, to handpick a likeminded panel without scrutiny by or formal approval of the IDSA’s oversight committee… refused to accept or meaningfully consider information regarding the existence of chronic Lyme disease, once removing a panelist from the 2000 panel who dissented from the group’s position on chronic Lyme disease to achieve ‘consensus’… blocked appointment of scientists and physicians with divergent views on chronic Lyme who sought to serve on the 2006 guidelines panel.” The attorney general also criticized Halperin for deliberately trying to make it appear that the conclusions drawn by his two committees, composed of largely the same people, were arrived at independently – an implication that Halperin made again in the very article dismissing the attorney general’s claims.
“Skepticism and metaskepticism seem to be two largely separate skills.”
Hm. I’m not sure if I like this terminology. It might be more like, “it’s easy to glom onto a rationalist community and believe everything they say, and if you do, you’re likely to be right a lot more often” and “but even if you’re better at it than average, it’s still hard to spot that something plausible-sounding doesn’t prove it’s not bunkum”. The first _looks_ like skepticism, and spotting flaws in skeptical thinking is arguably meta-skepticism, but I’d prefer to say, spotting flaws in ANY thinking IS skepticism, and believing skeptic-signalling things is (while possibly good) “being part of skeptic culture” rather than “thinking skeptically”?
People who enjoy thinking of themselves as smart get a thrill out of saying “Here is this thing that many people were taken in by, but not me.” Saying “No, really there is something to this point of view” is less satisfying. I relate this to the “attack culture” on LessWrong, where people would much rather point out the one flaw they found in one tangential part of a post than see what they can take away from the remainder of the post.
Pointing out flaws is a lot more visible than taking some information away from the remainder of the post, especially on a GUI with upvote buttons and within a culture where me-tooing is discouraged, but the two aren’t mutually exclusive.
Immediately after reading this post, I went to a seminar/lecture thingy that rather pointedly namedropped CBT and made some use of (simplified?) CBT techniques. Afterward, I ended up mentioning that I had just heard it was found to be, well, not real.
He said someone had told him that at all his previous speeches as well.
Nice to see the effect this sort of data has…
I think part of the point here is not that the effects of CBT are not real, but that the effects of PDT are *also* real and that their success is probably not attributable to their theory of mind (since that differs between them), but something that they share.