Attitude 1 says that patients know what they want but not necessarily how to get it, and psychiatrists are there to advise them. So a patient might say “I want to stop being depressed”, and their psychiatrist might recommend them an antidepressant drug, or a therapy that works against depression. This is nice and straightforward and tends to make patients very happy.
Attitude 2 says that people are complicated. Sometimes this complexity makes them mentally ill, and sometimes it makes them come to psychiatrists and ask for help, but there’s no guarantee that the thing that they’re asking about is actually the problem. In order to solve the problem, you need to unravel the complexity, and that might involve not giving the patient what they want, or giving them things they don’t want. This is not straightforward and requires some justification, so let me give a few cases where Attitude 2 seems to me obviously correct.
1. A mother brings her 6 year old son to the doctor, complaining that he gets nauseous every morning. She wants the doctor to prescribe an anti-nausea pill. The doctor probes further and finds the kid only gets nauseous on school days. In fact, he only gets nauseous on school days when he has a particular gym class. The doctor asks the kid if there are any problems in that gym class, and the kid is reluctant to say anything. After a while, he finally admits there is a bully in that class. The mother calls the school, and the school takes care of the bully. After that the kid is no longer nauseous in the mornings.
2. A woman goes to a plastic surgeon asking him to fix her nose, which she insists is hideously deformed. The plastic surgeon thinks the nose looks perfectly normal and asks her to be cleared by a psychiatrist before surgery. The psychiatrist diagnoses the woman with body dysmorphic disorder, a delusional belief that one of their body parts is unbearably ugly. The psychiatrist advises the woman and her surgeon that plastic surgery does not work for this disease; if the woman gets her operation, she’ll inevitably either think that the new nose is just as ugly as the old one, or she’ll switch to focusing on something else like her ears or her mouth. He suggests she get psychotherapy instead. After several years of psychotherapy, the woman learns not to worry so much about her nose.
3. A woman goes to her doctor asking him how to taper off her birth control pills. The doctor is surprised at this request because he knows she is planning to break up with her boyfriend. The woman says that this is true, but she wants a child as a way to remember the relationship. The doctor probes deeper and finds the patient is very anxious and ambivalent about leaving her boyfriend and feels like if she has his child at least she will always have “a part of him” with her. The doctor refers her to therapy for her anxiety, and she is able to sort through her conflicting feelings about leaving her boyfriend. She chooses to stay on her birth control.
4. A man goes to his doctor asking for the strongest antipsychotics that exist, saying that he’s crazy and he’s going to hurt someone. The man converses very logically, and tells the doctor he’s felt like this for a year now and never hurt anybody. The doctor suggests that he’s not actually psychotic or violent, but might have an obsessive-compulsive disorder where he worries about becoming that way. The doctor recommends therapy for OCD.
5. A man comes to the psychiatric hospital saying that he’s suicidal and needs admission. The doctor knows him well, and remembers that he has been admitted five times in the past six months, each time after a life crisis, and that the patient has never actually attempted suicide and never even planned how he might do it. The doctor suggests that the man is using the psych hospital as an emotional crutch, and that instead of threatening suicide and going to the hospital whenever he is upset, he needs to learn more adaptive coping mechanisms.
Attitude 1 would have been the wrong choice in these five situations. If the doctor had just given the mother the anti-nausea pill she’d been asking for, the son’s stress about being bullied probably would have just caused some other symptoms. If the surgeon had just given the woman the nose job she wanted, she would have been dissatisfied with the surgery and wanted it changed again. If the third doctor had just told the woman how to get off birth control like she wanted, she might have had a baby for the wrong reasons and regretted it later, leading to heartache all around. If the fourth doctor had just given the man an antipsychotic, he would have unnecessarily exposed him to a potentially life-long course of very strong medication. If the fifth doctor had admitted the man to the hospital, he would be using up scarce resources and discouraging the man from learning better coping strategies.
Any halfway decent psychiatrist uses both attitudes at different times, but most people I know tend to lean to one side or the other. The 2-leaning doctors stereotype the 1-leaning doctors as simple-minded and gullible. The 1-leaning doctors stereotype the 2-leaning doctors as antirational paranoiacs with sledgehammers.
I remember a textbook talking about a case study by a famous psychiatrist. The patient had come in talking about how her husband was being borderline-emotionally-abusive to her. The psychiatrist interrupted her and said that she was perpetuating this dynamic to feed her own narcissism. The patient said this was absolutely not true and she wasn’t narcissistic. The psychiatrist said she would never be able to get over her provoking-her-husband problem until she admitted the depth of her narcissism. The patient refused to keep seeing the psychiatrist after that, and the psychiatrist commented that it had been a hopeless case from the beginning – the extent of her narcissism was so great that she would never acknowledge that somebody else might know more than she did.
And the textbook was very wishy-washy about this – it acknowledged that the famous psychiatrist was brilliant and was doing the right thing in trying to confront the woman with evidence for her narcissism, but then it said that maybe he should have taken a more compassionate tone. Meanwhile, I couldn’t help thinking that the famous psychiatrist was a jerk, that his only evidence the woman was narcissistic at all was a snap judgment from one or two easily misinterpretable things she said, and that call me narcissistic if you want but I wouldn’t have kept attending therapy with this guy either.
That right there is the failure mode of Attitude 2; when we get out of the perfectly safe cases I mentioned above and into the more extreme versions, it starts looking a lot like making snap judgments about how all of a patient’s problems reduce to a single personality flaw, and then interpreting everything about the patient in that light. Narcissism is probably the most popular, but other such flaws include “patient is regressing and wants to act like a child and have other people take care of her”, “patient is just looking for attention”, and “patient is obsessive and demands complete control over everything”. The problem is, once you make one of these judgments every possible piece of data becomes further confirmation.
For example, suppose that a patient says he is having side effects on his new medication.
If you already believe the patient is a narcissist, you can dismiss the patient by saying that he wants to be special, he’s not happy on the same medication as everyone else, he’s trying to control the interaction by making you feel bad because you gave him an inferior medication. The solution is to teach the patient that he can’t always have his way by continuing the medication.
If you already believe the patient is regressing, you can dismiss the patient by saying that he’s throwing a temper tantrum, that instead of dealing with the side effects like a mature adult he wants someone else to step in and make everything magically better. The solution is to teach the patient to deal with his own problems by continuing the medication.
If you already believe that the patient is looking for attention, you can dismiss the patient by saying that they’re just trying to get the doctor’s attention by complaining. You can teach them that this is a maladaptive social strategy by continuing the medication.
If you already believe that the patient is obsessive, you can dismiss the patient by saying that he’s getting all neurotic over minor side effects and has worked himself into a frenzy over perfectly ordinary minor hiccups because he can’t tolerate anxiety. The solution is to reassure the patient that everything is fine and continue the medication.
If you already believe that the patient is a witch, you can dismiss the patient by saying that they’re trying to confuse and upset you so that you will be easy prey when they try to kidnap you and sacrifice you to their lord and master, the Devil.
So it’s pretty easy to dislike 2-leaning doctors. Also, fun. Also, quite often justified. So sometimes I give in to the urge and dislike them.
The problem is, sometimes they’re right. I remember one time I had a patient who complained that Geodon was making her hallucinate. Geodon is an anti-hallucination medicine, so the chance that it makes someone hallucinate is pretty slim – but I’ve read all the usual social media posts where people complain about their evil psychiatrist who just dismisses their deeply felt pain as fakery because they had a problem that wasn’t listed in the textbook, and I didn’t want to be that guy, so I went along with it. I asked her to take some Geodon right there in my office. She swallowed the Geodon pill, and sure enough, about two minutes later she said she was starting to have all of these terrible hallucinations.
So I explained to her that oral Geodon takes at least an hour or two before a reasonable amount gets into the bloodstream, and there was no biological way that it could cause hallucinations two minutes after she took it. Then we talked about why she might be scared of the Geodon, and whether she felt any ambivalence about really wanting to get better. Eventually she agreed to try the Geodon again and didn’t hallucinate any more.
Here I felt okay because I had biological impossibility on my side. But I always wonder how many cases I’m letting slip just because my patients’ stories are merely possible-but-unlikely.
Everything’s a tradeoff between Type I and Type II errors. If I err too far on the side of Attitude 1, then my patients will like me and I’ll never inspire a “my doctor said I was just making up my side effects for attention, and later on I got neuroleptic malignant syndrome and died!” horror story. But I will occasionally be doing the equivalent of plastic surgery on a body dysmorphic disorder patient, giving unnecessary and harmful medical care while ignoring the true problem.
If I err too far on the side of Attitude 2, then I always get to feel like a hard-headed non-gullible investigator digging down to the root of the problem – but occasionally I’ll end up like that famous psychiatrist in the textbook and tell people that the reason their foot hurts is because they’re narcissistic, and it has nothing to do with the fact that they stepped on a nail and the only reason they’re even bringing up the nail is their deep-seated narcissism.
I tend to lean way toward Attitude 1. I’m not sure I can justify it. Part of it is my personality: conflict scares me and I want to be liked. Part of it is that I read too many horror stories on social media about how much patients hate their Attitude 2 psychiatrists. Part of it is that Attitude 2 has a lot of its philosophical grounding in Freud, and I really don’t trust Freud.
This is a lucrative attitude nowadays. We are all supposed to be biological psychiatrists, all the old psycho-babble is no longer covered by insurance, and The Customer Is Always Right. I am lucky insofar as my natural tendency is also the socially more acceptable one.
(I suppose an Attitude 2 psychiatrist would say I’m not lucky at all, and that my unconscious desire for social approval and success has led me to adopt Attitude 1, and also I am a narcissist)
I may or may not be a narcissist, but I am definitely neurotic. And when my neurosis gets to “maybe I’m a terrible psychiatrist”, this is what it usually settles upon to worry about. Attitude 2 and the various arts associated with it are opaque to me. I can pass tests on them when I have to, but I don’t feel them in my bones. When I’m with a whole conference of doctors nodding their head and going “Yup, that guy’s a narcissist”, I’m always panicking, thinking “Wait, I’m not even close to convinced he’s a narcissist, and also nobody really knows how to treat narcissism, and I would feel a lot more comfortable if this conversation would shift to comparing and contrasting the various subtypes of dopamine receptors.” I am bad at it, and what’s worse I don’t even know if I should be better at it, and I don’t know how to solve the bad-at-it part without worrying that I’m sending myself and my patients down a blind alley.
The problem with Attitude 2 is that once you dismiss what the patient has told you directly about his mental state, you have to deduce what his mental state actually is from fairly slim evidence, choosing among multiple possibilities, some correct and others wrong. Getting it right requires a reliable intuition, but many people’s intuition isn’t reliably correct.
If you know your intuition about other people’s internal mental states isn’t reliable, stick to Attitude 1.
The problem with doing that is that you risk giving up the ability to improve your poor intuition about your patient’s mental states.
If you take Attitude 2 and assume they’re a narcissist, everything only further suggests that it’s true, and you won’t improve your intuition. If you take Attitude 1 and treat what they tell you, then if they get better you know they weren’t a narcissist, and if they don’t, you’re still unsure. It’s not perfect, but it seems like you’d improve more that way.
And that’s not taking into account that narcissists (and others) can have real problems as well. Maybe the lady in the textbook was a narcissist, but was also in a bad marriage. Cutting her off with a curt dismissal about provoking her husband by being a nagging wife in order to feed her need for drama is only half the story and is sending her home to put up with a bad marriage plus the blame of “it’s all your fault”.
I disagree.
You’re giving up *that* chance to improve your poor intuition about your patients’ mental states. This is only a problem if you don’t have repeated interactions with the same patients.
The thing about Attitude 1 is that if you fail to correct an underlying problem, a new manifestation of that problem will almost invariably present, and the patient will seek help again. When they do so, they are highly likely to seek help from a doctor who did exactly what they wanted on their previous visit.*
In all of the cases you listed, the patient would likely have returned for a second visit (or a sixth visit, in the final case). The stakes are high in most of these cases, but that’s a key feature of why you chose them as examples; in most cases, the stakes for believing a patient about their characterization of a problem and desired treatment are low.
So I’d argue that you’re taking the correct approach – during early interactions with a patient, particularly when the stakes of failure are low, lean heavily toward Attitude 1. We tend to improve our intuitions when presented with evidence of our mistakes, and virtually all instances of Attitude 1’s failure will produce such evidence. So it makes sense to wait for that evidence to surface in any particular interaction chain before you integrate it into your gut reactions.
Conversely, as you point out, Attitude 2’s failure mode tends to produce no evidence. Instead, it can self-reinforce during failure.
When the stakes of failure are high – when someone is adamant about plastic surgery, or insistent about going off of birth control, or demanding strong psychoactive medication – then it’s completely reasonable to shift toward Attitude 2, and to dig for more information before agreeing to a treatment course. But you already do that.
* Exceptions exist, in cases where a patient is intentionally deceiving a doctor, e.g. to receive a prescription for an addictive pain medication. Patients like that tend to move from doctor to doctor, and ideally end up on watchlists.
I agree: if you’re uncertain about your intuitions, use made-up statistics!
You should be able to estimate your P(Attitude 1 error) and P(Attitude 2 error) by writing down your predictions and certainties. Are you doing that? Once you’re well calibrated about the chance of making a mistake either way, you can multiply each by the cost of failure (e.g. 20% unneeded nose job vs. 30% harmful medication). The cost of failure should include the cost of repeated failure, which is probably lower for Attitude 1 because like people mentioned, it’s the one that gives you a higher chance of correcting your mistakes.
Bottom line, once you quantify the outcomes of each attitude even vaguely, you can make better decisions and be more confident that you’re doing the best by your patients.
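A minimal sketch of that arithmetic in Python, with every probability and cost invented purely for illustration (none of these numbers come from the thread):

# Hypothetical expected-cost comparison between the two attitudes.
# All probabilities and costs below are made-up illustration values.

p_error_attitude_1 = 0.20   # estimated chance Attitude 1 misses the real problem (e.g. unneeded nose job)
cost_attitude_1 = 50        # harm of that failure, in arbitrary units
p_error_attitude_2 = 0.30   # estimated chance Attitude 2 mislabels the patient (e.g. real side effect dismissed)
cost_attitude_2 = 40        # harm of that failure, in arbitrary units

expected_cost_1 = p_error_attitude_1 * cost_attitude_1
expected_cost_2 = p_error_attitude_2 * cost_attitude_2

# Lean toward whichever attitude carries the lower expected cost for this case.
print(f"Attitude 1 expected cost: {expected_cost_1:.1f}")
print(f"Attitude 2 expected cost: {expected_cost_2:.1f}")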
Think again about example 4. Once you’ve given the guy antipsychotic medication, you can’t collect evidence that that was the wrong thing to do. There’s no way to evaluate whether you were right.
That seems unlikely, Michael.
In general, when a psychiatrist prescribes a drug, regular check-ins and monitoring are part of the package. If the person in question wants refills on his powerful antipsychotic drugs, he’ll have to come back in, usually every few months in perpetuity.
There are still several ways the patient could present that would give a psychiatrist a strong hint that psychosis wasn’t really the root problem – a patient with OCD is likely to manifest other obsessive-compulsive behaviors, for example. Further, antipsychotics aren’t likely to reduce obsessive thought patterns or anxiety, so the patient is liable to come back and complain that the antipsychotics that were prescribed weren’t strong enough.
But you have an actual point, if I steelman your argument.
Even if I disagree that example 4 was one of them, there exist cases where incorrectly adopting Attitude 1 doesn’t produce evidence of failure.
That’s a difficult problem to solve.
Re. “Exceptions exist…” Not so simple. For instance, it’s relatively easy to get prescribed a habit-forming benzodiazepine – Xanax, for instance – that has street value and can be sold for cash or traded for your favorite illegal drug. This especially happens in communities with high rates of drug abuse.
I’m thinking type I and type II errors from Stats.
My sympathies.
Isn’t a Bayesian approach possible here? If you find yourself suspecting someone is narcissistic, ask yourself what possible observations could convince you that your hypothesis is wrong.
If you are testing a hypothesis in statistics, you try to first minimize the possibility that you reject a valid hypothesis and only then among those tests which do that you minimize the possibility that you don’t reject a false hypothesis. That choice is obviously not carved in stone but it makes perfect sense to me in most situations where the right choice is more important than a fast choice. This seems to be the case in psychiatry too. And the second attitude, as presented by Scott, seems to involve jumping into conclusions way too fast, or at least comes with a stronger tendency to do that.
“If you are testing a hypothesis in statistics, you try to first minimize the possibility that you reject a valid hypothesis”
That’s easy. Accept all tested hypotheses. That reduces the chance of rejecting a valid hypothesis to zero.
Note the word “first” in the sentence you quoted.
(apologies if sarcasm was intended; I’m not able to tell just from your words in this post)
I think the point is that you can’t simultaneously minimize two competing values; “probability that you rejected a correct hypothesis” and “percent of accepted hypotheses that are correct” are values with a tradeoff; if you make one the best, you can’t also make the other the best, except in trivial cases.
Correct.
Point taken, what I wrote made little sense. I should have written “you choose a value and make sure that the probability of rejecting a valid hypothesis does not exceed that value and then minimize the probability of not rejecting a false hypothesis among the tests that work that way”….which of course raises the question of how to choose that value when you actually do the testing in practice.
Wouldn’t a more reasonable approach be to attach a cost to errors of each side and then choose the combination of probability of each that minimizes the expected cost?
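For what it’s worth, the two recipes in this exchange can be written out side by side (standard textbook notation, not anything stated in the comments): the classical procedure first caps the Type I error rate at some level α and then minimizes the Type II error rate, while the cost-based suggestion weighs both errors by costs c_I and c_II and minimizes the sum.

\[
\text{Classical: } \min_{\text{tests}} \; \Pr(\text{accept } H_0 \mid H_0 \text{ false}) \quad \text{subject to} \quad \Pr(\text{reject } H_0 \mid H_0 \text{ true}) \le \alpha
\]
\[
\text{Cost-based: } \min_{\text{tests}} \; c_{\mathrm{I}} \Pr(\text{reject } H_0 \mid H_0 \text{ true}) + c_{\mathrm{II}} \Pr(\text{accept } H_0 \mid H_0 \text{ false})
\]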
“the extent of her narcissism was so great that she would never acknowledge that somebody else might know more than she did.” – am I the only one who thought that quote was going to be a textbook example of irony?
Regardless of whether or not it’s intentional, it is quite literally a textbook example of irony (or at least an example of textbook irony).
No, I was expecting it, too. I almost got whiplash when it was played entirely straight.
I think one is definitely right to err on the side of Attitude 1, as Attitude 2 is kind of infantilizing and patronizing. And, for better or for worse, the patient is the doctor’s customer. I think with mental health in particular, there is an expectation that doctor is both health care provider and guru. But I think the “guru” part is both supererogatory and shouldn’t be forced on anyone who doesn’t want it (assuming they aren’t an immediate danger to themselves or others).
That said, I am very sympathetic because of the analogies I see to teaching: students like easy teachers who don’t surprise them, don’t challenge them, don’t make them uncomfortable, don’t push them, etc. etc. And being this kind of teacher is both less work and more often rewarded with good student evaluations. Yet one can’t help but feel that non-challenging teaching isn’t what students need, even if it’s what they claim they want.
On the other hand, physical doctors and physiotherapists can wind up diagnosing a symptom as being caused by a totally different bit of the body (eg misalignment of legs causing neck pain, or weakness in one arm being caused by a stroke in the brain) or doing things like not prescribing antibiotics to treat a viral infection. All of which seem to be more similar to attitude 2.
That and my parents think I suffered a bad bout of osteomyelitis because the first doctor we saw quickly made up his mind that it was a growing pain and didn’t prescribe antibiotics. I do recall he decided that very quickly based on just a poke or so.
Err, I should hope they don’t prescribe antibiotics to treat a viral infection! That would be totally ineffective!
Osteomyelitis generally is a bacterial infection for which antibiotics are part of the usual treatment.
Addict: imagine doctors *not* prescribing antibiotics to a patient because they *think* it’s not a bacterial infection — Tracy’s post makes more sense that way.
If you are not sure and don’t do proper testing, arguably you should give antibiotics in case the infection is bacterial, and arguably you shouldn’t, lest you spread antibiotic resistance.
This may depend on the students. I relied heavily on webbed teacher evaluations to choose my courses, and while I mostly did pick teachers who did not make me uncomfortable (I fit your model there), I aimed for – and, I think, got – teachers who did challenge me and push me (in terms of assigning lots of (interesting) homework/different kinds of homework than I was used to/things I would not have read on my own/etc.) and I did this while sticking almost entirely to teachers with high student evaluations.
Then again that was at a school that spends most of its time signalling “WE ARE NOT EASY” at the top of its (metaphorical) lungs, so that may have some effect.
I second that. I usually disregarded the grading of lecturers (those numbers don’t tell me much, especially since I think I might have different opinions about lecturers than the average student) and looked for actual comments (in Prague, one could add comments to the lecturer evaluation and choose whether to post it anonymously or not…the non-anonymous were fewer but also more useful for me because I could estimate the reliability of that commenter and judge from his other comments if his “tastes” in lectures align with mine). There were a few lecturers I chose specifically based on someone writing something along the lines of “demanding but will teach you a lot”.
But onyomi’s model probably fits the majority of students – I would say something like 75% of them if I were forced to make a guess. I am not sure whether it is a good idea to force these people to study with a more demanding approach, because I am not sure the result will actually be people knowing more. It is true that some people may be “borderline” – they actually are too lazy to start such a demanding course but end up enjoying it when they do. For those it probably makes sense, but the question is how many of them there really are.
Another analogy to medicine: the students most willing to challenge themselves tend to be those least in need of challenging, at least assuming that the goal is for everyone in the class to improve, if not end up at the same level (impossible).
I have an uncle who has horrific teeth and refuses to go to the dentist. Part of the reason he refuses to go to the dentist is he knows they’re going to recommend a bunch of time-consuming, expensive, possibly painful work.
Maybe, but I think the analogy is not perfect in the sense that you will expect a lot of people to become students to get a degree (as opposed to knowledge). You don’t get anything when you go to a psychiatrist other than the medical care, so the fact that you indeed do go is already some evidence that you “want to challenge yourself”.
I also relied on comments more than numbers – but found the teachers with good numbers also tended to have comments I liked, and the teachers with bad numbers often had comments critiquing things I objected to. (“Goes too fast” could be fine for a subject I was comfortable in, but “is very unclear, is regularly late to lectures, changes requirements on you at the last minute” tend to be bad across the board.)
Seconded on using negative comments about a teacher being “very demanding” or having high expectations or assigning lots of work as a sign that I should take the course, though. Chicago had all comments anonymous, but you were encouraged to add them to the teacher evaluations, and I would guess maybe 25-40% of students did, which usually gave a reasonable number of different viewpoints on the same teacher.
… probably agreed on the majority of students? But I would also say student knowledge in advance matters. The one class I can remember taking that pretty much had “this is a distressing and difficult class” pasted all over the introductory materials so no one could get in without knowing (which was, for the record, an entirely accurate description) was also one in which I can’t remember hearing any grumbling, while the perfectly normal Italian department course that ended up assigning 80 pages in obscure dialect to be read by the next class got a lot of grumbling. (Not from me, but I was in my very idealistic phase at the time, and it was fun material.) I think telling students one’s expectations in advance helps – students probably know which category they fall into, and can sort themselves.
I won’t say students who are too lazy to start a demanding course, but will end up happier with an unexpectedly demanding course than if it’d been what they signed up for don’t exist, I don’t have nearly enough experience to say that – having never been a teacher. But for whatever it’s worth I don’t think I’ve ever met one. Chicago seemed to have more of the other sort – people who aim for demanding courses and then panic because it turns out for whatever reason to be more demanding than they can handle.
Sure, it is always better if the students know the difficulty level of the course. But mostly you can just ask students who have already either taken that course or another one by the same lecturer and I think that most people do that. It might be an issue in the first semester when you typically don’t know anyone save for the other freshmen.
Asking former students is an excellent idea, but it’s a social task, and otherwise good students can easily be bad at it (I’ve known examples).
“the patient is the doctor’s customer” – but if Henry Ford had asked his customers what they wanted, they would have said faster horses.
Thumbs up to the Henry Ford comment.
Furthermore, if Attitude 1 is the proper attitude to take, then why do we need psychiatrists at all? If I say I’m depressed, it would be far more efficient to go to an antidepressant-dispensing machine at CVS than to make an appointment with a doctor who is just going to give me the antidepressants I want.
Actually, I was wondering what Scott thinks his profession might be like if everything were available over the counter. That way, if you are absolutely convinced you know what you want to take, you will just go buy it. Only people looking for actual medical advice would see doctors, in which case they’d be expected to value the doctor’s estimation of the situation.
Right now, we have a problem where patients are “customers” of doctors for two different, albeit related, services: 1. giving of permission to buy drugs 2. medical advice. If all you really want is 1, then a type 2 doctor is an arrogant nuisance who thinks he knows how to run your life. If you actually want 2, then one can’t complain if the doctor challenges your preconceptions–indeed, that’s kind of what you’re going to him for.
“Actually, I was wondering what Scott thinks his profession might be like if everything were available over the counter.”
As am I. But I’ll hazard a guess that, in this hypothetical, the practice of psychiatry will involve an awful lot of cleaning up the messes that come from misusing powerful psychiatric medications.
From a consequentialist standpoint that would mean balancing the harm done by using the wrong medication against the harm done by seeing the wrong type of psychiatrist. Or none at all, because of fear of the wrong type of psychiatrist. I’m going to lean towards the virtuous libertarian solution of giving the patients good advice and then letting them do whatever damn fool thing they want, but there’s plenty of unhappy endings to go around however we handle that one.
I think the thing is, most people don’t go to a psychiatrist absolutely convinced they are depressed. From what I’ve seen, the average person really does not know much about mental illness, like what the symptoms of depression even are.
Also, there’s the fact that if people who thought they were depressed could get an antidepressant over the counter, they could very easily overdose, because there was no real in person evaluation or check in with them.
“I think the thing is, most people don’t go to a psychiatrist absolutely convinced they are depressed. From what I’ve seen, the average person really does not know much about mental illness, like what the symptoms of depression even are.”
Really? It seems to me that someone would have to have been living in a cave for the past few decades to not have any clue what depression is. They may not be able to rattle off the specific DSM guidelines, but I’ve never met a person who hadn’t heard of clinical depression and didn’t have some basic grasp of what it was.
And SSRIs are not easy drugs to OD on. It’s not impossible, but there’s plenty of OTC stuff that’s just as (if not more) dangerous.
@Hyzenthlay
I believe you when you say that everyone you know has this information, but you most likely live in a social bubble where this information happens to be present. (Most people live in bubbles.) There are other bubbles out there where this information is certainly not present and comments about suspected depression are replied to with suggestions of “manning up”, “snapping out of it”, and “just have hope!”. I’ve heard each of these in person when one friend came out about it. (There’s also the difficult problem that around half the population of the planet is below average IQ.)
I usually treat doctors like number 1, and only 2 if I’ve used up all my good ideas, and they unsurprisingly hate it. The ones who like it are incredibly valuable.
I did have to basically be shaken and told THERE IS SOMETHING WRONG WITH YOU AND DON’T FEEL GUILTY BUT TAKE THESE PILLS when it came to depression (among other issues). I wasn’t always a grown-up, or diagnosed. It’s especially weird when you’ve been healthy for most of your life and all of a sudden you’re not.
I want my relationship with a doctor to pretty much be collaborative, but that’s a hard sell when you seem like an unemployed layabout.
When none of my half-assed layperson’s remedies work, I go to a doctor, and expect the first half of what they say to be something I already know or have tried. I get the impression that doctors don’t realize that almost 100% of the decisions I make about my health I do on my own. It’s not the easy ones I’m bringing in. If it’s easy, I just do it.
I think in 80% of cases the vending machine thing would go great if done well (eg some really smart people design the machines and packaging to minimize people’s ability to take things wrong – think of those people who design the antibiotic packaging so that it’s very clear on what day you take what stuff).
In 10% of cases people would make stupid errors like taking a whole week of antidepressants at once and then never taking any again because they don’t understand that they only work over the medium-to-long term.
In 10% of cases people would make very dangerous mistakes like taking antidepressants for bipolar disorder and ending up even worse.
(Here I’m assuming that only relatively safe medications like SSRIs are in the vending machines; if lithium and MAOIs were, the streets would run red with the blood of the slain)
Other effects: millions more people would get medication who hadn’t had access to it before, pharma companies would actually be under pressure to decrease the price of their medications because presumably health insurance companies aren’t paying for this.
“In 10% of cases people would make very dangerous mistakes like taking antidepressants for bipolar disorder and ending up even worse.”
That happened to a friend of mine who had bipolar 2 (bipolar with very small manic phases, easily mistaken for depression) with her prescription meds.
This was a while ago. Is that mistake less common these days?
@Scott,
Other than dealing with more aftermath of peoples’ ill-advised self-medication, do you think having more psychiatric drugs available OTC would allow you to skew a little more towards type 2, and would that be a good thing, insofar as you’d spend more time offering real medical advice and less time acting as a rubber stamp (though obviously telling people, “yes, I think what you are suggesting is, in fact, a good idea,” is itself a valuable function)?
In Kaiser, you can use your GP as an anti-depressant-dispensing machine if you’re so inclined. He will also encourage you to take classes, but that’s about all. If you *want* a therapist, she’ll probably refer you, but Kaiser isn’t good about individual therapy.
This is a very interesting point, and, to the extent that claims that Asians are not “creative” enough (there is much handwringing in China about why they can’t produce more Steve Jobs types, though I think some of it may just be a time lag) have any validity, I think it’s because when you look at the history of, say, the Qing Dynasty, Choson Dynasty, and Edo Japan, they seem to be very good at making faster horses.
Chinese medicine, for example, feels like what would happen if you had just worked on making humorism as sophisticated as possible.
I have a hard time seeing how inventing the compass, gunpowder, paper, and printing, among many other things, doesn’t count as “creativity.” Especially gunpowder, which was a pretty much completely unprecedented technology, and was discovered by Chinese alchemists trying to create an elixir of immortality who persevered in spite of their experiments literally blowing up in their faces on numerous occasions. It’s hard to get more creative and mad scientistish than that.
Since you brought up Japan, there is also the glaring exception of Shigeru Miyamoto. More generally, Japan has had plenty of Steve Jobs types in recent decades. There’s a reason that everyone in the 1980s thought that Japan was going to take over the world.
When a culture repeatedly gets credit for inventing “gunpowder” and “gunpowder weapons”, I’m inclined to see, in the excessive syllable count and precise phrasing, a conspicuous deficit in the range of their creativity.
Paper, yes. Printing, but movable type. There’s something interesting to explore there.
“Asian people can’t be creative” is obviously false as a blanket statement.
Nevertheless, the idea that they are less creative than Europeans/Americans is an interesting one. I don’t know if it’s true, but just based on the stereotype it’s something to investigate.
On a superficial level, Japanese culture seems to be significantly more conformist than American culture. And China has the history of civil service examinations and mandarins running everything based on rigid adherence to an intellectual orthodoxy.
Of course, Europe has the Catholic Church in that latter respect. Are southern Europeans less creative than northern Europeans?
“compass, gunpowder, paper, and printing”
That’s why I mentioned the Ming, Qing, Choson, and Edo periods. All those things had been invented long before. For most of history, Asia was ahead of the West. The question is why, in the last 500 years, the West pulled ahead, while the East kept thinking of “faster horses” instead of cars.
Also, there’s something to be said for faster horses, especially as compared to early cars, so don’t get the impression I don’t like Ming, Qing, Choson, or Edo cultural products. In fact, that’s mostly what I study. I am actually kind of fascinated with what you might call the “really fast horse” (super refined version of an old thing).
@Vox Imperatoris
Are we just talking about capitalism now? I feel like we are, since I’m pretty sure that the answer to “How do we get people to be creative?” is capitalism. (Let the monkeys just throw stuff at the wall to see what sticks. Reward the stickiest ones.)
In which case, there was enough of a visible difference between Protestant and Catholic Europe’s prosperity for Max Weber to think that the former was what led to capitalism. But I’m not sure what the current consensus is on his thesis.
The Japanese somehow manage to be conformist and wildly culturally creative. I have no idea how that works.
@ Jaskologist:
Capitalism and an individualistic culture, yes (and the former tends to encourage the latter, as far as I can tell).
And I was definitely thinking of Max Weber. But like you, I’m not one-hundred percent confident he was right.
@ Nancy Lebovitz:
In some ways, it’s creative. And that tends to be the part most exported to the West.
But in other ways, they are really conservative. Even in things like musical tastes. I was struck, while watching the anime adaptation of Rose of Versailles, made 36 years ago, by how the theme music is indistinguishable from something that might be made today. And this is a general trend: there’s just a lot more consistency between decades in what style of music is popular in Japan.
“In which case, there was enough of a visible difference between Protestant and Catholic Europe’s prosperity for Max Weber to think that the former was what led to capitalism.”
Deirdre McCloskey’s argument is basically that it was social attitudes about trade that made the difference, with the Dutch leading the way in not despising people just because they got rich through trade.
Traditional Chinese culture was very suspicious of merchants and wealth that didn’t derive from land, hereditary titles, or civil service, so that may be part of it. As were late-Ming and Qing restrictions on trade (the early-Ming had Zheng He).
I notice that “500 years ago” almost exactly coincides with the onset of European colonization of the Americas, which gave Western Europe two entire continents to explore and exploit, as well as first pick of the New World’s goodies. Sure, the civilizations of the Americas had little to offer Eurasian civilizations in terms of technology, but the importation of New World crops had a tremendous impact on Europe by increasing food security and thus population growth. Also notable is quinine, a traditional Peruvian medicine derived from the bark of a South American tree that was later developed into the first truly effective anti-malarial medicine, enabling the European conquest of Africa.
Of course, that raises the question of why Europe and not China colonized the Americas, but that might simply be due to luck. No matter what, someone* was going to get there first. On the other hand, after reading about how destructive the Mongol conquests were, I can’t help wondering whether they might have set civilization in the East back a century or more.
Of course, cultural factors may play a part as well, but I’m generally wary of assigning broad cultural traits to regions as large as “the West” and “Asia.” Each of these regions contains a multitude of different peoples with wildly varying cultural values. The myth of China as eternally united and monolithic is exactly that: a myth. Take a look at the sheer length of a list of Chinese civil wars and consider that even today movies released in China are often subtitled as well as dubbed because speakers of one regional dialect of Mandarin might have a very hard time understanding a different regional dialect.
* That is, someone backed by a civilization with the capacity to undertake large-scale colonization projects. I’m well aware of the Vikings’ adventures in Canada.
New World crops may have been important, but they made it to Asia pretty fast. Both maize and sweet potatoes reached China in the 16th century.
@Vox Imperatoris
“Of course, Europe has the Catholic Church in that latter respect. Are southern Europeans less creative than northern Europeans?”
The Renaissance would seem to say no.
“Traditional Chinese culture was very suspicious of merchants and wealth that didn’t derive from land, hereditary titles, or civil service, so that may be part of it.”
Yet in much of SE Asia, e.g. Malaysia and Indonesia, ethnic Chinese have traditionally dominated the merchant community. Differential migration of commercially-inclined Chinese driven by hostility at home? Surrounding communities whose similar attitudes prevented the emergence of a domestic merchant class but didn’t have the same effect on immigrants already tainted as barbarian outlanders? Sounds like I’ve heard something like this before.
I notice that “500 years ago” almost exactly coincides with the onset of European colonization of the Americas…
European colonization of the Americas isn’t a thing that arbitrarily happened. It is the direct result of Europe suddenly getting very creative and ambitious with things like shipbuilding, deepwater navigation, and mercantile exploration. China had, as of ~1400, better ships, better maps, proto-compasses, and some lucrative archipelagos close at hand to serve as stepping stones to the unexplored and unexploited continent nearer to them than the Americas to Europe.
@ John Schilling
“China had, as of ~1400, better ships, better maps, proto-compasses, and some lucrative archipelagos close at hand to serve as stepping stones to the unexplored and unexploited continent nearer to them than the Americas to Europe.”
But what incentive to go further? Vintners, precious, yanno. Especially if their lucrative archipelagos were the same spice islands that Columbus was looking for.
Speaking of whom, and assuming the Chinese maps showed a round world, how big did they think it was?
That protestantism correlates to economic success is false.
You can clearly see that it’s false if you look at Europe at a more detailed level.
Southern Germany is the richest part of Germany, together with North-Rhine-Westphalia. Those regions are mostly Catholic.
The northern half of Italy is richer than most of Western Europe. This will surprise some people because usually statistics bundle the rich north and the poor south of Italy together.
Taken together, northern Italy and southern Germany, the surroundings of the Alps, form the most impressively prosperous region of Europe, combining superior GDP per capita with large population. I remember noticing that in a detailed map of wealth and population in Europe. You couldn’t miss it. A big, Catholic transalpine blob of high wealth.
While there are rich protestant areas (the London area, the Netherlands), there are also poor protestant areas (northern England, parts of Germany).
Looking at history, protestantism can’t possibly have triggered European economic superiority, because the reformation only occurred in the 1500’s, and Europe was already the richest part of the world by then.
(and northern Italy was the richest part of Europe, perhaps together with the area today we call Benelux – think of how the international currency in Europe was the “florin” of Florence).
I already argued, in a recent thread on this blog, that 1200’s Europe was already technologically impressive by world standards.
(good luck finding my comment, towards the end of the page: https://slatestarcodex.com/2016/02/16/links-216-n-acetyl-selink/#comments)
The case for this is even easier to make if instead of the 1200’s, we look at the 1400’s.
https://en.wikipedia.org/wiki/Global_spread_of_the_printing_press
https://en.wikipedia.org/wiki/Matchlock#History
https://en.wikipedia.org/wiki/Prague_astronomical_clock
https://en.wikipedia.org/wiki/Plate_armour#/media/File:Italian_-_Sallet_-_Walters_51580.jpg
https://s-media-cache-ak0.pinimg.com/736x/ab/67/3d/ab673d4b9a53596cdad11e39161f91ba.jpg
I mean, come on, I think everyone knows it, I don’t think I have to go on.
More to the point:
https://en.wikipedia.org/wiki/List_of_regions_by_past_GDP_%28PPP%29_per_capita
suggests that Europe in 1500 was already the richest part of the world.
There is no way that this is the consequence of the discovery of the Americas, or of the protestant reformation.
The colonial population of the new world was really low during the age of sail. People tend to overestimate the relative importance of the new world back then.
Regarding creativity and the pre-1500’s West, let me repost this piece of Wikipedia:
“Cultural and costume historians agree that the mid-14th century marks the emergence of recognizable “fashion” in Europe. From this century onwards Western fashion changed at a pace quite unknown to other civilizations, whether ancient or contemporary.”
I don’t know the explanation of this, but it’s interesting.
The thing about Western maps versus Eastern maps is that Western late medieval nautical maps were much more *accurate* than the Chinese ones I have seen, reflecting superior navigational expertise.
In the depiction of the Mediterranean, those medieval maps look almost modern. I didn’t see any Chinese map that depicts the Chinese seas with comparable accuracy.
This is a separate issue from the *extent* of the maps, since there are controversial claims that the Chinese were the first to map the new world.
Also, as David Friedman noted, new world crops reached the East quickly.
Not that I know anything about it, I just learn it from his comment.
I just learned from wikipedia that Copernicus wrote an early outline of his heliocentric theory before 1514.
I’m pretty sure that it wasn’t influence of the barely discovered new world that made him do it, nor that of the yet to happen reformation.
My personal, libertarian-friendly theory for why Europe pulled ahead of China (and I do think they were already kind of doing so before they discovered the new world – the fact that they discovered the new world was partially a result of, not just a cause of, their pulling ahead; that said, 16th c. Suzhou seems like it was at least as wealthy and nice to live in as 16th c. London) is political decentralization:
Europe had all these little states and princedoms, all further undermined by the Pope, though that influence was pushed back against by Henry VIII, etc. This made erecting trade barriers and the like more difficult, and people could move among states more easily as well; some people may have hated the Dutch and the Portuguese for gaining all this wealth though trade and exploration, but no one could really stop them (or at least, didn’t bother to).
Imperial China was not as unified as Emperors liked to think, but it was still run by a giant bureaucracy (biggest in the world till the 20th c., most likely) with a unified writing system and a lingua franca of sorts, and you couldn’t easily escape the long arm of the Emperor’s law. Further, institutional Buddhism had been largely cowed and the Emperor was simultaneously secular leader and head of Confucian hierarchy, so in some sense an emperor-pope.
Political decentralization>political centralization for development, trade, creativity.
@Vox Imperatoris
I won’t argue the point that Japan tends to be rather conservative in some regards, but anime theme songs are really quite different today from what they were. The style used in the 70s and (at least early) 80s is quite distinct:
Rose of Versailles (1979, youtu.be/has-Ru1lGTM)
Ashita no Joe 2 (1980, youtu.be/MRoukLur38w)
Queen Millennia (1981, youtu.be/9q0o8XhUu0Q)
For comparison, I picked, more or less randomly by looking at thumbnails, three theme songs from current shows I’ve never seen before (no samples were picked and discarded):
Gate: Jieitai Kanochi nite Kaku Tatakaeri (2015, youtu.be/xPVVTSof-_k)
Oshiete! Galko-chan (2016, youtu.be/hIdSjtvAKHo)
Shouwa Genroku Rakugo Shinjuu (2016, youtu.be/Q4nAnJAgfE8)
The theme songs of older shows definitely have more in common with each other than with the newer ones, or the newer ones with each other at that. If you cluster musically, you would probably get certain archetypes of songs among current shows that producers abide by more or less closely (you might go as far as calling them genres), but they are quite different from your regular anisong from 36 years ago. That’s not just true for anime music. Regular Japanese pop music back then was quite different from what you hear today too.
That’s the worst misspelling of “Shuji Nakamura” I’ve ever seen.
Also: someone mentioned movable type below; that was first invented in Korea, which is part of the allegedly-uncreative civilization/culture/region.
Maybe we need to distinguish between invention (gunpowder) and innovation (guns).
Well, that’s obviously not the distinguishing factor, since we’re holding the U.S. up as an example of high creativity.
Woah: China invented the listicle! Wait, does that count as creativity or anti-creativity?
On a more serious note:
We’re looking at the creativity of cultures, not individuals. Culture is how you put individuals together as a system. Just as you can make a reliable system out of unreliable components, you can make an uncreative culture out of creative individuals. Excessive deference to hierarchy is an excellent way to do this: the group is unable to be more creative than the guy at the top, and this happens recursively all the way up the tree (“OK, Qeng Ho, that was cool, but now let’s all stay home for a couple centuries.” “Yes, Emperor.”)
Two reasons you should listen to me on this: (1) making organizations more creative (at solving their problems) is part of my job, and (2) in my work, I deal with people from a particular ethnic group all the time; the ones raised in the home country are excessively hung up on hierarchy and won’t speak up until they know what the boss thinks, whereas the ones raised in the U.S. have no problem challenging authority (generalizations, but accurate 90% of the time). Culture is the problem.
IIRC, Yuval Harari’s theory (from his Coursera “History of humankind” course, and book) explains Europe’s dominance with its “discovery of ignorance” — a rather interesting idea.
The general pattern is: (most) cultures generally assumed everything worth knowing was already known to the ancients, hence learning was just trying to recover that.
Indeed, China started a huge exploration mission (IIRC 300 ships rather than Columbus’s 3), but lost interest soon enough.
But (for instance) Columbus discovered the Americas, which the ancients clearly didn’t know about. Because of this and other factors, people got serious about figuring out new stuff.
Harari also talked a lot about capitalism of course, but that doesn’t need explaining here.
Japan in the 1980s had nothing at all to do with creativity, but work ethic and refinement. Japan actually is horrible for creativity, with a reliance on poor homegrown solutions and their general conservatism. This is why Japanese cellphones and the cell network never took off outside of Japan, and why Japan really tends to lag behind on things like programming and web design. But they are incredible at perfecting processes and products others innovate.
Re: Anime music
It… well, I agree and disagree. It can sound pretty similar sometimes, but it varies a lot too. You mention Versailles, but Bubblegum Crisis and Bubblegum Crisis 2040 are like night and day. Sometimes everything sounds the same, but then you get something like Love Hina’s opening, or Welcome to the NHK’s Puzzle, or Last Exile’s Cloud Age Symphony.
Or like how 70’s anime tends to be really really 70’s with lush music and symphonies, and 80’s is happy dreampop and fusion jazz. It really depends I guess.
Even if a lot of Japanese creativity is manifested in anime, toys, and snacks (I expect there’s more, but that’s what I’ve heard of), even what I’ve heard of is a lot of creativity.
“Even if a lot of Japanese creativity is manifested in anime, toys, and snacks (I expect there’s more, but that’s what I’ve heard of), even what I’ve heard of is a lot of creativity.”
Yeah, by any standard, it’s strange to say that the Japanese are not creative, even just based on the small subset of Japanese creativity which reaches our shores. Yet it also seems accurate to say that they show patterns of creativity very different from our own–being weirdly super creative in some ways and weirdly not at all creative in others (the Japanese internet still looks like Myspace).
The standard line, even repeated here, is that they are good at perfecting things others invent but not at inventing new things. This is, of course, an overgeneralization, but I will say that much of Japanese creativity seems to stem from an extreme attention to detail: https://s-media-cache-ak0.pinimg.com/564x/60/93/0d/60930d844cbec6a3d4da90a44ed7f4d3.jpg,
and one can imagine that, perhaps, a culture which encourages this, as Japanese culture does (“reading the air,” i. e. picking up very subtle social cues is a highly valued skill), might unsurprisingly produce different patterns of creativity than others.
houseboatonstyx wrote:
As of 1500, the Chinese assumed the world to be flat. They only learned about the world being spherical from Catholic missionaries in the 17th century. See for example https://en.wikipedia.org/wiki/Flat_Earth#Ancient_China
I suspect we fail to notice what we aren’t being creative about.
Still, it could be that in Japan, there are subcultrues which have developed a habit of accepting creativity, and other subcultures which haven’t. Industries can be subcultures.
I’ll grant that the Japanese tend to be better at putting a fine polish on things than most westerners, but it’s still true that there are Japanese being wildly inventive about what they do with details of presentation.
It’s possible that we shouldn’t be leaving “creative” floating out there as a vague abstraction. It should at least include the question of “creative about what?”.
It would help to have some information about what it’s like to be Japanese in one of their fields that has a lot of innovation. In addition to the three I listed, I’ve heard they come up with small home appliances which generally don’t get out of Japan.
@ Jon Gunnarsson:
That’s fascinating! How in the world did they not notice things like ships appearing at the horizon sails-first? And the many other things which the Greeks noticed?
I do think it’s interesting that Greek philosophy (especially early Greek philosophy) seemed always to be concerned with metaphysical questions about the nature of the world first and social questions secondarily, whereas Chinese philosophy, from what I know of it (considerably less), is far more concerned with ethical and political questions.
“That’s fascinating! How in the world did they not notice things like ships appearing at the horizon sails-first?”
It can be a challenge to realize that what seems like a routine feature of your environment has logical implications.
I always interpreted the comparison of earth and sky to an egg to mean they thought heaven and earth met at some point, presumably at the tops of tall mountains or something, with heaven being a kind of firmament encapsulating earth, which could have been considered relatively flat as compared to the sky.
But Chinese metaphysics, due to yin-yang cosmology, has a very strong bias towards perceiving everything as a marriage of opposites: if the sky is round, then the earth must be flat, because that would be seen as some kind of harmonious marriage.
Their culture isn’t that creative at all. You don’t have a culture with high levels of social conformity manifesting high levels of creativity, because creative types grow at the fringes and margins of culture. This is a big problem with conservative Christians, for example, who have suffered tremendously by not being able to provide decent art which illustrates their culture. Japan in general doesn’t have that, and even their outcasts conform to their own code tightly.
Like anime. If you watch it seriously, you realize that it’s incredibly conformist apart from a few auteur-style directors. One person innovates, and then the market is flooded with derivative copies and fan works until the next trend is discovered. Like the current “self-aware fantasy world based on MMO tropes” for one, or the Infinite Stratos style combat harem (which itself is a throwback to Tenchi Muyo, over 20 years old!)
Japan really is ace at borrowing and working hard; they are a culture that literally borrowed an entire religion, Buddhism, and adapted it to the Japanese mindset. Without that, they tend to struggle. Look at Japanese videogame development: it’s tanked, and Japanese developers in general now lag far behind the West in developing great games.
Japan merely took a different approach; instead of spreading creativity among all their people they concentrated it in the person of Hayao Miyazaki.
How, exactly, is this any different from how Hollywood and other Western entertainment industries work? Just glancing at the box office rankings for 2015, I notice that 7 out of the top 10 grossing movies were sequels, spin-offs, or remakes of existing franchises. Of the remaining 3, 2 were adapted from books (but then a lot of anime is adapted from manga), leaving only 1 original story in the entire top 10. I’m not seeing an overwhelming amount of creativity on display there.
Because it’s not like Europe has ever borrowed a religion invented in another part of the world and adapted it to their local mindset, even to the point of absurdly depicting the religion’s founder as a completely different ethnicity from what he actually was.
Seriously, outside of India, the Levant, the Arabian Peninsula, and China (though even there they had some pretty close calls), can you name a single major culture that didn’t either voluntarily or by force import a foreign religion and adapt it to their local mindset? In fact, Japan’s traditional religion has survived far better than those of Europe (though that may simply be due to Buddhism being more compatible with animist/polytheistic belief systems than Christianity), so again this seems like a really weird thing to hold against Japan.
“Great” is subjective, but even though Japanese games aren’t selling that well nowadays, it is the height of absurdity to claim that games like Katamari Damacy aren’t creative. But then creativity often doesn’t correlate strongly with commercial success, as seen in both the Western (Call of Duty, all the sports games with full-priced annual roster update releases) and Japanese (the endless parade of sequels in Capcom series like Resident Evil) video game industries.
Regardless, whatever the fortunes of the Japanese game industry are like nowadays, we can’t ignore the fact that Japan pretty much ruled the entire global video game industry for most of the 80s and 90s, and a lot of their dominance was due to the ability of people like Shigeru Miyamoto to invent entirely new genres of games.
Where’d that get started, anyway? I’ve seen it all over the place in the last couple years, but there doesn’t seem to be an obvious big series that kicked off the trend, the way Dragonball inspired a million high-powered martial-arts shounen series or Sailor Moon built the current magical-girl formula.
(The “pathetic nerd gets reborn or transferred into secondary world, kicks ass” variant is especially annoying. The “Den” comics in the Heavy Metal anthology series, way back in the Seventies, mined out all the good that concept had to offer.)
Record of Lodoss War might be a spiritual predecessor, but that’s like thirty years old, and based on D&D (actually a Japanese knockoff, IIRC) rather than MMOs.
.hack probably started it a while back. The current trend seems to have been started by Sword Art Online.
The Chinese were run by an entrenched bureaucracy for thousands of years. Bureaucracies flatten out conditions so everybody on average does ok, and stability is maintained above all else. They also, by pulling down the top, stifle those innovations which would raise the average condition.
Alexis de Tocqueville, from Democracy in America:
Alexis de Tocqueville would seem to be another on the illustrious* list of 19th century Westerners who accepted Qing propaganda as an accurate description of China. The power of the central administration even at that specific time was in many respects more aspirational than real (hence the collapse that was soon to come). Certainly this is absurdly inaccurate as a description of Chinese history, which involves tremendous changes and is hard to safely generalize about. Notably, China has more often been disunited than united, however much the official propaganda may have insisted throughout the ages that there is only one China (insisting on that is a case where the modern Communists are continuing a tradition that dates back to Confucius, who wanted there to be only one China, so there wouldn’t be multiple states to make war on one another, and so, following some sort of strategy of “fake it until you make it,” pretended that things were as he wanted them to be).
* Not being sarcastic; lots of otherwise extremely sophisticated people on that list.
Taking Victorians at face value whatever they might say seems to be rather popular around here. I’m glad not everyone’s falling for it.
(insisting on that is a case where the modern Communists are continuing a tradition that dates back to Confucius, who wanted there to be only one China, so there wouldn’t be multiple states to make war on one another, and so, following some sort of strategy of “fake it until you make it,” pretended that things were as he wanted them to be).
Huh, that’s an interesting perspective. Technically, “China” would have been Confucius’s worst nightmare, as Chin hegemony was the result of one little state becoming ruthlessly efficient by adopting Legalism.
Pedantry aside, though, what you’re implying is that the Western Zhou and the Shang before them are basically a fiction invented by the Confucians. Not that they didn’t exist, but that there were no “dynasties” with a monopoly on force in the Yellow River valley civilization before Chin.
Should we be translating the Chinese word used for pre-Chin “kings” as “pope” instead?
“Should we be translating the Chinese word used for pre-Chin “kings” as “pope” instead?”
No, but it has been argued that the second half of the word “emperor” (di, as in huangdi, which would have had an -s suffix in Old Chinese), shares an etymology with Latin “deus.”
Well, they’re not exactly fictions (though the Xia may have been), and obviously Confucius didn’t call what he wanted China, but he did want there to be a single government over a large region and tried to argue that this was natural, and he definitely included territory that had never been Zhou (never mind Shang) in his hypothetical unity, while pretending that this would just be restoring the way Shang and Zhou did it.
@onyomi: That’s fascinating on two levels.
1) Indo-European loanwords in Old Chinese? When did that happen? Maybe when steppe nomads brought chariots to the Yellow River?
2) So King Zheng of Qin literally declared himself “God Emperor” of “All Under Heaven”, huh?
@ Le Maistre Chat:
An open mind is like a fortress with its gates unbarred and its walls unguarded.
Probably via Tocharian. Here’s a paper (Lubotsky 1998) if you want more information on these loanwords.
Did you hear a few years ago about the unearthing of a Chinese scroll that was essentially “The 10 things to do while vacationing in the Roman Empire, that country we have diplomatic contact with”?
The seventh one will shock you!
The parallels in Chinese and European history vis a vis unity are fascinating.
Folks have been proposing China’s “unity” stifled competition and discouraged innovation compared with “disunited” Europe for, well, basically ever. It’s not an awful argument to make, either.
Consider:
For most of history, China was actually far more disunited than united. The Zhous spent most of their existence as kings in name only, the Qin lasted only one generation, and the Han only made it 200 years, or 400 depending on how you count (and let’s be honest – Later Han was basically a dynasty in name only).
So, look down at the world in the year 600 or so and compare the two civilizations at the extremities of Eurasia: Europe on the one end, and China on the other.
The western end of Eurasia had been more or less united for roughly a thousand years – the Achaemenids succeeded by Alexander, succeeded by, er, the Successors and then the Romans. Sure, Roman control of the more backwater areas like Gaul and Hispania had slipped in the last century or two, but the core area of civilization around the eastern end of the Mediterranean was still firmly under central control, and no doubt the authorities would soon re-establish their grip on the rebellious provinces (a process gotten well under way by Justinian).
China, by contrast, had been only briefly united by the Qin and Han dynasties, and the Han had sputtered along for 200 years only vaguely exercising any sort of authority. That had been followed by 300 years of bitter warfare and disunity, looking pretty much like the centuries of division and warfare that preceded Qin.
So, if you had to say which civilization was more “naturally” united in 600, odds are you’d have to pick Europe. One could easily justify this, too – the Mediterranean is a natural highway, facilitating control of all the hinterlands around that vast sea by maritime cities like Athens, Rome, and Constantinople. The easy water communications and pleasant climate make possible vast empires, which you can’t get in mountainous, land-locked China.
Even moving our timeline up a bit, say to 1000 AD, you’d still be challenged to choose between the two. In China another period of union had ended in dissolution and disaster as the Tang came down (and, again, the last half of their reign was a long slow crumble), while in Europe the natural unity of the East continued and the Franks had taken great steps to unite the West. Only the Muslim conquests and the no-doubt temporary infighting amongst the Karlings had shaken Europe’s unity.
Really, it’s not until the Roman Empire stubbornly refused to be resurrected, despite centuries of attempts from Justinian down to Mussolini, that Europe gained a reputation as a place naturally divided by culture and geography. Meanwhile, China as we understand it was only rarely unified before the Mongol conquest – and since then it has been rarely divided.
However, it’s also in that last millennium that you really see Europe’s technological lead over China develop, whereas before then China had been on top. So, maybe there’s something to the “disunion facilitates innovation” narrative after all.
Chevalier: fascinating points, but I think it’s a stretch to say that the West was centralized before the Roman empire. Before Alexander, the Greek world obviously escaped centralization; after Alexander, there were too many poles of power on the Mediterranean – 3 rival successor dynasties, plus Rome and Carthage, plus plenty of minor states that could survive thanks to the rivalry between those powers.
I suspect that the initial lack of centralization is precisely what allowed classical antiquity as we know it to develop its splendors, and that after the Romans took over, the trajectory of development began to gradually bend downwards, until a few centuries later there was a noticeable collapse.
I think Chevalier is overstating the early China decentralized case, as well as the early Europe centralization case. I do, however, think it’s roughly accurate to say that, during the period from about 1000 to 1800, China was mostly getting progressively more centralized and authoritarian, while the West was either getting more decentralized and liberal, or, at least, not more centralized and authoritarian in most cases. The Qianlong Emperor, whose reign arguably constituted the high point of Chinese imperial authoritarian power, actively fetishized political centralization.
And to go back further in time, I think the Chinese Tang Dynasty was a lot more centralized than anything in Europe during the same period, though I pick roughly 1000 as a starting point for Chinese bureaucratic authoritarian meritocracy, because the Song Dynasty was when the Chinese government first took a form approximating the Western stereotype of Chinese empire. Prior to that it was much more aristocratic and feudal.
Not sure I’d go that far. Certainly there weren’t too many large-scale political changes in Europe between when Cnut’s empire fell apart and shortly after the Black Death, but even then it wasn’t getting any more decentralized and liberal outside the political realm; that period witnessed several large-scale heresies (most famously the Cathars in Languedoc) crushed in favor of Catholic hegemony, for example, while in the economic realm the scale of action shifted upwards from local artisans to guilds and cartels and occasionally stuff like the Hanseatic League.
Later, the period 1450-1700 saw the fall of (basically weak and decentralized) feudalism and the rise of absolute monarchy. The movement didn’t last long without modifications in Britain, but on the Continent I’d say it represented the single greatest expansion in centralized state power since the Romans.
I agree that it’s strange to say that Europe wasn’t getting any more centralized. However it never reached the point that there was a giant centralized “world” empire like Rome or Yuan or Ming China (the Catholic Church wasn’t one because it was only half of the church-state duality, and it often undermined potential centralizers).
In the end however, I’m not sure there is much of a point to arguing why one side of the world developed earlier. This kind of question only makes sense if you’re discussing neighbouring countries, because then you’d expect the technology in one country to influence the other, and you have to come up with an explanation if one country falls behind. But with countries at opposite sides of the world, there’s no reason they should be at the same level.
So it’s the question that I’m afraid is wrong. It’s like asking why one tree is taller than another tree. Why not? One of the two must be ahead and it may very well be random (in the sense of not having a single interesting explanation).
“the period 1450-1700 saw the fall of (basically weak and decentralized) feudalism and the rise of absolute monarchy.”
Maybe being behind China in authoritarianism means being ahead of it in everything else… which would make sense on my view that history is a struggle between governments and economies, both of which have their own logics and trajectories, neither of which ever completely “wins” or “loses,” but the balance of power between which determines whether tyranny and stagnation or relative freedom and innovation are the result.
I originally wanted to combat the “just-so” stories of geographic determinism that some people think suffices for historic explanation, but actually as I looked at the record I think I find myself agreeing with onyomi’s original point about China’s relative stagnation being a result of the power of the bureaucracy.
On the broadest terms, Europe’s relative stasis under the Roman Empire, and its relatively rapid development following the year 1000, lend credence to the theory. Similarly, China’s relative disunion prior to 1000 (up to that point it had spent more time divided than united, remember), and its dominance at the top of the technological tree, are followed by a millennium of more-or-less imperial unity, and stagnation.
Obviously, this can be exaggerated – the Qing, for example, were I think relatively innovative when they were still the Manchu. At the very least they innovated enough to overthrow the Ming. However, I think it does suffice as at least a partial explanation.
(Regarding the relative union/disunion in classical times, I did exaggerate early European unity, but I still think the Roman empire alone suffices to make the point, since its only Chinese peer was the Han dynasty, which had 200 good years and 200 years of not much. An observer living during the reign of Maurice or Sui Wen would probably conclude that Europe’s natural state was unity around the Mediterranean, and China’s natural state was fragmentation among the mountains and river valleys. Beware geographic just-so stories just as much as you do evolutionary ones!)
I always thought that the Romans pulled the brakes on classical antiquity and its vitality.
“I always thought that the Romans pulled the brakes on classical antiquity and its vitality.”
I think this is actually a key point: it is very, very easy to mistake the glorious, decadent empire which tends to result after decades or centuries of decentralized creativity for the cause of, rather than the result of, that prosperity.
@onyomi:
It seems odd to me that people aren’t mentioning what the bureaucracies did.
The Romans built things for the populace. Roads, aqueducts, sewers, public works of many stripes. Entire new towns. They ran an empire, and they did it with competence.
My sense of the Chinese bureaucracies is that they spent a significant amount of time finding ways to enrich themselves.
“My sense of the Chinese bureaucracies is that they spent a significant amount of time finding ways to enrich themselves.”
One can’t generalize about such a vast swath of time, but the Chinese bureaucracies did do a lot of public works projects, many on a scale unknown in the West. Leaving aside the likely pretty useless Great Wall, the Grand Canal greatly facilitated trade between the North and the South, and foreign observers were consistently surprised by the speed and reliability of their postal system, as well as with the speed with which travel could be accomplished in general.
Ease of travel and a higher level of connectedness does seem to be the advantage of empire almost wherever you find it. But it seems to have some serious tradeoffs in terms of local creativity. Empires tend to concentrate everything around the sources of political power. Contrast Greek and Renaissance Italian city states, or, indeed, China’s Warring States and Six Dynasties periods, widely viewed as political disasters, but producing a lot of interesting culture and innovation.
The problem with the Romans building useful things for the people is that those initiatives can also be interpreted as extravagantly wasteful.
The Romans created an extravagant network of aqueducts, public fountains and public baths, which actually facilitated the spread of disease, and could be seen as having no economic purpose. To do this, they had to take resources from somewhere else. A libertarian type of thinker could easily criticize the Romans for this.
The Romans famously distributed free food to the masses. To do this, they had to take food from someone else.
In an ancient world in which “to feed one’s family” still would have been interpreted literally (as opposed to today, in which it refers to gasoline and other luxuries), this equates to giving people an entire livelihood, and it must have created a parasitic, unproductive class of people.
The Romans also entertained the urban crowds with public games, including bloody gladiatorial ones. Again, the resources for this had to be taken from somewhere. Including lives. Again, a libertarian type of thinker could easily criticize the Romans for this.
The main goal of the Roman roads was not trade, but for Roman rule to survive – to allow legions to put down rebellion everywhere. Keep in mind that before the invention of railways, trade happened mostly along waterways – rivers and seas – because it’s far cheaper to move things on boats. To the extent that the roads had a public usefulness, maybe it was worth the expenses, and maybe it wasn’t.
I’m not saying that all these things were necessarily bad. But they can’t be used as evidence that Roman rule benefited the people of the empire.
I’m sorry that I don’t have a link, but I remember reading an essay by some historian or economist of libertarian inclination which eviscerated the Romans of the imperial period for the way they raised taxes again and again, debased the currency, and restricted private economic activity, without understanding that they were destroying their own empire in the process.
I think with mental health in particular, there is an expectation that the doctor is both health care provider and guru. But I think the “guru” part is supererogatory and shouldn’t be forced on anyone who doesn’t want it.
This is a problem Carl Jung spent a lot of time thinking about.
I suspect that some cases of narcissism are actually an agnosia for the desires of others. Normal people’s brains pay an inordinate amount of attention to what others believe, wish for, intend to do etc. If the part that keeps accounting of the needs of others is out of whack or seriously diminished, you end up with defective social cognition. Your intuition about exchanging goods with others may tell you that the good things you have in your life are the things you manage to extract from others minus the things you give away. You fail to interpret the social world as a network of exchanges, and perceive it as a spiderweb with yourself in the middle, desperately trying to fulfill your social and material needs in an environment that is reluctant to invest in you.
A different mechanism might not involve the agnosia, but a failure of the reward system to anticipate or generate pleasure through helping others.
Many of the other aspects of Narcissism (charisma, inflated self-image, unsteady relationships etc.) may be part of a coping strategy, since the normal social needs of the narcissist are usually not met, and everybody on their social graph tends to become hostile after a short while.
If this interpretation is correct in some cases, treatment might start by explaining to the patient that they are not suffering from a character flaw but a specific attentional defect. A solution could be the establishment of a regular social bookkeeping routine, and cognitive workarounds that help investing in others in the absence of an innate urge to do so.
That’s actually a really interesting idea. I say that as someone who has serious facial agnosia – I routinely recognize people by their voice, not their face, which has its own interesting complications. It’s really hard to read emotions off others’ faces when you can’t even recognize faces (that sounds more extreme than it is, but for me evaluating faces is a brute-force skill, not a natural one), but if you’ll just put it into words, I can read your emotional state immediately.
Tbh, I think the idea of social agnosia applies much better to the autism spectrum than narcissism.
“a failure of the reward system to anticipate pleasure/generate pleasure through helping others”. That just sounds like a plain lack of empathy. I have my doubts that pointing out someone’s attentional deficits to social exchange is actually going to make them more empathetic.
Narcissists are notoriously resistant to therapy from what I understand. I read one article that said continually pointing out the detrimental effects of their behavior on relationships is one way to try to get them to acknowledge the need for change.
Reading this, I had a thought – most people are self-centred but some people don’t get that everyone is like this, and get offended when their wants and needs aren’t prioritized over those of others.
Narcissism means not being able to comprehend the narcissism of others. I think Bierce said something like that.
I think some cases of narcissism might be learned helplessness from abuse.
Abused children have the experience of never being what their parents wanted. They often can’t figure out what it is their parents want. They couldn’t help their parents, and they internalize that. Their parents certainly wouldn’t help *them*, and they internalize that, too. This teaches a confusing world where emotional demands are insatiable, and so a world where there’s little point in empathy exercises. It also teaches that the only true helping relationship a person has is the relationship they have with themselves internally.
This is all highly speculative; with that note given, I’ll continue. If they have a positive relationship with themselves and are introverted, this predicts a schizoid personality. If they have a positive relationship with themselves and are extroverted, this predicts a narcissistic personality. If they have a negative relationship with themselves, this predicts a “borderline” personality. Notably, this idea predicts that schizoid and narcissism exist on a continuum differentiated by introversion/extroversion, which is probably the best attack angle for falsifying the idea. Introverted narcissists and extroverted schizoids would, if not neatly convertible, illustrate that the speculation is questionable if not entirely in error.
As a side note, does anyone else think that “borderline” is a terrible name for a disorder? Borderline to what?
“does anyone else think that “borderline” is a terrible name for a disorder? Borderline to what?”
Yes, I’ve never really understood the name.
The name is supposed to indicate that the person is on the borderline between psychosis (delusions) and neurosis (negative emotions).
If you conceive of mental illness as a continuum from psychosis conditions like schizophrenia to neurosis conditions like depression/anxiety, then borderline is supposed to be roughly in the middle, containing elements of both.
Many acknowledge that the name is bad because it doesn’t convey anything unless you know this continuum idea in psychiatry.
Thanks for the explanation, though I don’t think I understand how this is a continuum. I would imagine that being depressed and anxious would make me more likely, or, at least, not less likely to become psychotic, so I don’t understand how these are on a continuum.
I also don’t really understand what it would mean to say that one is located on a continuum between anxiety/depression and psychosis. Does it mean you’re a little depressed and a little psychotic?
I have read that borderline patients have more intense emotions than most, and tend to engage in black-and-white thinking. This sounds pretty similar to bipolar to me, though I understand they are very distinct patterns.
Based on what little I know of borderline, it sounds like it should be called “emotional regulation and perspective disorder” or something.
According to some theories the introverted narcissist is a “covert narcissist”. I am not sure how widely accepted this category is.
Also, narcissism is highly heritable, maybe even as much as IQ.
The last word of the essay is misspelled.
“Attitude 1 would have been the wrong choice in these four situations.” five*
Also, maladptive
No, it’s “alley”.
I wonder how much you can avoid the negative parts of attitude 2 by using it only with patients you like? It seems like most of the failure cases you described are cases where the psychiatrist also disliked the patient and wanted them to be making shit up.
Of course, committing to treating patients you like differently sounds like a terrible idea. But it is interesting.
As a side note, I think that the fact that I’d tend to lean slightly towards option 2 probably describes most of the (admittedly rare) cases where I disagree with you.
“Attitude 1 would have been the wrong choice in these *four* situations.”
And what if the party says that there are not four situations but FIVE? How many are there then Winston?
THERE ARE FOUR LIGHTS.
Nice.
“Did I murder my neighbour in Belize? Was I manufacturing illegal drugs in Central America? Was I having sex with underage girls and was I using bath salts? I can answer a resounding no to all three of those questions.”
https://www.youtube.com/watch?v=hx3yTWkN3fI
To some degree you can test narcissism with response to preposterous flattery.
That’s an incredible observation. It might be the single smartest thing I’ve ever heard.
You might want to think more generally about how the psychiatrist considering attitude 2 could test his conjecture. Just as in other cases, you want him to specify, preferably in writing even if only for his own use, his protocol and what outcome has what implications.
You gave one example with the pill that was supposed to cause hallucinations—give her the pill, if the hallucinations appear in less than X minutes they are bogus.
Part of the reason for doing it this way is that we are usually biased in our favor, likely to interpret evidence as supporting what we already believe. So you need to work out the implication before you know what the evidence shows. And you need to devise a test such that, if you are wrong, the evidence is likely to show you wrong–as in your case, assuming the patient did not know how long the pill was supposed to take to start working. An even stronger version would be to have given her a placebo in place of the real pill.
What a moment we’ve had here folks.
I’m still not getting the joke, but I’m guessing it’s a cultural reference.
“preposterous flattery”
No, Scott responded using preposterous flattery. I’m just disappointed that Michael didn’t respond to that by agreeing with Scott, which would make it unclear if he acknowledged the joke or is a narcissist himself.
Aapje, thanks. I’d not only completely missed Scott’s joke, I’d decided that “what a moment we’ve had here” was a cultural reference.
*slow clap*
Long time lurker. First time poster.*
*First time poster laughing so hard he’s crying.
Damn, it took me like an hour to realize the joke. I’m an idiot.
I didn’t get the joke until I saw MP’s comment.
Even worse, I didn’t get it until I read anon85’s comment. 🙁
The lady in the textbook might only have been looking for attention, but not because she was a narcissist; because she was in an unhappy marriage with a borderline emotionally abusive husband and she wanted a little reassurance that no, she wasn’t a nagging bitch and a crazy ugly old broad. (Probably she was doing things that provoked her husband, but that’s because it takes two to tango and there’s very rarely a clear ‘he/she is the villain and she/he is the suffering saint’ in these situations).
Sounds like a little emollient advice about maybe trying marriage counselling first rather than a flat “You’re an attention-seeking ho” would have gone better, but if it was in a textbook, it probably was a story from the Good Old Days when doctors were gods, and women were hysterics who needed a firm hand when they started up with their fancies and notions.
Good point. When I’m in an interpersonal situation where I feel like I’m being mistreated, I’ve caught myself secretly hoping that the person will go a little bit further so that their behavior becomes Unambiguously Wrong instead of wrong at the level of “I strongly feel deep down that this treatment is unfair, but it’s too hard for me to pin down a definitive argument as to why”. It’s not much of a stretch for me to imagine deliberately (or perhaps subconsciously) provoking someone in these situations.
lol, oh my
It took me way longer than I am comfortable admitting to realize that was a joke.
I thought the test for narcissism was inability to own mistakes and failures or admit/apologize for wrongdoing.
Isn’t that a very very common trait in interactions with strangers?
With strangers, yes. However in intimate relationships you’re expected to have more give-and-take, which narcissists will struggle with.
Speaking of narcissism, I’d like to see Scott do a post on the claim that young people nowadays are more narcissistic than in the past, analyzing how reliable the evidence for that theory is.
I’d like to see that, too.
Maybe I have misunderstood this idea, but generalized application of psychological diagnoses don’t seem like a useful exercise.
http://time.com/3083178/narcissism-ask-question/
Is there a psychiatric equivalent to “Trust, but verify”? I’d hate for my therapist to dismiss my view of my problems out of hand — after all, I’ve been living with them for months, and they’ve just met me. But it seems that better patterns would start to emerge the longer we’d been working together.
I guess, I feel as though “how well you feel you know them” would be a better proxy than “patients you like.” And “have you considered that you might have dysmorphia?” would come across much better, *even if it is wrong*, after you’ve established a relationship of trust and respect.
The Last Psychiatrist seemed more attitude 2, what with the cultural diagnoses of narcissism.
Hanson too, if we’re mapping bloggers, what with his whole “x is not about x” thing.
House is probably the biggest attitude 2 proponent, his default assumption was that patients always lie.
Try to make a drama out of Attitude 1.
Ironically, from my somewhat-but-not-in-great-depth-reading, Erickson might have been the Spiritual Avatar of Attitude 2 while having the outward appearance of being the Fleshly Personification of Attitude 1.
As another hypnotist with a not-in-great-depth reading of Erickson, I find that to be an interesting way to describe it. The attitude 2 part is pretty obvious, but I’m curious exactly what you mean by “appearance of being the fleshy personification of attitude 1” – there are a few different ways I could interpret that.
In one sense it almost seems *necessary*, since a pure attitude 1 approach is too naive to get any results save for by fortunate coincidence, but if you don’t at least *look* like you’re doing some attitude 1, then no one is going to want your help.
It’s been a while since I’ve read Erickson, but as I recall, sometimes he’d say (or at least say he said) “If I can’t help you, nobody can”. This isn’t taking charge of the diagnosis, but I found it breathtakingly arrogant and potentially dangerous. The source is probably My Voice Will Go with You.
@jimmy:
I meant by it that Erickson always acted like he was engaged and sympathetic, to the point where he could pretty much charm birds out of trees. But the sense I get from reading him was that he didn’t really care what the patient thought the problem was or wanted – he would decide what it “really” was, or just ignore it and “fix” what he thought needed fixing. He cared what the patient was thinking/feeling just enough, and just long enough, to get inside their heads and start rearranging the furniture.
@Nancy Lebovitz:
Another thing Erickson was pretty much the avatar of was, “It isn’t arrogance if you really are that good.” Doesn’t make it less obnoxious, but the man was a godsdamned magician.
Of course, that being said we don’t read about his mistakes and mis-steps much. How many problems his attitude caused versus how many it fixed, there’s no way to know.
This helps pin down what often irritates me about Hanson. It leads to some very interesting observations, but a number of his posts make me want to shout “Sometimes it’s just a cigar!” He has the James Burke tendency to discard first causes just because they’re too simple, even if they’re 80% of the issue.
Understatement of the century! I think TLP is literally attitude 2 personified.
Indeed, even the most extreme example of approach 2 that Scott gave didn’t involve giving the diagnosis before he’d even met the patient.
I miss TLP.
I think describing him as Attitude 2 is insufficiently charitable, however.
It’s all about understanding the reasons why he laid things out as he did – was he trying to describe the absolute truth of how things are, or to model a toolkit for the reader; to show, by example, a type of analysis that has the capability to reveal truth? Even if you believe he was arrogant enough to attempt the former, from a postmodern perspective, does it matter, so long as his readership was benefiting in the latter manner?
When we’re analyzing a culture, the stakes of the analysis are also low. We’re not in a position of power, the way that a psychiatrist or psychologist is with a patient; we do not have the capability to prescribe antidepressants for America, or to recommend that everyone try some mindfulness exercises with a reasonable expectation that society at large will give them a go.
When we’re analyzing a culture, we are very nearly guaranteed that our analysis will accomplish nothing of note. In that case, encouraging Attitude 2 is virtually risk-free. Almost no downside, and a potential upside of a vastly greater understanding of yourself and the culture you live in.
Believing what you are told about what is going on and why it’s happening is the default, and requires no further elaboration. Looking deeper only happens if someone takes the time and effort to try to untangle what’s happening. And as long as you don’t approach that process as an absolutist – as long as you understand that it’s an attempt to generate additional possible perspectives instead of finding the One True Perspective – that’s overwhelmingly a good thing.
Cultural diagnosis? Isn’t this what Breivik did when he diagnosed Norway with Cultural Marxism?
I don’t like what he prescribed for it.
Sounds like Breivik is becoming the new Hitler, just when comparison to Hitler has started to cease being effective.
We need a Hitlers to Breiviks exchange rate.
How does 1 Breivik = 15 nanohitlers work as a starting point?
Are you trying to scam me, are you a Holocaust denier, or do you just not know your SI units? (Not sure which is worse.)
Canonically, 1 hitler is defined as 6.0*10^6 deaths (6 megadeaths), or 4.14*10^13 dollars (41.4 teradollars).
One breivik would then be 12.83 microhitlers, or 531.3 megadollars.
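For anyone checking the conversion, here is a quick back-of-the-envelope; it assumes the commonly cited toll of 77 deaths for Breivik, which isn’t stated above but is the only input that reproduces the figures given:

$$\frac{77\ \text{deaths}}{6.0\times10^{6}\ \text{deaths/hitler}} \approx 1.283\times10^{-5}\ \text{hitlers} = 12.83\ \mu\text{hitlers}$$

$$1.283\times10^{-5}\ \text{hitlers} \times 4.14\times10^{13}\ \text{\$/hitler} \approx 5.313\times10^{8}\ \text{\$} = 531.3\ \text{megadollars}$$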
Breivik killed way more than 15 billionths of the number Hitler killed.
I prefer kilonazis. With ~10 million NSDAP members and ~20 million deaths directly attributable to Nazi aggression, Breivik comes in at 0.039 kNz. Compared to a hypothetical offspring of Sauron and Cruella de Vil at 4.7 kNz.
I see we’re approaching this differently. Breivik most definitely killed way more than 15 billionths of the number Hitler killed, but is he more than 15 billionths of the boogeyman that is Hitler?
but is he more than 15 billionths of the boogeyman that is Hitler?
He’s been mentioned here as an Icon of Pure Evil more than once, and Hitler hasn’t been mentioned sixty-seven million times, so I’d say yes.
But by that standard, Hitler’s evil would only merit, what, 150 mSJWs or so? May have to give this some more thought…
I think the Hitler=150mSJWs problem comes because SJWs are still around and Hitler isn’t. How long would people keep using them as boogeymen if they vanished? You need some sort of time factor where the longer it’s been since the boogeyman was last seen free and alive, the higher they are on the chart.
@TrivialGravitas:
I think you’re on to something, but it’s still incomplete. Before Hitler, the default Icon of Ultimate Evil in western civ was the Pharaoh of the Exodus. After some twenty-three centuries, he should have had enough concentrated boogeyman mojo to have gravitationally collapsed into a singularity of pure evil. Yet he was almost completely displaced in a few short years by a lightweight who was still alive and busy killing people at the time.
@ John Schilling:
Are you forgetting to take population size into account?
Yes, Pharaoh was mentioned for a longer period of time, but the 20th century has had a far higher population. It’s quite possible that these balance out.
As a narcissist, I have to say that much of what he has talked about is *really* not about narcissism, and in fact seems pretty common to me even looking at people from places/countries TLP was entirely ignorant about? His obsession with cultural explanations made him even more of an asshole.
I would say that pragmatically his approach is just moralizing and self-serving too. I mean, of course given his strong views on… well, virtue and such, he would be scornful of harm reduction. But I’m not interested in changing myself and not being a narcissist, while harm reduction is definitely something I can practice. If I tried to rely on his writing for advice and support, he would have just further alienated me. Working Through Your Awfulness can be a very unhelpful fetish, however righteous it sounds.
Wasn’t a big part of his shtick to come up with a definition of narcissism that was rather different from how most people use the word (definitely how most laypeople use the word) and then find it in everything?
I think this is exactly wrong. TLP’s prescribed remedy for narcissism is precisely harm reduction:
The whole point is that you should be worrying about actual consequences in the world rather than your own personal identity/brand/virtue.
Oh, but in other posts he goes on to say much more! What goes into Faking It The Correct Way, etc etc! Family but also various other stuff. That’s all the bad parts. It is about Making You Right, even if it’s not a stereotypical ~treatment~.
I know he is worried about virtue, even Virtue with a capital V, because come the fuck on, duh, it’s transparent. Hell, he’d probably have some choice words about something as innocuous to most here as polyamory.
(also this comment seems obviously correct about the big problem with equivocating “faking it” and “doing it for others”. one can totally fake it for oneself too.)
(oh god there’s an early stage of the Nice Guy Discourse in there too :/ )
I got very tired of TLP. After a while, I hadn’t seen an example of any thinking TLP considered to be non-narcissistic.
He was just the expert on the ill-defined thing he thought was wrong with everybody, and at great length. You might be able to guess what diagnosis I’ve got for him.
If I remember correctly, TLP believes that narcissists rarely marry each other, so the spouses of narcissists would be examples of non-narcissists. TLP thinks they tend to be borderlines instead.
Here is an example of TLP talking about some non-narcissistic thinking, albeit in the context of discussing how we’re training everyone to be narcissists.
Edit: Here’s another good example:
It seems pretty obvious to me what he means by narcissism: it’s thinking (and, more importantly, behaving) in a way that is self-centered rather than other-oriented. He would say that the reason you’re confused about where the boundary lies is precisely that narcissism is so pervasive and so easy to rationalize from the inside, which is kafka-trapping a little bit but also true. Think about it: how would you know if you (along with your generation) never developed a particular cognitive faculty (watching out for other people, playing by the rules as they are, accepting authority unless there is some specific reason not to) and collectively forgot that it used to be normative? Or, to flip the question on its head, what kind of evidence would it take to convince you that you’re missing something and don’t know that you don’t know it?
Thank you.
I still think TLP is doing too much hostile diagnosing at a distance and failing to note that sometimes people are pushed to be stoic for someone else’s convenience, but at least he isn’t as bad as I thought.
Well, if the idea of non-narcissism is extended to this, then it would seem that I owe my life to being a narcissist! (as do like millions of queer people presumably???)
this reminds me of the Mountain Goats/neotenygate thing between some rightist types on tumblr and LW-on-tumblr, a while back. went like, “whiny self-obsessed anti-family degenerates like John Darnielle are what went wrong with society”, and it was a lot like the above paragraph. then some people objected that John Darnielle has actually been really helpful to them and was a factor in their survival.
I think he’s the modern example I would use for attitude 2 (Freud, obviously, being the historical one). Obviously he’s capable of attitude 1 thinking (he does mention prescribing the obvious treatments for things like anxiety and depression), but he’s awfully happy to talk about society-wide transference in stuff like the hipster articles.
Is Freud really an example of attitude 2? I mostly know Freud from his writings, and what he says in his writings often seems very different from what his critics attribute to him (he definitely advocates listening to patients).
Yeah, TLP is very Attitude 2 and I’ve had trouble appreciating that aspect of his work (not saying it’s bad, just literally saying I have trouble appreciating it)
I’d strongly defend approach 2. You find truth through conflict, making bold claims and having them battle it out. So say the woman drops the arrogant psychologist giving the narcissism diagnosis. What happens? She goes to a different psychologist, and either the problem was with the husband or the problem was with her. She either correctly solves the problem with the husband, or, after seeing a ton of psychologists who all seem to think it’s more a narcissism thing, comes to the conclusion that maybe it is narcissism. I guess having a single medical file following you around would ruin that though.
I’d always be suspicious of approach 1, unless you can be really sure people are incentivised to tell the truth, why would you ever assume they would if they didn’t have to?
If people have a problem and want it to go away, and believe that telling their psychologist what their actual problem is is the thing most likely to make it go away, isn’t that a pretty strong incentive?
(This may be impossibly naive of me, but… all the original examples seem like cases where patients are telling the truth as they know it, they are just misinterpreting the situation/missing part of the truth. And we’re mostly discussing people who are seeing a psychologist in the first place, aren’t we? So assuming it’s voluntary, that seems like evidence that they do want the problem solved.)
I think this is correct; people are incentivized to tell the truth to doctors in pretty much every case, psychiatric or otherwise, with only the one obvious and specific exception.
If a psychiatrist tried to make me battle him over my symptoms and problems, he’d get $0 of my money and 0 minutes more of my time. (I have a strong reaction to this idea! No criticism to the grandparent poster; it’s an interesting perspective; I just can’t imagine benefiting from or tolerating such a dynamic in my therapy.)
So Scott should just use Attitude 1 only, because if he doesn’t, patients will just leave and find a doctor who does anyway… (Tongue slightly in cheek with the reasoning, but not the conclusion. 🙂 )
You assume they know the truth. If someone tells you she’s frequently obsessed with whether she turned off the car headlights even though her car would ding at her if she left them on, and whether she locked the front door when she left home — that may be all she knows until, with some help, she works out that the attacks come when she’s stressed out.
I have lurked comments for years, quite literally, and I always thought “yeah, Mary makes a great point!” when I saw your posts… So it’s a shame I must disagree with you now!
Well, not actually disagree so much as point out that this is exactly what R.F. said — that the people in the examples try to tell the truth as they know it, which may not always be complete.
If you’re in a situation where you finally capitulate and go to a psychiatrist with a problem, the last thing you want is a gladiatorial combat of “So! Let us battle out the competing truths, last man standing is the winner!”
You don’t have the goddamn psychic energy to spare fighting tooth and nail every inch of the way with an antagonist who believes the last truthful thing you said was your name (and they wanted you to provide an official birth certificate as proof even for that). You’re likely to say “I knew this was a bad idea”, leave, and maybe not go looking for more help.
Yes, I see the value of challenging the narrative the patient has created for themselves in order to have a tidy story to tell for why they have problems, but the fact remains that in the first place the patient has come for help; the “truth through conflict” model is like interrogating someone in the water screaming for help about “Do you really need me to throw you this life buoy? Are you lying about not being able to swim?”
“gladiatorial combat”
“fighting tooth and nail”
“screaming for help”
Okay, so I shouldn’t have casually used “battle it out” as a metaphor since it’s clearly got some major negative associations for you. A collaborative fact finding investigation? Making a claim, doing evidence for, evidence against, working together to find a solution? Maybe “friendly sparring” would have been a better term for a confrontational approach to trying to find the truth?
“Battle it out” is not helpful because mental problems are tiring. It’s hard to describe the lethargy and energy-draining effects of depression at its worst, so when you come in to the therapist the absolute last thing on earth you are looking forward to is “Ah, the bracing battle of wits! The combative cut-and-thrust of intellectual sparring! A foeman worthy of my steel!”
When you’re well enough to enjoy that, it’s a different matter. When you’re grimly dragging yourself in (and I mean that literally; some days it takes me forty minutes to walk a fifteen minute trip to work/into town because I keep stopping still and going “I don’t want to do this, I don’t want to be there” and it’s not that I want to be elsewhere, I’d stand in the same spot and do nothing for an hour or more if I didn’t force myself onward) to see the therapist, you don’t have the psychic resources to cope with “a confrontational approach to trying to find the truth”, when it took you all you could do to get dressed and comb your hair and set out for the appointment.
Yes, we have entirely different understandings of my metaphor. Forcing yourself to workout anyway, even when you don’t want to, was exactly what I was thinking of. You’re pretty tired by the tenth round of sparring, but you have to do it anyway, because you can’t always fight when rested and enthusiastic. Fighting is not like the movies, it’s not all swashbuckling.
And believe it or not, I am actually aware of what depression is and what it feels like. To repeat, I shouldn’t have used the metaphor with you because you have a very cinematic and extremely specific idea of what confrontation is. So I won’t use it with you anymore since I am obviously being misunderstood.
We’re talking about mentally ill people here.
They’re not going to competently carry out their side of the “battle”, and if you take it as a working assumption that they’re wrong and your bold claims are right until such time as they do, you’re going to perform poorly as a psychiatrist.
This doesn’t even work very well when gatekeeping outside of mental health treatment. In general, when there’s a couple of standard deviations difference in IQ, the smarter person can probably shoot down any arguments made by the stupider person but not vice versa, regardless of who is correct, sufficiently to feel justified in not conceding. Less might be enough. If nothing else, the instinctive behaviour of just sticking complexity onto models to explain away any contradictions is probably sufficient when dealing with anyone who isn’t able to confidently express the Occam’s Razor/epicycles argument.
It’s a decent enough strategy on the societal level, where there are intelligent people looking to jump in on anything correct-looking, and if you trust it to work stochastically over a long time and often through the death/retirement of the advocates of the old bold claims. As a way to decide who gets what support, in a one to one conversation with no one else coming in to make a case, and no long term to let the truth settle out stochastically? Your error rate would be through the roof compared to if you tried to evaluate both ‘sides’ yourself fairly.
I think it is more straightforward. Both Attitude 1 and Attitude 2 are encompassed by the patient’s conscious state, and invariably that results in the patient having a theory about what their problem is. That theory is why it is very easy for a psychiatrist to not dismiss a patient. After the usual elaboration of symptoms and syndromes, a discussion of the patient’s theory – how likely it is to be right or wrong, or, more importantly, all of the other possible theories that account for what is going on – can be both diagnostic and therapeutic. It can be applied irrespective of how globally ill the person is who is seeking help. The goal is an explanation of the patient’s theory that is as close as possible to Sims’ definition of empathy: “In descriptive psychopathology the concept of empathy is a clinical instrument that needs to be used with skill to measure the other person’s internal subjective state using the observer’s own capacity for emotional and cognitive experience as a yardstick. Empathy is achieved by precise, insightful, persistent and knowledgeable questioning until the doctor is able to give an account of the patient’s subjective experience that the patient recognizes as his own.”
Until that happens the patient and the psychiatrist might be at odds. Certainly the day of checklist psychiatry is in direct opposition to this process and may result in speculative interpretations of the patient’s behavior that are wildly off the mark. There are also the time and practice setting factors, but Viederman showed that useful interpretations could occur even in crisis intervention settings.
To your point on narcissism, I saw a famous Hollywood director on Letterman one night laughing intensely about an analytical interpretation of the director’s back pain as “repressed narcissistic rage”! 30 years ago, I think these wild psychosomatic interpretations were commonplace.
As has probably occurred to you, Attitude 1 is the position that feels natural to a libertarian—who is likely to see Attitude 2 as paternalism. It would be interesting to see how the division of psychiatrists between the two attitudes correlated with other issues that have a similar feel—unschooling vs discipline of kids, Basic Income vs conventional welfare, non-interventionist foreign policy vs conventional foreign policy, Drug legalization vs war on drugs.
Hmm, I got to libertarianism from an attitude 2 perspective: you can’t just trust anyone, not even written laws that seem clear. Attitude 1 seems like High Modernism to me.
I’m not sure it has much to do with trust, since most of these cases don’t involve lying. If you mean that you can’t trust people to know what they need, that seems very un-libertarian — kind of like saying the libertarian position on government is to advocate for a powerful, authoritaruan nanny state, because people can’t be trusted to make the *right choices.*
Authoritarian, even. I have big thumbs, alright?
I don’t trust people to make good decisions for their own lives.
I trust them even less to make good decisions for other peoples’ lives.
Excellent point.
No offense meant to Frog Do, since I think he was expressing his honest view and not trying to be snarky, but it seems like a fair number of people have an almost comic book villain idea of libertarianism.
Yeah, I wasn’t trying to be snarky, I do genuinely think libertarianism is mostly correct for the reason I stated (unless someone wants to Attitude 2 me!).
In the world where you cannot trust anyone and everyone is just trying to cheat everyone else for their own benefit, so more or less in a world where the size of the pie is kept constant, socialism, even up to command-economy socialism, is way more attractive – in the worst case you get someone controlling all the cheating from above and cheating for himself; in the best case you get someone who will “make the society great again!”.
The strongest argument for libertarianism from me is that by and large, if you let people do their own thing, they will end up being trustworthy and act for the mutual benefit of everyone, not out of good will and compassion (although being trustworthy is something selected for in that system), but because it is the best strategy to benefit yourself in the long run. But this is complete starry-eyed nonsense if you believe that basically everyone is just trying to rip off everyone else and it is a “dog eat dog” world out there.
Exactly.
Except I think this has more to do with individualism vs. collectivism than liberty vs. coercion per se.
It’s what is called in Objectivist circles the “harmony of interests” theory. If you think that everyone’s self-interests are basically compatible, you’ll support a system where individuals can pursue their own interests in an uncoordinated way.
If you think that people’s interests conflict in major ways, you’ll be very concerned with collective action problems and “defection” from group norms. Indeed, it may not only be that the interest of an individual conflicts with that of others within his group but also that the interests of one group conflict with other groups. So you have to put in mechanisms to ensure loyalty to one’s own group in spite of temptations to pursue individual interests.
In practice, individualism and liberty go together, as do collectivism and paternalism. But in theory, not necessarily. For instance, you could say everyone should pursue his own interests, but the government knows best what they are, so individuals ought to follow the government’s advice or be forced to do so. Or, in the opposite way, you could say that everyone should serve the group and not himself, but that people will just naturally tend to do this on their own.
Most of your comment is very insightful and I applaud it. But!
Collective action problems and conflicts of interest between group members are very different things. A conflict of interests is something like Yudkowsky’s classic “billions of humans vs paperclips” version of the prisoner’s dilemma – collaboration improves the group’s totals, but adds less to each member’s total utility than defection given any particular choice by the other player(s). In a collective action problem like the fishery example, you have more states possible, because you have more players:
-total defection (pretty bad for everyone)
-significant defection (worst case scenario; some people are spending resources to collaborate, but there aren’t enough of them for anyone to benefit from their collaboration)
-significant collaboration (best case scenario for the group, but unstable; enough people are collaborating to produce a significant benefit both to themselves and to the group, but defectors benefit even more)
-total collaboration (easiest to enforce, but there is significant room to improve both (and only!) total utility for the group and utility for random members)
The easiest solution to construct is the same for each problem, though: an external enforcer, real or imagined. Sturdier solutions will also have to change the game in some way in order to have an effect. (I thought that was a big insight at first, but it seems trivially obvious now)
I’m not sure I actually managed to capture the difference there. Let me try again:
With divergent interests, individuals trade their interests strictly against those of other members of the group, not against the group as a whole. Often, the group can’t really be said to have interests at all. With a collective action problem, group interests and individual interests are basically the same (everyone, including collaborators, benefits from the existence of collaborators), but defectors benefit more than collaborators in any state.
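(A minimal numeric sketch of that structure – my own illustration with made-up payoff numbers, not anything from the fishery example: a linear public-goods game in which every extra collaborator raises everyone’s payoff, yet a defector out-earns a collaborator in every state.)

# Hypothetical sketch: linear public-goods game with invented numbers.
# Each collaborator pays a cost of 1.0; every member of a 10-person group
# (collaborator or defector) gains 0.4 per collaborator.

def payoff(collaborates, num_other_collaborators, cost=1.0, benefit=0.4):
    """Payoff to one player, given their choice and how many others collaborate."""
    collaborators = num_other_collaborators + (1 if collaborates else 0)
    return benefit * collaborators - (cost if collaborates else 0.0)

if __name__ == "__main__":
    for others in range(10):
        c = payoff(True, others)
        d = payoff(False, others)
        print(f"{others} other collaborators: collaborate -> {c:.1f}, defect -> {d:.1f}")
    # Defecting beats collaborating in every row, yet everyone's payoff rises
    # with the number of collaborators: total collaboration beats total defection.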
@ Guy:
You are right. I was imprecise.
Both cases call for some form of coercion, but there is a significant difference, which I tried to touch on but not in much depth.
In a collective action problem, what’s best for the private individual is not the same as what’s best for the whole. But all individuals are in the same position vis-a-vis one another. So everyone can in principle agree to limit his own action in return for others doing the same. This leaves everyone better off.
In the conflict of interests scenario, there is no possible solution that is acceptable to everyone because interests conflict in a fundamental way. A compromise may be possible in restricted “human lives vs. paperclips” scenarios, but the ultimate “solution” is war where one side wipes the other out.
***
On the other hand, even the collective action problem can in theory be pretty similar. If there’s nothing stopping him, each individual would like to be dictator of the fishery and take it all for himself. But he doesn’t try because he knows everyone else would oppose it.
“In the world where you cannot trust anyone and everyone is just trying to cheat everyone else for their own benefit”
and
“so more or less in a world where the size of the pie is kept constant”
seem like unconnected statements to me, other than the fact they are both Not Ideal. My beef with a lot of modern libertarian discourse on the internet is that they get that there are no free lunches with government policy, but think that social trust and markets and rational behavior and all of that is a free lunch.
Everyone trying to cheat everyone else for their own benefit would seem to inhibit production and create a zero-sum or negative-sum world, no?
Either cooperative behavior is rational from a selfish, individual perspective, or it isn’t.
The liberty-coercion axis is: “Do people know what their interests are, and do they tend to act upon them?”
The individualism-collectivism axis is: “Are the interests of individuals opposed to those of the group?”
So we have four extremes:
Liberty-Individualism: people know what their interests are, they act upon them, and they don’t conflict with other people’s interests.
Coercion-Collectivism: people know what their interests are and tend to act upon them, but this produces class conflict and many other problems, so they must be forced to act in the general interest.
Coercion-Individualism: everyone’s interests are compatible, but people don’t know what their own interests are, so they ought to be forced to act to serve them.
Liberty-Collectivism: people’s interests conflict, but removed from “structures of oppression” they naturally tend to act in the general interest anyway.
My view is that people tend to act according to their interests, but very imperfectly. And while this could in theory be improved by coercion, it rarely works in practice.
My VERY off-the-cuff remark is that this seems really rationalistic in a way that’s divorced from the human experience? Historically, the world was a nasty place and yet the pie managed to grow eventually, capitalism is known for being hilariously cutthroat, etc.
@ Frog Do:
Part of the problem is that “cutthroat competition” is an extremely vague, misleading term.
In the literal sense, it implies actually using violence to undermine your competition by killing them.
But in the more usual sense, it means being really “aggressive” (same type of terminology) in lowering prices and always seeking to do things in the most efficient way possible. Always trying to get one up on the competition and drive them out of business.
However, is driving your competition out of business “competition” in the law-of-the-jungle sense? No, on a social level it is cooperation. The fundamental context is cooperation, to provide things to customers, and the competition is over who can provide more and better things.
The opposite of “cutthroat competition” here would be forming an industry-wide cartel to fix prices. These aren’t stable (when not set up by the government) precisely because companies tend to defect from them to maximize their individual profit, even at the cost of a lower rate of profit for the industry as a whole. Yet this “defection” serves the good of mankind.
This is my major objection to “Meditations on Moloch”: the general point is sound, but many of Scott’s examples show how uncoordinated, “competitive” action produces better outcomes for everyone than coordinated, “cooperative” action.
***
I agree that people tend to view the world in zero-sum, fundamentally antagonistic terms. The whole project of economics, in particular, has been to show them that this is, by and large, not the case.
I feel like the libertarian case for attitude 2 should be fairly easy to make, if one were interested in doing so. A person who knows what’s wrong and wants to do something about that can go and buy themselves their pills if they are so inclined; by going to a doctor you’re essentially saying that you’re there to defer to someone you assume knows better than you what is good for your health. You could well tell such people that if they didn’t want to be told what to do, they should’ve stayed home.
I agree it’s correlated with this distinction, but I feel like it’s different. A psychiatrist who says “I know you’re anorexic, but you should eat more because that’s what’s good for you” is being paternalist. A psychiatrist who says “Your anorexia is probably just repressed narcissism” is being…well, in the old days, they would have explained this to the patient in a very convincing-sounding way, the patient would have maybe agreed with it, and then they would have worked on the narcissism together. What’s that?
Every individual decision is subject to that tradeoff, but often when people invoke this broad principle, they’ve left out the possibility of pursuing more information. Of course, the decision to pursue more information is, itself, subject to the tradeoff.
The hard part is decision-theoretically trivial, but psychologically difficult: holding two possibilities in your mind while actually pursuing new information, rather than going through the motions while looking for “evidence” to justify your decision.
*nods*
One of the hardest problems in optimization is deciding how long to spend trying to optimize.
Agreed — I too thought that was off in an otherwise nice piece. I once tried to lay out all of the different trade-offs. A few of these trade-offs that apply here:
– Speed vs accuracy of decision-making
– Exploration vs exploitation (at the meta-level, how much time/energy to spend testing Attitude 1 vs 2)
– Sensitivity vs specificity (aka Type 1 vs Type 2 errors)
– Saving vs savoring (whether Scott should focus on making the world better or making himself happy, which he touches on briefly)
http://andrewtmckenzie.com/the-canon-of-trade-offs
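(For the exploration-vs-exploitation item in particular, here is a toy sketch of my own – an epsilon-greedy choice between two options with invented payoff probabilities, nothing taken from the linked page: spend some fraction of your trials deliberately testing the option that currently looks worse, in case you are wrong about it.)

# Hypothetical epsilon-greedy sketch of exploration vs exploitation; the two
# "attitudes" stand in for options whose true payoffs the decision-maker
# doesn't know in advance.
import random

TRUE_PAYOFF = {"Attitude 1": 0.6, "Attitude 2": 0.7}  # hidden from the agent

def run(epsilon, trials=10_000, seed=0):
    rng = random.Random(seed)
    counts = {k: 0 for k in TRUE_PAYOFF}
    totals = {k: 0.0 for k in TRUE_PAYOFF}
    earned = 0.0
    for _ in range(trials):
        if rng.random() < epsilon or 0 in counts.values():
            choice = rng.choice(list(TRUE_PAYOFF))  # explore
        else:
            choice = max(counts, key=lambda k: totals[k] / counts[k])  # exploit
        reward = 1.0 if rng.random() < TRUE_PAYOFF[choice] else 0.0
        counts[choice] += 1
        totals[choice] += reward
        earned += reward
    return earned / trials

if __name__ == "__main__":
    for eps in (0.0, 0.1, 0.5):
        print(f"epsilon={eps}: average reward {run(eps):.3f}")
    # Too little exploration can lock in the worse option; too much keeps
    # wasting trials on it after the answer is already clear.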
I feel like there’s a disconnect between your definitions and your examples.
In your list of 5 cases, the patients did know what they wanted (e.g., not to hurt people, not to be crushed by a breakup), and the doctor did the right thing by following the traditional steps:
i. Patient tells doctor what’s bothering them.
ii. Doctor honestly tries to figure out the problem.
iii. Doctor proposes a treatment.
The alternative course of action you caution against would be approach 1* — “Just give the patient whatever treatment they ask for, don’t even try to understand what’s going on” — which is clearly not what good doctors are supposed to do.
And the famous psychiatrist in your textbook seems to have used approach 2* — “Decide what’s wrong with the patient, then use that diagnosis as a weapon in the status game of therapy”.
But the good approaches based on attitudes 1 and 2 are not so diametrically opposed, I think:
1** — Honestly try to understand the problem, even if the patient seems unsympathetic.
2** — Honestly try to understand the problem, even if the patient seems sympathetic.
So… epistemic rationality FTW? 🙂
Exactly. In software engineering, this is the difference between a requirements doc and a design doc. The patients in the examples are asking for implementations of particular designs, because people in all sorts of problem-solving situations naturally jump to suggesting particular designs when what they should be doing is articulating requirements first. And the doctors, like good software architects, are asking them to step back and focus on requirements first instead, or back-inferring requirements from their design and then coming up with a better design that still satisfies the requirements. What they *aren’t* doing is ignoring the (implicit or explicit) requirements or paternalistically assuming that the patients should have different requirements.
Another analogy: generally, one starts with the high-level, broad-outline, this-is-what-benefit-the-system-should-generate-for-whom business goals, which could be considered requirements of a sort, and then goes down from there to specify more specific requirements on functionality, behaviour, and constraints (but still not defining how they’re to be implemented).
It has some similarity with being handed more specific requirements off the bat, without a good explanation of the business goals. If you’re suspicious that these requirements might not actually be a good way of accomplishing what appears to be their business goals, it is generally responsible to probe into exactly what their high-level goals are for the system, and decide whether you need to suggest alterations.
(The most common case is probably someone asking for something which is very expensive/slow/difficult to do, and you strongly suspect that this is mostly because they don’t understand that it is very expensive/slow/difficult and that alternatives would be much cheaper, in which case bringing the matter up is wise.)
I agree with your general point, but paternalistically telling patients what they should want is part of what a therapist does.
Patient: “I want to get my nose changed because it’s ugly.”
Doctor: “You shouldn’t want that. It won’t solve your problems, and it will just make you feel worse.”
This is just terminal vs. instrumental values. And both “design docs” and “requirements docs” are instrumental. No one wants a cool website just to have it. They want it because it does some thing for them. However, you’re right that the design is lower on the hierarchy than the requirements.
But paternalism itself isn’t a matter of changing people’s terminal values. It says you ought to force people to eat their vegetables because they want to be healthy—or would want to be healthy if they knew the facts—but they’re too stupid or lazy to take the necessary steps on their own.
Even the Inquisition is like this. If Christianity is true, then everyone would want (if they knew this) to be saved from hell. Therefore, it’s okay to torture them and force them to convert; it’s in their own interest, whether they know it or not.
There’s a similar divide when being in charge of people. There’s a time to be kind, understanding, and listening, and there’s a small number of times you need to be an asshole.
Most people tend towards either the first or the second, whether or not it’s appropriate. And, contrary to popular belief, it can be corrosive to be too nice–everybody has had a useless coworker that the boss won’t discipline or fire. I was always the first type, even when it was not good for the organization. There were more than a few times where I walked out of a meeting with a subordinate and realized, “There. That moment right there was the time to be an asshole, because they were making excuses.” I always seemed to let the moment pass without realizing it, though.
Absolutely right- and even beyond that, once you’ve become the kind of manager that can use kindness (almost always) and confrontation (when needed), it’s still easy to lapse into the “wrong” action based on the mood of the day, fatigue, intra-office role-modeling, etc.
I have a very nice midwestern colleague (a manager) who has learned, through long effort, to step it up and be firm and confrontational when needed. If he’s not had his coffee and has had a long night, he’ll lapse into passive kindness all day.
Is this actually the popular belief? Virtually everyone I know says “there comes a time to be an asshole.”
But virtually everyone shies away from conflict, because conflict is uncomfortable. At least for most people. Certainly for most Midwestern people, anyways! They don’t call it “Midwestern Nice” for no reason.
Come be an Accountant. Argue with Insurance Companies. Feel the cynicism grow. Cure yourself of Niceness! 🙂
Everybody thinks the boss needs to be an asshole sometimes. Nobody thinks that the time is when the boss is talking to them.
However, many people err on the side of avoiding conflict. Then you have others who get drunk on power and act like assholes because they can. The good leaders are the ones who can consistently pick the right attitude for the situation.
Feel the cynicism grow. Cure yourself of Niceness!
Work in local government Social Housing provision. Develop a hardened crusted attitude of cynicism about “Everyone is a lying liar” that makes Dr Gregory House look like a bleeding heart pushover who still believes in the Tooth Fairy and Santa Claus! 🙂
As a consultant, Attitude 2 is always true and Attitude 1 is always profitable.
+1
What kind of consulting? Business Consulting a la the Bobs in Office Space?
Both are profitable if you’re charging by the hour.
+1
Explain?
Clients rarely have the expertise to understand their own problems in $DOMAIN. That’s why they are hiring someone from the outside. Often it’s because they’ve read that such-and-such technique is the hot new thing, or whatever, and think it can help their business.
Imagine you are hired to provide software for a payroll company. They need a new database for storing (client) employee information. They are worried about security, and they’ve heard that blockchains have something to do with security, so they ask “can you give us a blockchain to handle our data?”
Now you could, in theory, incorporate a blockchain into this, somehow, to absolutely no productive end, and at the end justify a huge bill on the basis of this being what it costs to use the cutting edge in blockchain security technology. They may even be thrilled with you for having given them something no one else was willing to sell.
Or you could sit down with them and figure out what the hell it is they actually want, and then give them that. Which will be cheaper (for them, aka less profit for you) and better.
This is an exaggerated example (except… I suspect it actually has happened somewhere) but the thing is almost all interactions between non-specialists and specialists are going to take this form. It is a professional’s obligation to actually interrogate the client to figure out what they really want.
> This is an exaggerated example (except… I suspect it actually has happened somewhere)
I work for a blockchain software company (I’m not even joking). This is not really an exaggerated example. It’s never been actually payroll (as far as I know), but — from what I hear, I’m not directly involved — it seems like a bunch of what people come to us asking about is “can you help us use a blockchain for X” where they really don’t understand what a blockchain is and it wouldn’t really help. As far as I know we’re not seriously considering sticking any blockchains where they won’t help, even if people want us to, because that’s really not what we’re about. But I can see where someone could be tempted.
What about keeping your eyes open for “attitude 2” type considerations, and then just not stubbornly pushing them on people that don’t want to hear it? What about not allowing yourself to even be sure you’re right until they’re willing to sign on the dotted line “you’re right, it is my narcissism causing me trouble and I’d like to work on that” – or “you’re right, it is body dysmorphic disorder”, as the case may be?
I’d never be so arrogant and dismissive as to tell people I know what’s wrong with them when they disagree and don’t feel like I get them, but that doesn’t stop me from being an intentionally-over-the-top attitude 2 guy. For example, I’ve told a friend that the reason she was struggling with dieting was that she *wanted* to be fat so that she could avoid male attention. Did I believe it? Well, not yet, and I certainly didn’t claim to. It was a test, like the kind Michael Vassar suggested for narcissism. Since it happened to hit home, it gave us something to work with. If it didn’t, it’d have just been a funny joke.
Personally I don’t see any reason to believe having/testing/believing “attitude 2” things has to make you a dismissive jerk, as long as you’re not also a dismissive jerk.
Maybe my perspective is skewed due to the fact that I’ve been trying to resolve the same mysterious medical problem for several years, but I find this attitude to be highly wise. When I go to a doctor and I want to be better, I don’t actually want them to listen to five minutes of patient history from me and then assign the “treatment” that corresponds to the first related thing that pops into their head.
I want them to ask probing questions. I want them to come up with experiments – “If this improves your symptoms, that’s evidence that X is wrong with you, but if it worsens your symptoms, which is possible, it’s evidence for Y.” Hell, I’m even open to experiments that are probably going to worsen my symptoms, as long as that worsening would serve as a powerful confirmation of a diagnosis.
The construction of the examples in Scott’s post suggests a very rigid formula – the patient comes in and expresses a highly specific desire based on some kind of self-diagnosis, and the doctor, without really probing them at all, has to decide if they are going to comply with the request, or conversely, find some complex reason to deny the request based entirely on information they’ve passively gleaned from the patient. I think there’s more “space” between Attitude 1 and 2 than Scott’s examples suggest.
I think Attitude 2 only really becomes a problem when it becomes adversarial, and the job of the doctor is partly to avoid letting it become that way.
The problem for me is that patients often say “No, I’m not a narcissist!” and then you have to figure out whether or not to believe them. You can spend all day probing for more signs of narcissism, but eventually you’ll find something just because it’s easy to imagine patterns here. I agree that if after your first probing question the patient says “Yes, in retrospect my original complaint was not true and I have a secret complaint behind it” everything is easy, the question is how far to keep trying to investigate that in the face of resistance.
Ah, now I get where you’re coming from. On the second read, your last paragraph makes it clear too, but I somehow missed it the first go ’round.
If I didn’t “feel it in my bones”, I’d *absolutely* lean heavily towards attitude 1. I wouldn’t even work on attitude 2 if it meant I had to risk the associated failure mode.
I don’t think that’s necessary though.
If I put myself in those shoes, as a psychiatrist who honestly can’t tell if this person’s problem is what they say it is, or if it’s really their narcissism – and knowing that no matter how hard I probe, I’m not going to be able to tell whether I’m picking up on real patterns or if I’m just imagining them – I imagine saying something along these lines:
“You say that you have a problem with your abusive husband. That’s certainly a terrible thing, and I’d hate you to have to keep going through that. If I knew I could treat it like that and help you, I’d do it in a heartbeat. However, I *also* find myself unsure of whether that’s all there is, or if there’s more to the problem. In one of my textbooks, it describes a seemingly similar case where the woman came in with complaints of an emotionally abusive husband, however it turned out that her narcissism was a big contributing factor, if not *the* factor. The thing is, I’m working in the dark a bit here. There are many women that suffer emotional abuse from their husbands, and the *last* thing I’d want to do is accuse them of being narcissistic and bringing it on themselves. At the same time though, narcissism isn’t fun, especially when it causes you to perpetuate unhealthy dynamics, and I *also* wouldn’t want to ignore that possibility and leave you to suffer because we tried to treat the wrong problem. I’m here to help you, not to judge you or shame you or force my decisions on you. If you still think the problem is entirely 100% him, let me know and that’s how I’ll treat it. Is that what would be most likely to help you?”
Of course, that doesn’t mean you’ll get it right. If she said “yes” without doubt or hesitation, I’d probably just roll with it – even though I know full well that it could still be BS. However, you can get more on the margin. If I noticed a hesitancy or something, I’d probably ask if she wants to look into that possibility a little more, and maybe she’ll agree that narcissism is part of it – or maybe you’ll learn that it’s not.
Now, even here you may get attacked unfairly (“are you accusing me of being a narcissist!? RAHHH!” (to which the answer is “no, I am not. is that how it came across?”)), and even put this way you’ll often get ambiguous and potentially dishonest answers. It can definitely be stressful, and not always worth it depending on your tolerance for this kind of stress.
The upside is that if you do want to explore attitude 2, you can do it without risking actually being that jerk that doesn’t listen to his patients. Since it’s not in your bones, you *won’t have* good answers to “how long to investigate it?”, but when the stakes aren’t so high, that’s more okay. You don’t need solid answers to work with the weak hunches that you might have in an appropriately calibrated way. Once you can do that, you can work to up your “attitude 2 skills” on the margin without taking the leaps of faith and being the evil paternalistic psychiatrist.
I have recently come to believe that my own absent-minded, generally long form, self reflection is a function of narcissism. The doubt that this brings has ruined my confidence in my critical thinking ever since I’ve become aware that I am able to justify any argument to myself and that I have horrible difficulties in distinguishing my own sound arguments from the sort of Type 2 flaws described above.
I suspect that this current, specific consciousness of my own narcissism is due to, at least in part, a difficult relationship and trying to be a better, less selfish significant other.
Scott, and others: have you witnessed a situation in which someone reacts poorly to a personal attempt to deal with their own narcissism and is there any advice you could give given your external perspective?
I can relate to some of this and I wish you luck.
Consequentialism? So long as your narcissism isn’t impacting your quality of life too badly, I don’t see it as a thing that must be eradicated.
Especially when it comes to relationships, learning to resolve the arguments and getting on with life together is more important than who is publicly right, and therefore who is right within your mind does not have to match who is publicly right. Besides the dangers of bottling things up and that manifesting through passive-aggressiveness, it’s not inherently wrong to keep thinking that your significant other is potentially wrong about a bunch of things, but conceding to them anyways because the relationship is more important. And then you can keep updating your beliefs by seeing how often their way turns out okay. The consequences of doing the “wrong” thing in most domestic arguments are rarely that dire. Go with the flow a little.
I’ll often obsess over the most ideal configuration of how a fun day out should go, drawing up optimized schedules and maps and such. Usually when with groups of friends, these optimizations tend to get quickly abandoned. Sure, I might still think the day would have gone better if we had followed my plan, (and I have lovely supposed “evidences” of that belief in some cases) but I’ll still enjoy the time I spent with my friends otherwise.
(This is weird. I go by “AG” in a lot of other places.)
Your successful Attitude 2 examples all seem to involve using contextual information about the patient’s life in which they’re experiencing their stated medical problem to understand more nuances of the problem than they initially explained in their problem-report-to-the-doctor. Acquiring this information probably requires small talk (an art often beyond my skill, but perhaps not beyond yours).
Your unsuccessful Attitude 2 examples have the doctor exclaim “Ha! Here is an explanation that fits the data!” and leave the patient scratching their head wondering “Why do you have priors that even brought that explanation to your attention…?”
I’m sure there is a lot of middle ground between the example-sets you chose, though.
People who are narcissistic or whatever probably sometimes have medical problems unrelated to their narcissism, so even if you get a perfectly reliable narcissism detector, best not to just turn away all narcissists.
Here comes the jerk who criticises the example instead of the general principle.
This is Sister Y’s nightmare: A world where people can get help to stay alive, but not to have lives worth living. Is this the real world? If so, empty suicide threats are good.
You feel bad enough you want to die, but you’re strong enough to live. Fake it! Those cunts have set up a deranged system where literal cries for help are useless, and you have to cry for help through suicide attempts. Your honest options are to quietly endure your misery, or to genuinely become suicidal. Be dishonest. Claim fake plans, and, maybe, if you can very carefully engineer it so it doesn’t backfire and kill you, fake a suicide attempt.
So I guess what I’m saying is: even if you’re correct that the patient needs something other than they want, you should provide what they need, not kick them out of your office.
I like you.
I have to kind of disagree here with you, Leo. This is exactly what I’m afraid of – not being told “you’re using this as a crutch”, but that I would use this as a crutch.
Okay, over-sharing ahead, skip this if anyone does not want the contents of my head or too much information about my personal life dumped on them.
I think there’s a difference between wishing you were dead/being suicidal/planning suicide/really going to attempt it. I think (in some cases, I need to festoon this with qualifications because everyone is different) that feeling suicidal or having suicidal ideation does not necessarily translate into ever actual doing anything about it, not even making plans or attempting it.
I think Attitude 2 is the attitude I need, but the problem here is the suggested solution: “learn more adaptive coping mechanisms”. That’s not going to do anything unless you put those mechanisms into practice, and there’s the rub.
I’ve never turned up at the local psychiatric hospital asking for admission, so I’ve not yet sunk to that level of being pathetic. Last year was the year I did finally go to a doctor. And, even though I’ve complained on here about it, my GP was probably right; she exhibited good Attitude 2 skills, refused to give me a prescription for anti-depressants, referred me to the new counselling service.
Which I then blew off because I didn’t think I was able to go for counselling right then, and besides if I didn’t need pills I was okay enough to function, right? And then about four months later I was bad enough again that I went crawling back and swore this time round if she referred me again I really would make the appointment, keep the appointment, and go.
And I did, to the first assessment meeting, and that actually helped a bit, helped me to see a bit more clearly what my real problem was. I didn’t commit to the full ten sessions of treatment because, under this scheme, you had to sign a ‘contract’ saying you definitely would turn up for all of them and as I explained, I could not commit to that; I could not guarantee continuous attendance, that it might happen that next week I wouldn’t be able to force myself out of the house to go to the appointment(s). So that was the end of that, because if you don’t sign up to say you’ll show up for every session, you don’t get the counselling. (Too much demand, not enough places, why waste one on someone who isn’t going to make use of it?)
Which, ironically, seems to have triggered the change in my GP from Attitude 2 to Attitude 1; now she is offering me a prescription for antidepressants and/or a referral to a psychiatrist if I want (I don’t know how America works but here you can’t just ring up a psychiatrist and make an appointment, you have to be referred by your doctor).
She said it was because the counselling service telephoned her after my first (and last) appointment and they were concerned about me. And that’s where I tried to reassure the therapist that no, even though I wanted to be dead and felt suicidal (at that particular date my barriers had slipped a lot and the things generally holding me back weren’t so strong), I wasn’t actually going to do anything about it because I never had in the past and I couldn’t really see myself throwing myself into the harbour (one popular method round here) or slashing my wrists, etc. (I’ve never cut or self-harmed in that way). The first time I felt I wanted to be dead and made half-assed plans about how would I do it, I was twelve, so I’ve felt this way for decades and not done anything yet, therefore I’m hardly likely to really try anything (even the ‘cry for help’ not really planning to be lethal suicide attempt).
The problem is, though, that I think the original Attitude 2 was correct; I know I would use medication, if I went on it, as a crutch. I know my real problem is I need to “learn more adaptive coping mechanisms” and really carry them out and use them and change my way of thinking and doing things, and I equally strongly know I won’t do any such thing, because my real problem is I’m lazy and unmotivated and I want “to act like a child and have other people take care of her”.
So if I give in on this and take the antidepressant prescription I’m being offered, I will use it as a crutch: “See, I have my pills now, so I don’t need to change!” And that is useless.
So, although it’s like being jabbed in the ribs with a pointy stick, yeah Scott is correct – the attitude there in the example (“you don’t need pills, you have to stop showing up at the hospital pretending you’re going to try killing yourself, you need to change your behaviours”) is precisely what I would need if I did go to a psychiatrist. I might be willing to take up the referral offer from my GP if I could be sure I’d get an Attitude 2 person, because I know I could bullshit well enough to make an Attitude 1 person think “yes, she has real problems, it’s not because she’s bone-idle and infantile”.
Okay, but it seems like getting on meds would be the most adaptive of the ACTUALLY PRESENT coping mechanisms you could find? I mean, the choice as you’re describing it is using a crutch vs. getting nowhere at all. Evidently you haven’t virtue’d yourself into coping better yet, so do you have a more desirable option?
Very wise. Also, just trying the pills doesn’t mean you have to keep taking them. They might jiggle things enough for you to get a better idea about something.
I’m guessing that’s not just a budget-saving/prioritization tool for the healthcare system. It’s also a way to give you that friendly shove you want — i.e., you might feel lousy and want to stay home, but then you’d think “oh, but I can’t miss the appointment — I promised I would come!”
And that’s probably why they called your GP — their attempt to help you backfired, leaving you with neither counselling nor pills.
That was the trouble: I couldn’t in good faith give a promise – I mean, I could have said “Sure, Counsellor, I’ll sign on the dotted line!” and had mental reservations about “Unless I really don’t feel like it and then I’m not coming”, but that attitude would not have worked because if I wasn’t going to be serious about doing it, then it would have been useless to turn up for two or three sessions then skip a couple.
Also, I am very stubborn and I think – because this was during a bad patch – the last bloody thing I needed right then was more coercion, even well-meant “this is to give you impetus to turn up and not allow you wiggle room to get out of it” coercion, which is what the “you have to sign up and agree to turn up for all the sessions” felt like to me. Ironically, I probably would have responded better to “try and turn up next week and we’ll see how it goes from there” approach, but the “and now we come to the mandatory required by the Department of Health signing of the declaration that you will attend all sessions, no take backs” put my back up and I went “To hell with this then, I am not going to be pushed or forced or have this last scrap of control or choice taken away from me” and so it was “Okay, this was great but no thanks”.
My mother used to quote an expression “I’ll be led but I won’t be drove” and that sums it up 🙂
As for the phone call, I can understand why they did it, which is what makes it so difficult to discuss suicidal ideation. It’s not a case of what Scott has described, of people being afraid of involuntary committal – I’m at the state of “So you want to lock me up in the loony bin with the psychos – I can’t even get worked up enough to care about that”- it’s that on the one hand, if you underplay it, it sounds like the man in example no. 5 above – making fake exaggerated claims, with no real intention or planning, in order to get attention – and on the other hand, if you’re honest about what you’re feeling and thinking then it’s very hard to say “Ha ha, but sure, just because I kind of have a plan and I no longer care about the things that previously prevented me from attempting it, that doesn’t mean I’m going to throw myself into the quay tomorrow” and be convincing, because they have to take seriously something that sounds serious (if they take your word for it you’re not going to try something, and then you do throw yourself into the quay tomorrow, that’s trouble for everyone all round).
Is there any compromise option of using the crutch to help yourself get to the counseling?
Not really. Argh. This is easy to say when it’s only pixels on a screen to an audience (mostly) an ocean away in a different continent, but impossible to say in real life to someone face-to-face in the same room.
Last year, when I broke down and went to my doctor and said “I think I need antidepressants because I really don’t feel well mentally” was the first time I ever spoke to anyone (and I mean anyone) about mental/psychological problems or difficulties.
It was very, very hard to do. It felt humiliating. It was humiliating. And then I got “No, this instead”. And rationally I understand why, and as I said, the Attitude 2 style probably was the best approach.
But emotionally, or non-analytically, or on a gut level, it feels like being judged for weakness. Because I wasn’t badly enough off to qualify for medication. And I wasn’t prepared to attempt suicide simply to get pills (that sounds very melodramatic, and it is, which is part of what is so damn frustrating about the whole thing: this irrational, stupid, butthurt “Okay, so unless I try slashing my wrists I don’t get antidepressants? Right, now I know how the game works” idiocy that my hurt feelings are pouting about).
So now I feel as though I have exposed my vulnerability and got the brush-off, so any offers now of antidepressants are too late. You told me I didn’t need them, and I think I’m managing okay without them (at least, it’s the same as it ever was), and I’m not going to even make a half-assed attempt at an overdose or “cry for help”, so I am not going to be that pathetic beggar taking the prescription and leaning on the crutch of medication.
I’ve made it this far, I might as well have kept my mouth shut because I humiliated myself to no avail, so I’m not going to say anything anymore or try the merry-go-round of “Take this for six weeks and if it doesn’t seem to do anything we’ll try upping the dose and then we’ll try another one for another six weeks, rinse and repeat”.
I opened the door a crack and it got slammed back in my face, so I’ve learned my lesson.
Yes, this is completely irrational, but that’s part of the trouble with depression or mood swings or whatever this is: reason only goes so far and can’t convince the stupid whiny parts of the brain to shut the fuck up and man up and take it on the chin.
The combination I’m generally functional on is 10,000 IUs of Vitamin D + St John’s Wort (don’t take with certain contraceptives, if I recall your gender correctly), plus WTA e-juice (a tobacco extract which seems to contain the MAOI that tobacco contains).
Switched from nicotine e-juice to WTA when I realized I had backslipped quite a bit on depression after quitting smoking.
As someone who’s struggled with depression, this is how I’m reading you.
“I got into a freak cheerleading accident, and severed a bunch of ligaments in my left leg. I went to my doctor to ask for a crutch but she referred me to a physiotherapist instead. When I got there, the specifics of my injury were such that I couldn’t complete the treatment course – it was too painful, and my leg muscles were still recovering from the trauma. My physiotherapist called my doctor to let her know that I was too badly hurt to continue physio. My doctor prescribed me crutches to get by until I get well enough to go back to the physio.”
There is nothing shameful in needing treatment. Your doctor didn’t want to give you medication because a successful treatment without medication would have fewer side-effects, but we’re clearly beyond that right now. Your refusing treatment is probably more a function of your illness than anything else – you hate yourself, and you believe that you don’t deserve to get better.
Hang in there. You’re a wonderful person, and I love seeing your posts. I’d be really happy if you got better.
If it’s any consolation it sounds like you’re able to take a step back and look at things pretty well, which is more than most can do.
I think this is what my SO means when she talks about the internalized stigma around mental health.
If you’d gone to your doctor for a bad knee looking for a knee operation, they’d tried sending you to physio on the off chance that physio would work more safely and effectively and you’d found you couldn’t handle the physio and they’d then agreed to go ahead with the operation you initially asked for it’d be far less emotionally painful than the equivalent cycle with depression.
@Mammon: “Hang in there. You’re a wonderful person, and I love seeing your posts. I’d be really happy if you got better.”
Yes, exactly, me too!
Thank you to everyone for the kind words and advice. I’ll definitely think about it (and I’m not just saying that) 🙂
“The problem is, though, that I think the original Attitude 2 was correct; I know I would use medication, if I went on it, as a crutch. I know my real problem is I need to “learn more adaptive coping mechanisms” and really carry them out and use them and change my way of thinking and doing things, and I equally strongly know I won’t do any such thing, because my real problem is I’m lazy and unmotivated and I want “to act like a child and have other people take care of her”.”
Are you sure that the lack of motivation isn’t being caused by clinical depression?
That’s the chicken-and-egg question!
And it does not help that one is considered an illness and one a moral failure.
Either way, I strongly recommend talking to someone about it, whether a professional or someone close to you that you can trust.
@Deiseach:
As someone who has battled depression, what I have found is that all of those feelings of laziness and worthlessness get in the way of learning (and executing) the coping mechanisms.
So, a way to think about it is that if you find the right anti-depressant, then you CAN commit to going to the 10 therapy sessions that you need.
” because my real problem is I’m lazy and unmotivated and I want “to act like a child and have other people take care of her”.”
Do you (or anyone) have ideas about what’s going on with that sort of self-hatred?
It can be an amazingly attractive line of thought. I think I have a handle on mine– if I relax some muscles in the back of my neck, it breaks the self-hatred sound track and generally makes it not plausible to continue, but it took me years to find the method. Also, I have the good fortune in a limited sense that my self-hatred is in the second person (“You piece of shit. If you were any goddamn good, you’d kill yourself”), so that at least I had a better chance to see it as a problem rather than the truth.
When I say that self-hatred is an attractive line of thought, I don’t mean that it’s fun, but somehow it can be very tempting to repeat and continue.
Well, I think it’s tempting because moral perfection is pretty damn rare, so most people do have fairly big flaws with which they can justly rebuke themselves. And some people’s flaws are very extensive: there are people out there who are so lazy and childish that they’ve never accomplished anything of importance in their entire lives.
And even if you’re better than them, it doesn’t mean you’re good or okay by any objective standard.
The question is whether contempt is a good strategy for fixing moral failures, either in yourself or in others. And I tend to agree with Nathaniel Branden, in his essay which I have linked to several times, that it is not:
@ Vox
The question is whether contempt is a good strategy for fixing moral failures, either in yourself or in others.
For boring childhood reasons, I waste a lot of time trying to adapt myself to situations I should walk out on. When thinking “I deserve better than this” is beyond me, thinking “I’m such a terrible no good person that I don’t deserve even this much, I should just give up and leave” — is more believable. Though the first two clauses are often pronounced inaccurate by the people of the outside world, once I do leave the unsuitable situation. (Getting thrown out is helpful, too, and more fun.)
As others have pointed out, this sounds like “Doctors who think I should get treatment are wrong. I am a bad person who doesn’t deserve treatment, I should be yelled at instead.” So I think you have the attitudes switched: an attitude 1 doctor would give you the swift kick in the rear you think you need, an attitude 2 doctor would call it low self-esteem and encourage you to get one of the treatments you can access.
And yeah, okay, sure, seeing doctors makes you pathetic, half of us here are pathetic, you are only finitely better than me and Scott, how will you ever cope with the shame.
I agree with your point that someone may feel suicidal for a long time and yet be at very low risk of attempting suicide. My point was that a person in this situation is still miserable. Miserable enough that, if help is available only to acutely suicidal people, they’re entirely justified in pretending to be acutely suicidal.
I’ve gone to therapists a few times, and I’ve become quite unimpressed by their lazy Attitude 1.
I come in with a symptom, and they make no effort to find out *why* I have that symptom, they just mechanically try to fix it. If I go to a medical doctor, they try to find a diagnosis and fix the root cause. The therapists just take my “diagnosis” on faith, and mechanically try to fix it.
I don’t go to therapists anymore.
What about psychiatrists? Same deal?
I personally feel that the best service you can get from a doctor is their assuming you’re not a liar or a moron, and then actually trying to fix what you say you want fixed. I’m fine with symptomatic relief because symptoms are all I perceive — maybe the *cause* is just being me?
If you complain of anxiety and they fix anxiety, that’s an improvement, at least! (If they can’t fix it because of an underlying cause, presumably this will become apparent and they’ll look for that. You’d hope, anyway.)
I’ve had very similar experiences to Squirrel of Doom, *especially* with psychiatrists and psychologists.
If you go to a doctor and say “My leg hurts, and I’m not sure why”, and they say “Here’s a painkiller”, that’s not so good. Your expectation as a patient, in this case, is that you don’t really understand why your leg hurts, and that you’re paying an expert to help you figure it out and then fix the underlying problem. A surface fix, in this case, is often harmful – walking on a broken leg will make it worse.
Most psychiatrists have a list of antidepressants, antipsychotics, anxiolytics, etc., all of which are mostly interchangeable in the psychiatrist’s eyes and training. When a patient comes in and complains of depression, the psychiatrist prescribes an antidepressant. It doesn’t matter which one, because there is no subtlety to the approach. The psychiatrist will simply prescribe antidepressants, one after another, until the problem is solved.
This is incredibly inefficient, because it typically takes at least a month, and sometimes more than three, to determine whether an antidepressant was effective. It’s also inherently problematic, because patients’ mental states necessarily arise from their interaction with their environment, and that means that *sometimes*, the environment is the root problem. Giving someone an antidepressant or anxiolytic can mask this type of problem, ultimately doing more harm than good.
A similar dynamic can result with psychotherapists. If you go to a psychologist and say “Hey, I think I might be in an abusive relationship”, you’re looking to pay an expert on how the mind works to help you get to the root of things. If the psychologist responds, “Well, if you really think you’re in an abusive relationship, you should break it off”, they’re not actually helping.
Similarly, if a therapist interprets your *belief* that you might be in an abusive relationship as the root problem (because it’s clearly causing you anxiety), but doesn’t make any attempt to determine whether your belief is true, then they may attempt to dissuade you from that belief as a way to reduce your anxiety. This can be phenomenally harmful for the individual if they are actually in an abusive relationship.
In other words, “complain about anxiety, fix anxiety” is too simple.
(I really liked your reply to Scott in the first comment chain on this post, btw.)
I have some complicated thoughts on this. I mean, the thoughts themselves aren’t complicated, but I feel like there are quite a few factors at play. I think the points of yours that I quote below are excellent, but at the same time I think they’re outweighed by other considerations.
One: “you’re paying an expert to help you figure it out and then fix the underlying problem” — okay, that’s true. But it seems to me that a broken leg is fundamentally different from almost all psychiatric problems, because a) there’s often really no way to find the underlying problem anyhow, and b) unlike with a broken leg, I don’t think “walking on it”, so to speak, will actually make psychiatric problems worse.
For example, if I go to the psychiatrist and say “hey, I’m anxious”, there may not really be any underlying cause that we can turn up. (More on this below.) So for all practical purposes, the problem is just anxiety — and solved by treating anxiety symptomatically. If the anxiety is lessened by some meds, and then I go around living life without anxiety, it’s not making anything worse “beneath the surface”; it’s likely the same problem is lurking there if I go off the meds, but it won’t normally have become somehow greater because I lived for a while without feeling it.
So it seems to me that a better analogy might be that of a cold: symptomatic relief is not only all that’s really possible, but it enables you to live life in the meantime, and you’re not making anything worse like you would by using a broken leg. It might even be that in time, the problem goes away on its own, as most medical problems do.
(Or perhaps you have idiopathic leg pain, in which case, after making an effort to find some underlying cause — which I do support in the case of psychiatry, too; see my post below this one — the best thing a physician really can do for you is prescribe some painkillers.)
Two: “you’re looking to pay an expert on how the mind works to help you get to the root of things.” Okay, this is also true. But remember how I said above that there may not be any discoverable underlying cause? Well, I think that is usually the case, if it’s not an easily discoverable cause.
For that reason, I have trouble understanding if this kind of thing (therapy in general, I guess) is ever actually helpful. Take the response you put in the mouth of the psychologist: “well, if you’re in an abusive relationship, why not end it” — if the patient feels abused, isn’t that really all that can be said? (I know that in your example, they’re not sure if they’re being abused, but that seems to just set the advice a step back: “well, how bad is it? if it’s too bad, leave. If not, talk it out.”)
Good but obvious advice is what comprises therapy. (And if it’s not obvious, it’s possibly bull — see the starred* paragraph below.)
This reminds me of something Scott wrote, I can’t remember where, about his experience with old texts on therapy. A book would outline a supposedly real case, where a patient says “doc, I feel worthless; I’m not the best at my job” and the therapist says “have you considered that you shouldn’t tie your self-worth to your performance on the job?”, and the patient is just amazed: “I never thought of that! This changes everything!”
And to me, this has been exactly what all therapy I’ve received has been like; more, it is the only type of therapy I can even imagine. If I go to a psychiatrist or psychologist and say “I’m anxious”, obviously I’ve thought about it myself. If it were subject to my thoughts and reason, I’d have thought and reasoned myself out of it long ago. What could anyone else possibly tell me that could help?
There’s a small chance I could receive a new perspective on the issue, true; but for all my mental problems, my perception has been that they are set in biological stone, unavoidable and unfixable by any internal efforts. Years of therapy just ended up retrying mostly the same ineffectual thoughts and futile attempts I already tried myself.
I may be generalizing too much from my own experience. I know I see some things in a much more black-and-white, perhaps less nuanced way than other people, especially when it comes to feelings and relationships. If people say “actually, thinking about my problems in a new way did help a lot”, I will believe it.
But I think another factor here is just that we don’t understand too much about the mind or mental problems; so even if talking about it can help in some cases, in a lot of cases I think there’s a biological or otherwise unreachable root cause at play, and expecting anything but symptomatic relief is very optimistic. Maybe, in some cases, symptomatic relief actually does address the root cause — like someone with a GABA problem taking alprazolam.
(*By “otherwise unreachable”, I mean that it’s my perception that either “I’m anxious” traces back to “well, I do have a speech in front of thousands of people in a week”, or else there’s just no discernible cause — can these things ever truly be due to hidden causes a therapist can winkle out, especially when the individual herself cannot? I am suspicious of “it was a forgotten memory of witnessing an argument when you were three, all along!”)
A broken leg is very easy and obvious, but if you don’t know why you’re anxious and there seem to be no real reasons you should be anxious, it’s really nice if the psychiatrist says “here, take this pill” instead of talking at you about how you need to face your fears of your own latent homosexuality… or whatever.
Especially if you already faced your own latent homosexuality and it didn’t make a difference. Just give me the damn pill!
In response to your question of why therapy would be effective if you’ve already spent a lot of time thinking about it yourself: most people (in the world) aren’t as introspective as the people who read this blog. Many people don’t spend that much time thinking about their problems or what’s causing them; they just try to repress them as much as possible.
Also, to your point about obvious advice in therapy, there is probably an obvious answer to many problems, like someone in an abusive relationship. But most people are not willing to accept an obvious answer to their problems if it involves moving out of their comfort zone or doesn’t align with their worldview. So in my opinion a therapist’s job isn’t just to tell someone what the obvious answer is, but to help them come to the obvious answer on their own, and to help them tackle their problems in more manageable ways.
@57dimensions: That makes sense to me. Not everyone feels the need to closely examine all of their experiences and feelings, or repeatedly question ideas they may have internalized.
Getting walked through a chain of reasoning by another party might indeed be more convincing to someone, especially if they don’t trust themselves. For me, too, someone providing an outside validation of a course of action — even if I already knew intellectually that I probably should do it — can get me moving when akrasia/procrastination/depression are holding me back.
Also relevant to what 57 says is the idea of therapy as training for doing the obvious – if there’s some positive thought or whatever that you try to think, but which is ineffective, a therapist may know an effective method you can use to train yourself in thinking that thought.
Agreed!
More or less, what happens is:
Me: I have these problems.
Therapist: How are you going to fix it?
Me: I don’t know?
Therapist: I can’t help you if you don’t have a plan.
Me: *Doesn’t say what I’m thinking, which is “My plan was ‘come to a therapist and hope their expert training in how to deal with psychological problems can at least give me something to work with’, but obviously I was mistaken. Also I’ve tried the see-what-I-can-do-alone thing for more than 10 effing years, which I told you on the first day in detail, come on now.”*
To be fair, there was one therapist who actually seemed kinda helpful and even brought in one of those attach-electrodes-to-your-head-and-disguise-it-as-a-game-for-kids-with-ADHD things to do an actual quick feedback test. But then I had to talk to the Social Security Administration without my dad to feed me answers and it turns out the cost of honesty was going 4 months without health insurance, and that was the last I saw of that therapist.
(That aforementioned test convinced me that, instead of writing essays on Franco-African poetry, I need friends and hugs. It’s funny because I lost lots of money failing to do the former and have no hope of obtaining the latter. … Wait, no; not funny. What’s the opposite of that?)
I think that you’re doing it right. The “solution”, I believe, is to go mostly with Attitude 1 and use your best judgment, as necessary, to figure out if there’s something further you should do. That is, people generally know what they need and want, if they’re rational adults, but sometimes your domain knowledge may show you a factor they aren’t aware of or haven’t considered.
In either case, you accept what the patient wants — to be happy and well, ultimately — and it’s just that sometimes you think you see a better route to that end (“I think you’d end up happier if you got therapy instead”).
Since patients have access to the most data about themselves and their internal feelings, if you’ve offered an alternative and they’ve seriously considered it but they’re still very certain of the problem, I think it’d often be best to then give their idea a try — assuming it’s not something so drastic that you feel you need real certainty to go through with it.
Maybe the patient really *has* carefully observed herself and discovered that SSRIs give her foot twinges, or whatever.
As you mention, it’s very easy to explain away anything once you’ve decided what the patient REALLY needs. I think that the horror of being unable to get help from a smug overlord usually outweighs the possible letdown of getting what you want and finding it’s actually pretty crummy too.
Okay, maybe “overlord” isn’t quite the right term… but realize — if you haven’t already, of course — that MDs have a lot of power over the patient, particularly a desperate one, and this means an Attitude 2 dynamic is terrifying. Here are people who are high-status, usually wealthy, used to authority, used to seeing an endless succession of uneducated, confused, dirty people… How will you win consideration if they’re not predisposed to give it?
You know that you’re just a little guy to them, and they hold the keys to heaven or at least to the door out of Hell. If they think you don’t know what you need, you’re fucked and you just wasted a day off work and a lot of money.
Of course, you probably perceive the whole interaction quite differently, and I know a lot of people make it hard to maintain hope in Attitude 1 when they come in suggesting they need amphetamine ampoules injected in their eyeballs so they can read faster. But I think you’re definitely erring on the right side.
As someone who has suffered involuntary institutionalization before, and has every reason to avoid therapists like the plague in my current circumstances, I endorse this.
I’m glad it made sense to somebody! (Apropos of nothing, I’ve seen your comments around, when I lurked, so: hi. 🙂 I hope you’re doing well, after the experiences you mentioned.)
I notice a few people in this thread strongly advocating the exact opposite (though overall it’s split pretty evenly, I think), and it honestly confuses me. Do they really have so much trust in doctors and therapists? Do they rather simply not trust themselves at all?
I want to postulate that they haven’t experienced this kind of stuff, and would feel differently if they had, but maybe it’s more fundamental than that.
Do they really have so much trust in doctors and therapists? Do they rather simply not trust themselves at all?
Well, for instance: with depression, you get told (by self-help books and support websites and Uncle Tom Cobley and all) that no, you are not a wretched, miserable, unlovable failure, that is just the depression telling you all this.
So you know that you can’t trust your mind or what it’s telling you; that what you feel is a symptom of your problem; that even the reasons you think are good reasons for why you’re a wretched etc. are only rationalisations and not actual facts. That an objective outsider looking at the facts would come to a different conclusion (that’s why one of the tricks is “suppose your friend or a stranger told you the exact same thing: would you say they’re a wretched etc. or would you say that they did have options and talents and the rest of it?”).
So you’re depending on the doctor or therapist or counsellor or psychiatrist to be that objective outsider, which is why trust and honesty is so important. If you are convinced X is really the problem but they are over-riding you that no, it’s Y, you really need to believe they are not pulling it out of the air. And since you already have all the self-doubt in place (because you know your head is fecked and you’re not making good decisions), you’re inclined to take their word for it when it comes to things.
The presupposition here – that you would tell a friend or a stranger the truth – is preposterous.
I think the idea is supposed to be “Imagine your friend Jane has broken up with her boyfriend/lost her job and she is saying ‘I’m such a loser, I always mess things up, I’ll never get anywhere’, wouldn’t you point out that relationships naturally come to an end sometimes/she got her last job and she is good in her field and she’ll find another job? So why do you judge yourself more harshly than you would Jane under the same circumstances?”
That trick does seem to depend, though, on Jane not being the type who does pick bad boyfriends and who isn’t good in a relationship because she’s unreliable and over-demanding 🙂
I believe Superintelligence refers to this problem as Perverse Instantiation, and it’s also related to The Hidden Complexity of Wishes. Simply granting people what they ask for leads to some problems; granting what you think they asked for, or what you think is good for them, leads to other problems.
As a total psychiatric basket case who’s had a lot of (sometimes voluntary, sometimes not) contact with mental health clinicians, I’ve often felt that my doctors tended to pay *too much* attention to my wishes, asking me to make decisions I couldn’t be trusted to make, or expecting a level of insight about my own needs or inner workings that I didn’t possess. Or to use this essay’s framing, I’ve gotten a lot of Attitude 1 when I’m not sure I wouldn’t have been better off with Attitude 2.
It’s hard to tell your doctor “stop respecting my autonomy so much”, though.
Yeah, there’s an inherent paradox involved when you tell someone your mind isn’t working right and they uncritically ask you what you think they should do about it.
Mixed observations:
This isn’t just psychiatry. If you’re a software engineer, or a plumber, or _anybody_, you have conversations like this:
Client: I need you to froblicate my wrexels, I’ll pay as much as it takes.
You: Uh, why?
Client: I need to djang my wrexels and I can’t unless they’re froblicated.
You: Actually, you can’t froblicate wrexels, it doesn’t really make sense. But if you demof your wrexel-gub, you can djang as many wrexels as you like! It’s really easy, I can do it right now for $10 and you’ll never have to worry about it again.
Client: Awesome! Thank you so much.
Of course, it’s not usually that easy. There are a million stories where the client asked for X and didn’t really communicate why they wanted X (which, when X is big and well specified, is easy to miss), and between them they spent nine months building X, and then it turned out not to be useful.
And there are also stories where the client refused to believe the expert and insisted on their dead end.
Obviously medicine is trickier because you have a higher duty of care not to harm the patient even if they ask for it, and because they’re likely to have problems which interfere with their judgement.
But maybe it’s possible to separate those problems, at least conceptually, even if they’re intertwined. If you have a better suggestion for what’s wrong *and the patient agrees* (without bullying), that’s probably sensible. If not, is it likely to be something where you genuinely know better or where the patient genuinely knows better? Is it something where the patient is sufficiently not in control of themself you have a duty to override them, or not?
I remember you talking about psychiatric hospitals, and seeing people being wrestled to the ground screaming they were being given the wrong medicine and being killed. But if you checked, almost every time, there was a good reason for it. So I realise patients are going to be convincing and wrong fairly often, hence many doctors’ cynicism.
Someone once pointed out, people with severe medical problems are MORE likely to have OTHER problems in their life as well (both other medical problems and being abused, etc, etc). What does someone with narcissism who actually IS being abused by their spouse look like? How is that different to someone with narcissism who isn’t? Saying “it’s narcissism, so ignore it” isn’t a complete answer. For that matter, even if they ARE provoking it, maybe you need to treat the symptom ASAP and GTFO them?
I’ve noticed this (anti)pattern as well. Perhaps this is only fully applicable in more technical fields, but people* will muddle through until they get stuck, and then look for help on the thing they’re stuck on – but this is usually after they’ve already gone wildly off-track. It seems like this is kind of applicable to the psychiatric examples? I’m not really sure to what extent the analogy holds, but it seems like they have a goal that they’ve tried to accomplish like “get child to go to school,” encountered an obstacle like “child is nauseous,” and gone looking for a solution to the obstacle like “medication for nausea” because they didn’t know enough to find the better way to accomplish the goal. And this seems similar to having a goal like “remove the trailing newline character(s) from a string,” encountering an obstacle like “Windows and Linux create files with different line endings” and looking for a solution like “use a regular expression” because you didn’t know that string.rstrip() was there.
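For concreteness, here is a minimal sketch of the difference between the regex detour and the simpler built-in described above (assuming Python, since string.rstrip() is the example named; the sample strings are invented):

```python
# The built-in handles both Unix and Windows line endings.
lines = ["unix line\n", "windows line\r\n", "no newline at all"]

# str.rstrip() with an explicit character set strips any trailing mix of
# '\r' and '\n', so both conventions are covered without a regex.
cleaned = [s.rstrip("\r\n") for s in lines]
print(cleaned)  # ['unix line', 'windows line', 'no newline at all']

# The regex detour works too, just with more machinery than the problem needs.
import re
assert cleaned == [re.sub(r"[\r\n]+$", "", s) for s in lines]
```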
Now, there’s a real frustration to the other side as well – when you know exactly what you need someone to do, and it’s simple, and they ask “why do you need that” and then pursue an extended chain of second-guessing the requirement. But it seems that feeling doesn’t weigh much against the benefit of getting a much better solution. When I’m tempted to think that I really do actually know best for real this time, I read some of NotAlwaysRight and recover.
* – Read “I.” I am the idiot. It’s me.
Third example is a masterpiece. I thought it was the least convincing of the pack, then thinking about it a little it became clear how it can trigger virtually everybody in some way. Birth control as the desirable default state? Pregnancy as medical condition? Faint echoes of past eugenic psychiatric sterilizations? Hinting that single motherhood is bad? Something here to subtly trigger everybody, left and right, nudging the reader away from attitude 2.
Would a patient come in and ask “How do I taper off birth control?”, though? I would have thought, if she wanted to become pregnant but was ambivalent, she would simply stop taking the pills and leave it to chance (and then, yes, miss her next period and freak out and go running to the doctor).
Looking up the CDC page, I see that hormonal methods (the contraceptive pill) have a “Typical use failure rate: 9%”, so even if the patient wasn’t intentionally coming off her birth control, there’s always the chance of unintended pregnancy (though that does usually happen by forgetting one dose, taking particular antibiotics which interfere with it, etc.)
Wouldn’t a person who intentionally “forgets” to take the medication so they get pregnant be already included in that 9% statistic?
Presumably; failure rates have all kinds of explanations. That’s why I find it strange that a (hypothetical?) woman would ask her doctor about tapering off hormonal birth control, unless she was taking it for additional/other reasons, e.g. polycystic ovarian syndrome.
If she’s simply on it to avoid getting pregnant, then just stopping taking it and letting nature take its course (will I/won’t I get pregnant this time we have sex?) seems the easiest thing to do.
If I recall correctly from the last time I looked at those numbers in depth, deliberately skipping medication would be included in the statistic with or without the pretense.
Which strikes me as a dishonest framing of the failure rate, but that doesn’t surprise me. Whenever you run into statistics bearing on a subject as politicized as birth control — and especially if they’re presented in a form that would work as slides in a high-school health class — you can assume their interpretation’s at least somewhat tortured.
Carmela moralised by her Jewish therapist from The Sopranos: https://www.youtube.com/watch?v=bzVeLjj6Ao8
Could it be because you are the only decent and remotely ethically concerned doctor in the room?
http://imgs.xkcd.com/comics/sheeple.png
I’m serious btw.
That one might just be Randall committing the typical mind fallacy.
Do you mean that in those conferences there may be lots of people who are not convinced the guy is a narcissist but they don’t say anything because each one is convinced that they are the only ones who are not convinced?
I think they mean that every doctor likes to think that they’re the only decent and remotely ethically concerned doctor in the room.
Scott’s younger and more curious, so he’s more likely to test his assumptions. (Arguably, a successful psychiatrist with 30 years of data to work from *should* be more confident in her judgment than Scott is in his.)
Attitude 1 isn’t really an option if you take your position as a professional seriously. This doesn’t just apply to medicine, but to any profession in which a lay person consults (not hires) someone with specialised knowledge for an opinion. Your obligation is to give your best opinion and advice, regardless of what the patient wants; after all, you are supposed to know more about the narrow topic that has brought him or her to you. I don’t write this in an arrogant way, it’s simply a statement of your professional responsibility.
A good professional will not simply state his opinion in a take-it-or-leave-it way, but will work to explain why his opinion is as it is, and give the patient the opportunity to ask questions, eventually negotiating a plan that both can work with. The patient has every right at any stage to say they don’t like the opinion and to seek another.
Doctors who simply give patients what they ask for have abdicated the responsibility that goes with their training, experience and position. When a patient says ‘I want some codeine’ it’s clear the proper response is not ‘How many?’ but ‘Why? What for? Is it the best drug? Is it safe? Are we ignoring a fixable problem by covering up a pain?’ This is plain to all, but it’s just the same when the patient says they want an operation, a test or a particular psychiatric approach. You are obliged to use your judgement, and to ignore that is to fail in your duties to the patient, to yourself and to your profession.
A more insidious path lies between attitudes 1 and 2, where the standard of care is to get the patient out of the door in a defensible way, but one that ignores real risks. Patient complains of chest pain, you note down carefully negative answers to all the cardiac questions and you send them away with an antacid. We all know we can phrase questions so as to get the desired answers. Your medical notes show a defensible course of action, but the patient has left with a potentially dangerous symptom inadequately explored. This one makes me mad, and I see it frequently from walk-in clinics and even some ERs.
Finally, to trump the famous psychiatrist story. I was told of a patient attending a teaching hospital’s outpatient psychiatry clinic, and she told a startling story of how her husband tortured her by suspending her over an electric fire (yeah, 1960’s England…) The psychiatrist started to assess her in terms of paranoid delusions, and then she pulled out the photographs that her husband had taken of her whilst she dangled. The moral was that there’s all sorts of nasty stuff out there that callow medical students will find hard to believe, but sometimes it’s true.
Under most conditions most people are spectacularly overconfident. Of course the patients are overconfident as well as the doctors.
Scott
I was referred to a psychiatrist some years ago, for three or four sessions. I was required to get an evaluation by the hospital’s executive committee because they were concerned about my anger management. I was constantly complaining about such things as turnaround time, temperature control in the OR (one of my patients died from hypothermia), etc., and at one point a nurse said I had threatened him.
The psychiatrist listened to me, talked a bit, suggested and prescribed a very small dose of antidepressant (it could be called a homeopathic dose), and ultimately wrote a report to the executive committee, stating that in his opinion I was not mentally ill and that my anger seemed appropriate. He suggested that they look into the OR functions that were making me so upset.
I did use the affair to begin using several anger management techniques, and the executive committee directed the OR director to work with me to correct some of my perceived issues.
My guess is that in general, probably like you, and probably like most psychiatrists, he used a blend of the two different paradigms you have described. I have always felt that shrinks have the hardest task in medicine, because of the fact that they deal with the most complex structure in the known universe, the human brain.
The world is seldom black and white, but made up of shades of grey.
BTW the psychiatrist was blind…
I don’t have any grand insights into the trade-off between Attitude I and Attitude II, but I feel very confident saying that Attitude II is great when the psychiatrist is remotely competent and obnoxious otherwise.
At one time I was in the fun situation of having two psychiatrists; one who was giving me CBT and was a minor celebrity in her field, and the other who was giving me medication and had a (well-deserved) horrible reputation. Dr Horrible loved to diagnose people with bipolar disorder and ADHD, in fact false diagnoses were a big part of her reputation, as well as giving really weird prescriptions like 1/6th of the recommended dose of antidepressants and violating doctor-patient confidentiality. She had previously misdiagnosed me with ADHD, which was pretty irritating even if the drugs didn’t do anything odd (probably another comically low dose).
So I went in to see Dr Horrible and she started asking me questions which were clearly coming from a diagnostic checklist. And, being the kind of bad patient who reads the DSM, I said “Wait, do you think I have bipolar disorder?” and she reluctantly said that she did. I mentioned that that didn’t make a lot of sense because I have never been manic, just depressed, and she said I should take bipolar medication because she had already diagnosed my hypochondriac mother as bipolar. When I didn’t think that was convincing she said OK and wrote me a prescription for ‘antidepressants’ which, thanks to Google and a bit of healthy paranoia, I learned was actually bipolar medication. I got a second opinion from Dr Celebrity, who agreed with me and told Dr Horrible to knock it the hell off.
On the surface that seems like another “Attitude II drools, Attitude I rules” story but the thing is that Dr Celebrity was pretty Attitude II herself: she didn’t take what I said for granted and tried to track down the root problem even if it wasn’t what I wanted to hear. From my perspective it seemed more like the presence or absence of physician skill was the deciding factor rather than the merits of either attitude as such.
One way of putting this is that Attitude I is safe, Attitude II is risky – it may be much better, it may be much worse.
I’m sure this has been said before, but cognitive behavioral therapy has one of the most confusing abbreviations I’ve ever seen. (Unless Dr. Celebrity was very inappropriate in a doctor-patient relationship, in which case I read it correctly the first time.)
This is me being clueless, but what is the other inappropriate thing CBT can refer to besides cognitive behavioral therapy? Is this another of those escort service abbreviations or something?
I’ve encountered it mostly in the content warning codes for erotic fiction. I avoid those stories myself (and am very glad that content warnings are a standard thing for erotic fiction on the web).
This is one of those things I know, I don’t know how the hell I know it (I must have looked it up for Reasons but blessed if I can remember why) and yes, every time I see “CBT” my mind goes into the gutter first 🙂
Cock and ball torture. I have no idea how I know this, but it immediately came to mind.
Edit: Yes, exactly! @ Deiseach. If there is a reason you didn’t tell Scott here, I’m sorry for blundering through and yelling it out… (I misread your post as “one of those things I DON’T know”, initially.)
If there is a reason you didn’t tell Scott here
I was trying to avoid the Scylla of telling people what they already knew and the Charybdis of telling people what they would then decide they really didn’t want to know 🙂
It was a term I stumbled across somewhere for whatever reason (I genuinely can’t remember where or why), I looked it up and then “Oh – that’s a thing? Well, takes all sorts!” (Spanking, for instance, is something that has absolutely no appeal for me, yet is the kind of default Baby’s First Non-Vanilla Naughtiness that is all over the place in stories and fanart and even the fluffy handcuffs for Valentine’s Day version of BDSM-lite. I don’t get it, even on the “well, the endorphin release” level explanation. Spanking and whipping and the rest of it are completely non-erotic to me, but for other people they are apparently a massive turn-on).
Ah, OK, thanks for the clarification. I had a feeling that I didn’t want to look that one up for myself and that seems to have been a good call.
I never encountered CBT meaning anything other than Cognitive Behavioral Therapy…
a prescription for ‘antidepressants’ which, thanks to Google and a bit of healthy paranoia, I learned was actually bipolar medication
I do wonder why doctors aren’t honest with patients about tests/prescriptions now in the Age of the Internet. Even in the pre-Internet days, there were (and still are) easily available books about over the counter and prescription medicines for lay people to buy.
Do people really simply take meekly whatever is prescribed to them, or is this a peculiarity of my family? My father always wanted to know what was in any medicine he was prescribed, and I have acquired the habit from him of checking before I put it in my gob. Are doctors still working off the old model of the patient doing what they are told without need for explanation, and they don’t think most patients will either bother or be able to do the cursory amount of looking up on the Internet what is in this drug?
That is what mildly irritated me about the gynaecologist I went to see; he was fairly clearly considering a possible diagnosis of cancer when advising me to have such-and-such a procedure, yet when I asked him straight out “So you think this might be cancer?” he danced all around and gave me a brush-off answer. What was the first thing I did when I got home? Hit Google and yep, “first thing to consider with such-and-such patients presenting with thus-and-so symptoms: cancer”.
I wasn’t anxious about cancer, and I can see why he didn’t want to needlessly alarm me, but by the same token I was (and am) mildly annoyed he didn’t treat me as a reasonable adult and tell me “I want to investigate the possibility of cancer which is why I want you to come in for this surgical procedure”. Not directly telling me didn’t achieve anything, since I could tell by his demeanour he was thinking “Okay, could be cancer, need to rule that out” and I was able and willing to hit the Internet and check out what this procedure was done for and why. All it really achieved was “You don’t do me the courtesy of honesty so I don’t trust you”.
Very likely yes.
Yeah, and it’s a pretty predictable and reasonable reaction.
It’s especially puzzling with mental health doctors because the concept of therapeutic alliance is so critical to what they do. How on earth do they expect you to trust them after getting caught pulling a fast one?
Hell, even regular doctors complain about poor compliance with drug regimens. It’s not helping their case if you can’t be sure that you’re even on the right thing.
Hell, even regular doctors complain about poor compliance with drug regimens. It’s not helping their case if you can’t be sure that you’re even on the right thing.
It helps even less when your doctor is flicking through their handy handbook o’pills ‘n’ potions to figure out what they should give you, then you go home, look it up on the Internet, and see that “Known interactions with other drugs: FOR THE LOVE OF GOD DON’T TAKE IT WITH THIS MEDICATION YOUR LIVER WILL EXPLODE!!!!!” and, um, you’re already on that other medication 🙂
It seems to me telling your patient what their problem is, is part of your job as a psychiatrist. The difference between a good and a bad attitude 2 is, I think, often in approach.
The patient is your customer, and the customer is king, as the saying goes. But telling your customer what they want is a very important part of nearly every profession, because customers will rarely know exactly what they want. Customers will come in with a list of wishes, but often they won’t have completely thought them through, and some of those wishes will be impossible or in conflict with each other. It’s your job to figure out what the customer really wants, and provide that.
A slightly absurd example. Imagine you’re the CEO of a major company, and you want to build a new headquarters. You approach an architect, and you’re talking about what you want the new building to look like. At one point you tell the architect that you want 2 elevators. He just stares at you and says: “No, you want 6 elevators”. No, you insist, you really want 2 elevators. No, you get told again, you really want 6 elevators. Disgusted, you walk out, muttering about arrogant know-it-all architects.
Now, in this example, the architect is actually right. Two elevators will never be enough for a building of the size you want. He knows this from experience. He can show you the data about how many people will be using the elevators at rush times, and what this will mean for waiting times. If he had explained this to you, you’d have agreed. But he didn’t, and so he lost you as a customer.
Let’s go back to the classic textbook example Scott gave. The patient comes in and starts talking about her borderline emotionally abusive husband. Again the psychiatrist very quickly reaches the conclusion that the patient is in fact a narcissist. But now instead of interrupting her right away, he patiently listens. Then he asks her if she ever considered that she may be partly to blame for the problems, carefully broaches the subject of narcissism, goes over a checklist together with her, etc. Don’t you think the patient would be a whole lot more responsive to this diagnosis now?
You can’t win them all, and truly narcissistic people are very difficult to reason with. But an approach where you include the patient in your diagnosis making, by explaining your reasoning and considering their input, is a lot more likely to be accepted by the patient. And I think it’s also a lot more likely to be accurate. Your diagnosis could become stronger “You know doc, now that I think about it, perhaps this is similar to that time I …” or weaker “You mention X as a symptom, and you’re right that I do do X, but actually I think it was caused by …”.
What they need is Seven Red Lines.
Oh God, that was too painful to watch through to the end.
However, it does raise a question. The bad guys are showing absolutely no ability to update.
Has there been a rationalist breakdown of what it takes to learn to update? To encourage other people to update when they aren’t in the habit of it?
As someone who has done a fair amount of software design and building, the “expert” isn’t updating either. (and do remember the whole thing is satirical).
When gathering user requirements, don’t tell them why their design ideas are wrong, try to understand what it is that they are trying to accomplish. What ROI will they get from the seven lines? You might then come up with three lines, Red, Blue, Yellow, in a triangle with a green kitten in the center, because what they really want is to signal that they are an LGBT friendly company.
Interestingly, that skit is nearly a word-for-word translation of a Russian short story originally written on Livejournal.
“Everything’s a tradeoff between Type I and Type II errors.”
No, no, no! This is terrible thinking and you know better. For a given level of specificity and sensitivity, yes, you are stuck with trading off false positives for false negatives. But often a much better answer is to improve the accuracy of your diagnostic method.
A wise person I know wrote a really good post about how it can be simultaneously true that ADHD is overdiagnosed and underdiagnosed: https://slatestarcodex.com/2014/09/17/joint-over-and-underdiagnosis/
Rather than restricting ourselves to trading off overdiagnosis for underdiagnosis (or vice versa), we really should look at the possibility of improving the accuracy of diagnoses.
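To make the tradeoff-versus-accuracy point concrete, here is a toy worked example with invented numbers (not taken from the post or the linked article): with a fixed test, moving the threshold only swaps one kind of error for the other, while a genuinely more accurate test shrinks both.

```python
# Toy numbers only: 10,000 patients, 5% of whom actually have the condition.
population = 10_000
sick = int(population * 0.05)   # 500 true cases
healthy = population - sick     # 9,500 non-cases

def error_counts(sensitivity, specificity):
    false_negatives = sick * (1 - sensitivity)      # missed cases
    false_positives = healthy * (1 - specificity)   # false alarms
    return round(false_negatives), round(false_positives)

# Same test, two different thresholds: one error shrinks, the other grows.
print(error_counts(sensitivity=0.80, specificity=0.95))  # (100, 475)
print(error_counts(sensitivity=0.90, specificity=0.85))  # (50, 1425)

# A genuinely more accurate test improves both numbers at once.
print(error_counts(sensitivity=0.90, specificity=0.97))  # (50, 285)
```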
This happens in policy debates too. Should we lock up more people for longer? There are still crimes being committed! Should we lock up fewer people for less time? There are many miscarriages of justice! If you say we should crack down, you’re saying we should lock up more innocent people. If you say we should liberalise, you’re saying we should let more guilty people go free. There is another option. We could try to address both problems *at the same time* by improving the quality of the justice system to improve the accuracy of convictions.
There are loads of policy arguments where this happens. Regulation is an obvious one. Tax is another. Most areas of Government spending.
I’m not saying it’s always possible, or cost-effective, or that we don’t have to make a Type I/Type II tradeoff in the meantime. I am saying that the option of getting better at selection is often unjustly overlooked.
Sadly, because humans are fallen, the world is broken, and politics is tribal, this everyone-wins approach is often really hard to make happen. Arguing for what Robin Hanson called “dragging the policy rope sideways” is a dangerous position since those at the ends of the rope (rightly!) perceive you as uncommitted to pulling it their way.
But you of all people know better!
It’s *very* rare that you can dig up enough info that risk disappears – there’ll (almost) always still be a chance you sometimes guess wrong, so you’ll still have tradeoffs between type I and type II errors. And in psychiatry, “getting more info” will run into limits of time and money, of the unverifiable past (“my dad beat me”), of privacy, of the patient’s mental state, etc. Yes, sometimes good doctors can get more info (especially if they can have their interns break into the clients’ house at night and look for undercooked meat), but never perfect info.
Talking to my GP the other day, he was complaining that about half his workload was seeing people for behavioral issues that could only really be addressed by the patients themselves, if anyone.
For example, if the only thing that might treat your pain is physical therapy, and you can’t or won’t follow through, then we’re back in Type I.
Well… train your Type 2 abilities.
Play social lying games like Werewolf (or ugh, Avalon). And don’t merely participate – predict, be wrong, predict again, be wrong again, until you start getting it right. Then play with other people and do it again.
(Note: I’m absurdly good at this – and depression makes people fully opaque to me. Somebody not feeling much of anything, or feeling way too much of one thing, makes a nuanced impression impossible. Which might make the usefulness of training more or less applicable in a clinical setting.)
I suspect Avalon is better for training these skills than Werewolf is, in part because everyone stays in the whole game–and so you have more time spent guessing. Not sure how to trade that off against the rapid feedback of Werewolf, though.
(I’ve played with a couple of groups that would use body language to sense tension level and typically be able to pick off a few traitors that way–but that doesn’t work if outing someone as having secret knowledge immediately outs them to Mordred’s forces as Merlin!)
Before the local game store closed, I played Werewolf in groups of up to thirty. The length of the different deceptions gave something to the game that Avalon, with its relatively quick play, lacks. (Flip side, Avalon allows much more complex deceptions, provided the game progresses sufficiently to allow them. Around half the games I’ve played ended with none of Mordred’s forces ever being invited along on a mission.)
The first two games we played of Resistance, we lucked into victory because the initial leader plus the ~three people sitting downstream were all blue team, and so we set it aside for a long time.
I’m not sure if we had the number of red team cards wrong, or if that’s just a hazard of the way the game is set up. A number of groups also tend to vote reject on the first few proposed teams, trying to get info out of that, which seems like it makes the game more fair (since you’re less likely to just luck into a winning team) but also poor play on the part of the blue team?
I think the largest Werewolf / Mafia group I’ve played with has been about twenty. You almost need to play it with some sort of parallel game that gracefully accepts new players coming in, because it could easily be thirty minutes until the next game starts.
For larger games, the mod would usually heavily modify the rules so that the players who die still get to participate in some way.
Themes vary by the moderator. In one, the “ghosts” got a tarot deck, and each turn could give a card from the deck to any player they wanted, so long as the seer was still alive. In another, the first X players to die joined a “Supreme Court” which could overrule the villagers’ decisions with a unanimous vote. I was working with a couple of other players to design a deathless version whereby the villagers and werewolves (and another two teams, pirates who press-ganged and another, whose membership overlapped with the main two teams) were trying to kill the leader of the opposing team, and killing non-leaders just recruited them to your side.
If son nauseates Mom every morning, it’s probably Mom who needs help.
Hee! I was just talking about that – IMHO, thoughtful speakers and writers should avoid “nauseous” altogether, because if you use it correctly, the majority of your audience will misunderstand you, and if you use it incorrectly, you will trigger the group of pedants still manning the bulwarks. (Such as me, for example.)
It makes me sad, but there’s no place for nauseous in careful communication.
Of course, I feel the same way in theory about “begs the question,” but I find myself using it whenever there is an opportunity to use it correctly, so I guess I contain multitudes.
Right, that behavior should be limited to when son is still in utero…
“my doctor said I was just making up my side effects for attention, and later on I got neuroleptic malignant syndrome and died!”
From my observations of Scott, I assume that was intentionally funny – either way, it made me smile to imagine ghosts posting on the internet, or people so deeply in Munchausen syndrome that they write posts about how a medication killed them.
The company I work at is currently strongly moving away from Attitude 1 towards Attitude 2, with regards to safety incidents. We’ve received training in Root Cause Analysis, and 5 Whys, and such, but the key thing is that incidents have to be thoroughly investigated before we start using those processes. No matter what meta-level process you use, you will come up with different conclusions based on the amount of information you have. So my first instinct is always “can we get additional data/information?” There’s probably a threshold of where (assuming you can’t gather any more) given an information amount below the threshold, Attitude 1 is safer, and above, you can probably run Attitude 2 processes with more effectiveness.
There’s been some frustration about this because it does seem like the company is moving towards micromanaging these things, since RCA ends with company-wide policy/culture. Certain safety requirements have been imposed that are very overkill and impacting working efficiency. On the other hand, it’s safety. They’re actually trying to reach a 0 incidents ideal, regardless of asymptotes. And this is an environment where Attitude 2-driven “add more PPE” doesn’t have the kind of potential harms it would in other fields.
but the key thing is that incidents have to be thoroughly investigated before we start using those [Root Cause/Five Whys] processes
This seems backwards; possibly I am misunderstanding you here.
“Five Whys”, or as we call it a “Fault Tree”, should among other things guide the investigation as it proceeds. If you don’t have a structured methodology during the investigation, you can’t help but use the early data to form an intuitive guess as to the cause, and your further investigation will be biased towards confirming that guess. With a proper fault tree, you start with something you know to be true without investigation, e.g. “The rocket blew up right after second-stage ignition”, and the structure of the tree forces you to investigate all of the relevant possibilities at the appropriate level and until all but one has been ruled out.
If you can’t do that, fake it. Have someone who only knows the top-level problem and hasn’t been following the investigation come in and start asking you the “Five Why” questions until he’s satisfied. If he starts asking high-level questions that you don’t have answers for, that’s a sign your early investigation may have gone astray.
That’s just based on what we consider “investigation.” I’m currently thinking of it as just the data-gathering stage. Record what happened, interview people, make a timeline, get equipment diagrams, etc.
Then the Five Whys and RCA processes intertwine with the second iteration of data gathering. As you start to piece together causal factors, you may need to go back and re-interview, re-investigate, etc.
For certain types of events, you really need to use a timeline (non-causal) format to catch non-obvious relationships and potential causal/correlative factors and events. Otherwise you risk picking just the one obvious bit to address, and missing others.
Attitude 1 can be summed up as the “you know your own needs better than I do” approach. Attitude 2 can be summed up as the “I know your needs better than you do” approach.
But what about Attitude 3? In my own practice**, I explain it to people like this:
“My expertise is in helping clients get what they really want. Quite often, that isn’t the same thing as what they tell me they want when we first meet.
My training and experience means that I can often suggest pathways and outcomes which you didn’t consider. And I can often act to minimize downsides–options that you rejected as ‘too difficult’ or ‘too costly’ may actually turn out to be your best choice.
So. Forget for a moment that you say you want to ____. What are you actually hoping to accomplish? What are you hoping to avoid? And what was the process that brought you to that decision?”
**Law, actually.
+1
I find it hard to reconcile this with the things you have said about trans people….
There are different forms of body dysmorphia.
Some people think their nose sucks, it actually does suck and fixing their nose actually makes them happy.
Part of the question is whether changing the nose shape will actually make the patient happier or will just shift her self-criticism onto something else. If therapy won’t help and surgery will, then even a Type II psychiatrist will presumably be OK with changing an otherwise normal nose.
See Scott’s discussion of amputations and body integrity identity disorder here:
https://slatestarcodex.com/2013/02/18/typical-mind-and-gender-identity/
Yeah, in that post I specifically contrast BDD and BIID.
Sometimes, your nose actually is the problem…
BDD isn’t the same thing as gender dysphoria.
I too found that a poor example.
There are reports that – contrary to common opinion which claims non-reconstructive cosmetic surgery is ‘shallow’, useless, and does no good – most people who have it done are happy with the result, experience higher quality of life and don’t become surgery obsessed.
Here’s one: https://www.sciencedaily.com/releases/2013/03/130311091121.htm
I think there have been other studies of this sort – perhaps focusing on specific surgeries (like breast implants).
Cosmetic surgery can be productive and body dysmorphic disorder can simultaneously be a thing. There’s no contradiction in the example; a cosmetic surgeon just happened to run into a BDD patient. One can infer from the fact that he’s a cosmetic surgeon that he’d be okay with performing the surgery under other circumstances.
(I actually imagine this happens reasonably often.)
I’m kinda surprised that we didn’t see the Parable of the Hair Dryer come up again — it seems very much a victory of Type 1 psychiatry’s more subtle side, and not an obvious aspect to it. Sometimes solving the patient’s immediate problem does work, even when the thing they’re asking about isn’t actually the real issue.
But the Hair Dryer can also be seen as a victory for Attitude 2 psychiatry, in that the expert psychiatrist examined the patient and saw what would truly help her–which she, mired in the midst of her madness, could not see for herself. (She was the one who asked for therapy and medication!)
More seriously though, it’s tempting to think that if there are two types on issue A and two types on issue B, the sides will line up so there are only two types of people. There are some areas where there are forcing effects that make this more true (like politics), but I think this is one where I would want to see survey data before I assumed that opinions on the Hair Dryer Incident lined up with one’s preference for Attitude 1 or Attitude 2 psychiatry.
(Scott, could you run that survey where you work? Or maybe at a psychiatric conference you go to?)
One particularly annoying manifestation of Attitude 2 psychiatry is the political articles made by psychiatrists (or sometimes psychologists) which try to disguise themselves as a “personality analysis” of a politician (usually it is a politician) the psychiatrists have a beef with, usually based on nothing more than some public speeches they saw on the TV or perhaps a few articles written by the politician. Narcissism is a very common “diagnosis” in this case.
My default assumption is that nearly every sane human is self-centered to some degree. I’m not sure when that reaches narcissism.
I found Scott A’s response to diagnoses of Trump’s narcissism pretty funny.
Scott A’s Trump preoccupation is getting kinda weird, but in this case I think he hits the mark.
“Adding “artist” to choke makes you think past the sale. The sale is whether Rubio is a choker. Your brain accepts that truth in order to process whether or not Rubio is an artist at choking or just a regular choker. (I’ll bet you missed that.)”
This is actually exactly what I was talking about in the last thread re. concept handles: once we’re having a debate about what kind of choker Rubio is, we’ve already accepted the premise that Rubio is, in fact, a choker. Sure we could say that we reject the premise, but the right framing makes it less likely we’ll do that.
Diagnosing nearly any politician as a narcissist seems, quite frankly, like a pretty safe bet.
Isn’t that how we selected our politicians to be politicians in the first place? To serve as avatars for our collective narcissism?
To be fair to them, politics is tough. If you start out on the grind as a local politician working your way up, besides your day job an awful lot of your time outside of official work is going to be taken up with canvassing, building support, joining (preferably getting on the board of management) every damn organisation, club, society and ‘three guys in a phone booth’ you can inveigle your way into in order to have a finger on the pulse and network and get yourself known as ‘the guy (or gal) who gets things done’.
In order to put in that level of slogging, you really do have to sincerely believe you are The Man (or Woman) The Country Needs!, because why else are you doing it? You’re going to get a fair amount of abuse even from the people you are courting as constituents because “bloody politicians” and you have to do a lot of smiling and taking it on the chin as you work your way up the ladder. You do need to think you are special and a Man of Destiny to keep yourself going.
I am not a psychiatrist, not even a psychologist, but I think that there are degrees of this, aren’t there? So yeah, most people are kind of self-centered, politicians probably more so, but then there is this pathological narcissism where you are simply unable to take any criticism, and that is, I think, what these political psychiatry articles are trying to say. They are not saying “this guy is self-centred” but “this guy is more or less mentally ill”. Which is why it is annoying, because that’s a One Flew Over the Cuckoo’s Nest kind of thing and also a popular way of getting rid of political opponents in totalitarian countries. Of course, this is different in the sense that it won’t actually lead to forced hospitalization of those people, but it still does not help if they manage to convince a lot of people that that politician is mentally ill and hence considering his opinions is a complete waste of time.
In psychiatry, as in all of medicine, all diagnoses should be provisional, as should all treatment plans. A physician should signal positive regard to the patient, listen to them, hear them, but not credulously. There doesn’t need to be a hard line between type I reasoning and type II reasoning. Even patients who don’t carry a psychiatric diagnosis lie, tell half-truths and omit important information. They do this out of fear of judgement, fear of possible diagnoses, and lack of insight about their (mental and physical) inner workings, amongst other reasons. This is normal.
Most of what is important about type I reasoning is the signaling of positive regard and trustworthiness. (It certainly helps to actually feel positive regard and be worthy of trust.) The danger is that one allows oneself to be credulous. Most of the failure of type II reasoning results from failing to listen to and hear patients. This matters for substantive and tactical reasons. Type II physicians prematurely diagnose patients and anchor to their (often) erroneous diagnosis. They also may project a disrespectful attitude to their patients. Arguably, an advantage of type II reasoning is a healthy skepticism. In short, I think you’re describing a false dichotomy. You can capture the best features of both styles of practice in your own. I try to in mine.
Yes. Thank you. (Dear reader: Anon’s entire comment is good, I only quoted the executive summary.)
That, the business of every bit of evidence confirming one’s suspicion, was also one of the things I hated most about Robert Hare-style ideas of psychopathy finding their way into law enforcement: that there can be no cure, that there should be no treatment (because treatment just teaches them how to manipulate people more effectively), and that if you diagnose someone positively, any evidence that would suggest the diagnosis is in error is actually just proof that it’s correct, because psychopaths are tricksy, shifty folk who are more than capable of faking non-psychopathic behavior to manipulate people.
It seems to me that it would be easy to solve this dilemma by sticking the patient in an fMRI machine, asking them a battery of diagnostic questions and seeing what parts of their brain react.
Assuming you have a solid baseline to compare the patient’s fMRI data against, that should give you a pretty good way forward.
That said, I have no idea what I’m talking about, so there may be problems with this approach that I’m unaware of.
Sounds like the Voight-Kampff Empathy Test
Well, Leon will get upset and shoot you, for one.
Good post. Here’s my two cents. Dealing with mental health patients (or really people in general) requires the negotiation of needs. As long as you’re open to negotiation, you’re on the right track. The problems you identify between the two attitudes come down to the doctor making his/her own needs non-negotiable. Wanting to avoid conflict or asserting the prerogative of M.D. (“My decision”) are not bad in and of themselves, but obviously become problematic when the doctor clings to them irrationally.
To those blowing off the Type Two approach on the grounds that it’s paternalistic, one word:
Minors.
Mom makes the request, child suffers through the course of treatment.
Attitude 2 in its strong form reminds me of the “Betan Therapy” of Lois Bujold’s Vorkosigan stories.
TL;DR: Beta Colony is a technologically sophisticated Blue Tribe near-utopia, Bay Area California In Space, which we see mostly from a distance and with an emphasis on the bits that don’t quite meet utopian standards. Their therapists, in particular, are legendary – and I think we are meant to believe that they usually live up to the legend. We get the story from the point of view of the ex-POW who was sent off to therapy with physical indications of torture, no memory of any wrongdoing, and having fallen in love with her captor. Check: Stockholm Syndrome plus repressed memories and PTSD. And Betan therapists on their good days would be able to take care of all of that with utopian effectiveness and kindness. But she (and we) know that isn’t the case, and one of the less-utopian features of Betan Therapy is the concept of “retroactive consent” to treatments the therapist is sure you’ll end up agreeing to after the fact…
Basically, aren’t Type 2’s in constant danger of “confirmation bias”?
If that’s the case, there are somewhat effective methods of counteracting that. To me, it would seem like a matter of basic ethics to rigorously employ such a methodology when making a diagnosis. It seems like you’re also saying that people don’t professionally check themselves at all.
Do psychiatrists keep track of this stuff and calculate base rates for bs? Like what if each psychiatrist kept a journal of entries like “patient 9876 claimed Geodon causes hallucinations. But Geodon was clearly not the cause of her hallucinations” and “patient 5130 claimed nausea prevented school attendance. Pills fixed nausea, did not fix school attendance”. Then you compile the data and form a prior about how many patients’ problem descriptions are honest/reliable.
I’m inclined to believe psychiatrists do something like this already. But I expected the post to mention calibration, and I didn’t see it.
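For what it’s worth, here is a minimal sketch of what compiling such a journal into a base rate might look like, with invented entries modeled on the examples above and a simple Beta-Binomial tally standing in for whatever a real clinician would actually do:

```python
# Invented journal entries; the boolean records whether the patient's own
# account of the problem held up once it was investigated.
journal = [
    ("patient 9876", "Geodon causes my hallucinations", False),
    ("patient 5130", "nausea is why I can't attend school", False),
    ("patient 1122", "the new medication gives me headaches", True),
]

# Beta-Binomial tally: start from an uninformative Beta(1, 1) prior and
# update it with each journal entry.
alpha, beta = 1, 1
for _patient, _claim, held_up in journal:
    if held_up:
        alpha += 1
    else:
        beta += 1

chance_reliable = alpha / (alpha + beta)
print(f"Prior that the next self-report holds up: {chance_reliable:.0%}")  # 40%
```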
I don’t know… sometimes, when I go to a doctor, I have a pretty clear idea what my problem is. Sometimes I don’t. Sometimes I think I do, but I’m wrong… and I consider it part of the doctor’s job (and one of his key qualifications) to figure out what the actual problem is.
Now, with psychiatrists, it’s a bit more tricky. We all think we know what’s going on inside our mind (it’s *OUR* *MIND*, dammit!), but that’s not always the case, and if someone has a problem that causes them to see a psychiatrist, that may be a sign that their view of themselves is even more distorted than that of “regular” people. Seriously, if a psychiatrist just went ahead and accepted their patients’ words without digging a little deeper, I’d consider that negligent.
For what it’s worth, this post makes me trust you more as a psychiatrist.
“After a while, he finally admits there is a bully in that class. The mother calls the school, and the school takes care of the bully.”
Okay, that’s the part where the story stopped being believable. 🙁
Yeah, I thought this, too.
When’s the last time a school ever “took care” of a bully?
Yeah, I’m not even sure what that would mean. Expel him? Not likely. Somehow make him into a decent person? Even less likely.
Take him out back and shoot him. 😉
But seriously, I have no idea what “taking care” of him would entail, either.
The actual course of action is a two-day suspension or something—and that’s on the harsh side.
I’ve seen it happen once. The victim called the school, the school — after a lot of bureaucratic thrashing around — called the bully’s parents, the bully’s dad beat the bully’s ass, and the bullying stopped.
But you can see all the places where this could fail — and the corporal punishment step probably prevents it from being nice and enlightened anyway.
This is literally the only way I can see this working, and it will be very rare.
Yeah, the “no smacking” campaigns are going to take care of “the bully’s dad beat the bully’s ass”.
In my experience, the school would tell the bully “stop doing that” and then the bully would figure out who it was and pummel me.
But seriously, I have no idea what “taking care” of him would entail, either.
By which is meant “The school called in both kids and their parents, there was a talk in the principal’s office where no formal claim of bullying was made because that would leave the school open to being sued for slander by the outraged parents of the alleged bully, the whole ‘carin’n’sharin’ talk was given, and at lunchtime in the playground the kid who told his mother was hauled off behind the back of the buildings to get a kicking by the bully and his mates for squealing”.
I’ve worked in a school, and our principal and teams for Home School Liaison and Behaviour Support were very good at nipping incidents in the bud, but once it’s outside the school gates there’s not much you can do. If the bully and his pals lie in wait at the bus stop, for instance, and gang up on the kid who informed – the school can’t do anything and it’s often put down to ‘usual scuffle between kids’ and the even worse ‘boys will be boys’.
Because of the legal responsibility to provide an education, you can’t suspend unless you go through the whole policy procedure. Then you either get the parent(s) who don’t believe my little Johnny ever did anything like you say, or the parent(s) who don’t give a damn and the kid is more or less feral. Sometimes you’re lucky and you do get parent(s) who do care and want to work with the school and prevent this kind of behaviour, but it’s not 100% applicable and not something to rely on.
If you expel the bully, that’s even worse: if no other school will take them in, the parents can go to court to force the school to take them back (the whole legal requirement to provide an education). So “taking care of the problem” is not as easy as “Tell the teacher and your parents and the bad kid will magically vanish out of the school and the neighbourhood and won’t beat you up for revenge and you won’t get a reputation among your peers for being a teacher’s pet and a tattle-tale”.
Kids, you should definitely tell the teacher and your parents, by the way; they can’t solve a problem they are not aware of.
What strategies actually do work for stopping bullying in elementary and middle school?
I was given the “hit them as hard as you can” speech, but most of me doesn’t want to give that advice to another person.
My strategy was to grow into an extremely large young man and let the bullies move on to softer targets, but I don’t think that’s useful advice.
In all seriousness, fighting back isn’t good advice anymore because of Zero Tolerance policies and concerns about racial discrimination by schools. I saw this with my little brother growing up: a bully starts something, he fights back, and the school administration steps in to protect the bully and punish my brother. It got to the point where he was nearly expelled while the punks who had been bothering him got off with warnings.
The best results he got were from waiting until after school had let out to ambush them, which prevented him from being punished at least. But given that schools are expanding their authority over more of their students’ time outside of class hours that might no longer be viable either.
At the institutional level, who knows. Certainly all the institutional interventions I’ve ever seen have been totally ineffective, even counterproductive.
At the personal level, if you’re in a situation where bullying is happening, it’s going to be aimed at the softest targets that stand out the most. Standing out is usually out of your hands, so don’t be a soft target; don’t appear to be weak or isolated. Make friends, build social capital: this is probably the single best thing you can do. Do what you can to be comfortable in conflict situations; the more comfortable you are, the less likely it is to escalate. And if it does come to use of force, then yes, don’t back down.
But ideally it shouldn’t get to that point. I see a lot of nerds approaching bullying through what I think of as the Ender’s Game model, where virtue consists of ignoring any lesser provocations, then when it escalates to force, trying to stop it then and there with a single act of cathartic violence. This occasionally works, but at the cost of giving you a reputation as a psycho, which is going to further isolate you — probably not something you want! And even that’s not reliable. Sometimes they’re better at cathartic violence than you are. Worse, clever bullies are often good at using disciplinary systems to their advantage: you don’t want to be cast as the weird kid that went after his classmate with a rock. Especially in a post-Columbine world.
tl;dr bullying is schoolyard politics by other means, so be a politician.
“Make several big friends” (or just “make more friends”) sounds like actionable advice that could do quite a bit of good. It also sounds much better than escalating to high levels of violence.
Not appearing to be the weakest target and not letting low level bullying occur in ways that can escalate later also sound like a very good idea.
“Make several big friends” can also be described as “form/join a gang”.
I’ve seen evidence that juvenile gangs often start as self-defense groups.
It obviously depends on the situation, but I think parent-to-parent engagement can work, if handled diplomatically. Few parents want to raise a bully, and they’re not always as blind to it as one might stereotypically think.
Worse, clever bullies are often good at using disciplinary systems to their advantage
Oh yeah. I’ve given the example on here before of the early school leaver programme for which I provided clerical support, and the sly kid who used the kid with anger management problems as his catspaw.
Provoke the other kid, wind him up and let him off, and he’d reliably have a meltdown and cause disruption which meant the staff were busy calming him down and doing damage control, so the lesson schedules were disrupted, the other kids could hang around outside doing nothing (well, smoking weed if they thought they could get away with it) and sly guy got away with it (because if you called him on it, all he had to say was “But I only made a remark about [whatever], I didn’t do anything to X!”)
He was the one who should have been getting into trouble, not the kid throwing chairs and storming out, but he was too slick. My only consolation, as I said, was that he was the type to get into petty criminality and one day he was going to try that trick on a real Tough Guy and get stomped into the dust.
In my experience, there isn’t really any strategy that works. For context, I don’t know a whole lot about intra-male bullying, because I am female and mostly experienced intra-female bullying (which takes different forms than male bullying), but literally no strategy I tried or saw any other girls try ever worked to stop the bullying.
Telling the school didn’t work, because they don’t take intra-female bullying seriously, even when a female bully is telling another girl she should kill herself because she’s a waste of oxygen, or is shaming her in front of all of the other students at lunchtime for something innocuous but embarrassing, like having pads or tampons in her backpack (I saw this happen in middle school, where about half of the female students had begun menstruating). Schools seem to regard incidents like this as minor spats between friends (even though the students in question had never, ever been friends); at most they’ll make the bully give the victim a fake apology.
Confronting the bully didn’t work, because the victim virtually always has lower social charisma than the bully, so her confrontation ends up sounding pathetic and weak, which only encourages the bully (note that female bullies often bully in groups, ganging up on a socially awkward, low-status female, which makes a successful confrontation all but impossible; anything she says will immediately be twisted and mocked by the main bully’s friends).
Graduating from or otherwise leaving that school and going on to attend high school at a different school than the bullies was the only thing that worked, and it worked best when the new school was far away from where the bullies lived or attended school themselves.
I knew a few girls who got bullied so badly that their parents pulled them out of my (public) school and started sending them somewhere else (usually a private or parochial school, but sometimes a different public school when it was possible), which reinforces my general impression that putting a lot of physical distance between the bully and the victim is the only real solution.
I don’t think it’s much different for boys either. Children at that age create social status hierarchies no matter what, and someone always falls on the low end of the pole. It’s not just a random bully picking on a random kid… the kid who is bullied is low status for everyone. Just not everyone openly bullies them.
You really can only remove them from that hierarchy and put them in a new one that’s already formed. Or, if the child has something that may always make them low status, remove them entirely.
What an asinine article. The alleged “Two Attitudes” are merely straw men. Where is the evidence that either of the purveyors of the “Two Attitudes” approached their work using the scientific method (vs. as an enabling, doctrinaire therapist/priest)? What the brief case histories illustrate, regardless of “attitude”, is the need for a thorough assessment and evaluation of the person who appears for treatment. Over 80 years of research on “clinical judgment” has repeatedly shown that case formulations based on an empirical, multi-method approach (i.e. psychological testing, review of medical records, as well as a structured interview) provide a more accurate (i.e. valid and reliable) basis for treatment. Said another way, “Treatment without assessment is malpractice.” Psychiatrists and psychologists who function as described above (i.e. as priests vs. applied scientists) deserve to be sued into oblivion. Be grateful that air traffic controllers, brain surgeons, rocket scientists, etc. use a more rational, data-based approach.
What an asinine way to start criticizing it.
Studies show it’s better to be right than wrong.
Of course you want to be objectively right in all cases. The question is, when the evidence is inconclusive, what do you tend towards?
You are definitely coming across as a Type 2, Dr (let us pay formal respect to the academic qualification!) Nevotti. Run a battery of REAL SCIENCE tests and then tell the person what is wrong with them in your professional opinion.
Which is fine, except that there isn’t the same SCIENTIFIC METHOD for evaluation of subjective psychological states (as I am sure a gentleman of your education and professional experience is well aware). One can order a blood test to see if there is an iron deficiency or a problem with the thyroid; I am unaware that there are as yet comparable objective physical tests to diagnose mental disease (though this line of research seems promising for biomarkers).
But what of problems that do not have underlying organic causes? Relationship difficulties, struggles with social interaction, the feeling that one has not achieved in one’s career to the utmost of one’s potential – or is it rather that one has no real potential at all?
There is, of course, the checklist method: fill out this sixty-question survey and based on the scoring of the questionnaire, your diagnosis is – . Rather a mechanical, not to say mechanistic, approach but it at least has the naive charm of simplicity and directness. And the beauty of being reducible to an approach that insurance companies can teach their customer services operatives when people ring up to be referred to a clinic, thus obviating the need for expensive years of higher education at properly accredited institutions of learning to attain such qualifications as doctorates, n’est-ce pas?
I am charmed that you regard yourself (a forensic psychologist, am I correct, unless I am confusing you with another gentleman of the same name and title, in which case I apologise to both you and him) as an “applied scientist”, given the traditional – shall we say – demurral in attitude exhibited by the “hard” sciences to their “soft” siblings, and that you regard yourself as on a par with an air traffic controller – in which direction should the ascription of flattery flow, I wonder? From you to the air traffic controllers, or from them to you?
To: Imperatoris & Deiseach: (a) See below; (b) If you’re looking for perfection, you have chosen the wrong profession.
To: Arbitrary_greay:
Thank you. I haven’t heard the term RCA, but I follow a similar line of thought in my analysis of all my cases. It is structured and systematic, and incorporates essential data (e.g. medical records; MMPI-2-RF and other psychological test data; third-party inputs; etc.) into the “Case Formulation.” An additional advantage is that, because of its structure and empirical approach, it is easier to see errors and what works based on outcomes vs. “clinical judgment.” Moreover, everything I do – EVERYTHING – is open, transparent, and subjected to harsh criticism by some very smart people (vs. hiding behind closed doors using arcane theory and terminology and listening only to the patient’s subjective inputs). Once one has used such an approach a number of times (>50-100+?), and has transformed errors into lessons learned through a data-based process of continuous improvement, it becomes a discipline, much like how a concert musician practices a piece by Mozart or David Gilmour. Essential for training newbies, especially in the field of psychology vs. the usual opaque, “seat-of-the-pants” approach. Again, as I said in my initial criticism of the author of this piece, 80+ years of research strongly indicates that “clinical judgment” (i.e. “Trust me, I’m the doctor”) sucks. A rational, empirical, quasi-mechanistic approach outperforms “clinical judgment” every time. Of course, since it transforms the mental health professional into a scientific observer rather than an oracle, it’s not sexy or self-aggrandizing. But then, neither is the job of an air traffic controller.
Cheers!
…y’know, beneath the layers of self-aggrandizing humblebrag huffery, you have a good point.
Unfortunately you argue it in the most destructive way possible, inspiring everybody who reads it to hate your guts and assume you have nothing useful to say, because you say it in such a fluffed-up, self-praising way. Your ego gets in the way of effective communication, making you less useful a contributor than all of those you look down upon.
They’ll eventually arrive at the same ideas, and get all the credit, because you’re more interested in conveying how intelligent you are than the importance of the idea you’re representing.
Reality bites!
My clients “get it.” I don’t much care if other “mental health” people “arrive at the same idea” or “get all the credit.” I am here to serve my clients.
Perhaps you would feel at home in an AlAnon meeting or reading Terry Cole Whittaker’s book, “What Others Think of Me is None of My Business”?
Cheers!
Joey
If you bother to write to convince other people, you should probably aim to be more effective, rather than to deny that their opinions of you matter. Because that’s a deflection: You’re being ineffective at something you’ve chosen to do.
Re. “Unfortunately you argue it in the most destructive way possible, inspiring everybody who reads it to hate your guts and assume you have nothing useful to say . . .”
Wow! The MOST destructive way possible? Really? “Inspiring people” . . . wow, those you refer to must be easily led. “Hate your guts?” What can I say? Perhaps some CBT with a good therapist can help you with your jumping to conclusions and irrational thinking?
Oh, I struck a nerve – you only now respond to what I wrote before, in search of something to criticize? But it won’t work.
See, I -truly- don’t care what you think about me. You, contrary to your claim, care deeply about what other people think about you. That’s why you try so hard to appear intelligent, that’s why you put that silly “Ph.D.” after your username.
Your ideas are ignored in favor of your… personality quirks. You could try being nice, but that would require admitting that you’re wrong, which means you’re not even very good at adhering to the science you claim to espouse.
OK, let’s stop before this degenerates any further. I’m sure both gentlemen are capable of enlightening discussion, but this subthread has not been a good exhibition of that capability.
since it transforms the mental health professional into a scientific observer
And yet your battery and array of REALLY TRUE SCIENTIFIC SCIENCE tests all rely on “the patient’s subjective inputs” as the foundation of what you are investigating, since they are self-reporting symptoms and you are sorting all these out.
– psychological test data: this is also subjective since the answers the patient gives are subjective. The patient can tell you on Tuesday they felt low in mood and perhaps you can tease out that this was triggered by something that happened on Monday, but as yet there is no equivalent of a Holter monitor for the psyche to give you objective physical-based data
– third party inputs: if from laypeople (the patient’s family, co-workers, etc.) just as subjective and biased as anything the patient may tell you; if from professionals in the field, it is to be hoped that objective knowledge and experience are in play, but again there is the possibility of the “Attitude 2” pre-judgement at work here (e.g. I knew that the patient was such-and-such but they resisted all attempts to uncover this)
– medical records: as good or as bad as any other kind of record-keeping. As someone who works with client records, I can say there are often instances of vital information that is missing, has not been recorded correctly, or has never been requested. I would never swear to any file being 100% complete and unimpeachable; circumstances change and those changes may not be reflected in the file.
This is not to say that good and accurate information gathering is not important, but since people are not reducible to simple mechanical systems, an approach geared towards treating them like cars undergoing a check-up by a mechanic is somewhat lacking. It may very well treat the symptoms, but the underlying causes?
I see the old status-chasing of “we are too real scientists” is still at play here; putting mental science and treatment on the same footing as physics or chemistry, proper hard science and forget all the arty, humanities nonsense!
Really, Dr Nevotti, do not be ashamed of your profession being an art as much and as well as a science! The white coat is not some magical ceremonial garb, and sighing for the sweet name of REAL PROPER SCIENTIST is bowing to the altar of the sliderule priesthood of yore!
You make some interesting points. However, you are also very rude, which makes people around here less willing to give your arguments proper consideration. I assume you’re new here. If you want to engage us in a debate, I suggest you acquaint yourself with the way we do things around here (https://slatestarcodex.com/2014/02/23/in-favor-of-niceness-community-and-civilization/ and https://slatestarcodex.com/comments/). I think you’ll find people here refreshingly open-minded and receptive to your opinions and criticism, as long as you remain civil.
I have less than zero interest in engaging anyone in a debate or whether or not the people on this site accept what I have to say.
When I want to evaluate my work I ask my clients, i.e. the people who pay me (and I’m doing fine). I have no interest in what anonymous people with unknown credentials, experience, and data think or feel about my work. If you are upset or off-put, talk to your therapist, priest or best friend about it.
Then why contribute a comment, if you were not at the least offering a response to what Scott said in his post? Comment and go? What is the point there?
You can certainly say “This is nonsense and I think you’re wrong”. If you are not interested in debate, dialogue, or convincing people of the correctness of your contention, then you need not say anything – you can just shake your head at our obtuseness and click elsewhere.
But if you leave a comment, you do – for better or worse – invite reply, even if it is no more “This sounds interesting, please expand upon it” or “I agree!” And on a site like this, it will very definitely invite “I do not agree and this is why”, “Your reasoning is unsound”, or “I think you misunderstand the point being made”.
Moreover, dropping titles, qualifications and “I earn big bucks in the Real World so I don’t care about what you losers think” is not going to impress anybody on here as to how much more knowledge and experience in the matter under debate you have, your bona fides, or that you are indeed the big dog in the yard.
“Two Attitudes” is just a way of describing the difficulties involved in Binary Classification, which is inherent to diagnosis. The difficulty lies in deciding whether to err on the side of false positives or false negatives in the face of uncertainty (which may exist even after administering a state of the art examination).
https://en.wikipedia.org/wiki/Type_I_and_type_II_errors
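For a concrete picture of that tradeoff, here is a toy sketch. The screening scores, threshold values, and the diagnose-if-above rule are all illustrative assumptions, not anything from the post or from any real instrument.

```python
# Toy illustration: shifting a diagnostic threshold trades Type I errors
# (false positives) against Type II errors (false negatives).

def error_counts(cases, threshold):
    """cases: list of (score, actually_ill) pairs; diagnose if score >= threshold."""
    false_pos = sum(1 for score, ill in cases if score >= threshold and not ill)
    false_neg = sum(1 for score, ill in cases if score < threshold and ill)
    return false_pos, false_neg

# Hypothetical screening scores: higher means more evidence of illness.
patients = [(0.9, True), (0.7, True), (0.6, False), (0.4, True), (0.3, False), (0.1, False)]

for t in (0.2, 0.5, 0.8):
    fp, fn = error_counts(patients, t)
    print(f"threshold={t}: {fp} false positives, {fn} false negatives")
```

Lowering the threshold catches more of the genuinely ill (fewer Type II errors) at the cost of more false alarms (more Type I errors), and raising it does the reverse; no single threshold makes both vanish at once, which is the dilemma the two attitudes are wrestling with.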
Yeah, I feel like it’s definitely possible to try to find a happy medium between Type I and Type II. Is it as simple as abiding completely by one or the other? No, but it is still worth a try.
A psychiatrist shouldn’t take everything a patient tells them as the absolute truth, but they should definitely try to make sure the patient believes that they are being believed. Because no one, in any realm of life, likes to be told by an ‘expert’ that their internal experience is ‘wrong’.
Especially in a clinical setting, patients have to have a huge amount of trust in their psychiatrist/doctor, and if the patient feels like they are being condescended to, or that they aren’t being taken seriously, that will likely taint the doctor-patient relationship for the future. Why would a patient want to be completely open with a doctor if they are worried they will get a paternalistic response?
But obviously, as your examples of cases where Type II thinking is the correct response show, people don’t always know what’s best for them, and they definitely shouldn’t always be able to get what they think they need/want. So in a situation where a psychiatrist is wary of giving a patient what they think they need, the best solution is probably somehow managing to convince the patient of what would actually help them, without sounding like you’re discrediting their experience. And while also not jumping to immediate conclusions yourself, i.e. “They’re a narcissist”. Obviously this is easier said than done, and definitely not always possible.
Not sure if this is very actionable, but I’ll give my 2 cents.
My father is a physician (not a psychiatrist) with a reputation for helping patients no other doctor can help. I’ve had the benefit of observing him in practice for extended periods. Part of his effectiveness is a different perspective on how to resolve things – he tends to focus on increasing overall vitality in the patient via lifestyle/ecosystem-type changes in addition to relevant therapies. Getting people to participate in such changes requires a degree of inspirational charisma. But the subject of the OP is also a part of it. He manages to balance the two in a non-binary way – not sacrificing 1 for 2 as much as most would.
A big part of this is explicitly offering a therapy for the patient’s explicit complaint, while implying but not imposing alternative explanations he thinks are relevant. He makes sure to reassure the patient that their complaint is genuine and that the things he’s talking about are forces that apply to everyone to some degree. He gives small but concrete ways they can start to work on the issue in addition to the usual therapies. If he senses resistance he doesn’t back down but de-emphasizes the issue for the time being. If he drives the patient away no one gains a thing; if they come back, the message may eventually find fertile ground.
This all seems remedial when I put it down, but there has to be a real elegance to delivery so that people feel heard while at the same time introducing them to new ideas. Also, you need to have a good intuition about what people’s actual problems are.
Like I said at the start, maybe not too actionable. Hopefully inspiring at least 🙂
I’m really curious about your father’s situation… What is your father’s specialty? Could you give more concrete examples of what his different approach is (I’m a medical student, so you can be as technical as you want)? If you prefer to answer in private, my email is cryptonomicon314@gmail.com.
As a primary care physician, I entirely endorse this approach. The patient knows their own universe far better than we ever will. We as human beings cannot occupy the role of physician without the belief, correct or not, that we know medicine far better than our non-medically-trained patients ever will. Yet we know that we have vast lacunae in our own individual knowledge. We must respect and collaborate if we are to have any hope of helping.
The most helpful book I ever found on this subject is “Caring for Patients” by Allen Barbour, MD.
Douwe Rienstra, MD
Say there’s a scale of 0-100, where 0 is the most extreme Attitude 1 you can imagine and 100 is the most extreme Attitude 2. Then the 5 examples at the beginning of the post contrast something like hypothetical behavior 10 with actual behavior 40. The psychiatrist out of the textbook is 80. People in the comments who come down strongly on the Attitude 1 side see the issue as a 35-90 contrast, and people who come down strongly on the Attitude 2 side see it as a 5-35 contrast.
It seems, therefore, that typifying the problem as a choice between two positions is not helpful; readers’ (and possibly the author’s) past experiences with actual or imagined doctor behavior cause these positions to snap to preconceived stereotypes in their minds.
Attitude 2 can only be taken with a certain kind of skepticism about your own correctness about a situation. If I am taking Attitude 2, I can’t have the hubris to believe that my judgement is prescriptive truth. For example, take your case of the famous psychiatrist calling the woman a narcissist. That is a global judgement, one that applies in all scenarios, casting every aspect of her behavior in a narcissistic light based upon a previous judgement. A clear bias like this must be avoided at all costs in taking Attitude 2. To do this, a doctor taking Attitude 2 must work against the priming effects of the patient’s early comments.
Early Freudian psychotherapies saw the doctor using Attitude 2 as a blank screen, someone who was unaffected and beyond the realm of being influenced by the patient. Contemporary psychotherapies now see the relationship as clearly dyadic, in that the doctor himself can be influenced, and has beliefs and values that are evolving, just like the patient’s. In order for Attitude 2 to work correctly, both the doctor and the patient have to work to reach an understanding that not every attitude and thought applies to every situation.
How does this work? A great place to start would be in understanding the three explanatory styles. They are:
1. Internal vs. External
If I take my thoughts, words, and actions to be part of who I am, then anything that threatens these parts of me threatens who I am. Externalizing these things will help me be able to give up any one decision in any of these three realms, because they are external to who I am.
2. Permanent vs. Temporary
This has a lot to do with self-labeling. For example, “I am a loser.” There is no way that this description could fit every situation, yet often we apply these labels to ourselves, and we live them out in self-fulfilling prophecies. I always lose my keys before I go out, I always eat way too much food, etc.
3. Global vs. Specific.
How much truth do you grant to your judgement? Is this something that pertains to this specific situation, this specific event that this person was in, this specific time? Or is it something that pertains to every situation, to all of the events in my life?
Many times we assign to one side things that belong to the other. Both the doctor taking Attitude 2 and the patient should be encouraged to examine their own biases.
Consider the possibility that you may not actually be bad at Attitude 2. You may just believe you are bad at it.
Illustrating attitude 2 with the single half-remembered textbook anecdote is misleading at best.
The question isn’t whether people do or do not know what they want. The question is how people prioritize their various desires and how honest they are about that prioritization. As your own personal testimony suggests, your desire to find a solution to your marriage or anxiety or body-image issue etc. may be (and often is) displaced by your neurotic desire to please the therapist.
With very little effort, you were able to come up with 5 examples showing how dangerous Attitude 1 can be. And in the same breath you explain how unwilling you are, not simply to use Attitude 2, but to even *think* you might need to be able to use it. Psychoanalysis is an invaluable therapeutic tool. I’m sorry to see someone in the field as bright as you refusing to engage with it.
Now I wonder how much of “getting a second opinion” is just moving from an Attitude 2 doctor to an Attitude 1 doctor.
As a fourth year med student going into psychiatry, I have often seen and struggled with this dichotomy in practice, and I was so excited to read this article. Finally, answers!, I hoped. And then the article ended. More coming?
This article and discussion has shown me that I must have some really deep biases. I can’t get past my first reaction, that trying the test with the alleged-hallucination pill, or bringing in the alleged-abusive husband for a sample of their communication, or having the patient test carrying the hair-drier, or whatever … is the obvious and only right approach. (In psychology or a lot of medicine, anyway.)
If the test supports the #2 guess, that’s evidence for persuading the patient to accept the diagnosis. If the test throws doubt on the diagnosis, that’s a good warning for the doctor. In any case, the fact that the doctor was willing to consider alternatives and to discuss such real-life tests and their implications with the patient, is good for their relationship.
I’d have more trust in an Attitude 1 doctor who did at least a little of this sort of testing than one who did a pure Attitude 1, unless the case is so simple that ‘try the pill you asked for and see if it helps’ is a reasonable test.
I’m not sure anyone can EVER fully justify Attitude 2, because it ultimately makes the determination that someone other than the person themselves knows what is best for them (the words paternalism and arrogance come to mind). This has the potential to lead to some very wrong and bad things happening to people, as it has in the past when an authority’s opinion was beyond question.
Maybe that’s one of the reasons that Attitude 2 goes against so many of our cultural principles around people having the right to determine the course of their lives.
In many of these examples, I could see how Attitude 1 could eventually lead the person to the same conclusion, possibly in a much less traumatic way. But even if they don’t, don’t we all have the right to struggle for the meaning and solutions to our own lives, even if that takes us off of what appears to others to be the correct course?
This kind of issue is not unique to psychiatry. If you’re developing software, you may also be faced with a client who says they want one thing, but isn’t going to end up happy if you just give them exactly what they say they want. Or in marketing research, the client may want a certain kind of study done in a certain way, but you know that’s not going to give them good information. Or as a former electrical engineer turned manufacturer’s rep told me, sometimes the customer wants to order a part but you know it’s the wrong choice.
In all of these cases the customer has a surface-level request for X, but its purpose is to address a more fundamental need Y. If you want to serve the customer well, you need to find out what that need is, and recommend a solution appropriate to it.
So I guess I’m leaning towards a modified version of Attitude 2 — more humble, avoiding the overconfidence bias, not interested in passing moral judgment on the patient, but willing to probe beneath the patient’s surface complaint before recommending a course of action.
a modified version of Attitude 2 — more humble, avoiding the overconfidence bias, not interested in passing moral judgment on the patient, but willing to probe beneath the patient’s surface complaint before recommending a course of action
Yes, I think most people wouldn’t mind being told “I don’t think that’s what you need, because [explain reasons] and I think [recommendation] would be better in this case”. It’s the flat “You’re wrong, I know your own business better than you do” that gets peoples’ backs up.
Happens all the time in game design, especially if you have a less closed and more interactive development cycle (more common these days as multiplayer becomes more important and open betas become the norm).
Attitude 2 only works if you have a long-term, accurate history with the patient and approach it with clear eyes and no unconscious biases. If Scott is Irish, he may not know about the Satanic Panic in the States starting in the 1980s, which was Attitude 2 run incredibly amok, leading to things like false memory syndrome and satanic ritual abuse being tendered as the real explanation for behaviors. Recovered memory therapy is Attitude 2 taken to the extreme, too.
Or more simply, if you can’t spend enough time with a patient to determine that he is being bullied (how many kids open up quickly, and how would a person know he only gets sick on school days?), you can only do Attitude 1 and trust what they say.
I’m seeing some kind of fallacy at work here. It seems to me that the problem is not Type 1 and Type 2 psychiatrists, but Type 1 and Type 2 patients. In other words, some patients have a decent understanding of the nature of their own problems while others are clueless about what really ails them. Obviously this problem is worse for mental health professionals, but any doctor should be asking “Does this patient understand their own problem?” Then the doctor should take a Type 1 or Type 2 approach as needed.
What about patients who, as part of their pathology, mislead the psychiatrist?
Very good, Scott.
“Everything’s a tradeoff between Type I and Type II errors.”
C’est la vie.
I practice and teach Taiji and am constantly faced with the same Attitude 1 / Attitude 2 dilemma. A student wants me to teach them in a particular direction, but I know they are not ready for it, and want to instead teach them in a way that will better contribute to their long term progress.
Perhaps there is a difference: experience has shown that beginning and intermediate students are not skilled at perceiving where they can most effectively focus. The teacher’s guidance is truly essential, regardless of the student’s opinions. Attitude 1 is proven less than effective.
However, there is an Attitude 3. The teacher can show the student the path and then step back, allowing the student to walk it or not. Not push them along it, but return to the spot until the student takes the step. If the student will not step, then progress will stop for a while. But that is less of a risk for Taiji instruction than when the patient is deeply troubled.
In fact, the worst failure mode of Attitude 2 isn’t that the patient leaves and doesn’t get any care. It’s that you’re wrong, but she believes you.
Attitude 2 psychs are *excellent* at basically unintentionally gaslighting you into believing the incorrect theory they have come up with, because their position, their status, the dominant cultural narrative of what psych treatment looks like, and your own fragile mental state make that really easy to do. This can mean that even if you end up seeing someone else, the mistake never gets fixed: a new psych *might* figure out that the original psych’s opinion, which is now your opinion, is wrong if they just heard you say it, but they have your notes and, funnily enough, attitude 2 psychs never disbelieve other attitude 2 psychs.
Which isn’t to say patients always know what’s wrong, because how could we?
I just think that whenever you do adopt attitude 2, you have to be prepared to actively seek evidence that you are wrong. If the resources were there, I’d even suggest something like getting a second opinion from a colleague *without telling them your theory*, or something like that.
I mean at the end of the day it sounds like you’re doing your best? ‘Person feels gaslighted and belittled and hates psychiatrists now’ feels like a different kind of side effect/result of a mistake than the horrible physical consequences that can come from prescribing the wrong medication, or the right medication that nobody knew someone would turn out to be allergic to, but morally it isn’t. And every doctor is going to make mistakes.
I’ve read a lot of articles and seen a lot of anecdotal evidence suggesting that Attitude 2 is more likely to be adopted toward certain demographics of patient – does this match your experience?