While looking up data on the Implicit Association Test for my post two days ago, I came across Nock & Banaji 2007 (Prediction of Suicide Ideation and Attempts Among Adolescents Using a Brief Performance-Based Test), an interesting study which I am learning about only six years late (I’m catching up!).
They tried to use the Implicit Association Test to measure suicidal intent in psychiatric patients. This is good. Right now the technology for predicting suicidal intent in psychiatric patients is asking them very nicely “Excuse me, but do you think you’re going to commit suicide soon? Because if you say yes, we’re going to have to lock you up in a hospital against your will. But please, answer honestly!”
Okay, that’s exaggerated a little for dramatic effect. For one thing, people don’t say that last part. They just imply it. And there are various screening instruments that ask the question in a variety of ways, and with a variety of related questions (“Do you feel like life is not worth living?” “Have you formulated a specific plan?” “Have you bought the tools you need to carry out the plan?”), and those screening instruments have been validated ad nauseam. And many people considering suicide really want help and are happy to be able to admit it and get it off their chests.
But the basic gist is still that if you want to know whether someone is going to commit suicide you don’t have many options besides asking, and some people have an incentive not to tell the truth.
The Nock & Banaji paper validates an alternate means of assessment. Implicit Association Tests, as mentioned before, are computer-based instruments where a subject has to press keys that categorize like and unlike concepts as quickly as possible. The idea is that they will be able to do this slightly faster on concepts that already seem implicitly associated to them than concepts which seem contradictory. In the most famous example, people were asked to organize the categories (white people + good adjectives) / (black people + bad adjectives) and then afterwards the categories (white people + bad adjectives) / (black people + good adjectives). In general white people were able to do the first task faster, presumably because they had stereotypes that already associated black people and negative attributes together so their implicit associations were aiding in the task rather than contradicting it. If this doesn’t make sense to you, it’ll become much clearer upon taking the test.
Anyhow, instead of working with races and adjectives, this new version of the test matches the categories (self, other) with the categories (pictures of self-harm, pictures of not-self-harm).
Self vs. other were words like “me” “mine” “I” versus “you” “them” “her”. Self-harm vs. not were pictures of scarred, cut skin versus pictures of healthy uncut skin. The theory was that if someone associated themselves with self-harm, they would have an easier time (as measured in reaction time) creating the categories (me + self-harm) and (others + not-self-harm) than the alternatives.
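To make the scoring concrete, here’s a rough sketch of how a score like this might be computed from raw reaction times. This is just my own illustration, not Nock & Banaji’s actual algorithm (the standard IAT “D-score” also involves error penalties, trial trimming, and separate practice and test blocks), and every number in it is invented:

```python
import statistics

def iat_style_score(me_with_selfharm_rts, me_with_healthy_rts):
    """Difference between the two blocks' mean reaction times (ms),
    scaled by the pooled standard deviation of all trials.
    Positive = the subject sorted faster when 'me' shared a key with
    the self-harm images, i.e. a stronger implicit self/self-harm link."""
    mean_diff = (statistics.mean(me_with_healthy_rts)
                 - statistics.mean(me_with_selfharm_rts))
    pooled_sd = statistics.stdev(me_with_selfharm_rts + me_with_healthy_rts)
    return mean_diff / pooled_sd

# Hypothetical reaction times for one subject
block_a = [612, 655, 598, 630, 701, 644]  # 'me' keyed with self-harm pictures
block_b = [698, 742, 715, 690, 760, 733]  # 'me' keyed with healthy-skin pictures
print(round(iat_style_score(block_a, block_b), 2))  # positive: the self-harm pairing came easier
```

The point of dividing by the spread of the person’s own reaction times is that the score reflects the relative ease of the two pairings rather than how fast the person is overall.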
I’ll admit this protocol sounds to me like it would do a terrible job predicting suicidal intent. It seems like if anything it would pick out people who had cut themselves before. The authors note this, but say that they were reluctant to use actual images of suicide (a person hanging from a noose?) because it might “plant the idea” in people’s heads. It is conventional wisdom in psychiatry that this doesn’t actually happen (though I haven’t personally researched the evidence base for this), and I’m disappointed that the study went with the self-harm angle.
Extremely mysteriously, though, the study claims that it adjusted for presence of self-harm and found no effect on its suicidality data, and so it had no qualms about taking a population consisting of both self-harmers and non-self-harmers and assuming this test on self-harm wouldn’t be confounded by that.
I will put aside my extreme skepticism and report what they found – which was that their test was able to distinguish healthy controls, people with past suicidal ideation, and past suicide attempters, with p < 0.01 for each pairwise comparison. Further, significant differences remained when they controlled for all previously known ways of detecting the suicidal – e.g. age, demographics, and pre-existing psychiatric diagnoses.
They also claimed that their test was able to predict prospective suicide attempts. That is, two members of their study group attempted suicide in the six months after their study, and they noted that these two people had higher average scores on their suicidality test than the subjects who didn’t. But looking at their calculations, it looks like they simply compared the average score of the two attempters with the average score of the seventy-one non-attempters. This seems useless to me. The two attempters were almost certainly from either the “previous suicide attempts” group or the “previous suicidal ideation” group, so comparing them to the entire rest of the study sample can’t tell us anything about whether this new test is any better than just noting that psychiatric patients with a previous history of suicide are more likely to commit suicide than healthy controls. In fact, this whole section seemed damning-with-faint-praise; if this was the most they could say about its predictive validity, that’s a bit worrying (in the study’s defense, it did classify this as “preliminary evidence”).
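To spell out why that comparison is so uninformative, here’s a toy simulation (every number and group size is invented, only loosely modeled on the study’s three groups of roughly twenty-five). In it, the test score depends only on which group someone was already in and carries zero information about who attempts suicide later; the two “future attempters”, drawn from the prior-attempt group, still come out above the pooled average of everyone else:

```python
import random
random.seed(0)

# Toy model: score depends ONLY on group membership (i.e. on history the
# clinicians already knew), not on who actually attempts suicide later.
def score(group):
    baseline = {"control": 0.0, "ideation": 0.5, "attempt": 1.0}[group]
    return random.gauss(baseline, 0.3)

groups = ["control"] * 25 + ["ideation"] * 24 + ["attempt"] * 24
scores = [score(g) for g in groups]

# Pretend the two future attempts come from the prior-attempt group,
# chosen at random, independently of the test score.
attempters = random.sample([i for i, g in enumerate(groups) if g == "attempt"], 2)

mean_attempters = sum(scores[i] for i in attempters) / 2
mean_others = sum(s for i, s in enumerate(scores) if i not in attempters) / (len(scores) - 2)
print(round(mean_attempters, 2), round(mean_others, 2))
# The "attempters" score higher than everyone else pooled, even though by
# construction the test adds nothing beyond already-known history.
```

The real data might show more than this, of course; the point is just that comparing two attempters against the whole pooled sample can’t distinguish “the test adds predictive power” from “people with a suicide history are more likely to attempt again.”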
Overall I am very excited that work is being done in this area, but a bit skeptical about this study in particular. The self-harm aspect really bothers me, and their claim that controlling for it doesn’t change anything needs more proof. In particular, they controlled for the presence of past self-harm but not the amount of past self-harm, and it seems totally plausible that people who self-harmed more in the past are more suicidal.
But the biggest uncertainty is how useful this will be. The Holy Grail would be some test you could give someone, see they’re suicidal, place them in a hospital for a few days until they’re no longer suicidal, give them the test again to prove they’re no longer suicidal, and let them out when you see their test scores have improved. But for all we know, this method could test only extremely long-term constructs, something along the lines of whether you’ve ever harmed yourself or considered harming yourself. That would be useless for seeing whether someone’s suicidal now, and useless for determining whether they’ve stopped being suicidal after some treatment. The study’s extremely half-hearted attempts to analyze prospective data don’t reassure me here.
Still, I’m glad people are finally realizing what I’ve been saying for years, which is that the IAT is really powerful and needs to be used for something other than nebulous social justice projects. I bet if the CIA created a (self/other + patriotic American/Russian double agent) Implicit Association Test it would totally work. In the absence of that, I will just hope for more research on this suicide thing.
Even assuming that training can’t speed up the ease with which Boris and Natasha can associate self+civilian and other+spy, wouldn’t they just be trained to *slow* their associations of self+spy and other+civilian? This still seems to fall in the “Experimental, but not Organizational” intersection of Eliezer’s three levels of tests, even if it’s not quite as easy to game as “How happy are you?”
Well, now I’m scared. I just spent like a week hospitalized for suicidal intent. If they could tell if I was lying or not when I asked to be sent home I might still be there. I wonder what I’d score on this test? Do you have a link, by chance?
I don’t think it’s publicly available.
If you’re thinking that you fooled your psychiatrist into thinking you were sane when you still had issues…well…I don’t know anything about your case, but in a lot of the cases I’ve dealt with, the psychiatrists know very well that the person isn’t completely cured, but once someone is doing well enough that they can plot how to get out of the hospital effectively, there’s not much more that the levels of inpatient treatment most people are willing to provide can do for them anyway.
I feel like economics places a pretty hard floor on how many people can be hospitalized for how long, and that better tests of suicidal intention would just shift those resources towards people who need them more rather than mean everyone is hospitalized forever.
Now imagine someone 50 years ago making this statement about our prison system.
Further evidence of psycho-folk (psychologists rather than psychiatrists in my example) going along with failed attempts to fool them: Until recently we still had conscription in Germany. There was a suitability exam which had a standardized psycho-screening component.
When I went they were still trying to draft as many people as possible, and part of my physical was clearly manipulated to make me eligible. For ethical reasons, I was totally honest on the psycho test, which probably accurately flagged me as very weird but harmless. My platoon had two people clearly more insane than me, one of whom got to leave early because of it.
A year later, the army got downsized. Since not drafting eligible people would have gotten them in trouble with the constitutional equality guarantee, the army was suddenly interested in people being as sick as possible. And so in college I met someone who was starting with me, except he hadn’t lost a year. He had been found ineligible for playing dumb on the psycho-screening. He did it the dumbest possible way, simply by giving the most insane answer to every item. That must have been really obvious, because some psychological defects are rather hard to combine. But he wanted to be insane, the psychologist wanted as many insane people as possible, so insane he was. And that even though a year earlier they wanted as many sane people as possible, so I was sane even though I’m objectively much insaner than that guy, not that I’m bitter or anything.
Also, for vaguely related fun, y’all have read Feynman’s account of his army psycho screening, right?
You don’t mention whether you ended up attempting suicide after being released, but it’s clear you at least didn’t complete suicide. Which seems like evidence that keeping you hospitalized wasn’t necessary to prevent your suicide. Which would mean that a tool that accurately predicted whether people need hospitalization to prevent suicide would have led to your release.
Whether this particular test is such a tool, of course, is another question altogether, as Scott notes.
A test whose results you have no conscious control over and which can result in you getting locked up until those results improve sounds *lovely*, my goodness. The Holy Grail!
I wonder if Scott Alexander has played Fate/Stay Night.
I wonder if he’s played Tsukihime, but I’m also not entirely sure how either of these are relevant to the post.
Not to mention that dumping you back home in the same situation that drove you to contemplating suicide doesn’t seem much good; the test might work best as a Cover Your Ass legal indemnity – “Hey, he passed our ‘Are You Going To Top Yourself Today?’ test, it’s not the hospital’s fault he had no job, his wife left him, he was kicked out of his flat for non-payment of rent and then he decided to throw himself under a bus!”
And not to mention that being hospitalized for a week is a good way to get fired, making your life substantially worse.
Assuming we’re locking a constant number of people up for Thing X (and it is constant, since it’s limited by money/hospital space), it is way better to have a test that actually determines Thing X correctly than just sort of guess and lock up some of the wrong people at the cost of not locking up some of the right ones.
Okay, somebody actively suicidal who’s locked up for a week (and I hate that we’re actually using the term ‘locked up’, but I see why it is necessary in some cases) might get over the immediate impulse to jump off a cliff.
But if there is no other support than “You’ve had your week on the ward, you can go home now, here’s a bottle of tranquilisers”, then it’s not a long-term solution. And unfortunately, even in a socialised medicine scheme like the NHS in the U.K. or the Irish health system (which you know yourself, Scott), those are the options for a lot of people who need expensive and long-term therapy but aren’t going to get it if they can’t pay for it themselves.
Post-inpatient support is completely orthogonal to the issue of whether we’re getting the right inpatients in the first place.
No, Steve, I don’t think it is. Yes, it is important to identify if you are getting the right patients, but once you have them, what are you going to do with them?
You are aware that involuntary hospitalization is a thing that happens already, right? Based, basically, on the subjective assessment of a psychiatrist?
Yes, but right now AIUI it’s based on things that are mostly under your control. You don’t say “I have such-and-such plan to kill myself on Tuesday,” or whatever, to your psychiatrist unless you, at some level, want to be hospitalized. And since I think that suicide is an important human right, I also think that “on some level you want to be hospitalized” is a much much better criterion for locking you up than “you are likely to attempt suicide.”
So your real objection is to involuntary hospitalization in the first place. This (hypothetical) tool is only a problem inasmuch as it makes it harder to resist the current regime.
Precisely!
But I do think there is a set of people who benefit from the current “semi-voluntary” regime. Asking outright to be hospitalized is probably pretty hard and frightening, and if you’re conflicted it may be a lot easier to just describe your thoughts to the shrink and leave the final decision in their hands.
I have to say, *I* am not benefited by the current semi-voluntary regime, because I have an extremely strong preference against being hospitalized, which means I have to be circumspect about what I say to mental health professionals about my suicidality. A system that encourages one to lie to mental health professionals strikes me as somewhat of a Bad Plan.
I think the advantage you cite is relatively minor. Even in a regime of no involuntary hospitalization for suicidality, there’s nothing preventing mental healthcare professionals from listening to people’s thoughts and then, if they think hospitalization is appropriate, issuing a strong recommendation for hospitalization. So if “Asking outright to be hospitalized is probably pretty hard and frightening, and if you’re conflicted it may be a lot easier to just describe your thoughts to the shrink and leave the final decision in their hands,” people can still do that.
And the current regime has some serious disadvantages (Ozy notes a big one).
Hmm. Immediate objections that leap to mind:
(1) If you’re going to take an online “Do you feel suicidal?” test, isn’t it likely that you’re already feeling a bit inclined towards that state of mind, so it’s self-selecting for ‘people contemplating or thinking vaguely about suicide’, which means that saying “our test predicts suicidal tendencies” is somewhat like doing a survey of heavy drinkers and then saying “our test predicts at least some of these will get smashed within the next six months”?
(2) As I mentioned about the previous racial bias test, what about people with poor reflexes/left-right distinction difficulty? It seems a bit much if you’re classed as a suicide risk based on poor hand-eye co-ordination.
(3) Do these things do any good? True Confession time again: I contemplated suicide when I was 12, but here I still am many years later. I really don’t think ‘caring intervention’ based on getting me to take a test about it would have done me much or any good, and indeed might have made things worse.
1. This was not done on an online sample. The particular study I cited took three groups of about 25 people – healthy controls, known psych patients contemplating suicide, and known past suicide-attempters – and saw whether the test could correctly distinguish who was in each group.
2. The test consists of two reversed parts. So for example, first you would sort the categories (me + suicide) (others + not-suicide) and then you would sort the categories (others + suicide) (me + not-suicide). Your “score” is a function of the difference between these, so anyone with a consistent hand-eye coordination problem will still get a normal score (there’s a quick sketch of this below).
3. Yes, this is a problem. That is one reason why we need tests to sort out people like you, who should be left alone, from people who actually are about to commit suicide, whom it would be good to be able to help.
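For the curious, here’s a toy illustration of that cancellation. It’s deliberately simplified (a raw difference of block means, not the actual IAT scoring formula), with made-up reaction times:

```python
import statistics

def difference_score(block_a_rts, block_b_rts):
    """Difference of mean reaction times between the two reversed blocks.
    Any constant slowdown applied to every trial cancels out."""
    return statistics.mean(block_b_rts) - statistics.mean(block_a_rts)

fast = {"a": [600, 620, 610], "b": [700, 720, 710]}          # hypothetical subject
slow = {k: [rt + 300 for rt in v] for k, v in fast.items()}  # same subject, 300 ms slower on every trial

print(difference_score(fast["a"], fast["b"]))  # 100.0
print(difference_score(slow["a"], slow["b"]))  # 100.0 -- uniform slowness leaves the score unchanged
```

As I understand it, the standard scoring also divides by the spread of the person’s own reaction times, which additionally handles people who are more variable rather than just uniformly slower, but the basic cancellation is the same idea.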
It’s a complicated problem. My fears are more along the lines of this kind of test being turned into a blunt instrument; you know how government policies need simple solutions to present to the public so politicians can be seen to be Doing Something, and balancing public finances means that what looks like a cheap solution that can be done on a large scale will be very tempting.
So instead of giving patients an appointment to see a psychiatrist, they get this “Are you likely to kill yourself?” test maybe administered by their G.P. or maybe even done online before being funnelled further into the health service, and if you are deemed likely to be an immediate risk, you get involuntarily committed for a week, a prescription for tablets to calm you down, maybe an appointment with a community mental health clinic in a month’s time – and never mind that you’re unemployed/in a failing relationship/being bullied and harassed/simply so depressed you just want to stop living.
That last was me at twelve; if anyone had asked me why I wanted to kill myself, the only answer I could have given would have been “I just want to be dead”. I wasn’t particularly unhappy, doing badly at school, anything like that; I just didn’t want to be alive anymore.
So, how do we distinguish between people who need help and people who just want to commit suicide and should have the right to do so, lest the terrorists win?
We work solely with the individual’s consent.
This should include questions of who is forced to pay the bills, by the way. In a civilized society, the answer must be: Only those who freely purchased the medical service.
I’m fairly sure IATs are gameable, because I typically game them accidentally. Basically the first time I make a mistake I figure “Oh, now it’s going to sort me as …” and that seems to be a strong self-fulfilling prophecy. I’ve had both possible extreme biases on several of these tests. There still may be some signal in what mistake I make first, but it’s probably too small to be useful.
OTOH, it’s not like classical tests are any better. I’ve seen test manuals for some insanity screening tests and there’s no magic even in the control questions. Basically these tests count on the subject being dumb, naive, or honest, and I suppose they work on average because most people actually are dumb, naive, or honest.
I thought so too. To quote Scott himself (from the LW article on IATs):
There’s been some evidence that the IAT is pretty robust. Most trivial matters like position of items don’t make much of a difference. People who were asked to convincingly fake an IAT effect couldn’t do it. If the same person takes the test twice, there’s a correlation of about .6 between the two attempts. There’s a correlation of .55 between the Bona Fide Pipeline and the IAT (the IAT wins all competitions between the two; it produces twice as big an effect size). There’s about a .24 correlation between explicit attitude and IAT score, which is significant at the 90% but not the 95% level; removing certain tests where people seem especially likely to lie about their explicit attitude takes it up to 95%.
That does not sound like p<0.01.
Wow, this sort of thing is fascinating, but gets way too close to ‘thought crime’ territory for me to be comfortable with it. I guess it’s similar to the situation you were talking about the other day – being an introverted, likes-BDSM, has-previously-had-mental-health-problems person might (or might not) correlate quite well with risk of rapey-ness[1], but it seems harsh to screen someone off from everyone on a dating website because of predictive factors that are not causal and that they have no control over. Now think about how much worse it is when it boils down to ‘how you sort things into categories’ as the correlating factor and ‘whether you are at liberty’ as the negative outcome.
[1] Yes, I did read the rest of the thread and realise that that was not what the particular app actually did. But an app could do this.
A friend told a story about some bit of scientific apparatus that wasn’t working. I can’t remember the details, but it read as completely cold, yet several components were failing which usually only fail when it gets hot. The grad student working on it said “but it can’t be too hot, look”, but her interpretation was “the thermostat is broken so the heater isn’t turning off”.
The point being, whenever we have any sort of diagnostic, we have a natural tendency to treat the best diagnostic we have as the official answer. Which is often a better approximation than having no data, but it makes it really easy to say “OK, she/he _seems_ healthy, but our psychological test said she wasn’t, so xxxxx” and not know when to let common sense into the interpretation.
So I’d be very interested to see what this test _could_ do. But I agree with your caveats 🙁
Hmm, I am very ambivalent towards this. I assume you read TheFerrett’s recentish essay about suicidal fantasies, and how for a certain subset of people they’re a coping/escape mechanism that they’re almost certainly not going to act on. I’m one of those people, and I’m having the damnedest time trying to convince my psychiatrist that I’m not about to throw myself under the nearest bus the moment I become stressed. I doubt the IAT would classify me as non-suicidal either, but I’m excited that we might be getting closer to something that can differentiate between people who are actually a danger to themselves and those who merely have slightly insane thoughts.
Is the order in which the associations are done randomized?
I took the white/black IAT yesterday, and it gave me white/good, black/bad first, and the inverse second. During the two practice rounds, I kind of prepared a conscious association (white people are good) for the first part. For the second part, I did the same thing, preparing a conscious association (black people are good), but found that I floundered a bunch until about halfway through because I didn’t consciously flush my previous association of white people are good. It gave me the highest possible preference for white people over black people.
I took it again just now, taking a minute to clear my brain before the second part, and got no preference between white and black.
“The Holy Grail would be some test you could give someone, see they’re suicidal, place them in a hospital for a few days until they’re no longer suicidal, give them the test again to prove they’re no longer suicidal, and let them out when you see their test scores have improved.”
This would be horrible! Since when do you advocate imprisoning people for having certain thoughts!?
What the Hell!??
As I said above, you know that involuntary hospitalization for suicidality is something that happens now, right?
Having been a victim of this, I can say yes, I do.
Then, as I said above, I submit that your real objection is to involuntary hospitalization, not to a hypothetical new psychological test.
The simplest cure for suicidality is alien invasion – something which my species is happy to provide for you! … Unfortunately, the imperial fleet has been delayed by an ambush near Deneb. But rest assured that, no more than five of your “ice age” cycles from now, the fleet shall arrive to liberate you from your petty self-involvements.
Either everyone I know is extremely atypical or the choice of pictures is unbelievably stupid. It’s hard for an individual to test whether seeing pictures of suicide makes suicide more likely, but it’s drop-dead obvious that pictures of cut skin make cutting more likely. “Trigger warning: self-injury”; “No discussion of self-harm is allowed in channel”; thousands of anecdotes on self-injury discussion boards.
The actual reason for using a test about self-injury rather than suicide is probably that they had already developed one, which is a much more reasonable explanation. But in the study that shows this test predicts self-injury there is no untested control group, so they don’t look at whether the test causes self-harm.
“The Holy Grail would be some test you could give someone, see they’re suicidal, place them in a hospital for a few days until they’re no longer suicidal, give them the test again to prove they’re no longer suicidal, and let them out when you see their test scores have improved.”
Or, you know, we could just kidnap random people and violate their basic citizen’s rights severely, because surely some good will come of it and some of them will be grateful some years later, maybe, and if not, hey, at least we did something, right??