[Thanks to some people on Rationalist Tumblr, especially prophecyformula, for help and suggestions.]
There’s an old philosophers’ saying – trust those who seek the truth, distrust those who say they’ve found it. The psychiatry version of this goes “Trust those who seek biological underpinnings for mental illness, distrust those who say they’ve found them.”
Niculescu et al (2015) say they’ve found them. Their paper describes a process by which they hunted for biomarkers – in this case changes in gene expression – that predict suicide risk among psychiatric patients. They test various groups of psychiatric patients (including post-mortem tissue from suicide victims) to find some plausible genes. Then they use those genes to predict suicidality in two cohorts of about 100 patients each, including people with depression, schizophrenia, schizoaffective disorder, and bipolar disorder. They arrive at an impressive 92% AUC – that being the area under the ROC curve, which plots sensitivity vs. (1 – specificity), a common measure of the accuracy with which they can distinguish people who will vs. won’t be suicidal in the future.
The science press, showing the skepticism and restraint for which they are famous, jump on board immediately. A New Blood Test Can Predict Whether A Patient Will Have Suicidal Thoughts With More Than 90% Accuracy, says Popular Science. New Blood Test Predicts Future Suicide Attempts, says PBS.
There is a procedure for this sort of thing. The procedure is that the rest of us sit back and quietly wait for James Coyne, author of How To Critique Claims For A Blood Test For Depression, to tell us exactly why it is wrong. But it’s been over a week now and this hasn’t happened and I’m starting to worry he’s asleep on the job. So even though this is somewhat outside my area of expertise, let me discuss a couple of factors that concern me about this study.
The 92% accuracy claim is for the authors’ model, called UP-SUICIDE, which combines 11 biomarkers and two clinical prediction instruments. A clinical prediction instrument is a test which asks questions like “How depressed are you feeling right now?” or “How many times have you attempted suicide before?”. By combining the predictive power of the eleven genes and two instruments, they managed to reach the 92% number advertised in the abstract.
It might occur to you to ask “Wait, a test in which you can just ask people if they’re depressed and hate their life sounds a lot easier than this biomarker thing. Are we sure that they’re not just getting all of their predictive power from there?”
The answer is: no, we’re not sure at all, and as far as I can tell the study goes to great pains to make it hard to tell how much of its predictive power comes from the clinical instruments alone.
Conventional wisdom says that clinical instruments for predicting suicidality can attain AUCs of 0.74 to 0.88. This is most of the way to the 0.92 shown in the current study, but not quite as high. But the current study combines two different clinical prediction instruments. In Combining Scales To Assess Suicide Risk, a Spanish team combines a few different clinical prediction instruments to get an AUC of…0.92.
If you look really closely at Niculescu et al’s big results table, you find that each of the individual prediction instruments they use does almost as well as – and in some cases better than – their UP-SUICIDE model as a whole. For example, when predicting suicidal ideation in all patients, the CFI-S instrument has an AUC of 0.89, compared to the entire model’s 0.92. When predicting suicide-related hospitalizations in depressed patients, the CFI-S has an AUC of 0.78, compared to the entire model’s 0.70. Here the biomarkers are just adding noise!
Are the cases where the entire model outperforms the CFI-S cases where the biomarkers genuinely help? We have no way of knowing. There are two clinical prediction instruments, the CFI-S and the SASS. Combined, they should outperform either one alone. So, for example, on suicidal ideation among all patients, the SASS has an AUC of 0.85, the CFI-S has an AUC of 0.89, and the model as a whole (both instruments combined + 11 biomarkers) has an AUC of 0.92. If we just combined the CFI-S and SASS, and threw out the biomarkers, would we do better or worse than 0.92? I don’t know and they don’t tell us. When all we’re doing is looking at the overall model, the biomarkers may be helping, hurting, or totally irrelevant.
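To make concrete what a fair comparison would look like, here is a minimal sketch in Python with simulated data – the column names, effect sizes, and sklearn pipeline are all my assumptions, not anything from the paper. The idea is simply: fit one model on the two clinical instruments alone, fit another on instruments plus biomarkers, and compare AUCs on held-out data. If the full model doesn’t beat the instruments-only model, the biomarkers are just along for the ride.

```python
# Sketch only: simulated stand-in data, hypothetical column names.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200                                  # roughly the size of the two validation cohorts combined
cfi_s = rng.normal(size=n)               # stand-in for the CFI-S score
sass = rng.normal(size=n)                # stand-in for the SASS score
biomarkers = rng.normal(size=(n, 11))    # stand-ins for the 11 gene-expression markers

# Toy outcome driven entirely by the instruments -- the worry raised above.
p = 1 / (1 + np.exp(-(1.5 * cfi_s + 1.0 * sass)))
y = rng.binomial(1, p)

X_clin = np.column_stack([cfi_s, sass])
X_full = np.column_stack([cfi_s, sass, biomarkers])

for name, X in [("instruments only", X_clin), ("instruments + biomarkers", X_full)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```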
So what if we throw out the clinical prediction instruments and just look at the biomarkers?
The authors use their panel of biomarkers for four different conditions: depression, bipolar, schizophrenia, and schizoaffective. And they have two different outcomes: suicidal ideation as measured by a rating scale, and actual hospitalization for suicidality. That’s a total of 4 x 2 = 8 tests that they’re conducting.
Of these eight tests, the panel of biomarkers taken together comes back insignificant on seven.
And there’s such a thing as “trending towards significance”, but this isn’t it. Here, I’ll give p-values:
Depression/ideation: p = 0.26
Depression/hospitalization: p = 0.48
Schizoaffective/ideation: p = 0.46
Schizoaffective/hospitalization: p = 0.94
Schizophrenia/ideation: p = 0.16
Schizophrenia/hospitalization: p = 0.72
Bipolar/hospitalization: p = 0.24
The only test of the eight that comes out significant is bipolar/ideation, where p = 0.007. This is fine (well, it’s fine if it’s supposed to be post-Bonferroni correction, which I can’t be sure of from the paper). But I notice three things. Number one, there were only 29 people in this group. Number two, some of the most impressive-looking genes for the ideation condition were worthless for the hospitalization condition. CLIP4, which got p = 0.005 for the ideation condition, got p = 0.91 for the hospitalization condition and actually had negative predictive value. And third, some of the genes that best predicted bipolar suicidality in the validation data had no predictive value for bipolar at all in the training data, and were included only because they predicted suicidality in major depressive disorder. Given that the effects jump across diagnoses and fail to carry over into even a slightly different method of assessing suicidality, this looks a lot less like a real finding and a lot more like a statistical blip.
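If, on the other hand, those p-values are reported uncorrected (the paper doesn’t make this clear), then even the bipolar/ideation result is borderline: a simple Bonferroni adjustment across the eight tests is easy to check by hand. A rough sketch, assuming the p-values above are raw:

```python
# Eight tests (4 diagnoses x 2 outcomes); Bonferroni threshold is alpha / 8.
pvals = {
    "depression/ideation": 0.26, "depression/hospitalization": 0.48,
    "schizoaffective/ideation": 0.46, "schizoaffective/hospitalization": 0.94,
    "schizophrenia/ideation": 0.16, "schizophrenia/hospitalization": 0.72,
    "bipolar/ideation": 0.007, "bipolar/hospitalization": 0.24,
}
alpha = 0.05
threshold = alpha / len(pvals)   # 0.00625
for test, p in pvals.items():
    print(f"{test}: p = {p}, survives Bonferroni: {p < threshold}")
# If these are raw p-values, even bipolar/ideation (p = 0.007) narrowly misses 0.00625.
```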
Finally, note that even in bipolar ideation, their one apparent success, the biomarkers only got an AUC of 0.75, lower than either clinical predictive instrument. The only reason their model did better was because it added on the clinical predictive instruments themselves.
So here it looks like seven out of their eight tests failed miserably, one of them succeeded in a very suspicious way, and they covered over this by combining the data with the clinical predictive instruments which always worked very well. Then everyone interpreted this as the sexy and exciting result “biomarkers work!” rather than the boring result “biomarkers fail, but if you use other stuff instead you’ll still be okay.”
The absolute strongest conclusion you can draw from this study is “biomarkers may predict risk of suicidal ideation in bipolar disorder with an AUC of 0.75”. Instead, everyone thinks biomarkers predict suicidality and hospitalization in a set of four different disorders with AUC of 0.92, which is way beyond what the evidence can support.
II.
So much for that. Now let me explain why it wouldn’t matter much even if they were right.
AUC summarizes the tradeoff between two statistics called sensitivity and specificity. It’s a little complicated, but if we assume it means sensitivity and specificity are both 92% we won’t be far off.
Sensitivity is the probability that a randomly chosen positive case in fact tests positive. In this case, it means the probability that, if someone is actually going to be suicidal, the model flags them as high suicide risk.
Specificity is the probability that a randomly chosen negative case in fact tests negative. In this case, it means the probability that, if someone is not going to be suicidal, the model flags them as low suicide risk.
In this study population, about 7.5% of patients are hospitalized for suicidality each year. So suppose you got a million depressed people similar to these. 75,000 would be hospitalized for suicidality that year, and 925,000 wouldn’t.
Now, suppose you gave your million depressed people this test with a 92% sensitivity and specificity.
Of the 925,000 non-suicidal people, 92% – 851,000 – will be correctly evaluated as non-suicidal. 8% – 74,000 – will be mistakenly evaluated as suicidal.
Of the 75,000 suicidal people, 92% – 69,000 – will be correctly evaluated as suicidal. 8% – 6,000 – will be mistakenly evaluated as non-suicidal.
But this means that of the 143,000 people the test says are suicidal, only 69,000 – less than half – actually will be!
So when people say “We have a blood test to diagnose suicidality with 92% accuracy!”, even if it’s true, what they mean is that they have a blood test which, if it comes back positive, there’s still less than 50-50 odds the person involved is suicidal. Okay. Say you’re a psychiatrist. There’s a 48% chance your patient is going to be suicidal in the next year. What are you going to do? Commit her to the hospital? I sure hope not. Ask her some questions, make sure she’s doing okay, watch her kind of closely? You’re a psychiatrist and she’s your depressed patient, you would have been doing that anyway. This blood test is not really actionable.
And then remember that this isn’t the blood test we have. We have some clinical prediction instruments that do this, and we have a blood test which maybe, if you are very trusting, diagnoses suicidality in bipolar disorder with 75% accuracy. At 75% sensitivity and specificity, only twenty percent of the people who test positive will be suicidal. So what?
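If you want to play with these numbers yourself, the whole calculation above fits in a few lines – only the sensitivity/specificity figures and the 7.5% base rate already quoted go in, and the 48% and 20% figures fall out:

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """Probability that someone who tests positive really is positive."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# 92% sensitivity/specificity at a 7.5% yearly base rate -> about 48%
print(positive_predictive_value(0.92, 0.92, 0.075))
# 75% sensitivity/specificity (the bipolar/ideation biomarker result) -> about 20%
print(positive_predictive_value(0.75, 0.75, 0.075))
```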
There will never be a blood test for suicide that works 100%, because suicide isn’t 100% in the blood. I am the most biodeterminist person you know (unless you know JayMan), I am happy to agree with Martin and Tesser that the heritability of learning Latin is 26% and the heritability of jazz is 45% and so on, but suicide is not just biological. Maybe people need some kind of biological predisposition to consider suicide. But whether they go ahead with it or not depends on whether they have a good or bad day, whether their partner breaks up with them, whether a friend hands them a beer and they get really drunk, et cetera. Taking all of this into account, it’s really unlikely that a blood test will ever get sensitive and specific enough to overcome these hurdles.
We should continue research on the biological underpinnings of depression and suicide, both for the sake of knowledge and because it might lead to better treatments. But having “a blood test for suicide” won’t be very useful, even if it works.
Conclusion: It is well (presumably) that (physician) John Arbuthnot did not order a blood panel on (philosopher) David Hume.
Now an imaging study, that might have been different … at least prognostically.
Bah, you call that a letter to Dr. Arbuthnot? This is a letter to Dr. Arbuthnot.
Lol … yes, Scott, certain couplets from your link show us that Alexander Pope — as early as 1735! — did anticipate flaws and foresee improvements in modern psychometric practice and discourse:
> So when people say “We have a blood test to diagnose suicidality with 92% accuracy!”, even if it’s true, what they mean is that they have a blood test which, if it comes back positive, there’s still less than 50-50 odds the person involved is suicidal. Okay. Say you’re a psychiatrist. There’s a 48% chance your patient is going to be suicidal in the next year. What are you going to do? Commit her to the hospital? I sure hope not. Ask her some questions, make sure she’s doing okay, watch her kind of closely? You’re a psychiatrist and she’s your depressed patient, you would have been doing that anyway. This blood test is not really actionable.
Wait, so “she’s your depressed patient”, and you don’t have a better prior probability of suicidality than that of the general population?
The study was done on high-risk psychiatric patients of exactly the sort psychiatrists would probably be evaluating, so all these numbers should be considered from that perspective. If we did this test on the general population, the predictive value would plummet.
The link you give under “conventional wisdom says” which found AUCs around 0.75 was a random sample of Korean-Koreans who served in the Vietnam War. This is a bit of a drop, but not, like, a Felix-Baumgartner-fighting-a-balrog-on-Black-Tuesday drop.
Alternate justification for why what Scott did was okay: most of that 92% accuracy comes from asking questions like “have you been thinking about suicide lately?”, “do you have feelings of hopelessness?”, and “death: ’tis a consummation devoutly to be wished???”, questions which, if your answer is yes, might spur you to go see a psychiatrist. This means that if you assign a higher probability to a person being suicidal given that they have walked into a psychiatrist’s office and then update on positive test results, you’re going to mostly be double-counting the same evidence.
I’m talking about the study we’re looking at now. Remember, it found AUCs around 0.8 or 0.9 just from its clinical prediction instruments.
Actually, the Korean Vietnam War vets study concerned completed suicides over an 8-year period, while the Niculescu et al. study deals with suicidal ideation or hospitalization over a 1-year period. The two are not at all comparable, and it was a mistake for me to think so (and probably a mistake for you to suggest so in the original post). Because the incidence of completed suicide is so low, the precision in the Korean study (i.e. the number of suicides flagged by the instrument divided by the total number of people flagged by the instrument) was just 1%.
In most respects this is the more interesting outcome to look at – I take it that what we ultimately want to do is identify all and only those people who will go on to complete (or maybe attempt) suicide. It would be absurd to hospitalize someone on the grounds that you would almost certainly have committed them next January anyway, so predicting future hospitalization is much less helpful.
After reflection, you are right that the predictive value of the test would plummet in the general population, but this would mostly be because of the lower base rate, not because of a lower AUC.
To be more precise here, the claim is that getting a positive result on the battery of tests (t) will mostly screen off being in the care of a psychiatrist (p) as a source of information on suicidality (s). That is, P(s|t&p) ≈ P(s|t). rossry’s calling this a prior helps to confuse matters here – really you are conditionalizing on the conjunction.
Oh, I completely missed that. My mistake.
I’m sort of curious what sort of effect it would have to tell people who have no inkling of suicidal tendencies that, with *medical authority*, they are predisposed to kill themselves. Would it cause people to spiral into depression? Would it cause people to live in willful defiance of killing themselves? Sort of like fortune telling, but just from a really, really bad fortune teller.
See https://slatestarcodex.com/2014/01/19/genetic-testing-and-self-fulfilling-prophecies/
But… Why wouldn’t there be a heritable suicide tendency? What mysterious force could be eliminating such genes?
You joke, but the heritability of mental illness is pretty high. Check out MaTCH at http://match.ctglab.nl/#/specific/plot1, the heritability of Mental and Behavioral Disorders (under Main Chapter) is similar to the average over all traits (at about 0.4-0.5). Sorry I can’t direct link to the relevant plots, this site is too javascripty, there’s also good ol’ wikipedia: https://en.wikipedia.org/wiki/Psychiatric_genetics#Heritability_and_Genetics
Suicidality can be a by-product of fitness-increasing suffering, if the suffering is generally adaptive enough to outweigh the probability of fitness-decreasing suicide attempts.
Since evolution has no benevolence function, we would expect such suffering to be maximized whenever its net effect is even slightly adaptive.
Note that they’re testing gene expression, not genes themselves.
Remember that heritability is about heritable variation.
At the simplest level, consider cancer. From my cell biology and biochemistry days, I remember someone saying there are two mysteries about cancer – why it doesn’t happen all the time and how come it happens at all. In order for cancer to happen, various key components and various defences need to be damaged. A gene that leaves some part of the cancer-prevention system pre-damaged… probably won’t kill you if the rest of your genome is good. So the selection pressure against such damaged genes may not be incredibly high, allowing them to accumulate at levels big enough to be noticed. Perhaps suicidality is similar – perhaps we come with various defences against it, if you’re missing one or two of them you’re at higher risk but you still need damage from the environment to set it off.
There’s also the possibility that suicidality might in part be a downside of some tradeoff (or lots of tradeoffs). Like how in sickle cell, there are benefits and downsides to the “trait” version, but a strong downside to the full blown “anaemia” version. Of course the value and harm from all of this depends on the environment, including how much malaria there is locally, and how good the medicine is (for both the malaria and the anaemia).
Indeed. It might be something boring, like suicide-predisposition being tied to some beneficial mutation of some digestive pathway or something. But if we’re looking to concoct an EvoPsych just-so story, it wouldn’t be hard. E.g., suicide-predisposition might be tied to being a sensitive, artsy type who ends up with lots of mating opportunities. (Not actually endorsing this theory!!)
http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/
Hey EY, how often do you get linked to your own essays as an explanation to a question you asked rhetorically?
I kind of hope it happens all the time.
Perhaps by a similar principle to that which caused religion to provide an evolutionary advantage in the past (and in many places still does)?
Because heritable fatal conditions don’t exist. /s
If your parents didn’t have any children, you probably won’t have any children either.
Fatal condition != infertility
Also, not all alleles are dominant.
There are statistics which better capture this information; they’re called positive (negative) predictive value. The positive (negative) predictive value is the fraction of those who test positive (negative) who actually *are* positive (negative). It’s harder to calculate than sensitivity and specificity, because one has to know the background prevalence. In Scott’s example the PPV would be 48%. The paper quotes a PPV of between 37% and 94% (see Figure 6c) for their test, depending on background risk.
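To see how strongly the PPV swings with background prevalence, here is a rough sketch at 92% sensitivity and specificity – the prevalence values below are illustrative, not the ones behind the paper’s Figure 6c:

```python
def ppv(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

for prevalence in [0.011, 0.075, 0.20, 0.50]:   # illustrative base rates only
    print(f"prevalence {prevalence:.1%}: PPV = {ppv(0.92, 0.92, prevalence):.0%}")
# At a 1.1% general-population base rate the PPV is only ~11%;
# it only climbs past 90% when the base rate itself is very high.
```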
I don’t know why you’re so quick to say there will “never” be a blood test for suicide; obviously this one is not ready for clinical use but never is a long time. Care to put a wager on that? Benchmark could be a blood test which gets certified for clinical use within the next 50 years.
Oddly enough, in computational linguistics, machine learning and related fields, PPV is called “precision” and is paired with “recall” – sensitivity. In computational linguistics it’s pretty rare to hear people talking about specificity at all; people just don’t bother with it.
The claim is that “There will never be a blood test for suicide that works 100%” and I’d happily bet on that one (well, I’m not sure about the infrastructure for maintaining 50-year-long bets, and I’m not sure I’m keen on bets where it’s unlikely I’ll be alive to collect the winnings). A clinically useful blood test? I think I differ from Scott here in that if the existing clinical instruments are useful for something, and adding the biomarkers really does give them a boost, then a blood test may have uses. The gripe is with the study at hand; as Scott says, the study doesn’t give a fair comparison. I’d like to have seen a combined panel of all the clinical instruments without the biomarkers, to see how much, if anything, the biomarkers were adding. One would hope this would be pretty standard practice; it’s something I do in the computational linguistics/machine learning field.
Specificity is pretty important to some machine learning tasks, like positive identification in targeting ISR systems.
I’m not sure what they’re doing with that categorization method based on SDs, but it seems to depend a lot upon small sample size of actually suicidal people.
Compare to the idea of a blood test for murder. It wouldn’t surprise me if one day there was a blood test for being a violent sort of person, but will there ever be a blood test where, if a guy’s accused of murder, you can just take a drop of blood, put it in a test tube, and say “Yup, he did it”? Or even one where if police are suspicious of a guy, they can draw some blood, do the test, and say “Lock him up, he’s going to kill within the week”?
This is kind of what I would need for a suicide blood test to be useful. If it just says “suicidal tendencies”, well, millions of people have suicidal tendencies all the time. Deiseach says later down this thread that she’s practically always a little suicidal. A lot of my patients say the same thing. I’m always going to be keeping track of these people, but what I want is a test where I can say “Now it’s time for you to go to the hospital,” which, in terms of burden of proof, I want to be almost as good as the murder test that a policeman could use to send someone to jail.
Some people are probably more violent than others, but whether a person commits murder or not also depends on things like whether they have a gun, whether somebody offends them enough, whether they’re in the wrong place at the wrong time, whether they’re drunk, whether their victim ducks out of the way in time, etc. The idea that a week before each murder occurs, there’s a spike in a certain blood chemical – doesn’t really seem plausible to me. The same thing is true of suicide. You’ll probably get vague tendencies, but nothing you can use to get actionable information.
It seems to me that this is less a problem with the tests and more a problem with psychiatric treatment.
In other fields of medicine, say, oncology, knowing that a patient has a 48% chance of having a tumor rather than a 7% chance is usually actionable knowledge.
If you had a drug that reduced suicidality but it had enough side effects or was expensive enough that you didn’t want to give it to people with 7% suicide probability but you would give it to people with 48% suicide probability, then such test would produce actionable knowledge.
But if psychiatric treatment for suicidality is essentially a week long forced vacation in the adult kindergarten, as you imply in your other posts, then tests aren’t particularly useful.
Is it really? I am having trouble finding a link, but I could have sworn I recently read that a study showed no change in the cancer rate after preventive mastectomies.
I’m not an expert, but the Internet says that preventive mastectomies reduce breast cancer risk by 90% in high-risk women.
I think I read about that same study, but it was only true for a very specific, atypical kind of breast cancer (ductal carcinoma in situ). Preventative mastectomies still work for preventing most types of breast cancer.
@Anonymous
Touche. I probably skimmed and didn’t notice it was only a specific type. I don’t follow the details about failure modes of equipment I don’t own very closely.
Well, the treatment being ineffective and having horrible side effects is a problem with the treatment that makes even the best test have almost no net value.
You could imagine a test with a high PPV but low sensitivity (e.g. Wikipedia’s ClueBot: https://en.wikipedia.org/wiki/User:ClueBot_NG – it catches 40% of vandalism with a false positive rate of 0.1%). Environmental factors definitely matter for murder as well as suicide; I agree that there will never be a perfect test.
However, it doesn’t have to be perfect to be useful, just better than nothing. Or better than current standards. I’m not sure it’s possible to measure the latter realistically (“Yeah, I know you think this person should be on suicide watch, but let’s not do that and just see if they attempt suicide, it’s for science”) – maybe one could take real case files and ask psychiatrists to make predictions in a study similar to this one. Also, it sure would be nice if they had tested a clinical-only predictor; maybe somebody else will do that with this dataset. (It looks like all the necessary clinical data is on their website at http://www.neurophenomics.info/current_research.php, and at least some aggregates of gene expression, so a sufficiently motivated individual could make a predictor based on clinical data.)
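To put rough numbers on the ClueBot example above – the 40% recall and 0.1% false positive rate come from the comment, but the 5% vandalism base rate below is invented purely for illustration:

```python
def precision(recall, false_positive_rate, base_rate):
    tp = recall * base_rate
    fp = false_positive_rate * (1 - base_rate)
    return tp / (tp + fp)

# ~0.95: most of what it flags really is vandalism, even though it misses most vandalism.
print(precision(0.40, 0.001, 0.05))
```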
They may perhaps come up with something like “After doing a ton of tests on people who have attempted/succeeded at suicide, we have an idea that people with these markers, if they’re depressed/stressed/a whole list of environmental and mental conditions, are more likely to attempt suicide” and that could be used something like “You’re at risk of breast cancer/heart disease so you need to eat healthily and exercise and take these medications”.
I’d hate to see it going the same way as “I’ve been told I’ve got the gene for breast cancer so I’m having a preventive mastectomy”, as some women apparently do when they get that information, as they feel the risk is too high. It would be even worse if it were treated as a determinative test: “Well, you’re definitely going to kill yourself!”
I could also see life insurance companies demanding people undergo such tests before taking out policies.
Where did they imply that?
Their abstract describes the performance of the best biomarker for each individual purpose, and then says,
This clearly seems to imply that adding biomarkers to clinical instruments increased the AUC from 89% to 92%. I do agree that SASS+CFI-S should have been included in the testing, though.
Also, their apps didn’t literally ask people if they were suicidal (although they did ask related questions, of course).
I don’t think the biomarkers added anything. They performed pretty much at chance levels. And yet every news article and every person whom I’ve talked to about this study came away with the impression that there was now a “blood test for suicide”.
If they had done the same study, except with Tarot cards instead of biomarkers, would they be allowed to get away with calling it a “joint clinical-Tarot diagnostic instrument” and claim in the discussion to have “validated in unbiased fashion a Tarot-based suicidality pathway” and so on?
It is possible that many people, including professional science journalists, have very poor reading comprehension.
But you’re saying that their reading comprehension is fine, since the article itself implies that biomarkers alone gave a 92% result. I don’t see where it does.
And yeah, if journals considered Tarot-less papers too boring to publish, and negative Tarot-based results had to be presented as promising to see the light of day at all, I would happily read about “joint clinical-Tarot diagnostic instruments”, as long as the actual results were included.
Well, I used the word “implies” instead of “said” because they didn’t come out and say it. But once many different people are getting the same false reading from something, and I can see what in the text is giving that impression, I think it’s fair to say “implies”.
But I’ll change my wording.
> If they had done the same study, except with Tarot cards instead of biomarkers
You could run a test where you ask people to pick, say, six cards and see what kind of pattern emerges, and see if that correlates: do people who pick a preponderance of, say, Swords then go on to attempt suicide/have previously expressed suicidality?
“I see you selected Death, The Devil, The Tower, the Five of Cups, the Five of Pentacles and the Nine of Swords. Is there anything you’d like to talk about?” 🙂
Not many people are descended from a long line of extremely suicidal human ancestors. Suicide isn’t the kind of thing that would be highly encouraged by natural selection.
Suicide may have adaptational value if the suicidal person can’t have children and is unable to support the relatives that can or do have children. Feeling like a burden, etc.
Vamair, what you describe is selection against suicidality.
No, he’s describing https://en.wikipedia.org/wiki/Kin_selection for suicidality.
Exactly. If a person is unable to have children for some reason, and if the relatives have to spend even a tiny amount of resources to support the person, then from the evolutionary standpoint (note for suicidal people: f*ck the evolutionary standpoint) it’s more adaptive to die.
I don’t see how that would work. The reduced fertility due to suicidality would have to be offset by much greater fertility in the suicide’s relatives.
Also, suicides generally happen after childhood, that is, the relatives get no benefit from having invested their resources in the child. Small children never commit suicide, yet most parental inputs go to them, especially in pre-modern societies.
@JK: “If a person is unable to have children for some reason”, then “the reduced fertility due to suicidality” is zero.
Yes, if suicidality reduces fertility to zero, and the person would have had, say, two children without the “suicidality gene”, then his or her relatives must get a corresponding boost in their fertility. For example, if a suicidal person’s biological family consists of two siblings, each of them must have at least two extra children to make up for the “lost” fertility. I don’t think this is plausible.
Also, it seems that the genetic mechanism would have to be interactive, or have relatively low penetrance, or something like that, for this to work even in principle.
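To spell out the inclusive-fitness arithmetic behind the sibling example above – a rough sketch using only relatedness coefficients for full siblings and ignoring every other fitness consideration:

```python
# Relatedness-weighted "gene copies" lost vs. gained.
r_offspring = 0.5        # your own child shares half your genes
r_niece_nephew = 0.25    # a full sibling's child shares a quarter

children_forgone = 2                    # the two children the person would otherwise have had
loss = children_forgone * r_offspring   # = 1.0

siblings = 2
extra_children_per_sibling = 2
gain = siblings * extra_children_per_sibling * r_niece_nephew   # = 1.0

print(loss, gain)   # at two extra nieces/nephews per sibling the gain only just breaks even
```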
I see lots of value provided by postmenopausal women; I don’t think that one’s kin value drops to zero if you don’t contribute to childbirth anymore.
“it seems that the genetic mechanism would have to be interactive”
That’s the idea. You could maximize inclusive fitness by somehow inserting an if-then clause into a human’s brain along the lines of “if you are a net drain on your family’s resources and your mating prospects are dismal, then kill yourself.” It’s pretty hard to believe that selection could be so deft at wiring conditionals into the minds of adult humans, though, and the cost of false positives is unacceptably high.
“I see lots of value provided by postmenopausal women; I don’t think that one’s kin value drops to zero if you don’t contribute to childbirth anymore.”
I think this is the important point. To have a positive inclusive fitness value basically all you have to do is feed yourself and maybe babysit your sister’s kids once in a while. This is a very low bar for an able-bodied adult to meet, so this explanation looks pretty implausible. At most, it might account for suicide among the old and infirm.
He’s describing suicidality as a selection mechanism for promoting early fertility.
Then we shouldn’t see things like the recent trend (http://i.imgur.com/NQY91pF.png) where suicide among the elderly is on the downslope (likely due to increased quality of life as a result of medical advances) whereas the rate among the young has dramatically increased, in some cases almost doubling. And, of course, there’s the problem of the wrong sex being the ones prone to suicide; if the “can’t reproduce” hypothetical is correct, then women should be the lion’s share of suicides, since men can have children at any age, generally speaking.
Like most “just so” evo bio ideas, the ideas being bandied about here only work if you refuse to think through the implications of your theories, what your ideas require to be true in order to work – just off the top of my head, there’s the problem that with the exception of menopause, the human body generally isn’t aware of an inability to have children (that’s why fertility doctors are a thing); that humans didn’t always know that sex was related to childbirth (there are still tribes that are ignorant of this); that human reproductive timespans are too long to encourage such short-sighted behaviors (there’s always next time, just because you couldn’t successfully mate now isn’t a reason to give up when your reproductive age is measured in decades); that tribal cooperation, where one practically has to be an invalid to not contribute to the success of the tribe, isn’t compatible with such behavior. There’s just so many problems with this idea, I’ve barely even scratched the surface.
No, suicide certainly wasn’t selected for. If there is genuinely a gene that tells you to commit suicide, it must be a de novo mutation, and should only account for a fraction of the suicides we see. No, I suspect that suicide is a complicated beast that has more to do with social interactions, and I’m convinced that it’s no coincidence that suicide has increased amongst populations where social atomization has skyrocketed.
1. You’re underestimating the degree to which men’s fertility declines with age.
2. More importantly, you’re failing to take into account the fact that variance in reproductive success is considerably greater in men than in women across almost all cultures. If ex hypothesi suicide is connected to the absence of mating prospects, we should see it chiefly in young men.
@nyccine
It looks to me a lot more like the rate among teen-early twenties in the 50’s and 60’s is an outlier. Unless the teenage world really did change radically in the 70’s somehow that I’m not aware of.
graphs are better than tables. alt.
@Who, why are you singling out the teens? There’s a lot of change, but it’s pretty smooth across age. At least, the first few decades are simple compression of the range.
Or already had children, stuck around long enough for those children to become self-sufficient, and is now just sucking resources away from current breeders.
Octopuses do exactly this. After mating once, male octopuses (basically) starve. Female octopuses die shortly after their eggs hatch.
If we’re throwing around evolutionary just-so stories, I would guess that depression allows people to honestly express extreme guilt.
If I’ve done something horrible and I don’t want to get killed by the tribe, it may actually be safer to be sad to the point of thinking about killing myself rather than have everyone in the tribe thinking they should kill me.
Wasn’t throwing a “just so story” here. I’ve just described the conditions when suicide with a conditional switch will be selected for to show that it’s possible in principle. Without an implication that it’s how it actually works. There is a possibility, but I wouldn’t bet too much on that.
For your story I’d be more inclined to believe something like that about guilt and not depression.
Maybe not people, but what about praying mantises and black widows?
What about them? Mantids aren’t suicidal, for one. In fact, male mantids will attempt to avoid sexual cannibalism by the female, if at all possible, but when your lifespan is less than a year, you have to weigh the risk of getting eaten after or during a successful mating, versus the risk of not having another mating opportunity. It appears likely that black widow behavior is the same; observations of males willingly being eaten by the female (as opposed to just not being able to escape) are primarily artifacts of the lab, and it’s not completely accepted that this is natural behavior.
Of course, even conflating “sexual cannibalism” with “male suicide,” it wouldn’t be a fit here, because in these cases the male has already successfully mated, whereas in the hypothetical being discussed here, you commit suicide because you haven’t mated.
I read that depression’s purpose in our chimp days was to encourage the individual to lie low after they had been kicked out of the group (e.g. their territory has been conquered by another male.)
The strategy of “he who runs away lives to fight another day” could certainly be advantageous if you only need to mate once to reproduce. But just as Asperger’s is an over-expression of a useful skill, suicide might be an over-expression of depression’s original “selfish gene” purpose.
It’s a gene expression test, not a genetic test!
…oh, so it is.
The trouble, I suppose, is that the second-to-last paragraph’s “suicide is not just biological” argument seems slightly ill-fitted to the problem.
Suppose I have a lot of problems in my life, and those problems are predisposing me towards suicide. Let’s say for concreteness there’s some stress-related stuff. You might expect some differences in gene expression as a result of the stress etc.; this might be an epiphenomenon that doesn’t cause suicide, that doesn’t add directly to our understanding of how it happens, but nevertheless has predictive value.
Or maybe the gene expression is on the causal path, the gene products modulate brains to create a predisposition to suicide. But still there’s the trigger-level stuff, which might happen on the timescale of nerves firing or hormones being released rather than on the timescale of genes being expressed. Still biological, in a sense, but nothing that a (well-timed) blood test looking for gene expression would directly observe.
So I’ve got something arguably “not biological” that the blood test would pick up, and something arguably “biological” that it wouldn’t. OK, I’m probably using two slightly different definitions of “biological” there, but that just goes to show that the whole “biological” concept is a bit murky and when you go and exemplify “biological” with the heritability of jazz, then people will anchor on that sort of thing.
I agree that this could happen.
But, for example, I have a lot of patients who say “I felt really suicidal last week, but luckily I was a good Christian, so I didn’t act on it.”
Is it possible that stress leaves traces in the blood? Absolutely. But whether that stress translates into suicide attempts is going to be based on so many things like “are they a good Christian” that your predictive value is going to be shot.
Well, religiosity is heritable, too. I’m happy to take your word for it that no “blood test for suicide risk” is ever likely, I just think that a lot of the “not biological” stuff is actually somewhat biological. Let’s say that poverty made you more likely to be suicidal (does it? I have no idea). Well, economic success looks quite heritable. Not that any of this stuff is necessarily ever going to be practically testable. But gene expression seems like it’s always going to be lurking in there somewhere.
Well, if they want a blood sample for testing, I’m rolling up my sleeve right now because I’d be fascinated to see how wrong they are.
Suicidal ideation? Yep, since I was 12! Not constant, though; comes and goes at intervals.
Any suicide attempts? No.
Ever been hospitalised? Nope.
So do you consider yourself actively suicidal? Fecked if I know. Is passively suicidal a thing? Because I’m engaging in some unhealthy behaviours that might hasten my demise by a few years but is that the same thing?
Re: heritability of suicidal behaviours/ideation: Nobody in my immediate family, though a few years back a cousin of mine (in the paternal line) did commit suicide. Mental problems/neurodivergence out the wazoo all along my paternal family line, though, for previous, current and upcoming generations (i.e. children of cousins), by which I mean everything from “seeing a psychiatrist and on medication for years” to “diagnosed on autism spectrum” to the good old-fashioned general “You’re weird“.
Am I depressed? I don’t know. I thought I was, but I may not be; first counselling appointment seems to think it’s more mood swings than depression. But I’m not bipolar (because no corresponding mania/highs). Although they did offer to put me on anti-depressants. Which I refused. Yes, I’m more confused than ever.
Now, if they can figure out what the hell is going on with me from any biomarkers, I’ll be impressed 🙂
My thoughts exactly. Same reason I took the Myers-Briggs test and occasionally read my horoscope (usually in The Onion, but still). I wanna take it just to see how bad it is.
Yet the law requires that you involuntarily commit people for being “suicidal” in the sense of talking about it, possibly making them lose their job or causing other problems, and I highly doubt that the chance of such a person actually being suicidal is greater than 48%. The blood test doesn’t give more false positives than the method you already use.
There’s a lot of truth in this, but I feel like there’s an alleviating factor in that now we mostly wait for people to express their suicidality. Sure, we can’t predict which people who say they’re suicidal are going to go through with it, but you can at least pretty definitively avoid commitment for suicide by not being suicidal at all.
I feel like “You told your psychiatrist you were suicidal, and she didn’t believe you when you said you wouldn’t act on it” is less unfair than “You took a blood test and you were in the unlucky 8%”
However, there’s some chance of a miscommunication, and people can only avoid *that* by not ever going to a psychiatrist at all. Furthermore, a lot of people don’t know about this and therefore have no chance to avoid it.
These are the kinds of questions that doctors, counsellors, psychiatrists have to ask when someone presents with suicidal ideation/attempts. But I had to swear up, down and sideways that I really, really wasn’t planning to top myself when I went for my counselling assessment and I suppose that’s how I dodged “It might be a good idea if you voluntarily commit yourself for a while”.
I do understand that, quite apart from the concern for the patient, the medical professional has to cover themselves from “And they told you they were considering suicide, and you let them go?” when someone makes an attempt at/succeeds in committing suicide, and that if it looks at all possible that this is the next step, better safe than sorry and have them committed.
I’m pretty sure that the only people who know how to ask for the help they need without expressing suicidal ideation or intent don’t need help.
So you think that people who are good at lying and don’t want to get committed don’t also need help?
> …you can at least pretty definitively avoid commitment for suicide by not being suicidal at all.
Sure. You could also survive the Holocaust by not being a Jew at all, and under Islamism, you can avoid being sentenced to death by not being an atheist at all, and in Russia you can avoid being beaten up by not being gay at all.
Gee, what are these people always complaining about?
By the way, I want a pro-lifer to pay my rent for the next 40 years, let’s see how that goes.
Hmm…this post seems necessarily unkind. I reckon it passes useful/true, but I just wanted to point that out.
As someone who’s been involuntarily committed before and can frankly and honestly say that the mental health system is the only thing I’m truly terrified of, I broadly support the sentiment you seem to be trying to get across here, but even so. I think it’s pretty clear that most mental health professionals _are_ trying to help their patients, something that isn’t the case for those other groups.
BTW, Scott, I’ve been meaning to ask: would the use of a Rorschach test and other such bullshit as part of a diagnostic battery used to justify keeping a juvenile committed in inpatient care qualify as malpractice? (This really happened to me. I was horrified, and when I brought it up was told my concerns were further evidence of my paranoia)
Edit: Admittedly, I did express these concerns in a way that pattern-matches pretty well to “Raving lunatic is convinced the establishment is persecuting him/her for evil motives (several thousand dollars daily from insurance, in this case).” But I feel like coming from someone who has absolutely no control over their essential imprisonment and is being kept there at least partially due to tests that he/she KNOWS are METHODOLOGICALLY UNSOUND at best, clever bullshit at worst, ought to grant a little leeway on the delivery.
Considering large portions of psychologists of all stripes apply the test despite it having exactly zero verifiable diagnostic use, I think it says a lot more about how broken psychology is in general than about individual practitioners.
@WhoWouldn’t
Yeah, I discovered TheLastPsychiatrist’s work today. Needless to say it didn’t alleviate my terror of mental healthcare :\
Obligatory reminder that they put many other people in death camps as well, and thus one could not survive just by not being a Jew.
The point was that it is a weak justification for violating people’s rights to say that they are in the reference class of people whose rights you want to violate.
Also my point about the cost of living is to be taken seriously. I have yet to see even one of the people who justify hard measures like forced commitment in the name of suicide prevention put their own money on the table to cover the costs of living of other people. But logically, if you justify coercion to make someone stay alive, all their future costs of living are the costs of your preferences and therefore should be carried by you.
“But logically, if you justify coercion to make someone stay alive, all their future costs of living are the costs of your preferences and therefore should be carried by you.”
This really only applies (to my mind) in cases of prolonged, dedicated suicidality with little to no relief. If the suicidality was or is a transient/abnormal condition (even if chronic), then the person justifying the use of coercion is doing so to protect your real agency and your real preferences from the derangement of a hostile/interloping emotional state/mental illness. This becomes doubly true when you reflect that many people in the grips of suicidality (myself included) are incapable of recognizing the sensation’s transience, and subjectively experience many years worth of memory as having occurred in a suicidal state when they may in reality have been non-suicidal (if not exactly life-positive in my case) for the majority of that time.
Edit: Just to be clear, I am pro-choice on the matter of suicide, but I reconcile this with the above via promoting a waiting period of a year or two before someone ought to be permitted to exit.
Edit 2: The first edit is in relation to cases only where the suicidality is not comorbid with severe or terminal illness. Those poor folk have every right to opt out whenever they damn well choose.
Saal, your distinctions seem reasonable to me, but I still don’t see a justification for locking nonviolent, noncriminal people up without consent. After all, there could be legally binding pre-commitment solutions like banning yourself from suicide for 3 years or so, so you could consent beforehand if you fear you are unstable. At the very least, it should be opt-out so that you can avoid the benevolent coercion within such a time frame if your will is stable during that time, even if it is applied by default to people who have not expressed their will.
I’ve said for years now that I don’t want such coercion, and in fact that I want to be able to buy pentobarbital for personal use if I choose to do so. Not once in any of these conversations have I asked for paternalism in this matter, in the last 5 years. There is no logical sense in which this shouldn’t count as stable agency.
There’s a broadly consequentialist justification available here: IF most people who commit suicide have basically good lives but are making a mistake because of cognitive distortions, IF we can’t tell the difference between them and people with basically bad lives who rationally decide to commit suicide, and IF indiscriminately institutionalizing all of the suicidal people will save the lives of the first group at the expense of temporarily violating the rights of the second group, a policy of committing everyone who threatens to harm themselves will probably promote the greater good.
Think of it this way: you’re being asked to sacrifice your rights in order to save the lives of a few mentally ill people, because we can’t reliably tell the difference between them and you. You come out looking quite the altruist!
Earthly Knight, the pre-commitment options and waiting periods mentioned above should solve this issue to a sufficient degree.
But even if those options weren’t possible, there still is the question of costs of living. Do you seriously expect people to be productively engaged in the labor market for decades when they would rather be dead than doing so?
Earthly: That would also justify killing someone to take their organs to save five people.
Some utilitarians accept that too, of course.
It would also justify lots of other things depending on facts. For instance, it could justify making Islam illegal if we found that that reduced terrorism and thus gained more utils than the utils lost to the people prohibited from being Muslims. Or even if it didn’t justify making Islam illegal, depending on facts, it might justify barring Muslims from particular jobs or other actions that would be considered persecution by most people.
Waiting periods don’t work so well for episodic and remitting illnesses. Major depression fits both of those descriptions, unfortunately. Dolorous Dale signs up for the suicide registry in 2012, goes into remission for three years and forgets about it, gets depressed again in 2015, and, hey, would you look at that, he’s made it through the waiting period. Myopic Manny signs up for the suicide registry at 18 and kills himself at 20. But if he had held on just a year longer, he would have found himself with a great job and a beautiful wife, shaking his head and smiling about his tortured adolescence.
I also worry over people who would benefit from therapy or medication but are resistant to trying it, for whatever reason. I read once about a woman with delusional parasitosis who killed herself because she was tired of having bugs under her skin all of the time. Wouldn’t it have been a thousand times better for her if she had been compelled into treatment?
Jiro, accepting one consequentialist argument does not commit us to the looniest form of pure consequentialism. I’m a lot more willing to countenance someone being kidnapped and pumped full of psychotropic drugs for 72 hours to save five lives than I am someone being butchered and their organs harvested to save five lives. The magnitude of the rights violation is much greater.
> Waiting periods don’t work so well for episodic and remitting illnesses. Major depression fits both of those descriptions, unfortunately. [Examples follow]
Sure. Your examples are contrived, but if you looked hard enough, I’m sure you could find some anecdotal evidence of this sort of thing happening. But the point is, are these examples so statistically relevant that they justify the full cost implied by your paternalism?
It seems pretty obvious that the answer is no. Those are some very expensive QALYs you’re trying to buy – even if you count the religious people’s preference satisfaction as a quality of life improvement for them.
I agree that episodic cases like Dale’s will be rare. But Manny’s case does not sound so implausible to me, and if I had to guess, I would say that there are probably more people who were suicidal for a while when they were younger but got over it than there are people of sound mind who decide to kill themselves but wind up in an institution before they can carry it out.
Let’s get it right out in the open:
Suppose someone has a transitory thought in which they decide that suicide is better for them; if the pharmacy refuses to sell them three bottles of sleep aid, they will give up on suicide and go on to live a full and happy life.
It remains a violation of basic human dignity to tell that person that their preferences are invalid and a violation of basic human rights to imprison them and restrict their freedom. (I moved the goalposts a little bit there; the scenario I described didn’t require commitment.)
Are we really willing to say that saving a life is worth X violations of basic human rights equivalent to false imprisonment? Once we go that far, electroshock therapy becomes an option. There’s also a tangent that discusses an obligation to torture in order to discover terrorists, conditional on the belief that torture provides accurate information.
You can pretty much avoid commitment for suicide by choosing not to share that you are suicidal with anyone qualified to help you.
Frankly, I’d much prefer to be able to say “A blood test indicated a condition which required inpatient treatment.” when describing a period of involuntary commitment. Hell, if the patient is covered under FMLA, you could even write a note which made firing them for missing work illegal.
I thought ideation wasn’t enough for involuntary commitment. Don’t you have to have made actual plans, or otherwise indicate that the ideation is serious?
You say a test with 92% sensitivity and specificity isn’t actionable, given a prior probability of 7.5% and a high threshold for involuntary commitment. But isn’t it actionable in the other direction? Suppose you’re considering committing a depressed person, but the test comes back negative. This means there’s only a 0.7% chance of suicidality.
I recognize that you would only commit someone if there were warning signs in addition to depression, thereby raising your prior probability. You said you wouldn’t commit at 50-50. What if you have a patient for whom your prior probability of suicidality is 90%? A negative test result here would reduce your probability of suicidality to only 44%, and you shouldn’t commit them.
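A quick Bayes’-rule check of both figures (the 0.7% above and the 44% here), assuming 92% sensitivity and specificity as in the post and treating the test as independent of whatever produced the prior:

```python
def prob_suicidal_given_negative(prior, sensitivity=0.92, specificity=0.92):
    """Bayes' rule for a negative test result."""
    false_neg = (1 - sensitivity) * prior    # suicidal, but the test says no
    true_neg = specificity * (1 - prior)     # non-suicidal, and the test says no
    return false_neg / (false_neg + true_neg)

print(prob_suicidal_given_negative(0.075))   # ~0.007, the 0.7% figure above
print(prob_suicidal_given_negative(0.90))    # ~0.44
```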
That would be great if the evidence that you used to get to 90% likely was independent of the evidence used to update on it. But getting to 90% would require asking the patient if they had suicidal thoughts, which is not independent of the tests which basically ask them if they have suicidal thoughts.
Sure. But it seems to leave room for biomarkers producing useful information here, even if, as Scott says, “suicide isn’t 100% in the blood.”
An accurate but imperfect blood test might never be enough to involuntarily commit someone, but if a purely biological test had numbers in the same ballpark as the numbers mentioned in this post, it could be enough to prevent someone from being involuntarily committed.
I worked out the VoI for a hypothetical example in which you choose between commitment and non-commitment and also have a test with the properties specified in OP. Unfortunately, it turned out to be too long to be an SSC comment, so I’ve posted it at http://www.gwern.net/Statistical%20notes#value-of-information-clinical-prediction-instruments-for-suicide Under my set of assumptions, far from being worthless or non-actionable, you could gain $8k per patient. More precise tests would of course do even better, and a more realistic example with more choices than just commitment would also show the value to be higher.
Can I flag up a typo?
seems to reduplicate some words.
I disagree that the 48% chance is not substantial information.
Compared to the base rate of 7.5%, this means that given a positive test result your patient is 6.4 times more likely to attempt suicide in the next year than the average person in the psychiatric population. That’s quite a risk ratio. I wonder if that alone is enough to justify recommending hospitalization, or at least considering it?
Addendum: taking the general population value of 1.1% risk of suicide (the middle value in the paper), that’s 44 times more likely. Aside from this point, I agree with you on the hocus-pocus that they did with the biomarkers.
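The arithmetic, for what it’s worth – just the 48% post-test probability divided by the two base rates mentioned:

```python
post_test_risk = 0.48
print(post_test_risk / 0.075)   # ~6.4x the psychiatric-population base rate
print(post_test_risk / 0.011)   # ~44x the general-population base rate
```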
I think they counted it as an accurate diagnosis if the person attempted suicide within the next year. You can’t hospitalize them for the entire next year.
If it was “within the next week”, you’d have a point.
Okay, it seems I didn’t consider the length of hospitalization correctly.
I think part of what went through my head is: if you get them admitted, the treatment they get might lower their risk of suicide over the long term (e.g. a year or longer), even if they were no longer in the hospital. It’s a hypothetical, but maybe the pros of acting outweigh the cons.
(This gets to meta level on probabilities, honest.)
On the bird in the hand side, as Scott wrote a while back, some patients actually are in a tight financial situation where even a few days in hospital can cause immediate life disruption and financial harm: job loss, repossession of car, large bills for the hospitalization, etc. Such trauma, plus the lasting financial effects and damage to his credit rating and tenant record, will increase incentive for suicide for many years, as well as immediately. (And increase incentive for staying away from doctors in the future.)
If the doctor believes the patient’s claim that immediate job loss or other practical damage (to one extent or another) will follow, then that is a For Sure, and its long-range damage (to one extent or another) is a For Sure also.
The possibility that commitment will have some key good effects in the future, by contrast, is very theoretical and depends on quite a few If’s.
Back to meta level on probabilities, what is the term for a cut-off point where large probabilities can be rounded off to For Sure — and a cut-off point where small ones can be rounded to Forget It? So that this kind of reasoning can be used in normal life?
I’m actually somewhat impressed that Popular Science qualified their headline to say “A New Blood Test Can Predict Whether A Patient Will Have Suicidal Thoughts With More Than 90% Accuracy”.
Because suicidal thoughts don’t always (or even that often?) lead to suicidal actions. Lack of suicidal ideation may be a fairly good predictor of lack of suicide, but the inverse isn’t true.
That corrected headline sounds even more bullshit, to be honest. Blood test markers can predict your thoughts? Really?
So what am I thinking now, anyone want to hazard a guess?
I bet I can design a blood test that will correctly predict that you are thinking about pain.
You could probably design a bloodletting that would correctly predict that the subject is feeling sleepy and could use a cookie and some OJ.
You can push the positive predictive value to 100% by increasing the prevalence to 100%.
Sounds more bullshit, but at first glance, seems to reflect the claim of the paper more accurately. Whether that means the paper is more bullshit is left as an exercise for the reader. (Preferably not more than one, to avoid replication problems.)
They should probably scrap the suicidality angle and concentrate on “New blood test can predict likelihood of more severe depressive tendencies”, then. Else it really will get simplified in the media to “New test can predict you are going to kill yourself!”
Off topic, but relating to the last link thread.
Psychology professor responds to the Reproducibility Project results (and manages to make a mess of both physics and biochem in the process).
Link is broken. Try this
That’s weird. I don’t know how the protocol portion of the URL got dropped. It is preserved by copy-pasting.
IMO the article seems, whether intentionally or not, to dodge the point. It’s mind-numbingly trivial that “failure to replicate” means the original study’s result is valid only under certain conditions, namely the conditions of the original study! The cause for alarm is that people take studies as generally valid: they see a study and generalize to other situations. Who gives a damn if something happens only under the exact conditions of the study? Yeah, we’re learning things. But who cares if we can’t apply that knowledge? It damn well should be concerning that results people are generalizing outside the lab don’t even occur when people try to duplicate the conditions!
It seems to me that you’re right that the article is dodging the point, but I don’t think you’ve emphasized the right dodge. The problem exposed by the failures to replicate is not that the studies don’t apply under different conditions; it’s that they don’t seem to apply under the same conditions. Replication tries to repeat the original study as closely as possible, so the conditions are, in principle anyway, the same in both. If you are doing the same thing and getting different results, that’s a real issue.
I agree with what you said and I meant that with my last sentence, but I should have been clearer about it.
My after-the-fact rationalization is that science is then applied to things, and no amount of “science is all about the journey” rhetoric justifies diminishing the severity of such a high failure rate. This doesn’t happen in a vacuum. Psychology results go on to be used in a multitude of ways, and it’s an immense waste of time and money to implement something that research suggests could work but turns out not to replicate even when trying to mimic the original conditions.
That aside, your comment is the better criticism of the article. I’m afraid I’m too diverted by the “it’s okay if we don’t reproduce, science is all about the journey” schtick.
Don’t forget the 48% number depends upon your prior being the prevalence in the entire study population. If the person you’re trying to diagnose comes from a sub-population already considered to be at-risk, say recently divorced, just returned from multiple military deployments, just got fired from a job, whatever it is, where the background prevalence is much greater than in the population-at-large, 92% sensitivity can get you a lot further than 50/50. Of course, 7.5% already seems high enough that I’m not sure what narrower population actually has a higher prevalence than that. Maybe recently defeated Japanese naval officers.
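To make that concrete, here is a small sketch of how the post-test probability moves with the base rate; the 20% and 40% base rates are hypothetical stand-ins for higher-risk sub-populations:

# Positive predictive value as a function of prevalence,
# holding sensitivity = specificity = 0.92 fixed.
sens = spec = 0.92
for prevalence in (0.011, 0.075, 0.20, 0.40):
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    print(f"base rate {prevalence:.1%} -> post-test probability {ppv:.0%}")

Under these numbers the post-test probability climbs from roughly 11% at a 1.1% base rate, to 48% at 7.5%, to roughly 88% at a 40% base rate.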
The problem isn’t the 92% sensitivity. The problem is the 8% false-positive rate.
Actually, the problem is that sensitivity and specificity on their own don’t take base rate information into account. If non-suicidal people are ~12x more common than suicidal people (i.e., if we use Scott’s Pr(non-suicidal) = 0.925 and Pr(suicidal) = 0.075), setting the test threshold so that sensitivity = specificity = 0.92 is far from optimal. It should be shifted pretty heavily in favor of increased specificity and decreased sensitivity.
We can also take costs and benefits of correct and incorrect diagnoses into account, see, e.g., here.
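Here is a minimal sketch of that idea: pick the threshold that minimizes expected cost rather than the one where sensitivity equals specificity. The score distributions and the 10:1 cost ratio are made up purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
prevalence = 0.075
cost_false_negative = 10.0  # made-up: missing a suicidal patient costs 10x
cost_false_positive = 1.0   # as much as a false alarm

# Hypothetical test-score distributions for the two groups.
suicidal_scores = rng.normal(2.0, 1.0, 10_000)
non_suicidal_scores = rng.normal(0.0, 1.0, 10_000)

best = None
for threshold in np.linspace(-2.0, 4.0, 121):
    sens = np.mean(suicidal_scores >= threshold)
    spec = np.mean(non_suicidal_scores < threshold)
    expected_cost = (prevalence * (1 - sens) * cost_false_negative
                     + (1 - prevalence) * (1 - spec) * cost_false_positive)
    if best is None or expected_cost < best[0]:
        best = (expected_cost, threshold, sens, spec)

print(best)  # the cost-minimizing threshold trades sensitivity against specificity

With equal costs per error, the ~12:1 base-rate imbalance pushes the optimal threshold heavily toward specificity; the asymmetric costs above pull it only partway back.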
“Are the cases where the entire model outperforms the CFI-S cases where the biomarkers genuinely help? We have no way of knowing.”
This might be a fun application of mediation analysis.
If TLP were still posting, I’d imagine we’d see a Terrible Awful Truth post about this test. Because now there’s an objective test justifying why some particular low-income person needs to be on SSDI and get welfare and benefits.
Scott! You probably don’t drink, but for the sake of TLP-esque rants, please drink a dangerous amount of rum and write a scathing exposé!
You say that having a blood test that’s (more or less) 50-50 accurate wouldn’t really be actionable… but wouldn’t this just end up as another instance of the basic problem you’ve written about before, where you end up having to commit everyone who tests positive so you don’t get sued out of your career by the estate of the first person you didn’t commit who then committed suicide?
I wonder if that would make the existence of the test negative utility (given the current medical system).
What AUC would you consider actionable, given a 7.5% prevalence?
Jazz is partly heritable?
Skibbity-be-bop.
Well, for the technical meaning of “heritable”, liking jazz is heritable; heritability is about differences that make a difference in context. Once you know what’s under the hood, it’s not all that surprising. For example, particular sorts of personality types might be predisposed to liking jazz. I don’t know about jazz stereotypes, but one thing I hear is that jazz tends to be popular among people who like their music complex; so if things like IQ and Openness To Experience are heritable, that’s going to contribute to the heritability of liking jazz too. Throw in a bunch of other observations about the typical personalities of jazz fans and it’s plausible that you get something that’s pretty heritable.
so if things like IQ and Openness To Experience are heritable, that’s going to contribute to the heritability of liking jazz too
I dislike jazz (or at least, about two minutes’ worth of it is as much as I can listen to at a time before I go “Okay, got it, boodily-dop-zzzamm-zzamm-zzamm, you can stop now”).
This therefore means I must be stupid and closed-minded?
Uncannily accurate! 🙂
Nice!
try this?
I’ve not met a lot of people who hate it, and Ms. Dulfer is a gentle introduction to jazz at the worst of times.
jazz tends to be popular among people who like their music complex
That’s for that stupid formless modern experimental jazz. Real jazz is dance music that relies on syncopation, unlike most previous dance music which uses it only occasionally.
I agree. Jazz of the latter sort can be tolerable, even pleasant. Jazz of the former sort is just somewhat organized noise, and not even pleasant white noise. I’d rather listen to my actual white noise generator.
And while of course there’s always the No True Music Appreciator argument, it can’t fairly be said, in my opinion, that I don’t like complex music. I like several forms of symphonic and electronic music which can get quite intricate. I just don’t like “stoned guy with a saxophone” style jazz.
Edited to add: I didn’t take the original poster’s use of the phrase “people who like complex music” as a direct insult, nor do I think they meant it as one. (Other people have said similar things in other places who were being snooty. I don’t think the OP was.) I was just addressing it anecdotally.
Erm. Possibly I could have phrased things more carefully. By “jazz tends to be popular among people who like their music complex” I meant something like “there exists a correlation, possibly weak even by social science standards, between liking complex music and liking jazz”, but I realise it could have been read another way.
Me? Erm, it’s OK I suppose. Not something I’d actively seek out. I don’t think I’ve encountered much of the “stoned guy with a saxophone” variety.
Related question:
Is suicidal ideation enough to get someone involuntarily committed when talking to a therapist, or do they have to make specific plans, or otherwise express seriousness about suicide?
I sure hope ideation alone isn’t enough, since there are plenty of people who would never commit suicide in a million years who still like to fantasize about it sometimes as a coping mechanism (“Phew, at least there’s a way out if things get too bad”).
Not in my experience.
I’d hope that this applies everywhere: if you find yourself daydreaming about suicide quite a bit, or something like that, then I think that’s a sign it might be worth going to the doctor to get checked out for depression or similar, and not to worry about being rejected for being too sane or not having real problems or whatever. And you’d hope that people would be able to mention this without causing a huge panic.
When I say “I think that’s a sign” I mean “I should have spotted it as a sign months before I finally ended up seeing a doctor, and that would probably have saved me a fair bit of distress”.
Bravo! I love it! The way that you reminded me of the PPV and NPV without ever forcing those words on me was great! Sensitivity and Specificity alone are meaningless! Thank you for a well-written piece.
Thank you for this very detailed and accurate analysis of the Niculescu paper. A quick comment on suicide:
In my experience doing root cause analysis of suicidality to determine best course of treatment, there are three questions that net me the highest yield.
1) does this suicidality represent a decreased will to live? (manifested by a drift towards vegetative symptoms)
2) does this suicidality represent an increased wish to die? (manifested by a drift towards agitated symptoms)
3) does this suicidality represent a command hallucination?
There are many other reasons that someone may become suicidal, but these three questions are at the top of my decision tree; more numerous and subtle questions then branch off from there. Given that these three clinical entities are fairly distinct, I would expect to see stronger associations with biomarkers if future association studies examine them separately in this way.
This paper sounds like nothing more than a cynical, dishonest fraud. And yet it was published in a reputable peer reviewed journal.
If the peer review process worked, no reputable researcher would submit work of this caliber — for fear of tarnishing their reputation.
AUC is a combination of two statistics called sensitivity and specificity.
AUC is calculated from sensitivity and specificity values for a range of response criteria. Here’s a nice Stack Exchange answer about it. Of course, you’re right that whatever criterion produces sensitivity and specificity of 0.92 is bad, but the appropriate response to this is to shift the criterion to take base rate (and costs and benefits) into account, not to simply dismiss the test.
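For anyone who wants to see what that looks like mechanically, here is a small sketch with hypothetical score distributions (chosen so the AUC happens to come out around 0.92):

import numpy as np

rng = np.random.default_rng(0)
positives = rng.normal(2.0, 1.0, 5_000)  # made-up scores of patients who went on to attempt
negatives = rng.normal(0.0, 1.0, 5_000)  # made-up scores of patients who did not

# Sweep the decision threshold and collect one (1 - specificity, sensitivity)
# point per threshold; the ROC curve is the set of all such points.
thresholds = np.linspace(-5.0, 7.0, 500)
sensitivity = np.array([np.mean(positives >= t) for t in thresholds])
fall_out = np.array([np.mean(negatives >= t) for t in thresholds])  # 1 - specificity

# AUC is the trapezoidal area under that curve (reverse so x runs 0 -> 1).
auc = np.trapz(sensitivity[::-1], fall_out[::-1])
print(auc)  # ~0.92 for these made-up distributions

The single AUC number summarizes the whole sensitivity/specificity trade-off, which is exactly why it says nothing by itself about which operating point you should actually use.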