OT98: Vauban Thread

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server. Also:

1. Comment of the week is mrjeremyfade on how companies are responding to the new tax bill – in particular, their difficulties winding down their now-obsolete tax evasion schemes without admitting they were always just tax evasion schemes.

2. New sidebar ad – this one for Mark Neyer’s book The Mechanics Of Emotion, which he describes as “an exploration of physics, emotion, money, AI, and meaning. Also, dirty jokes.”

3. And an update from another advertiser – Nectome, previous winner of the Small Mammal Brain Preservation Prize, is back in the news for winning the Large Mammal Brain Preservation Prize. They don’t have a human product available yet, but there’s a waitlist which apparently includes Sam Altman. Obviously Nectome’s embalming process is 100% fatal, and not aimed at anyone except the terminally ill.

4. The Future of Humanity Institute is doing some experiments on human judgment and probability calibration, and asks me to pass on the link for anyone willing to play some online game-type-things.


Navigating And/Or Avoiding The Inpatient Mental Health System

Apology and disclaimer

This is in response to questions I get about how to interact (or not interact) with the inpatient mental health system and involuntary commitment. The table of contents is:

1. How can I get outpatient mental health care without much risk of being involuntarily committed to a hospital?
2. How can I get mental health care at a hospital ER without much risk of being involuntarily committed?
3. I would like to get voluntarily committed to a hospital. How can I do that?
4. I am seeking inpatient treatment. How can I make sure that everyone knows I am there voluntarily, and that I don’t get shifted to involuntary status?
5. How can I decide which psychiatric hospital to go to?
6. I am in a psychiatric hospital. How can I make this experience as comfortable as possible?
7. I am in a psychiatric hospital and not happy about it and I want to get out as quickly as possible. What should I do?
8. I am in the psychiatric hospital and I think I am being mistreated. What can I do?
9. I think my friend/family member is in the psychiatric hospital, but nobody will tell me anything.
10. My friend/family member is in the psychiatric hospital and wants to get out as quickly as possible. How can I help them?
11. How will I pay for all of this?
12. I have a friend/family member who really needs psychiatric treatment, but refuses to get it. What can I do?

I am a psychiatrist, which both means I have some useful experience here, and makes it hard for people trying to avoid the system to trust me. Anything written with too much honesty risks degenerating into “here’s how to cheat the system so nobody will know you’re about to commit suicide”. But anything written with too little honesty risks degenerating into some variation of “trust the wise benevolent doctors to do what is best for you”. This is an impossible edge to balance on, and I am sure I fail at one point or another.

But my first excuse is that somebody who doesn’t understand how the commitment system works isn’t going to innocently blunder into spilling their guts; they’re just going to never go to the psychiatrist at all. If someone wants to avoid ending up in the hospital but doesn’t know how, it’s not like they’re stuck doing everything we want – they can lie about everything, or avoid psychiatrists entirely. If they understand a little bit about how the system works, they can at least lie strategically, in the one place where they have to lie, while cooperating 99% of the way.

And my second excuse is that in the end, this is not an adversarial enterprise. Psychiatrists commit people because they’re scared. They’re scared because they can’t predict what the patient is going to do – and on another level, they’re scared because they might get sued if they don’t follow the rules. If patients who aren’t going to hurt themselves know how to explain that they aren’t going to hurt themselves in a way that reassures their psychiatrist, and in a way that doesn’t leave their psychiatrist legally liable for not committing them, then everybody can be more comfortable and get on with the hard work of actual treatment.

This guide applies to adult mental health care only. Child/adolescent mental health care is totally different and I don’t know anything about it. I have only worked in two states; it might be a bit different in other states, and it is definitely a lot different outside the US. Nothing in here is official medical advice. Follow it at your own risk. Please don’t use this to avoid psychiatric care which you actually need. All of this will be wrong in certain situations; when in doubt, trust your intuition.

1. How can I get outpatient mental health care without much risk of being involuntarily committed to a hospital?

Mental health care is divided into inpatient and outpatient settings. Inpatient care means it’s in a hospital, voluntary or otherwise. Outpatient care is your local doctor’s office, or psychiatrist’s office, or therapist’s office.

If you go to a hospital for mental health reasons, your risk of getting involuntarily committed is relatively high – see below for more. If you go to an outpatient provider, your risk is much lower.

In theory, the outpatient system is supposed to provide voluntary treatment, with risk of involuntary commitment only in certain very clearly delineated situations that you can understand and avoid. Each state’s laws are slightly different (and I can’t say anything about non-US countries), but they tend to allow involuntary commitment only in cases of immediate risk of hurting yourself, hurting someone else, or being so psychotic that you could plausibly hurt someone by accident (eg you jump out of a window because you think you can fly).

The key word is “immediate”. If you just have occasional thoughts about suicide, or you have some limited hallucinations but remain grounded in reality, according to the law this is not enough to involuntarily commit you.

In practice, not every mental health professional knows the laws or interprets them the same way, so some will commit you even when the law doesn’t justify it. The check on this is supposed to be that you can sue them when you get out of the hospital, but almost nobody bothers to do this, and judges and juries usually find in favor of the mental health professional.

So the law isn’t as much protection as it probably should be. In reality your best protection is to only open up to competent people whom you trust, and to frame what’s going on in a way that doesn’t scare them unnecessarily.

Don’t joke about committing suicide. Don’t bring up occasional stray suicidal thoughts if they don’t matter. Don’t say something like “I think about suicide sometimes, but doesn’t everyone?”, because your psychiatrist will have heard the last ten people answer “No, of course I never think about suicide”, and they will not be impressed with your claim about the human condition. Assume that any time you mention suicide, there’s a tiny but real chance of getting committed. If you are actually suicidal, take that chance in order to get help. Otherwise, this is really not the time to bring it up. If you wouldn’t offhandedly chat about terrorism with an airport security guard, don’t offhandedly chat about suicide with a psychiatrist.

(none of this applies to competent psychiatrists whom you trust, but award this status only after many positive experiences over a long-term relationship)

If your psychiatrist asks you outright if you ever have suicidal thoughts, well, tough call. If you don’t, then say you don’t. If you mostly don’t but you are some sort of chronically indecisive person who has trouble giving a straight answer to a question, now is the time to suppress that tendency and just say that you don’t. If you do, but you would never commit suicide and it’s not a big part of why you’re seeing them and you don’t mind lying, you can probably just say you don’t. If you do, and it’s important, and you don’t want to lie about it, then make sure to be very specific about how limited your thoughts are (eg “I only thought that way once, three years ago”) and to add as many of these as are true:

1. “Of course I would never go through with it, but sometimes I think about…”
2. “I love my friend/family member/partner/pet too much to ever go through with it.”
3. “I don’t have any plans for how I would do it.”
4. “I’m [religion], and we believe that God doesn’t want us to commit suicide.”
5. “I’ve been thinking about it for [long time], but the thoughts haven’t gotten any worse lately.”

The same applies to hallucinations and other signs of psychosis. Most people have very minor random hallucinations as they are going to sleep. Most people hear their own thoughts as silent “voices” in their head at least some of the time. Most people who take hallucinogenic drugs will hallucinate. You don’t need to bring these up when someone asks you about hallucinations. If you actually have some troubling psychotic symptoms, then mention them, but add as many of these as are true:

1. “Of course, I know these aren’t really real.”
2. “These have been going on for a while and aren’t any worse lately.”
3. “I would never listen to anything the voices say.”
4. “I only get that way when I’m on drugs / really tired / under a lot of stress.”

If you do all of these things, your chance of getting involuntarily committed to a psychiatric hospital by an outpatient provider is probably one percent or less, unless you’re really really sick.

Notice the words “by an outpatient provider” here. None of this applies if you are in a hospital (eg with pneumonia). If you are in a hospital, be extra careful about this to the point of paranoia. Unless you’re really worried that you might go through with suicide, be careful about mentioning it in the hospital. Get your pneumonia or whatever treated, and then, once you’re out of the hospital, find a competent outpatient psychiatrist whom you trust and open up about your issues to them. If you decide to open up to the nurse-assistant giving you a three question psychiatric screen in the pneumonia ward, you may end up on a psychiatric unit regardless of how careful you are, because hospitals don’t take chances.

2. How can I get mental health care at a hospital ER without much risk of being involuntarily committed?

Hospital ERs are not set up to provide psychiatric help to random people. They are set up to evaluate people and decide if it’s a real emergency. If it is, you will be committed to an inpatient unit. If it isn’t, they will tell you to see an outpatient psychiatrist, and you will be back at the beginning except with an extra $5000 bill to pay.

This is not true 100% of the time, and you can take your chances if you want. In particular, if you have extreme anxiety, sometimes they can give you enough fast-acting anti-anxiety medication to calm you down and last you until you can see an outpatient psychiatrist. But going to a hospital ER for any mental-health-related reason other than expecting to get admitted to a hospital psychiatric unit should be a last resort.

3. I would like to get voluntarily committed to a hospital. How can I do that?

If you have a competent outpatient psychiatrist whom you trust, call them up and tell them what’s going on. If they have connections at a local hospital, they may be able to get you directly admitted, which will save you a lot of time and suffering.

Otherwise, you will have to go to a hospital ER. Be prepared for this to be extremely unpleasant. It may take up to 24 hours of sitting in the ER before a psychiatrist can see you. You will probably get examined by nurses, medical students, non-psychiatrist doctors, etc, and each time you will think “Finally! I am getting evaluated and I can get out of this ER!” but you will be wrong. Although there will probably be some crappy food and drink available, there may not be much in the way of entertainment, quiet, or privacy. Do yourself a favor and bring a book or game or something. You may not be allowed to keep your cell phone or laptop or other metal object (more on this later). If family or friends are willing to help, have them come along – if only so they can go out and bring you back real food when you get hungry.

Once you set foot in an ER and mention the word “psychiatry”, you should be prepared for someone to tell you that you’re not allowed to leave until the evaluation is complete. Maybe no one will tell you this, and you can try to leave, and it’ll be fine. But you should be prepared for it not to work.

After many trials and tribulations, you will be examined by a psychiatrist, who will decide whether or not to accept you to the psychiatric unit. You are not guaranteed admission to the unit just because you want it. You might be turned down if the psychiatrist thinks you aren’t sick enough to need it, or if your insurance refuses to pay for it. Insurance companies are very reluctant to pay for hospitalizations unless there is a clear risk involved, so explain what the risk is.

The only thing that (almost) always works is mentioning suicide. If you say you’re suicidal, you will get admitted. If you want to be sure, do the opposite of everything above. Stress that you are suicidal. Stress that it’s not just the occasional fleeting thought, but actually something that you might really go ahead with. If you have a plan, share it.

If you’re not suicidal, expect to have to argue. Talk about what you’ve already tried and why it didn’t work. Talk about all the damage your mental illness has caused in your life. If there’s any chance you might snap and do something horrible – hurt someone, hurt yourself, have some kind of spectacular breakdown – play it up. If you have to, say something vague like “I don’t know what I would do if I couldn’t get help”. Be ready for this not to work, and for the psychiatrist evaluating you to recommend you go to an outpatient psychiatrist.

If you really want help beyond the level of outpatient treatment, but your insurance company won’t budge, ask about a partial hospital program. This is something where you go to a hospital-like environment from 9 to 5 for a few weeks, seeing doctors and getting therapy and classes, but you’re not involuntarily committed and you go home at night. Sometimes insurance companies will be willing to do this as a compromise if you are not suicidal.

4. I am seeking inpatient treatment. How can I make sure that everyone knows I am there voluntarily, and that I don’t get shifted to involuntary status?

I want to be really clear on this: in your head, there might be a huge difference between voluntary and involuntary hospitalization. In your doctor’s head, and in the legal system, these are two sets of paperwork with only tiny differences between them.

It works like this, with slight variation from state to state: involuntary patients are usually in the hospital for a few days while the doctors evaluate them. If at the end of those few days the doctors decide the patient is safe, they’ll discharge them. If, at the end of those few days, the doctors decide the patient is dangerous, the doctors will file for a hearing before a judge, which will take about a week. The patient will stay in the hospital for that week. 99% of the time the judge will side with the doctors, and the patient will stay until the doctors decide they are safe, usually another week or two.

Voluntary patients are technically allowed to leave whenever, but they have to do this by filing a form saying they want to. Once they file that form, their doctors may keep them in the hospital for a few more days while they decide whether they want to accept the form or challenge it. If they want to challenge it, they will file for a hearing before a judge, which will take about a week. The patient will stay in the hospital for that week. 99% of the time the judge will side with the doctors, and the patient will stay until the doctors decide they are safe, usually another week or two.

You may notice that in both cases, the doctors can keep the patient for a few days, plus however long it takes to have a hearing, plus however long the judge gives them after a hearing. So what’s the difference between voluntary and involuntary hospitalization? Pride, I guess, plus a small percent of cases where the doctors just shrug and say “whatever” when the voluntary patient tries to leave.

Some decent fraction of the time, patients who intended to get voluntarily hospitalized end up involuntarily hospitalized for inscrutable bureaucratic reasons. The one I’m most familiar with is the ambulance ride: suppose the hospital you’re in doesn’t have any psychiatric beds available and wants to send you to the hospital down the road. For inscrutable bureaucratic reasons, they have to send you by ambulance. And for inscrutable bureaucratic reasons, any psychiatric patient transferred by ambulance has to be involuntary. Your doctors don’t care about this, because they know that there is no practical difference between voluntary and involuntary – but if you are still trying to maintain your pride, this might come as kind of a shock.

Some other decent fraction of the time, patients who ought to be involuntarily hospitalized end up voluntarily hospitalized for inscrutable bureaucratic reasons. The one I’m most familiar with is doctors asking patients whom they are committing against their will to sign a voluntary form, ie “Agree to come voluntarily, or else I will commit you involuntarily”. This sounds super Orwellian, but it really is done with the patient’s best interest at heart. Involuntary commitments usually leave some kind of court record, which people can find if they’re searching your name for eg a background check – which could come up anywhere from applying for a job to trying to buy a gun. Voluntary commitments usually don’t cause this problem. Even though nobody feels very warmly toward the psychiatrist telling them to sign voluntarily or else, that psychiatrist is right and you should suck it up and sign the voluntary form.

If given a choice, you should sign voluntary, if only for the background-check reason above. But don’t count on getting the choice, and don’t get too attached to the illusion that it really matters in some deep way.

5. How can I decide which psychiatric hospital to go to?

If it’s an emergency, the answer is “whichever one is closest” or even “whichever one the ambulance you should call right now takes you to.”

If you have a little more leeway, and you have a competent outpatient psychiatrist whom you trust, ask them which one to go to. They will probably be familiar with the local terrain and be able to give you good advice.

If you live in a big city with wealthier and poorer areas, and it’s all the same to your insurance company, try to go to a hospital in the wealthier area. Not only do wealthier people always get nicer things, but – and sorry if this is politically incorrect – you would rather be locked up for a week with the sorts of people who end up in wealthy-area psychiatric hospitals than with the sorts of people who end up in poor-area psychiatric hospitals.

US News & World Report ranks the best psychiatric hospitals. They’re mostly looking at doctor prestige, but I would guess this correlates with other factors patients want in a hospital. If you’re really prestigious you have a lot of money and a lot of eyes watching you, and that probably helps. I suspect teaching hospitals are also good, for the same reason. But these are just guesses.

If you have no other way of figuring this out, you can try looking at Psych Ward Reviews. This site is underused and suffers from the expected bias – you only write about somewhere if you don’t like it – but it’s better than nothing.

Keep in mind that sometimes hospitals will be full, and they will send you to a different hospital instead, and you will not have any say in this.

6. I am in a psychiatric hospital. How can I make this experience as comfortable as possible?

When you go to the hospital ER to get admitted, bring a bag of stuff with you. This should include clothing, fun things to do like books, earplugs, snacks you like, and phone numbers for people you might want to contact.

Keep in mind that you will not be allowed to have anything that could be used as a weapon, for a definition of “could be used as a weapon” which is clearly aimed at MacGyver-level masterminds who can create a railgun out of three paperclips and a stick of gum. The same goes for anything that could be used as a suicide method. This means for example no laced shoes, pillowcases, scarves, and a bunch of other things you will not expect. Basically, bring stuff to the hospital, but expect a decent chance it won’t be allowed in.

Metal objects, including laptops, cell phones, mp3 players, etc, will never be allowed in. These will be taken from you and put in a locker during your stay. If for some reason you have to transfer hospitals during your stay, these things always somehow get lost. Your best bet is to bring a friend with you to the ER, and have them take your cell phone and other valuables.

If you forget to bring a bag of stuff, or if you were committed involuntarily and unexpectedly and didn’t get a chance, call a friend or family member and ask them to bring you your stuff.

7. I am in a psychiatric hospital and not happy about it and I want to get out as quickly as possible. What should I do?

Good news: average stays for psychiatric hospitals have been decreasing for decades, and are now usually a week or less. I did a study at the hospital I worked in and came up with a median stay of 5.9 days, and remember that there are a lot of really sick people bringing up those numbers.

(there are a few states that have laws centered around the number “three days”, but there are also a lot of states that don’t. For some reason the “three days” number has leaked into the general consciousness and everyone expects that to be how long they stay in the hospital. Don’t necessarily expect to get out of the hospital in exactly three days, but do expect it will be closer to 5.9 days than to weeks or months.)

Even better news: contrary to rumor, psychiatrists rarely have a financial incentive to keep people hospitalized. In fact, most hospitals and insurers now encourage quick “turnover” to “open up beds” for the next group of needy patients, and doctors can get bonuses for getting people out as quickly as possible. This should worry everyone else in the hospital who’s getting treated for pneumonia or whatever, but from the perspective of a psychiatric patient who wants to leave quickly it’s pretty good.

If you have a good doctor, you should trust their judgment and do what they say. But if you have a bad doctor, then the only thing you can count on is that they will respond to incentives. Their incentive to get you out quickly is the hospital administrators and insurance companies breathing down their neck. Their incentive to keep you longer is that if you get out of the hospital and ever do anything bad, they can get sued for “missing the signs”. So their goal is to do a token amount of work that proves they evaluated you properly so nothing that happens later is their fault.

That means they’ll keep you for some standard time interval, traditionally (though not always) three days, just so they can say they “monitored” you. If you seem unusually scary in some way, they might monitor you a little longer, up to a week or two. Your chances of successfully convincing them not to do this are essentially nil. Imagine you kill someone a few weeks after leaving the hospital, and during the trial the prosecutor says “The patient was taken to St. Elsewhere Hospital for evaluation of mental status, but discharged early, because he said he didn’t want to have to sit around and be evaluated for the usual amount of time, and his doctor thought this was a reasonable request.” Your doctor is definitely imagining this scenario.

Instead of pleading with your doctors to let you go early, just do everything right. Have meals at mealtime. Go to groups at group time. Groom yourself, not just because you look saner when you’re well-groomed, but because there will actually be nurses monitoring your grooming status and reporting it to the psychiatrists making release decisions. When people tell you things you should do after leaving the hospital, agree that you will definitely do them. If people ask you questions, give reassuring-sounding answers.

For this last one – don’t contradict evidence against you, don’t accuse other people of lying, just downplay whatever you can downplay, admit to what the doctors already believe, and make it sound like things have gotten better. For example, if you were found lying face-down with an empty bottle of pills next to you, don’t say “I didn’t attempt suicide, I just tripped and the pills fell into my mouth!” (I have seriously had patients try this one on me). Don’t say “It was my girlfriend’s fault, she drove me to do it!” Just say something like “That was a really bad night for me, and I don’t remember exactly what happened, but now I’m feeling a lot more hopeful, and I think that was a mistake.”

Don’t overdo it. Nothing is more annoying than the person who’s like “The twenty minutes I’ve been talking with you so far have turned my life around, and now I realize how wrong I was to reject God’s beautiful gift of existence, and am overflowing with abounding joy at the prospect of getting to go back into the world and truly confront my problems with the help of my loving family and…” Just be like “Yeah, things were rough, but I feel a little better now.”

Most important, take the damn drugs.

Yes, I know that some psychiatric drugs are unpleasant or addictive or dangerous or make you feel miserable. I’m not challenging your decision not to want to be on them. But take the damn drugs while you are in the hospital, for 5.9 days. Then, when they let you out, decide if you still want to continue. I guarantee you this will be easier for you, for your psychiatrist, and for the various judges and lawyers involved. The alternative is that you refuse to take the drugs, somebody has to set up a court hearing to get an involuntary treatment order, you have to sit in the hospital for weeks while the legal system gets its act together, the psychiatrists finally get the order and drug you against your will, and then after however many weeks or months, you get released from the hospital and stop taking the drugs.

If you have a good doctor whom you trust, then talk to them about the drugs and make a decision together. Let them know if there are any side effects. If a drug isn’t working for you, tell them, so they can switch it. Be honest, and willing to stand up for yourself, but also open-minded and ready to listen.

But if you have a bad doctor, just take the damn drugs. Bring up side effects, mention anything that’s intolerable, but when – like bad doctors everywhere – they ignore you, just take the damn drugs. Then, when you get out of the hospital, go to a competent outpatient psychiatrist whom you trust, tell them the drugs aren’t right for you, and talk it over with them until you come up with a better plan.

This is a good general principle for everything: agree to whatever people ask you while you’re in the hospital, talk to a competent outpatient psychiatrist whom you trust once you get out, and decide which things to stick to. I remember working with a doctor who wanted to discharge his patient to some kind of outpatient drug rehab. The patient refused to go, so the doctor wouldn’t discharge her, and they were in a stalemate over it for weeks, and the whole time the patient was tearfully begging the doctor to release her. I cannot tell you how much willpower it took not to sneak into the patient’s room and yell at her “JUST AGREE TO GO TO THE REHAB AND THEN DON’T DO IT, YOU IDIOT”. I mean, I am as in favor of Truth as anyone else, but I don’t even think her doctor cared if she went to the rehab or not. He just wanted to be able to document “Patient agreed to go to rehab”, so that when she started taking drugs again, he would have ironclad court-admissible evidence that it wasn’t his fault.

Finally, your doctors will be very interested in “discharge planning”, ie making sure you have somewhere safe to be after you leave the hospital. They may not be willing to believe you about this. So get a family member (best) or friend (second-best) on your side. Have them agree to tell the doctors that they will watch over you after you leave, make sure you take your medication, make sure you get to your follow-up outpatient psychiatrist appointments, make sure you don’t take any illegal drugs. Your best bet for this is your mother – psychiatrists love mothers. Tell your doctors “I talked to my mother, she’s really concerned about my condition, she says that I can stay with her after I leave and she’s going to watch me really closely and make sure I’m okay”. Only say this if it’s true, because your doctors will call your mother and make sure of it. But if you can make this work, this is really helpful.

Even if all of this works, it’s just going to get you out of the hospital in a bit less than 5.9 days instead of a bit more than 5.9 days. There’s no good way to get out instantly. Sorry.

8. I am in the psychiatric hospital and I think I am being mistreated. What can I do?

Your best bet is to find someone with a position like “Recipient Rights Representative” or “Patient Rights Advocate”. Most states mandate that all psychiatric hospitals have a person like this. Their job is to listen to people’s concerns and investigate. Usually the doctors hate them, which I take as a pretty good sign that they are actually independent and do their job. If you haven’t already gotten a pamphlet about this person when you were admitted, ask the front desk or your nurse or someone else who seems to know what’s going on how to contact this person.

You may be able to switch doctors or nurses. Just go to the front desk or someone else official-looking and ask. I don’t think this is a legally codified right, but sometimes nobody cares enough to refuse. Keep in mind that if you switch doctors, you may have to stay longer so that the new doctor can do their three-day-or-so assessment of you, separate from the last doctor’s three-day-or-so assessment.

Threats don’t work. Everybody makes threats, and everyone at the hospital is used to them. Threatening to hire a lawyer is especially boring and overdone and will not even get anyone’s attention.

Actually hiring a lawyer will definitely get people’s attention, but it’s a high-variance strategy. Remember that it’s very hard to get a doctor not to hold you for a three-day-or-so evaluation, and that most people are released before anything goes to court anyway (a court hearing can take weeks to set up). I have mostly seen this work in cases where I have no idea what the doctors are thinking and everybody seems sort of confused and just letting the patient sit in the hospital for no reason. Lawyers can be a very good incentive for people to un-confuse themselves. I am not a lawyer, I have tried to avoid the state of prolonged confusion where lawyers become necessary, and I don’t want to give any legal advice beyond saying it will definitely get people’s attention. But I would feel bad if someone read this, hired a lawyer, found them not to be genuinely helpful (as in fact they probably will not be), and then got a huge legal bill.

Some people wait until they get out, then comparison-shop from the outside world and hire a lawyer to sue the people who mistreated them in the past. If you’re going to do this, document everything. Your doctors are documenting everything, and if one side comes in with perfect documentation and the other side just has vague memories, the first side will win. By “document everything”, I mean have a piece of paper where you write down things like “2:41 PM on October 10: Nurse Roberts threw a pencil at me. Informed such-and-such a person and they refused to help. Informed such-and-such another person and they also refused to help.” Write down exactly where and when everything took place – the psychiatric hospital may have video surveillance, and if everybody knows which videos to get, it will make life much easier. Report everything to the Patient Rights Advocate, even if they’re useless, just so you can call them up and have them testify you reported it to them at the time. I am not a lawyer, this is not legal advice, and your lawyer will be able to tell you much more – but documentation never hurts.

If things are really bad, figure out if there are surveillance cameras, and hang out in front of them.

Once you leave the hospital, consider giving feedback. Most hospitals will have some kind of survey or hotline or something that lets you praise hospital staff whom you liked and report hospital staff whom you didn’t like. This won’t heal any wounds you suffered – and while in the hospital, threatening to report a doctor will be ignored just like all threats – but it might help somebody way down the line. You can also write a report on Psych Ward Reviews. In fact, do this anyway, whether you’re mistreated or not, so that other people can learn which hospitals don’t mistreat people.

9. I think my friend/family member is in the psychiatric hospital, but nobody will tell me anything.

Yes, this definitely sounds like the sort of thing that happens.

Because of medical privacy laws, it is illegal to tell a person’s friend or family that they are in the psychiatric hospital, or which psychiatric hospital they’re in, without their consent. If the person is too paranoid, angry, or confused to give consent, then their friends and family won’t have a good way to figure out what’s going on.

Your best bet is to call every psychiatric hospital that they could plausibly be in and ask “Is [PERSON’S NAME] there?” Sometimes, all except one of them will say “No”, and that one will say “Due to medical privacy laws, we can’t tell you”. I know this sounds ridiculous, but it really works.

Once you have some idea which hospital your friend is in, call and ask to speak to them. The receptionist will say something like “Due to medical privacy laws, we can’t tell you if that person is here.” Say “I understand that, but could you please just ask them if they’re willing to speak to me right now?” If they are willing to speak to you, problem solved. Otherwise, you might still get some information based on whether the receptionist leaves you on hold for a while in a way that suggests they’re going to your friend and asking whether they want to talk to you.

You can also ask to speak to (or leave a message for) the doctor taking care of your friend. The receptionist will say “Due to medical privacy laws, we can’t tell you if that person is here.” Say “I understand that, but I have some important information about their case that I want the doctor to know. They don’t need to tell me whether my friend is there or not, just listen.” At this point, all but the most committed receptionists will either admit that your friend isn’t there, or actually get a doctor or take a message. There is no doctor in the world who is so committed to medical privacy that they will waste time listening to the history of a patient they don’t really have just to maintain a charade, so if you actually get a doctor this is a really strong sign.

Once you have a good idea where your friend is, you can ask the receptionist to pass a message along to them, like “Call me at [this phone number]”. If they still don’t respond – well, that’s their right.

Most hospitals will have visiting hours. Going to visit someone who refuses to let you know they’re at the hospital and refuses to give anyone consent to talk to you is a high-variance strategy, but you can always try.

10. My friend/family member is in the psychiatric hospital and wants to get out as quickly as possible. How can I help them?

First, make sure they actually want to get out as quickly as possible, and you’re not just assuming this. You would be surprised how many people miss this step.

Second, make sure they know everything in section 7 here.

Third, offer to talk to the doctors. Doctors often don’t trust mentally ill patients, but they usually trust family members. If your friend isn’t sick enough to need to be in the hospital, tell the doctors that. Describe the circumstances around their admission and why it’s not as bad as it looks. Mention how well you know the person, and how you’ve been with them through their illness, and how you know they would never do anything dangerous. Only say this if it’s true – if they’re in the hospital for stabbing a police officer, your “they would never do anything truly dangerous” claim is just going to make you look like an idiot.

Offer to help with discharge planning (see the end of section 7). Tell them that the patient will be staying with you after they leave the hospital, that you’re going to be watching them closely to make sure that they’re safe, that you’ll make sure they take their medications and go to followup appointments. Again, only say this if it’s true – or at the very least, coordinate with the patient, so you don’t say “My son will be staying with me under my close supervision” and then your son ruins it all by saying “Haha, as if.”

If you have a sob story, tell it. If you are ninety-seven years old and your son is the only person who is able to take care of you and bring you to your doctors’ appointments, mention that. Sob stories from patients generally don’t work, but sob stories from family members might.

Offer to come to the hospital during visiting hours and meet with the doctors. This both underlines everything above – it shows you’re really invested in their care – and also gives you a good opportunity to pressure the doctors face to face. I don’t mean you should threaten them or be a jerk about it, but just ask “Why can’t Johnny come home? We really need Johnny at home to help with the chores. Everyone at home misses Johnny.” I don’t guarantee this will work, but it will work a little, on certain people.

If there are many people in your family who are willing to work on this, use whoever is closest to the patient (eg their mother) – and in case of a tie, use the person who is the most upstanding high-status member of society. A promise to take care of someone sounds better coming from a family member who is a doctor themselves (or a lawyer, or a teacher) than from the patient’s unemployed stoner brother with a NO FEAR tattoo.

As somebody who is not in a psychiatric hospital, you are in a much better position to hire a lawyer if one needs to be hired. Again, in the majority of cases a patient won’t even stay long enough to have a court hearing. If you are poor and have limited resources, this is definitely not how I would recommend using them. But if you have money to burn, or your friend/family member is being held for an inexplicable amount of time (longer than a week or two) and you don’t know why, you are going to be in a much better position to take care of this than the patient themselves.

Even if all this works, it’s just going to make someone stay a bit less than 5.9 days instead of a bit more than 5.9 days. There’s no good way to get someone out instantly.

11. How will I pay for all of this?

If you don’t have health insurance, there is usually some kind of state/county mental health insurance program that is supposed to help with this kind of thing. You usually have to earn below a certain amount to qualify. Your social worker at the hospital can talk to you about this. I am not promising you such a program will exist – if you’re concerned about money, look into this before you go to the hospital.

If you do have health insurance, they may pay for your admission. The problem is that they have to decide if you are really ill enough to need psychiatric care, and they make this determination separately from the doctors who decide whether to commit you or not. In the worst case scenario, you can be involuntarily committed because your doctors decided you needed care, but your health insurance refuses to pay for it because they decided you didn’t need care. If this happens, you are stuck with the bill. This is horrifying and there should be some kind of law against it, but I’ve seen it happen and I think it’s legal.

Your best bet in these cases is to try to get the state/county mental health insurance mentioned above. Sometimes you can sign up for it after you leave the hospital, and then get your costs reimbursed.

If everything goes wrong, and you’re stuck with a bill and no insurance company willing to pay it, try to argue the hospital down. Hospitals know that the average random sick person can’t afford to pay $20,000 or whatever ridiculous amount they charge. They make these numbers up as part of a complicated plot to fool insurance companies into overpaying, which never works, and they expect patients to try to bargain. They are also usually willing to consider whatever payment plan you think you can make work. I don’t know very much about this, but there’s some more information here.

As far as I know, committing people involuntarily and leaving them with a huge bill is legal, and hiring a lawyer will not help with this. I don’t know much, so you may want to ask a lawyer’s opinion anyway, if you can afford it.

12. I have a friend/family member who really needs psychiatric treatment, but refuses to get it. What can I do?

If your family member is not a danger to themselves or others, your options are limited. You can try to convince them to voluntarily seek treatment, but if it doesn’t work, it doesn’t work.

If your family member is a danger to themselves or others, you have a good case for getting them involuntarily committed to the hospital. A good example of this would be them threatening to hurt you, or actually hurting you, or being so out of touch with reality that you are legitimately afraid they might hurt you or themselves. Them being paranoid (“people are out to get me”) or extremely confused about basic reality (“I am able to fly”) counts as legitimate reason to believe they might hurt you or themselves. If this describes your family member, document everything worrying that they say or do so you can present it to the doctors doing the assessment and (eventually) the courts.

Then, if your family member is cooperative/confused enough to let you drive them to the hospital, drive them to a hospital ER. If they’re not this cooperative, call the police and they will take things from there. Be prepared for the police to potentially put your family member in handcuffs and be really aggressive and police-y about it (and if you have a dog, arrange for it to be somewhere else at the time – like stuck in a bedroom with the door closed). The police will bring your family member to the hospital ER. You should go to the hospital ER too, so that you can tell the doctors what’s wrong and why you think they need treatment – ie why they are dangerous or potentially dangerous.

The most common way this ends is that your family member goes to the hospital, is started on some drugs, gets a little better, goes home, stops taking the drugs, and gets worse again. If the doctors at the hospital are not competent, they may not think about this. It may end up being your job to insist on some kind of longer-term solution.

If your family member is psychotic, then the gold standard for longer-term solutions is a long-acting injectable antipsychotic medication. This is a shot that a nurse can give them which will give them a month’s worth of antipsychotics all at once, safely. This way they don’t have to remember/agree to take their medication at home. Then a month later you can wrangle them back to a doctor’s office where someone can give them the shot again; repeat as needed. If your family member doesn’t agree to this, you’re going to need a judge’s order – but judges are really cooperative with this kind of thing and your psychiatrist can tell you more about how to make this happen. A partial hospital program can also help with this.

There is a kind of institution with different names everywhere, usually something like “Assertive Community Treatment”, which basically consists of some mental health professionals in a van who go around to people’s houses and make sure they’re okay / staying on medication after they’ve been discharged from the hospital. These are chronically underfunded and you have to fight to get into them, but if nothing else works you can see if there’s one of them in your area. These people are also good at wrangling patients to get their monthly dose of long-acting injectable antipsychotics.

If you need a quick way to deal with a family member’s psychosis, and they refuse to take antipsychotic medicine, and they don’t meet criteria for involuntary hospital admission – well, I can’t believe I’m saying this, and this is super not medical advice – but cannabidiol, a chemical in marijuana, is a weak but functional antipsychotic. Normal marijuana is awful for this situation and contains lots of other chemicals that make psychosis worse, but you can get special cannabidiol-only strains that act sort of like weak non-prescription antipsychotic medication. In a state like California where marijuana is legal, you can talk to a marijuana expert about which strains these are and how to use them. In a state where only medical marijuana is legal, you can take your family member to a random quack to get them a medical marijuana card, then follow the same process. Most psychotic people refuse to believe that they are psychotic, but most of them are very anxious. If you frame the marijuana as a way to help with their anxiety, they may go along with it. Then they might become non-psychotic enough to understand there’s a problem, after which they can go to a psychiatrist and get a longer-term solution. Again, this is definitely not medical advice and if you have any other options you should take those instead.

You can get a lot more (and much more responsible) advice from the Treatment Advocacy Center, a non-profit that helps people figure out how to get their friends and family members psychiatric treatment.

Postscript

All of this is to prepare you for worst-case scenarios. Many people seek inpatient mental health treatment, find it very helpful, and consider it a positive experience. According to a survey on Shrink Rap (heavily selected population, possibly brigaded, not to be taken too seriously) about 40% of people who were involuntarily committed to psychiatric hospitals eventually decided it was helpful for them. This fits my experience as well. Be careful, but don’t avoid getting treatment if you really need it.

The Dark Rule Utilitarian Argument For Science Piracy

I sometimes advertise sci-hub.tw – the Kazakhstani pirate site that lets you get scientific papers for free. It’s clearly illegal in the US. But is it unethical? I can think of two strong arguments that it might be:

First, we have intellectual property rights to encourage the production of intellectual goods. If everyone downloaded Black Panther, then Marvel wouldn’t get any money, the movie industry would collapse, and we would never get Black Panther 2, Black Panther Vs. Batman Vs. Superman, A Very Black Panther Christmas, Black Panther 3000: Help, We Have No Idea How To Create Original Movies Anymore, and all the other sequels and spinoffs we await with a resignation born of inevitability. This is sort of a pop-Kantian/rule-utilitarian argument: if everyone were to act as I did, our actions would be self-defeating. Or we can reframe it as a coordination problem: we’re defecting against the institutions necessary to support movies existing at all, and free-loading off our moral betters.

Second, and related, the laws have their own moral force that has to be respected. With all our celebration of civil disobedience, we forget that in general people should feel some obligation to obey laws even if they disagree with them. This is the force that keeps libertarians from evading taxes, vegetarians from sabotaging meat markets, and doctors from giving you much better medications than the ones you consent to – even when they think they can get away with it. Civil disobedience can be justifiable – see here for more discussion – but surely it should require some truly important cause, probably above the level of “I really want to watch Black Panther, but it costs $11.99 in theaters”.

(I admit I sometimes violate this principle, because I – like most people – am not perfectly moral.)

But I can also think of an argument why Sci-Hub isn’t unethical.

The reason I don’t pirate Black Panther is because, if everyone pirated movies, it would destroy the movie industry, and we would never get Lego Black Panther IV: Lego Black Panther Vs. The Frowny Emoji, and that would make people sad.

But if everyone pirated scientific papers, it would destroy Elsevier et al, and that would be frickin’ fantastic.

As far as I can tell, the movie industry is capitalism working as it should. No one animator can make a major motion picture, so institutions like Marvel Corporation exist to solve the coordination problem and bring them together. Marvel Corporation is probably terrible in various ways, but it’s unclear we have the social technology to create non-terrible corporations right now, so unless we’re communists we accept it as the price to pay for a semi-functional industry. Then some market-rate percent of the gains flow down to the actors and videographers and so on. If you destroyed this system, you wouldn’t usher in a golden age of independent superhero movies. You would just stop getting Black Panther.

The scientific journal industry is some kind of weird rent-seeking abomination which doesn’t seem to add much real value. I don’t have space to make the full “journals are not helpful” argument here, but see eg this article, Elsevier’s profit margins, and the relative success of alternative models like arXiv. See Inadequate Equilibria for the discussion of how this might have happened. The short and wildly insufficient summary is that it looks like we backed ourselves into an equilibrium where eg tenure committees consider journals the sole arbiter of scientific merit, anyone who unilaterally tries to defect from this equilibrium is (reasonably) suspected of not having enough merit to make it the usual way, and coordination is hard so we can’t make everyone defect at the same time.

Thus Dark Rule Utilitarianism: “If I did this, everyone would do it. If everyone did it, our institutions would collapse. But I hate our institutions. Therefore…”

I think this fully addresses the first argument against science piracy. But what about the second? Sure, I don’t like the institution of scientific gatekeepers, but anarcho-communists don’t like the institution of private property. If I steal scientific papers to destroy the journal system, doesn’t universalizing that decision process lead to anarcho-communists stealing cars to destroy capitalism? Shouldn’t “civil disobedience” be reserved for the most important things, like ending segregation or resisting the Nazis, rather than endorsed as something anyone can do when they feel like destroying something?

This kind of thing leaves me hopelessly confused between different levels. It’s much worse than free speech, where all you’ve got to keep track of is whether you agree with what someone says vs. will defend their right to say it. But an important starting point is that endorsing “civil disobedience is sometimes okay” doesn’t lead to a world where anarcho-communists steal cars and nobody stops them. It leads to a world where there is no overarching moral principle preventing anarcho-communists from seizing cars, and where we have to do politics to decide whether they get arrested. In practice, the politics would end up with the car thieves arrested, because stealing cars is pretty conspicuous and nobody likes car thieves.

Isn’t this just grounding morality in power? That is, aren’t we going from the clarity and fairness of “everyone must follow the law” to a more problematic “everyone must follow the law, except people clever enough to avoid getting caught and powerful enough to get away with civil disobedience?” Well, yeah. But from an institution design perspective, everything bottoms out in power eventually. All we’re doing here is replacing one form of power (the formal power possessed by law-makers) with another form of power (the informal powers of stealth and/or popularity that allow people to get away with civil disobedience). These two forms of power have different advantages and are possessed by different groups. The formal power is nice because it’s transparent and democratic and probably bound by rules like the Bill of Rights, but it also tends to concentrate among elites and be susceptible to tyranny. The informal power is nice because it’s inherently libertarian and democratic, but it’s also illegible and susceptible to being used by demagogues and populists.

So, a metaphor: imagine a world with a magic artifact at the North Pole which makes it literally impossible to violate laws. The countries of the far north are infinitely orderly with no need for police at all. Go further south and the strength of the artifact decreases, until you’re at the edge of the Arctic Circle and it might be possible to violate a very minor law if your life was in danger. By the time you’re at the Equator, any kind of strong urge lets you violate most laws, and by the Tropic of Capricorn you can violate all but the most sacred laws with only a slight feeling of resistance. Finally you reach the nations of the South Pole, where the laws are enforced by nothing but a policeman’s gun.

Where would you want to live in such a world? It’s a hard question – I can imagine pretty much anything happening in this kind of scenario. But if I had to choose, I think I would take up residence somewhere around the latitude of California. I would want the laws to carry some force beyond just the barrel of a gun – a high trust society with consistent institutions is really important, and the more people follow the law without being watched the less incentive there is to create a police state.

But I also wouldn’t want to live exactly at the North Pole. And when I try to figure out why, I think it’s that civil disobedience is the acid that dissolves inadequate equilibria. Equilibria are inadequate relative to some set of rules; if you’re allowed to break the rules, they can become adequate again. Under this model, civil disobedience isn’t a secret weapon to save up for extreme cases like desegregation, it’s part of the search process we use to get better institutions.

If the artifact is a metaphor for the moral law, then my choice to live outside the North Pole suggests that I can consistently defy unjust laws a little, even if my decision will be universalized. I should expect some problems – groups I don’t like will use civil disobedience to promote causes I abhor, and the state will be less orderly and peaceful than it could be – but overall everyone will end up being better off. This doesn’t mean I have to support those groups or even excuse their criminality – part of the politics that decides the result is me expressing that they are bad and need to be punished – it just means that, given the chance to magically make all civil disobedience impossible in a way that applies equally to me and my enemies – I would reject it, or take it at some less-than-maximal value.

So this is my argument that Sci-Hub can be ethical. Universalized it would destroy the system – but the system is bad and needs to be destroyed. And although this would break the law, a very slight amount of law-breaking might be a beneficial solution to inadequate equilibria – one that could be endorsed even when universalized.


OT97: Dopen Thread

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server. Also:

1. Comment of the week is John Schilling on Google X Prize. There’s also a lot of good discussion in the free energy thread, though I can’t pick just one.

2. New ad for brain preservation company Nectome – see eg this article about their head researcher winning the Brain Preservation Prize. If you’re interested in helping, there’s a link for joining their team at the bottom of their site.

3. Nobody is under any obligation to comply with this, but if you want to encourage this blog to continue to exist, I request not to be cited in major national newspapers. I realize it’s meant well, and I appreciate the honor, but I’ve gotten a few more real-life threats than I’m entirely comfortable with, and I would prefer decreased publicity for now.

4. I recently put a couple of responses to an online spat up here because I needed somewhere to host them, unaware that this would email all several thousand people on my mailing list. Sorry about that. I’ve deleted some of them because of the whole “decreased publicity” thing, and I would appreciate help from anyone who knows how to make it so I can put random useful text up in an out-of-the-way place without insta-emailing everybody.

5. Thanks to Lanny for fixing this blog’s comment report function. You should now be able to report inappropriate comments again. If you can’t, please say so and we’ll try to figure out what went wrong.

Posted in Uncategorized | Tagged | 1,264 Comments

SSC Journal Club: Friston On Computational Mood

A few months ago, I wrote Toward A Predictive Theory Of Depression, which used the predictive coding model of brain function to speculate about mood disorders and emotions. Emotions might be a tendency toward unusually high (or low) precision of predictions:

Imagine the world’s most successful entrepreneur. Every company they found becomes a multibillion-dollar success. Every stock they pick shoots up and never stops. Heck, even their personal life is like this. Every vacation they take ends up picture-perfect and creates memories that last a lifetime; every date they go on leads to passionate soul-burning love that never ends badly.

And imagine your job is to advise this entrepreneur. The only advice worth giving would be “do more stuff”. Clearly all the stuff they’re doing works, so aim higher, work harder, run for President. Another way of saying this is “be more self-confident” – if they’re doubting whether or not to start a new project, remind them that 100% of the things they’ve ever done have been successful, odds are pretty good this new one will be too, and they should stop wasting their time second-guessing themselves.

Now imagine the world’s least successful entrepreneur. Every company they make flounders and dies. Every stock they pick crashes the next day. Their vacations always get rained-out, their dates always end up with the other person leaving halfway through and sticking them with the bill.

What if your job is advising this guy? If they’re thinking of starting a new company, your advice is “Be really careful – you should know it’ll probably go badly”. If they’re thinking of going on a date, you should warn them against it unless they’re really sure. A good global suggestion might be to aim lower, go for low-risk-low-reward steady payoffs, and wait on anything risky until they’ve figured themselves out a little bit more.

Corlett, Frith and Fletcher linked mania to increased confidence. But mania looks a lot like being happy. And you’re happy when you succeed a lot. And when you succeed a lot, maybe having increased confidence is the way to go. If happiness were a sort of global filter that affected all your thought processes and said “These are good times, you should press really hard to exploit your apparent excellence and not worry too much about risk”, that would be pretty evolutionarily useful. Likewise, if sadness were a way of saying “Things are going pretty badly, maybe be less confident and don’t start any new projects”, that would be useful too.

Depression isn’t normal sadness. But if normal sadness lowers neural confidence a little, maybe depression is the pathological result of biological processes that lower neural confidence a lot. To give a totally fake example which I’m not saying is what actually happens: if you run out of whatever neurotransmitter you use to signal high confidence, that would give you permanent pathological low confidence and might look like depression.

This would explain a lot about depression. It would explain why depressed people have such low motivation. It would explain why their movements are less forceful (“psychomotor retardation”). It would even explain why sense data are less distinct (depressed people literally see the world in washed out shades of grey). I thought this was plausible, but said I’d wait for real scientists to say the same thing before believing it too much.

What Is Mood: A Computational Perspective, by Clark, Watson, and Friston, is real scientists saying the same thing. Sort of. With a lot more rigor. Let’s look into it and see what they get.

Recent theoretical arguments have converged on the idea that emotional states reflect changes in the uncertainty about the somatic consequences of action (Joffily & Coricelli, 2013; Wager et al. 2015; Seth & Friston, 2016). This uncertainty refers to the precision with which motor and physiological states can be predicted. In this setting, negative emotions contextualise events that induce expectations of unpredictability, while positive emotions refer to events that resolve uncertainty and confer a feeling of control (Barrett & Satpute, 2013; Gu et al. 2013). This ties emotional states to the resolution of uncertainty and, through the biophysical encoding of precision, to neuromodulation and cortical gain control (Brown & Friston, 2012).

In summary, one can associate the valence of emotional stimuli with the precision of prior beliefs about the consequences of action. In this view, positively valenced brain states are necessarily associated with increases in the precision of predictions about the (controllable) future – or, more simply, predictable consequences of motor or autonomic behaviour. Conversely, negative emotions correspond to a loss of prior precision and a sense of helplessness and uncertainty about the consequences of action.

Here they’re saying that emotions – the day-to-day variation in whether we feel happy or sad – are meant to track what kind of environment we’re in. Is it a predictable environment that we should rush out to manipulate so we can harvest a big heap of utility? Or is it an unpredictable environment where we’re probably wrong about everything and should try to limit damage?

It’s not really clear from this quote, but later on they’re going to shift from happiness being “the world is predictable” to “the world is good”, which – sounds a lot more common-sensical. I think this has to do with Friston’s commitment to believing that uncertainty-resolution is the only drive, and every form of goodness is a sort of predictability in a way. See Monday’s post God Help Us, Let’s Try To Understand Friston On Free Energy – or don’t, for all the good it will do you.

Any hierarchical inference relies on hyperpriors. These furnish higher level predictions of the likely value of lower level parameters. From the above, one can see that important parameters are the precisions of prediction errors at high and low levels of the hierarchy (i.e. prior and sensory precision). These precisions reflect the confidence we place in our prior beliefs relative to sensory evidence. If emotional states in the brain reflect the precision of prior beliefs about the consequences of action, then distinct neuronal populations must also encode hyperpriors. In other words, short-term fluctuations in precision (i.e. emotional fluctuations) will themselves be constrained by hyperpriors encoding their long-term average (i.e. mood).

Here, we propose that mood corresponds to hyperpriors about emotional states, or confidence about the consequences of action. In other words, mood states reflect the prior expectation about precision that nuances (emotional) fluctuations in confidence or uncertainty. If emotion reflects interoceptive precision, and is biophysically encoded by neuromodulatory gain control, then this suggests that mood is neurobiologically encoded as the set-point of neuromodulator systems that determine synaptic gain control over principal cells reporting prediction errors at different levels of the interoceptive hierarchy. This set-point is the sensitivity of responses to prediction errors and has a profound and enduring effect on subsequent inference.

The traditional definition says that “mood is like climate, emotions are like weather”. I think they’re saying that moods – long-lasting states like being depressed or being a generally carefree person – are second-level priors about emotions, which are themselves first-level priors about actions.

So suppose you see a vaguely greenish piece of paper on the ground. If you’re happy, you have a prior for the world being good, and so you might be more likely to interpret it as possibly a dollar bill. And you have a prior for the world being exploitable, so you might be more likely to think you can reach down and take it and have an extra dollar. And if you do, and it really is a dollar bill, you might become happier, since you’ve gained a little evidence that your senses are trustworthy (you were right to perceive it as a dollar), the world is exploitable (your cunning plan to pick up the paper and gain $1 worked!), and you’re in the sort of high-reward environment where you should go off and do other exciting things.

On the other hand, if you’re sad, you have a prior for the world being bad, so you might expect it to be litter. You have a prior that you can’t really predict or affect the world, so it might not be worth bending down to pick it up – you might just end up disappointed. But if you did bend down to pick it up, and it did turn out to be a dollar bill, you might brighten up a little, just as the happy person would. You’ve gained a little bit of evidence that you’re in a nice part of the world where good things happen to you, and that you can execute a simple plan like picking up a dollar bill to gain money.

A depressed person would have the same prior that the world is bad and the paper is probably just litter. But if perhaps she did pick up the dollar, and felt tempted to conclude that the world was good and she should feel happy, a higher-level prior would kick in: even when it seems like the world is good, that’s wrong and you should ignore it. The world is never actually good. When good things happen that look like they should convince you that the world is good, those are just lies.
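
To make the two levels concrete, here’s a minimal sketch in Python with numbers I invented – nothing here is from the paper. The first function is plain Bayes: the prior on “the world is good/exploitable” shifts how the ambiguous greenish paper gets read. The second is a stand-in for the hyperprior: a precision-weighted update that decides how much a confirmed dollar is allowed to change your view of the world.

```python
def posterior_dollar(prior_dollar, likelihood_ratio=3.0):
    """Bayes by odds: P(dollar | greenish paper), assuming (made up)
    that the greenish look is 3x likelier if it really is a dollar."""
    odds = (prior_dollar / (1 - prior_dollar)) * likelihood_ratio
    return odds / (1 + odds)

# First level: the current emotional prior on the world being good
happy_prior, sad_prior = 0.30, 0.05
print(posterior_dollar(happy_prior))  # ~0.56 -- worth bending down for
print(posterior_dollar(sad_prior))    # ~0.14 -- probably just litter

def update_world_is_good(old_belief, evidence=0.8, hyperprior_weight=0.9):
    """Second level: mood as the weight on the old belief. A weight near
    1 means 'good news never counts' -- the depressive hyperprior."""
    return hyperprior_weight * old_belief + (1 - hyperprior_weight) * evidence

print(update_world_is_good(0.05))  # 0.125: the found dollar barely registers
```

The weighted average in the second function is doing the work of precision-weighting: crank hyperprior_weight toward 1 and no quantity of found dollars will ever convince the system the world is good.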

Friston et al bring up learned helplessness. Let’s say you shock a rat a lot. In fact, let’s say you’re even more cruel, and you constantly give the rat apparent escapes, only to close them off at the last second and keep shocking it. You give the rat what look like food pellets, but they turn out to just be rocks painted to look like food. You eventually gaslight the hell out of the rat. Finally, you stop doing this, and you give the rat some actual food and a way out, and the rat just doesn’t care. Yes, food and escape should be good things that make it feel like the world is reward-filled and exploitable, but it’s been let down so many times before that it assumes anything seemingly-good is a mirage.

Here’s the picture they eventually draw:

Depression is a prediction of bad outcomes with high confidence. Mania is a prediction of good outcomes with high confidence. Anxiety (or “agitated depression”) is a prediction of bad outcomes with low confidence. There’s a blank space where it looks like there ought to be an extra emotion; maybe God will release it later as DLC.

Friston et al speculate that these hyperpriors over emotions can either be genetically encoded, or “learned” over very long periods of consistent stimuli. For example, if your childhood is unbearably terrible, that might be long enough to “burn in” a high-confidence hyperprior that the world is always bad.

(they don’t mention this, but if prediction and action are as linked as everyone always says, I wonder if this would explain why people with terrible childhoods are always mysteriously sabotaging themselves into having adulthoods that are terrible in the exact same way – eg someone with an abusive alcoholic father marrying an abusive alcoholic).

These hyperpriors can reach the level of a mood disorder when they become resistant to feedback. They present a couple of different arguments for how this might happen. In one, a depressed person doesn’t feel any positive emotions, since there’s such a strong prior on everything being terrible that these never reach the level of plausibility. Since positive emotions are a useful tool for figuring out what makes you happy and urging you to do it, depressed people aren’t motivated to make themselves happy, and so never end up contradicting their bias towards believing they’re sad all the time. This fits really well with “behavioral activation”, a common psychotherapy where therapists tell depressed people to just go out and do happy things whether they want to or not, and which often helps the depression resolve.

In another, all the brain’s predictions are so low-precision that it can’t even properly predict interoceptive sensations (the sensations received from organs, eg the heartbeat). Maybe it will think “I guess maybe my heart will beat right now”, but it’s not the sort of clear confident precision that really enters into its mental model. That means these interoceptive sensations are always predicted slightly incorrectly, and this keeps the brain feeling like it’s sick and confused and the world is unpredictable.

They don’t seem to mention this, but it also seems intuitively plausible that the strong prior on negativity could prevent the perception of positive factors directly. You see the piece of paper on the street, you think “the world is always terrible, so no way that’s a dollar bill”, you pass it by, and you miss an opportunity to feel lucky and give yourself a tiny bit of pleasure.

The rest of the paper is just a survey of some findings from biology and neuroscience that seem to support this, though they’re not all very specific. For example, the HPA axis is dysregulated, which fits with predictive processing, but it also fits with everything else. The main part I found interesting was this:

In healthy systems, mood should be affected by the valence of tightly controlled prediction errors. Recent animal work has shown that positive prediction errors (receiving more food than expected), show a strong positive correlation with dopaminergic change in the nucleus accumbens (Hart et al. 2014) with corresponding changes in functional brain activity in humans during a financial reward task (Rutledge et al. 2010). Similarly, it has been shown that signal change in the anterior insula is significantly related to the magnitude of prediction error (Bossaerts, 2010). The pharmacological manipulation of these networks was recently demonstrated where participants were given electric shocks (harms) in exchange for financial reward (gains), and offered the option of increasing the number of shocks in exchange for greater reward. It was shown that citalopram increased harm-aversion, while levodopa made individuals more likely to harm themselves than others (Crockett et al. 2015). This fits nicely with our notion that serotonin levels (and other neuromodulators) encode expectations about likely negative outcomes and encourage the fulfilment of these predictions through action (i.e. low levels promote behaviour with negative outcomes).

Focus on this sentence: “serotonin encodes expectations about likely negative outcomes and encourages the fulfilment of these predictions through action”. Also this one: “Low levels [of serotonin] promote behavior with negative outcomes”.

I don’t think I’m misunderstanding this – the authors cite some evidence that low serotonin causes self-harm, and yes, it certainly does. But what does it mean to have a system for promoting behavior with negative outcomes? Why have a neurotransmitter whose level corresponds to how much you should be trying to do negative-outcome behavior? Surely the answer is just “never do this”.

The only way I can make sense of this is through the paragraph above talking about the shocks-for-money game, where SSRIs decrease people’s willingness to get shocks. It sounds like maybe Friston et al are claiming that we have a “willingness to be harmed” lever so that we can calculate how willing we are to accept some levels of harm in exchange for a greater good. In that case, maybe self-harm is what happens when the “willingness to be harmed” lever is set so high that random noise, the chance of getting other people’s attention, or just passing the time presents some tiny reward, and your harm-for-reward tradeoff rate is so high that even that tiny reward is worth the harm.
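
Here’s that lever as bare arithmetic, with invented numbers – just to show the shape of the claim, not anything from Crockett or Friston:

```python
def choice_value(reward, harm, harm_aversion):
    """Score an option as reward traded off against harm.
    harm_aversion is the 'willingness to be harmed' lever, inverted:
    high values refuse harm, values near zero accept it cheaply."""
    return reward - harm_aversion * harm

# Shocks-for-money game: is one more shock worth one more dollar?
print(choice_value(reward=1.0, harm=1.0, harm_aversion=2.0))  # -1.0: refuse
print(choice_value(reward=1.0, harm=1.0, harm_aversion=0.5))  #  0.5: accept

# Pathological setting: with the lever near zero, even a trivial payoff
# (attention, passing the time) makes self-harm come out "worth it"
print(choice_value(reward=0.05, harm=1.0, harm_aversion=0.01))  # 0.04
```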

More broadly, what should we think of this theory?

In retrospect, if you know Bayesian math, the idea of depression as a prior on bad outcomes seems pretty fricking obvious. I’m not even sure if it’s any different from the sort of stuff Aaron Beck was saying in the seventies. The big advance in this model is uniting “prior on bad outcomes” with “low precision of predictions / low neural confidence”. The low-precision part helps explain anergia, anhedonia, low motivation, psychomotor retardation, sensory washout, and probably (with a little more work) depression with psychotic features. Flipped around, it offers an explanation of psychomotor agitation, grandiosity, psychosis, and pareidolia in mania.

The only problem is that I still haven’t seen “prior on bad outcomes” and “low precision” really get unified. The authors seem to equivocate between “sadness means you’re in an unpredictable environment” and “sadness means you’re in a bad environment where everything sucks”. There is at least a little bit of work to add the hyperprior on top of the prior, so that we don’t get suspicious when we remember that depressed people are very confident in their depression. But it still seems like a world of low-precision predictions should be one where people just have no idea whether the paper in front of them is a dollar, not one where they’re really sure it isn’t. A world of high-precision predictions should look more like sitting in a bright room with a metronome, predicting each subsequent beat, rather than a world where everything is great and your life goes well. I’m not even sure this theory can explain why winning the lottery makes you happy rather than sad. It ought to make you think the world is really confusing and unpredictable (really? the thing you thought had a one in ten million chance happened?) – but in fact most lottery winners look pretty happy to me.
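
The difference is easy to state in toy form. Take two made-up Gaussian priors over “how good is the thing in front of me” – one confidently negative, one merely imprecise – and ask each for the probability that the paper is a dollar:

```python
from scipy.stats import norm

# Two ways to model a "depressed" prior over outcome value (0 = neutral)
bad_world = norm(loc=-2.0, scale=0.5)     # confident that things are bad
low_precision = norm(loc=0.0, scale=3.0)  # no confident prediction at all

# P(outcome value > 1), i.e. "that paper really is a dollar"
print(bad_world.sf(1.0))      # ~1e-9: really sure it isn't a dollar
print(low_precision.sf(1.0))  # ~0.37: genuinely no idea
```

Depressed people behave like the first distribution, but the low-precision story by itself predicts the second – which is the equivocation I can’t get past.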

If this is confusing, at least it isn’t a new confusion. We know that a big part of the free energy research agenda is to try to unify desire-satisfaction with uncertainty-resolution, and claim that expectation and desire are (somehow, despite how it looks) the same thing. If we just assume that works, for the sake of argument, it allows this paper to be an impressive unification of several lines of research on mood disorder into a coherent and actionable whole.

Posted in Uncategorized | Tagged | 93 Comments

God Help Us, Let’s Try To Understand Friston On Free Energy

I’ve been trying to delve deeper into predictive processing theories of the brain, and I keep coming across Karl Friston’s work on “free energy”.

At first I felt bad for not understanding this. Then I realized I wasn’t alone. There’s an entire not-understanding-Karl-Friston internet fandom, complete with its own parody Twitter account and Markov blanket memes.

From the journal Neuropsychoanalysis (which based on its name I predict is a center of expertise in not understanding things):

At Columbia’s psychiatry department, I recently led a journal club for 15 PET and fMRI researchers, PhDs and MDs all, with well over $10 million in NIH grants between us, and we tried to understand Friston’s 2010 Nature Reviews Neuroscience paper – for an hour and a half. There was a lot of mathematical knowledge in the room: three statisticians, two physicists, a physical chemist, a nuclear physicist, and a large group of neuroimagers – but apparently we didn’t have what it took. I met with a Princeton physicist, a Stanford neurophysiologist, and a Cold Spring Harbor neurobiologist to discuss the paper. Again blanks, one and all.

Normally this is the point at which I say “screw it” and give up. But almost all the most interesting neuroscience of the past decade involves this guy in one way or another. He’s the most-cited living neuroscientist, invented large parts of modern brain imaging, and received the prestigious Golden Brain Award (which is somehow a real thing). His Am I Autistic – An Intellectual Autobiography short essay, written in a weirdly lucid style and describing hijinks like deriving the Schrodinger equation for fun in school, is as consistent with genius as anything I’ve ever read.

As for free energy, it’s been dubbed “a unified brain theory” (Friston 2010), a key through which “nearly every aspect of [brain] anatomy and physiology starts to make sense” (Friston 2009), “[the source of] the ability of biological systems to resist a natural tendency to disorder” (Friston 2012), an explanation of how life “inevitably and emergently” arose from the primordial soup (Friston 2013), and “a real life version of Isaac Asimov’s psychohistory” (description here of Allen 2018).

I continue to hope some science journalist takes up the mantle of explaining this comprehensively. Until that happens, I’ve been working to gather as many perspectives as I can, to talk to the few neuroscientists who claim to even partially understand what’s going on, and to piece together a partial understanding. I am not at all the right person to do this, and this is not an attempt to get a gears-level understanding – just the kind of pop-science-journalism understanding that gives us a slight summary-level idea of what’s going on. My ulterior motive is to get to the point where I can understand Friston’s recent explanation of depression, relevant to my interests as a psychiatrist.

Sources include Dr. Alianna Maren’s How To Read Karl Friston (In The Original Greek), Wilson and Golonka’s Free Energy: How the F*ck Does That Work, Ecologically?, Alius Magazine’s interview with Friston, Observing Ideas, and (especially) the ominously named Wo’s Weblog.

From these I get the impression that part of the problem is that “free energy” is a complicated concept being used in a lot of different ways.

First, free energy is a specific mathematical term in certain Bayesian equations.

I’m getting this from here, which goes into much more detail about the math than I can manage. What I’ve managed to extract: Bayes’ theorem, as always, is the mathematical rule for determining how much to weigh evidence. The brain is sometimes called a Bayesian machine, because it has to create a coherent picture of the world by weighing all the different data it gets – everything from millions of photoreceptors’ worth of vision, to millions of cochlear receptors’ worth of hearing, to all the other senses, to logical reasoning, to past experience, and so on. But actually using Bayes on all this data quickly gets computationally intractable.

Free energy is a quantity used in “variational Bayesian methods”, a specific computationally tractable way of approximating Bayes’ Theorem. Under this interpretation, Friston is claiming that the brain uses this Bayes-approximation algorithm. Minimizing the free energy quantity in this algorithm is equivalent-ish to trying to minimize prediction error, trying to minimize the amount you’re surprised by the world around you, and trying to maximize the accuracy of your mental models. This sounds in line with standard predictive processing theories. Under this interpretation, the brain implements predictive processing through free energy minimization.
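
For what it’s worth, here’s the quantity itself in a toy two-state model – my own sketch, not anything from Friston’s papers. Variational free energy F(q) = E_q[log q(z) − log p(x,z)] is always at least the surprise −log p(x), with equality exactly when q is the true posterior, which is why minimizing it doubles as approximate Bayes:

```python
import numpy as np

# Toy generative model: hidden cause z in {0, 1}, observation x in {0, 1}
p_z = np.array([0.5, 0.5])            # prior over hidden causes
p_x_given_z = np.array([[0.9, 0.1],   # p(x | z=0)
                        [0.2, 0.8]])  # p(x | z=1)
x = 1                                 # what the senses report

def free_energy(q):
    """F(q) = E_q[log q(z) - log p(x, z)] = KL(q || posterior) - log p(x)."""
    joint = p_z * p_x_given_z[:, x]   # p(x, z) as a function of z
    return np.sum(q * (np.log(q) - np.log(joint)))

posterior = p_z * p_x_given_z[:, x]
evidence = posterior.sum()            # p(x)
posterior /= evidence

print(free_energy(np.array([0.5, 0.5])))  # ~1.26: suboptimal beliefs
print(free_energy(posterior))             # ~0.80: the minimum...
print(-np.log(evidence))                  # ~0.80: ...equals the surprise
```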

Second, free energy minimization is an algorithm-agnostic way of saying you’re trying to approximate Bayes as accurately as possible.

This comes from the same source as above. It also ends up equivalent-ish to all those other things like trying to be correct in your understanding of the world, and to standard predictive processing.

Third, free energy minimization is a claim that the fundamental psychological drive is the reduction of uncertainty.

I get this claim from the Alius interview, where Friston says:

If you subscribe to the premise that that creatures like you and me act to minimize their expected free energy, then we act to reduce expected surprise or, more simply, resolve uncertainty. So what’s the first thing that we would do on entering a dark room — we would turn on the lights. Why? Because this action has epistemic affordance; in other words, it resolves uncertainty (expected free energy). This simple argument generalizes to our inferences about (hidden or latent) states of the world — and the contingencies that underwrite those states of affairs.

The discovery that the only human motive is uncertainty-reduction might come as a surprise to humans who feel motivated by things like money, power, sex, friendship, or altruism. But the neuroscientist I talked to about this says I am not misinterpreting the interview. The claim really is that uncertainty-reduction is the only game in town.

In a sense, it must be true that there is only one human motivation. After all, if you’re Paris of Troy, getting offered the choice between power, fame, and sex – then some mental module must convert these to a common currency so it can decide which is most attractive. If that currency is, I dunno, dopamine in the striatum, then in some reductive sense, the only human motivation is increasing striatal dopamine (don’t philosophize at me, I know this is a stupid way of framing things, but you know what I mean). Then the only weird thing about the free energy formulation is identifying the common currency with uncertainty-minimization, which is some specific thing that already has another meaning.

I think the claim (briefly mentioned eg here) is that your brain hacks eg the hunger drive by “predicting” that your mouth is full of delicious food. Then, when your mouth is not full of delicious food, it’s a “prediction error”, it sets off all sorts of alarm bells, and your brain’s predictive machinery is confused and uncertain. The only way to “resolve” this “uncertainty” is to bring reality into line with the prediction and actually fill your mouth with delicious food. On the one hand, there is a lot of basic neuroscience research that suggests something like this is going on. On the other, the author of Wo’s Weblog writes about this further:

The basic idea seems to go roughly as follows. Suppose my internal probability function Q assigns high probability to states in which I’m having a slice of pizza, while my sensory input suggests that I’m currently not having a slice of pizza. There are two ways of bringing Q in alignment with my sensory input: (a) I could change Q so that it no longer assigns high probability to pizza states, (b) I could grab a piece of pizza, thereby changing my sensory input so that it conforms to the pizza predictions of Q. Both (a) and (b) would lead to a state in which my (new) probability function Q’ assigns high probability to my (new) sensory input d’. Compared to the present state, the sensory input will then have lower surprise. So any transition to these states can be seen as a reduction of free energy, in the unambitious sense of the term.

Action is thus explained as an attempt to bring one’s sensory input in alignment with one’s representation of the world.

This is clearly nuts. When I decide to reach out for the pizza, I don’t assign high probability to states in which I’m already eating the slice. It is precisely my knowledge that I’m not eating the slice, together with my desire to eat the slice, that explains my reaching out.

There are at least two fundamental problems with the simple picture just outlined. One is that it makes little sense without postulating an independent source of goals or desires. Suppose it’s true that I reach out for the pizza because I hallucinate (as it were) that that’s what I’m doing, and I try to turn this hallucination into reality. Where does the hallucination come from? Surely it’s not just a technical glitch in my perceptual system. Otherwise it would be a miraculous coincidence that I mostly hallucinate pleasant and fitness-increasing states. Some further part of my cognitive architecture must trigger the hallucinations that cause me to act. (If there’s no such source, the much discussed “dark room problem” arises: why don’t we efficiently minimize sensory surprise (and thereby free energy) by sitting still in a dark room until we die?)

The second problem is that efficient action requires keeping track of both the actual state and the goal state. If I want to reach out for the pizza, I’d better know where my arms are, where the pizza is, what’s in between the two, and so on. If my internal representation of the world falsely says that the pizza is already in my mouth, it’s hard to explain how I manage to grab it from the plate.

A closer look at Friston’s papers suggests that the above rough proposal isn’t quite what he has in mind. Recall that minimizing free energy can be seen as an approximate method for bringing one probability function Q close to another function P. If we think of Q as representing the system’s beliefs about the present state, and P as a representation of its goals, then we have the required two components for explaining action. What’s unusual is only that the goals are represented by a probability function, rather than (say) a utility function. How would that work?

Here’s an idea. Given the present probability function Q, we can map any goal state A to the target function Q^A, which is Q conditionalized on A — or perhaps on certain sensory states that would go along with A. For example, if I successfully reach out for the pizza, my belief function Q will change to a function Q^A that assigns high probability to my arm being outstretched, to seeing and feeling the pizza in my fingers, etc. Choosing an act that minimizes the difference between my belief function and Q^A is then tantamount to choosing an act that realizes my goal.

This might lead to an interesting empirical model of how actions are generated. Of course we’d need to know more about how the target function Q^A is determined. I said it comes about by (approximately?) conditionalizing Q on the goal state A, but how do we identify the relevant A? Why do I want to reach out for the pizza? Arguably the explanation is that reaching out is likely (according to Q) to lead to a more distal state in which I eat the pizza, which I desire. So to compute the proximal target probability Q^A we presumably need to encode the system’s more distal goals and then use techniques from (stochastic) control theory, perhaps, to derive more immediate goals.

That version of the story looks much more plausible, and much less revolutionary, than the story outlined above. In the present version, perception and action are not two means to the same end — minimizing free energy. The free energy that’s minimized in perception is a completely different quantity than the free energy that’s minimized in action. What’s true is that both tasks involve mathematically similar optimization problems. But that isn’t too surprising given the well-known mathematical and computational parallels between conditionalizing and maximizing expected utility.
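
Here’s Wo’s reconstruction as a sketch, with invented numbers: encode the goal as a target belief Q^A, predict the belief each candidate action would produce, and choose the action whose predicted belief is closest (in KL divergence) to the target:

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return np.sum(p * np.log(p / q))

# Predicted belief over [eating pizza, not eating] after each action
predicted = {
    "reach for pizza": np.array([0.90, 0.10]),
    "sit still":       np.array([0.05, 0.95]),
}

# Q^A: the current belief function conditionalized on the goal state
q_goal = np.array([0.99, 0.01])

best = min(predicted, key=lambda a: kl(predicted[a], q_goal))
print(best)  # "reach for pizza"
```

On this version the goals live in a separate target distribution, so – as Wo says – it’s ordinary goal pursuit with goals written as probability functions, not perception and action minimizing one and the same quantity.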

It’s tempting to throw this out entirely. But part of me does feel like there’s a weird connection between curiosity and every other drive. For example, sex seems like it should be pretty basic and curiosity-resistant. But how often do people say that they’re attracted to someone “because he’s mysterious”? And what about the Coolidge Effect (known in the polyamory community as “new relationship energy”)? After a while with the same partner, sex and romance lose their magic – only to reappear if the animal/person hooks up with a new partner. Doesn’t this point to some kind of connection between sexuality and curiosity?

What about the typical complaint of porn addicts – that they start off watching softcore porn, find after a while that it’s no longer titillating, move on to harder porn, and eventually have to get into really perverted stuff just to feel anything at all? Is this a sort of uncertainty reduction?

The only problem is that this is a really specific kind of uncertainty reduction. Why should “uncertainty about what it would be like to be in a relationship with that particular attractive person” be so much more compelling than “uncertainty about what the middle letter of the Bible is”, a question which almost no one feels the slightest inclination to resolve? The interviewers ask Friston something sort of similar, referring to some experiments where people are happiest not when given easy things with no uncertainty, nor confusing things with unresolvable uncertainty, but puzzles – things that seem confusing at first, but actually have a lot of hidden order within them. They ask Friston whether he might want to switch teams to support a u-shaped theory where people like being in the middle between too little uncertainty or too much uncertainty. Friston…does not want to switch teams.

I do not think that “different laws may apply at different levels”. I see a singular and simple explanation for all the apparent dialectics above: they are all explained by minimization of expected free energy, expected surprise or uncertainty. I feel slightly puritanical when deflating some of the (magical) thinking about inverted U curves and “sweet spots”. However, things are just simpler than that: there is only one sweet spot; namely, the free energy minimum at the bottom of a U-shaped free energy function […]

This means that any opportunity to resolve uncertainty itself now becomes attractive (literally, in the mathematical sense of a random dynamical attractor) (Friston, 2013). In short, as nicely articulated by (Schmidhuber, 2010), the opportunity to answer “what would happen if I did that” is one of the most important resolvers of uncertainty. Formally, the resolution of uncertainty (aka intrinsic motivation, intrinsic value, epistemic value, the value of information, Bayesian surprise, etc. (Friston et al., 2017)) corresponds to salience. Note that in active inference, salience becomes an attribute of an action or policy in relation to the lived world. The mathematical homologue for contingencies (technically, the parameters of a generative model) corresponds to novelty. In other words, if there is an action that can reduce uncertainty about the consequences of a particular behavior, it is more likely to be expressed.

Given these imperatives, then the two ends of the inverted U become two extrema on different dimensions. In a world full of novelty and opportunity, we know immediately there is an opportunity to resolve reducible uncertainty and will immediately embark on joyful exploration — joyful because it reduces uncertainty or expected free energy (Joffily & Coricelli, 2013). Conversely, in a completely unpredictable world (i.e., a world with no precise sensory evidence, such as a dark room) there is no opportunity and all uncertainty is irreducible — a joyless world. Boredom is simply the product of explorative behavior; emptying a world of its epistemic value — a barren world in which all epistemic affordance has been exhausted through information seeking, free energy minimizing action.

Note that I slipped in the word “joyful” above. This brings something interesting to the table; namely, the affective valence of shifts in uncertainty — and how they are evaluated by our brains.

The only thing at all I am able to gather from this paragraph – besides the fact that apparently Karl Friston cites himself in conversation – is the Schmidhuber reference, which is actually really helpful. Schmidhuber is the guy behind eg the Formal Theory Of Fun & Creativity Explains Science, Art, Music, Humor, in which all of these are some form of taking a seemingly complex domain (in the mathematical sense of complexity) and reducing it to something simple (discovering a hidden order that makes it more compressible). I think Friston might be trying to hint that free energy minimization works in a Schmidhuberian sense where it applies to learning things that suddenly make large parts of our experience more comprehensible at once, rather than just “Here are some numbers: 1, 5, 7, 21 – now you have less uncertainty over what numbers I was about to tell you, isn’t that great?”
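
As a crude illustration of the Schmidhuberian point – using an off-the-shelf compressor as a stand-in for whatever the brain actually does – data with hidden order collapses to a short description, while random data doesn’t, so only the former offers any compression progress to enjoy:

```python
import zlib
import random

def model_size(data: bytes) -> int:
    """Crude proxy for how compactly a world-model can encode the data."""
    return len(zlib.compress(data))

patterned = ("1 5 7 21 " * 200).encode()  # hidden order: one short rule
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1800))  # nothing to find

print(model_size(patterned))  # small: the stream was compressible all along
print(model_size(noisy))      # large: no order to discover, no "fun"
```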

I agree this is one of life’s great joys, though maybe me and Karl Friston are not a 100% typical subset of humanity here. Also, I have trouble figuring out how to conceptualize other human drives like sex as this same kind of complexity-reduction joy.

One more concern here – a lot of the things I read about this equivocate between “model accuracy maximization” and “surprise minimization”. These end up in really different places. Model accuracy maximization sounds like curiosity – you go out and explore as much of the world as possible to get a model that precisely matches reality. Surprise minimization sounds like locking yourself in a dark room with no stimuli, then predicting that you will be in a dark room with no stimuli, and never being surprised when your prediction turns out to be right. I understand Friston has written about the so-called “dark room problem”, but I haven’t had a chance to look into it as much as I should, and I can’t find anything that takes one or the other horn of the equivocation and says “definitely this one”.

Fourth, okay, all of this is pretty neat, but how does it explain all biological systems? How does it explain the origin of life from the primordial soup? And when do we get to the real-world version of psychohistory? In his Alius interview, Friston writes:

I first came up with a prototypical free energy principle when I was eight years old, in what I have previously called a “Gerald Durrell” moment (Friston, 2012). I was in the garden, during a gloriously hot 1960s British summer, preoccupied with the antics of some woodlice who were frantically scurrying around trying to find some shade. After half an hour of observation and innocent (childlike) contemplation, I realized their “scurrying” had no purpose or intent: they were simply moving faster in the sun — and slower in the shade. The simplicity of this explanation — for what one could artfully call biotic self-organization — appealed to me then and appeals to me now. It is exactly the same principle that underwrites the ensemble density dynamics of the free energy principle — and all its corollaries.

How do the woodlice have anything to do with any of the rest of this?

As best I can understand (and I’m drawing from here and here again), this is an ultimate meaning of “free energy” which is sort of like a formalization of homeostasis. It goes like this: consider a probability distribution of all the states an organism can be in. For example, your body can be at (90 degrees F, heart rate 10), (90 degrees F, heart rate 70), (98 degrees F, heart rate 10), (98 degrees F, heart rate 70), or any of a trillion other different combinations of possible parameters. But in fact, living systems successfully restrict themselves to tiny fractions of this space – if you go too far away from (98 degrees F, heart rate 70), you die. So you have two probability distributions – the maximum-entropy one where you could have any combination of heart rate and body temperature, and the one your body is aiming for with a life-compatible combination of heart rate and body temperature. Whenever you have a system trying to convert one probability distribution into another probability distribution, you can think of it as doing Bayesian work and following free energy principles. So free energy seems to be something like just a formal explanation of how certain systems display goal-directed behavior, without having to bring in an anthropomorphic or teleological concept of “goal-directedness”.
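
A toy version of the state-space point, with made-up numbers: compare the entropy of “any body temperature is equally likely” – the maximum-entropy distribution physics would drift you toward – with the sharply peaked distribution a living body actually maintains:

```python
import numpy as np

temps = np.arange(70, 111)  # candidate body temperatures, degrees F

# Maximum entropy: every temperature equally likely
uniform = np.ones(len(temps)) / len(temps)

# What a living organism maintains: tightly peaked near 98.6 F
alive = np.exp(-0.5 * ((temps - 98.6) / 0.7) ** 2)
alive /= alive.sum()

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

print(entropy(uniform))  # ~3.7 nats: anything goes
print(entropy(alive))    # ~1.1 nats: confined to the viable sliver
```

Keeping yourself in the second distribution rather than relaxing into the first is the bacterium-level version of free energy minimization.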

Friston mentions many times that free energy is “almost tautological”, and one of the neuroscientists I talked to who claimed to half-understand it said it should be viewed more as an elegant way of looking at things than as a scientific theory per se. From the Alius interview:

The free energy principle stands in stark distinction to things like predictive coding and the Bayesian brain hypothesis. This is because the free energy principle is what it is — a principle. Like Hamilton’s Principle of Stationary Action, it cannot be falsified. It cannot be disproven. In fact, there’s not much you can do with it, unless you ask whether measurable systems conform to the principle.

So we haven’t got a real-life version of Asimov’s psychohistory, is what you’re saying?

But also:

The Bayesian brain hypothesis is a corollary of the free energy principle and is realized through processes like predictive coding or abductive inference under prior beliefs. However, the Bayesian brain is not the free energy principle, because both the Bayesian brain hypothesis and predictive coding are incomplete theories of how we infer states of affairs.

This missing bit is the enactive compass of the free energy principle. In other words, the free energy principle is not just about making the best (Bayesian) sense of sensory impressions of what’s “out there”. It tries to understand how we sample the world and author our own sensations. Again, we come back to the woodlice and their scurrying — and an attempt to understand the imperatives behind this apparently purposeful sampling of the world. It is this enactive, embodied, extended, embedded, and encultured aspect that is lacking from the Bayesian brain and predictive coding theories; precisely because they do not consider entropy reduction […]

In short, the free energy principle fully endorses the Bayesian brain hypothesis — but that’s not the story. The only way you can change “the shape of things” — i.e., bound entropy production — is to act on the world. This is what distinguishes the free energy principle from predictive processing. In fact, we have now taken to referring to the free energy principle as “active inference”, which seems closer to the mark and slightly less pretentious for non-mathematicians.

So maybe the free energy principle is the unification of predictive coding of internal models, with the “action in the world is just another form of prediction” thesis mentioned above? I guess I thought that was part of the standard predictive coding story, but maybe I’m wrong?

Overall, the best I can do here is this: the free energy principle seems like an attempt to unify perception, cognition, homeostasis, and action.

“Free energy” is a mathematical concept that represents the failure of some things to match other things they’re supposed to be predicting.

The brain tries to minimize its free energy with respect to the world, ie minimize the difference between its models and reality. Sometimes it does that by updating its models of the world. Other times it does that by changing the world to better match its models.

Perception and cognition are both attempts to create accurate models that match the world, thus minimizing free energy.

Homeostasis and action are both attempts to make reality match mental models. Action tries to get the organism’s external state to match a mental model. Homeostasis tries to get the organism’s internal state to match a mental model. Since even bacteria are doing something homeostasis-like, all life shares the principle of being free energy minimizers.

So life isn’t doing four things – perceiving, thinking, acting, and maintaining homeostasis. It’s really just doing one thing – minimizing free energy – in four different ways – with the particular way it implements this in any given situation depending on which free energy minimization opportunities are most convenient. Or something.

This might be useful in some way? Or it might just be a cool philosophical way of looking at the world? Or maybe something in between? Or maybe a meaningless way of looking at the world? Or something? Somebody please help?


Discussion question for machine ethics researchers – if the free energy principle were right, would it disprove the orthogonality thesis? Might it be impossible to design a working brain with any goal besides free energy reduction? Would anything – even a paperclip maximizer – have to start by minimizing uncertainty, and then add paperclip maximization in later as a hack? Would it change anything if it did?

SSC Meetup: Bay Area 3/3

WHEN: 3:33 PM on Saturday, 3/3

WHERE: Berkeley campus, meet at the open space beside the intersection of West and Free Speech. Please disregard any kabbalistic implications of the meetup cross-streets.

WHO: Special guest is Gwern of gwern.net. Also me, Katja, and the usual Bay Area crowd.

WHY: Cause it’ll be fun. A lot of people have said before that they considered not going because they “don’t think they’re the typical SSC reader” or they’re “not sure they’d be able to keep up” or things like that. In the past, these people have usually had a good time and encouraged me to post something like this encouraging other people like them to come. The more unique and atypical people we get, the more fun it is getting to talk and exchange ideas. It’s a pretty low-key environment and very open to just hanging out on the edges of interesting conversations until you find one you’re comfortable joining. Also, you’ll probably be less socially awkward than I am, and it’s my meetup, so everyone has to tolerate me, so they’ll have to tolerate you too.

HOW: We haven’t done well with cafes or other traditional meetup spaces in the past, so we’ll just meet outside and sit on the grass. Bring blankets / refreshments if you want them. If it’s raining, we’ll meet just inside the Natural History Museum nearby and figure out what to do from there.

See you there!

Posted in Uncategorized | Tagged | 17 Comments

Links 2/18: Link Biao Incident

Punding, an uncommon side effect of abusing amphetamines and other dopaminergic drugs, involves “compulsive fascination with and performance of repetitive, mechanical tasks, such as assembling and disassembling, collecting, or sorting household objects, [for example] collecting pebbles and lining them up as perfectly as possible, disassembling wristwatches and putting them back together again, building hundreds of small wooden boxes”, etc. Also: “They are not generally aware that there is a compulsive element, but will continue even when they have good reason to stop. Rylander describes a burglar who started punding, and could not stop, even though he was suffering from an increasing apprehension of being caught.”

After the US repealed net neutrality provisions, the state of Montana has made its own rule demanding neutrality from providers receiving state contracts. Not sure how much this matters for broader society – or how many internet providers the average Montana state government office has to choose from, or what they’ll do if none of them agree to be neutral.

Surprisingly, Tibetan monks are more afraid of death than any other group studied.

Trump places tariffs on solar panels (and washing machines) in a move some people warn could set back renewable energy (and laundry, I guess). Anyone have an explanation for how focusing on solar in particular isn’t just gratuitously evil? (commenters answer)

Less-covered spaceflight news: New Zealand startup Rocket Lab reaches orbit with a low-cost rocket using an electric-pump driven engine and 3D-printed parts. In more depressing space news: Google Lunar X Prize has officially announced that everyone loses and they will not be extending the contest further.

Was looking into tinnitus for a patient recently and came across this weird (temporary?) tinnitus treatment on Reddit that everyone says works. A possible explanation for why it might work, here, gives interesting insight into (some) tinnitus mechanisms.

One reason the US doesn’t use the metric system: the scientist shipped in from Europe to testify to Congress on the issue was kidnapped by pirates. Bonus: the pirates may also have got one of the six Standard Kilograms.

NSA removes “honesty” and “openness” from its list of core values.

Paul Addis was a San Francisco activist and attorney famous for setting the Burning Man man on fire early to protest the corporatization of the event. Burning Man’s founder said Addis’ arson was “the single most pure act of radical self-expression to occur at this massive hipster tail-gate party in over a decade” – but Addis was sentenced to four years in prison for arson anyway. After release, he committed suicide by jumping in front of a BART train.

More from the Department Of Weird Blockchain Projects Named Luna: “Luna DNA” allows users to upload their genetic data in exchange for a crypto-token called “Luna Coin”. What could possibly go wrong?

“[Aristotle has] a slight but consistent and habitual penchant in the corpus for humorous verbal play…there seems to be only about one pun per score of Bekker pages, but…there is no class or area of study in which Aristotle totally avoids punning.” (h/t Lou Keep)

New Statesman on Jacob Rees-Mogg, the Tories’ answer to Jeremy Corbyn: “He has never been seen (except perhaps by his wife) in anything other than a suit and tie. He speaks in sonorous Edwardian English and is unfailingly courteous…[In primary school], he played the stock markets using a £50 inheritance from a relative, standing up at the General Electric Company’s annual meeting and castigating a board – that included his father – for the firm’s “pathetic” dividend. A contemporary newspaper photograph showed the precocious 12-year-old solemnly reading the Financial Times beside his teddy bears…[He was married] in Canterbury Cathedral, the archbishop having authorised a Tridentine mass in ecclesiastical Latin in light of Rees-Mogg’s fervent Catholicism. The couple now have six children aged between seven months and ten, all bearing the names of Catholic popes and saints.” From his Wikipedia page: “Speaking in July 2017, Rees-Mogg conceded that ‘I’ve made no pretence to be a modern man at all, ever'”. Despite being by all accounts a colorful and likeable character, he doesn’t seem very competent and his opinions are out-of-touch and (imho) pretty dumb. Based on Jeremy Corbyn’s career path, Rees-Mogg will probably be Prime Minister within a year. Article is also interesting as an example of how left-leaning media has developed a counterproductive habit of sometimes covering the Right in terms of “We all know we should dislike this person, but look how cool they are!” This seems new and surprising and seems to require an explanation, maybe in terms of outgroup-fargroup dynamics.

After a lot of work, some people have been able to find an economic argument for why open borders would be a bad idea – but it still implies “a case against the stringency of current [immigration] restrictions” (though see here).

Credentialism watch: MIT is launching a new master’s program in economics that doesn’t require a college or high school degree. Applicants need to take some free online courses and pass some non-free online tests, and then if they do well they can move on to the in-school part of the course. The program is being offered in affiliation with a group studying development economics and poverty, and is at least partly aimed at poor students from Third World countries. But Americans are already taking advantage of it, and it has more promise than most things in this sphere to help increase social mobility and bring down education costs.

Related: congratulations to Trinity College in Connecticut, the first (?) US college to break the $70,000/year price barrier. $100K or bust!

Related, if you think about it: It’s sometimes reported that SAT score and college GPA “only” correlate at a modest 0.35. But a book on education (h/t Timofey Pnin) points out that this is because higher-SAT-scoring students go to more elite colleges and major in more difficult subjects. Once this and some other confounders are adjusted for, the correlation rises to 0.68.

Contrary to what you might have learned in school, the tallest mountain in the solar system isn’t Olympus Mons. It’s Rheasilvia, a mountain on the asteroid Vesta whose height is almost 9% of the total radius of the asteroid.

Amazon enters the health care sector, so far just in order to provide health care for its own employees and those of a few other participating large companies. Claims that this mission will make it “free from profit-making incentives”, though some might ask how exactly profit-making incentives differ from cost-cutting incentives, which they’ll definitely have. Shares in major insurance companies fell 5% on the announcement. Interesting that the US health system has accidentally incentivized corporations to figure out solutions to rising health care costs, but I am not sure this is actually possible under current regulations other than by just providing worse care – the one cost-cutting measure that always works.

Study claims that pain tolerance predicts how many friends you have, although the theorized mechanism is something about the opiate system, and not just that social interaction is inherently painful and the number of friends you have depends on your ability to tolerate it (what does it say about me that this was my first guess?) Anyhow, Reddit seems to have mostly debunked it, which pretty closely matches my expectations for how this sort of result would fare.

For reasons lost to time, apprentice attorneys in the UK are called “devils”, their apprenticeship is called a “devilling”, and their supervisor is called a “devil-master”. May be related to similar practice of calling apprentice printers printer’s devils, likewise mysterious in origin. Theories include puns (they always got covered in ink, so they were practicing “the black arts”), superstition (originally people thought printing was really creepy and possibly satanic because you could create a book full of perfect identical letters), and racism (one of the first printer’s apprentices was an African, and everyone just assumed the only reasonable explanation for a person having black skin was that they were the Devil). A final theory is that printers’ devils were responsible for managing the box of discarded or broken letters, colorfully known as a hellbox. (h/t Eric Rall)

Campus free speech watch: FIRE demands college release its records about its firing of a professor who vocally supported Black Lives Matter.

Hawaiian Redditors describe their experiences receiving the false-alarm broadcast that Hawaii was about to be nuked. Some of these stories must be fake, but they’re still fun to read.

Your Twitter Followers Are Probably Bots. Everyone important, including honest people who don’t deliberately pay for bots to follow them, probably has bots following them on Twitter, mostly because bots follow a bunch of famous people in order to look more like real accounts. There are some techniques you can use to determine how many of your followers are bots. Complete with an analysis of how a New York attorney general who’s conducting investigations into people with fake followers on Twitter has…a bunch of fake followers on Twitter.

Marginal Revolution commenters on why automating trucking will take longer than you think.

A lot of big nutrition studies coming out recently. I’m not going to describe the results because there’s a lot of debate on how they should best be described and I don’t want to take a position without much more room to explain myself. But one is a randomized controlled trial on how adding sugar to the diet affects insulin sensitivity – this is really impressive since (for what I assume are ethical/IRB reasons) nobody had ever studied this via RCT before. The other is a large sample size study testing low-fat vs. low-carb diets over a long period with high compliance, partly sponsored by Gary Taubes-affiliated Nutritional Science Initiative.

Contrary to previous research, newer research suggests that increased incentives (eg paying people for a good score) do not increase adult IQ test performance. Related: IQ predicts self-control in chimpanzees.

Did you know: Blue is a dating site for verified “blue check” Twitter users only. All we need is a policy of giving the children of two bluecheck users their own bluecheck and then we can have a true hereditary aristocracy.

Close to my heart: the relationship between sensory processing problems and obsessive-compulsive symptoms.

List Of Substances Administered To Adolf Hitler. If you’ve ever thought “Man, some of that Nazi stuff sounds like it came from a guy who was on a cocktail of methamphetamine, cocaine, adrenaline, testosterone, strychnine, heroin, oxycodone, morphine, barbiturates, and human fecal bacteria”, well, you’re not wrong.

Related: the story of the most-unfortunately-named person in American history: Dr. Gay Hitler.

New meta-analysis: no evidence mindfulness works for anything. I suspect this is true the way it’s commonly practiced and studied (“if you’re feeling down, listen to this mindfulness tape for five minutes a day!”), less true for more becoming-a-Buddhist-monk-level stuff.

KnowYourMeme: “Hamilkin refers to a subculture of people who identify with characters from the musical Hamilton to the point where they believe they are those characters, spiritually.” Sort of wonder if closer examination would reveal this to consist entirely of eight very vocal twelve-year-olds, three schizophrenics, several thousand trolls pretending to believe it for the lolz, and a bunch of writers exaggerating it for clicks – but I also sort of wonder this about flat-earthers and the alt-right.

More in the “contra poverty traps” research agenda: children whose parents are kicked off disability insurance are less likely to use disability insurance themselves as adults.

George Strait, the best-selling country singer of all time, is Jeff Bezos’ cousin. Also interesting: “Bezos” is a Cuban name, although Jeff himself is not of Cuban descent and got it from his stepfather.

The naming convention for the Trojan asteroids dictates that asteroids in front of Jupiter are named for Greek heroes from the Trojan War, and asteroids behind Jupiter are named for Trojan heroes. Two asteroids – 617 Patroclus and 624 Hektor – were named before the convention arose and are “on the wrong side” (h/t Alice Maz)

Trump is considering replacing some food stamp benefits with delivery of pre-prepared food boxes – I’ve previously written here about reasons I think something like this is a bad idea.

Just when everyone agreed ego depletion was debunked and dead, Baumeister et al strike back with a pre-registered study that continues to show the effect. Haven’t gotten a chance to look at it seriously yet, but glad that pre-registration etc are catching on.

Redditors who work in gun shops talk about their job and recount their weird experiences.

Russian lifehack: “Moscow residents say they have found that the only way to get the [government] to clear snow is to write the name of opposition leader Alexei Navalny on it”. Sort of related: in the 1970s, the West Virginia government refused to fund a necessary bridge in the town of Vulcan. The people of Vulcan appealed to the USSR to provide the funding; after the USSR expressed interest in helping, West Virginia approved it immediately.

Greg Cochran: most likely cause of the global decrease in frog populations is a fungal disease, possibly spread by researchers investigating the most likely cause of the global decrease in frog populations.

Related to a discussion from a while ago: update in the field of sexual-orientation-detecting neural networks replicates that they are clearly more accurate than humans in using faces to guess whether or not people are gay. Their claim that, given five images, they can detect gay men with 91% accuracy seems unbelievable; I’m waiting to see further research.

Peter at Bayesian Investor responds to my predictions for the next five years. Related: M at Unremediated Genderspace responds to my article about categorization systems and gender.

Lincoln Network releases their survey on viewpoint diversity in the tech industry. Key points include a self-described moderate saying “I’ve never heard of anyone who left tech because of their views. That’s ridiculous”, and 59% of self-identified very conservative people saying they know people who avoided or left jobs in tech because they felt they weren’t welcome due to their political views. People in five out of six political categories (including liberals, but not very-liberals) say they feel less comfortable sharing viewpoints with colleagues after the Google diversity memo issue. Keep in mind high likelihood of sampling bias, though this shouldn’t affect results aggregated by political group as much.

The Tiffany Problem is an issue sometimes encountered by authors and other creative types, where trying to be realistic makes a work feel more unrealistic. Named after a medievalist who included a character named Tiffany (common medieval name), only to be told her book was unrealistic because obviously nobody would be named that back then.

In 1957, Mad Magazine published an article on a made-up system of measurement written by a 19-year-old Donald Knuth.

Nobody really knows what the languages of the now-extinct Tasmanian Aborigines sounded like, but various scholars have created palawa kani, a conlang intended to resemble them as much as possible, and it’s even caught on a little in Tasmanian schools and government. Also, am I just pattern-matching, or do a suspicious number of unrelated languages use some version of “mina” to mean “me”?

Related: fascinated by this unsourced claim on Wikipedia that the Ewe of West Africa believe themselves to be descendants of the one guy who didn’t participate in building the Tower of Babel, and their language to be the perfect language. Anyone know more about this belief, or how common stories like these are for different groups’ languages?

California state government is considering a bill that would mandate very strong pro-housing pro-development policies in almost all major urban areas. By the usual boring standard of state government issues, this is an unfathomably huge deal and could end the housing crisis single-handedly. Possible unintended consequence: since it works by mandating pro-development policies within a certain radius of mass transit, expect no more mass transit ever if it passes. Other possible unintended consequence: I’m less sure than many of my friends that pro-development policies are always good in all cases – but right now the pendulum is so far in the other direction that I’m happy to have one state shake things up a little (okay, maybe a lot) and put the fear of God into NIMBYs so they’ll compromise more elsewhere. Needless to say, Berkeleyans are already writing op-eds about how it will “cause massive damage to the global environment for thousands of years, possibly enough to tip the balance to the extinction of the entire human race.” No word yet on whether the bill has any chance of getting passed in the real world. Some discussion on Marginal Revolution.

China cracks down on funeral strippers.

Posted in Uncategorized | Tagged | 659 Comments

SSC Journal Club: Cipriani On Antidepressants

I.

The big news in psychiatry this month is Cipriani et al’s Comparative efficacy and acceptability of 21 antidepressant drugs for the acute treatment of adults with major depressive disorder: a systematic review and network meta-analysis. It purports to be the last word on the “do antidepressants work?” question, and a first (or at least early) word on the under-asked “which antidepressants are best?” question.

This study is very big, very sophisticated, and must have taken a very impressive amount of work. It meta-analyzes virtually every RCT of antidepressants ever done – 522 in all – then throws every statistical trick in the book at them to try to glob them together into a coherent account of how antidepressants work. Its authors include Andrea Cipriani, one of the most famous research psychiatrists in the world – and John Ioannidis, one of the most famous statisticians. It’s been covered in news sources around the world: my favorite headline is Newsweek’s unsubtle Antidepressants Do Work And Many More People Should Take Them, but honorable mention to Reuters’ Study Seeks To End Antidepressant Debate: The Drugs Do Work.

Based on the whole “we’ve definitely proven antidepressants work” vibe in coverage, you would think that they’d directly contradicted Irving Kirsch’s claim that antidepressants aren’t very effective. I’ve mentioned my disagreements with Kirsch before, but it would be nice to have a definitive refutation of his work. This study isn’t really it. Both Kirsch and Cipriani agree that antidepressants have statistically significant effects – they’re not literally doing nothing. The main debate was whether they were good enough to be worth it. Kirsch argues they aren’t, using a statistic called “effect size”. Cipriani uses a different statistic called “odds ratio” that is hard to compare directly.

[EDIT: Commenters point out that once you convert Cipriani’s odds ratios to effect sizes, the two studies are pretty much the same – in fact, Cipriani’s estimates are (slightly) lower. That is, “the study proving antidepressants work” presents a worse picture of antidepressants than “the study proving antidepressants don’t work”. If I had realized this earlier, this would have been the lede for this article. This makes all the media coverage of this study completely insane and means we’re doing science based entirely on how people choose to sum up their results. Strongly recommend this Neuroskeptic article on the topic. This is very important and makes the rest of this article somewhat trivial in comparison.]
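For anyone who wants to sanity-check that kind of conversion themselves, the standard approximation (Chinn 2000) is d = ln(OR) × √3/π. Here’s a minimal sketch in Python – the odds ratio in the example is hypothetical, not one of Cipriani’s actual estimates:

```python
import math

def odds_ratio_to_cohens_d(odds_ratio: float) -> float:
    """Chinn (2000) approximation: d = ln(OR) * sqrt(3) / pi.
    Assumes the underlying continuous outcome is roughly logistic."""
    return math.log(odds_ratio) * math.sqrt(3) / math.pi

# Hypothetical example: an OR of 1.5 corresponds to a modest
# standardized effect size, in the same ballpark as Kirsch's estimates.
print(round(odds_ratio_to_cohens_d(1.5), 2))  # 0.22
```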

Kirsch made a big deal of trying to get all the evidence, not just the for-public-consumption pharma-approved data. Cipriani also made such an effort, but I’m not sure how comparable the two are. Kirsch focused on FDA trials of six drugs. Cipriani took every trial ever published – FDA, academia, industry, whatever – of twenty-one drugs. Kirsch focused on using the Freedom Of Information Act to obtain non-public data from various failed trials. Cipriani says he looked pretty hard for unpublished data, but he might not have gone so far as to harass government agencies. Did he manage to find as many covered-up studies as Kirsch did? Unclear.

How confident should we be in the conclusion? These are very good researchers and their methodology is unimpeachable. But a lot of the 522 studies they cite are, well, kind of crap. The researchers acknowledge this and have constructed some kind of incredibly sophisticated model that inputs the chance of bias in each study and weights everything and simulates all sorts of assumptions to make sure they don’t change the conclusions too much. But we are basically being given a giant edifice of suspected crap fed through super-powered statistical machinery meant to be able to certify whether or not it’s safe.
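To give a flavor of what “weights everything” means in the simplest possible case, here’s a toy sketch of bias-discounted inverse-variance pooling. This is emphatically not Cipriani’s actual model – a network meta-analysis is far more involved – and every number below is invented:

```python
import math

# Each study: (log odds ratio, standard error, bias discount in (0, 1]).
# All values are invented for illustration.
studies = [
    (0.55, 0.20, 1.0),   # low risk of bias: full weight
    (0.70, 0.15, 0.5),   # industry-sponsored: weight halved
    (0.30, 0.25, 0.8),   # unclear sponsorship: mild discount
]

# Inverse-variance weight, discounted by suspected risk of bias.
weights = [(1 / se ** 2) * bias for _, se, bias in studies]
pooled = sum(w * lor for (lor, _, _), w in zip(studies, weights)) / sum(weights)
print(f"pooled OR ~ {math.exp(pooled):.2f}")  # ~1.74
```

The point of the sketch is just that the bias estimates feed directly into the weights – so if the bias estimates are wrong (say, “sponsorship makes zero difference”), the pooled result inherits the error.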

Of particular concern, 78% of the studies they cite are sponsored by the pharmaceutical industry. The researchers run this through their super-powered statistical machinery and determine that this made no difference – in fact, if you look in the supplement, the size of the effect was literally zero:

In our analyses, funding by industry was not associated with substantial differences in terms of response or dropout rates. However, non-industry funded trials were few and many trials did not report or disclose any funding.

This is surprising, since other papers (which the researchers dutifully cite) find that pharma-sponsored trials are about five times more likely to get positive results than non-sponsored ones (though see this comment). Cipriani’s excuse is that there weren’t enough non-industry trials to really get a good feel for the differences, and that a lot of the trials marked “non-industry” were probably secretly by industry anyway (more on this later). Fair enough, but if we can’t believe their “sponsorship makes zero difference to outcome” result, then the whole thing starts seeming kind of questionable.

I don’t want to come on too strong here. Science is never supposed to have to wait for some impossible perfectly-unbiased investigator. It’s supposed to accept that everyone will have an agenda, but strive through methodological rigor, transparency, and open debate to transcend those agendas and create studies everyone can believe. On the other hand, we’re really not very good at that yet, and nobody ever went broke overestimating the deceptiveness of pharmaceutical companies.

And there was one other kind of bias that did show up, hard. When a drug was new and exciting, it tended to do better in studies. When it was old and boring, it tended to do worse. You could argue this is a placebo effect on the patients, but I’m betting it’s a sign that people were able to bias the studies to fit their expected results (excited high-tech thing is better) in ways we’re otherwise not catching.

All of this will go double as we start looking at the next part, the ranking of different antidepressants.

II.

All antidepressants, from best to worst! (though note the wide error bars)

If this were for real, it would be an amazing resource. Psychiatrists have longed to know if any antidepressant is truly better than any other. Now that we know this, should we just start everyone on amitriptyline (or mirtazapine if we’re worried about tricyclic side effects) and throw out the others?

(as a first line, of course. In reality, we try the best one first, but keep going down the list until we find one that works for you and your unique genetic makeup.)

This matches some parts of the psychiatric conventional wisdom and overturns other parts. How much should we trust this versus all of the rest of the lore and heuristics and smaller studies that have accreted over the years?

Some relevant points:

1. The study finds that all the SSRIs cluster together as basically the same, as they should. The drugs that stand out as especially good or especially bad are generally unique ones with weird pharmacology that ought to be especially different. Amitriptyline is a tricyclic, and very different from clomipramine which is the only other tricyclic tested. Mirtazapine does weird things to presynaptic norepinephrine. Duloxetine and venlafaxine are SNRIs. This passes the most obvious sanity check.

2. Amitriptyline, the most effective antidepressant in this study, is widely agreed to be very good. See eg Amitriptyline: Still The Leading Antidepressant After 40 Years Of Randomized Controlled Trials. Amitriptyline does have many side effects that limit its use despite its impressive performance. I secretly still believe MAOIs, like phenelzine and tranylcypromine, to be even better than amitriptyline, but this study doesn’t include them so we can’t be sure.

3. Reboxetine, the least effective antidepressant in this study, is widely known to suck. It is not available in the United States because the FDA wouldn’t even approve it here.

4. On the other hand, agomelatine, another antidepressant widely known to suck, gains solid mid-tier status here, being about as good as anything else. The study even lists it as one of seven antidepressants that seem to do especially well (though it’s unclear what they mean and it’s obviously a different measure from the one in this graph). But agomelatine was rejected by the FDA for not being good enough, scathingly rejected by the European regulators (although their decision was later reversed on appeal), and soundly mocked by various independent organizations and journals (1, 2). It doesn’t look like Cipriani has access to any better data than anyone else, so how come his results are so much more positive?

5. Venlafaxine and desvenlafaxine are basically the same drug, minus a bunch of BS from the pharma companies trying to convince everyone that desvenlafaxine is a super-new-advanced version that you should spend twenty times as much money on. But venlafaxine is the fourth most efficacious drug in the analysis; desvenlafaxine is the second least efficacious drug. Why should this be? I have similar complaints about citalopram and escitalopram. Should we privilege common sense over empiricism and say Cipriani has done something wrong? Or should we privilege empiricism over common sense and conclude that the super-trivial differences between these chemicals have some outsized metabolic significance that makes a big clinical difference? Or should we just notice that the 95% confidence intervals of almost everything in the study (including these two) overlap, so really Cipriani isn’t claiming to know anything about anything and it’s not surprising if the data are wrong?

6. I’m sad to see clomipramine doing so badly here, since I generally find it helpful and have even evangelized it to my friends. I accept that it has serious side effects, but I expected it to do at least a little better in terms of efficacy.

Hoping to rescue its reputation, I started looking through some of the clomipramine studies cited. First was Andersen 1986, which compared clomipramine to Celexa and found some nice things about Celexa. This study doesn’t say a pharmaceutical company was involved in any way. But I notice the study was done in Denmark. And I also notice that Celexa is made by Lundbeck Pharmaceuticals, a Danish company. Am I accusing an entire European country of being in a conspiracy to promote Celexa? Would that be crazy?

The second clomipramine study listed is De Wilde 1982, which compared clomipramine to Luvox and found some nice things about Luvox. This study also doesn’t say a pharmaceutical company was involved in any way. But I notice the study was done in Belgium. And I also notice that Luvox is made by Solvay Pharmaceuticals, a Belgian company. Again, I’m sure Belgium is a lovely country full of many people who are not pharma shills, but this is starting to get a little suspicious.

To Cipriani’s credit, his team did notice these sorts of things and mark these trials as having “unclear” sponsorship levels, which got fed into the analysis. But I’m actually a little concerned about the exact way he did this. If a pharma company sponsored a trial, he called the pharma company’s drug’s results biased, and the comparison drug’s results unbiased. That is, suppose that Lundbeck sponsors a study, comparing their new drug Celexa to old drug clomipramine. We assume that they’re trying to make it look like Celexa is better. In this study, Cipriani would mark the Celexa results as at risk of bias, but the clomipramine results as unbiased.

But surely if Lundbeck wants to make Celexa look good, they can either finagle the Celexa numbers upward, finagle the clomipramine numbers downward, or both. If you flag Celexa as high risk of being finagled upwards, but don’t flag clomipramine as at risk of being finagled downwards, I worry you’re likely to understate clomipramine’s case.
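A toy illustration of the asymmetry, under my reading of the method (hypothetical code and drug labels, obviously not Cipriani’s):

```python
def cipriani_style_flags(sponsor_drug: str, arms: list[str]) -> dict[str, bool]:
    """Only the sponsor's own drug gets the risk-of-bias flag."""
    return {arm: arm == sponsor_drug for arm in arms}

def symmetric_flags(sponsor_drug: str, arms: list[str]) -> dict[str, bool]:
    """The worry: any arm of a sponsored trial can be finagled,
    so arguably every arm deserves the flag."""
    return {arm: True for arm in arms}

arms = ["celexa", "clomipramine"]
print(cipriani_style_flags("celexa", arms))
# {'celexa': True, 'clomipramine': False}
print(symmetric_flags("celexa", arms))
# {'celexa': True, 'clomipramine': True}
```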

I make a big deal of this because about a dozen of the twenty clomipramine studies included in the analysis were very obviously pharma companies using clomipramine as the comparison for their own drug that they wanted to make look good; I suspect some of the non-obvious ones were too. If all of these are marked as “no risk of bias against clomipramine”, we’re going to have clomipramine come out looking pretty bad.

Clomipramine is old and canonical, so most of the times it gets studied are because some pharma company wants to prove their drug is at least as good as this well-known older drug. There are lots of things like this, where certain drugs tend to inspire a certain type of study. Cipriani says they adjusted for this. I hope they were able to do a good job, because this is a big deal and really hard to factor out entirely.

This is my excuse for why I’m not rushing to prescribe drugs in the exact order Cipriani found. It’s a good study and will definitely influence my decisions. But it’s got enough issues that I feel justified in taking my priors into account too.

III.

Speaking of which, here’s another set of antidepressant rankings:

This is from Alexander et al 2017, which started life as this blog post but which, with help from some friends, I managed to get published in a journal. We looked at some different antidepressants than Cipriani did, but there are enough of the same ones that we can compare results.

Everything is totally different. I haven’t checked formally, but the correlation between those two lists looks like about zero. We find mirtazapine and venlafaxine to be unusually bad, and amitriptyline to be only somewhere around the middle.
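If anyone wants to check formally, Spearman rank correlation is the natural tool. A minimal sketch – the rankings below are placeholders, not the actual positions of any drug on either list:

```python
from scipy.stats import spearmanr

# Placeholder rank positions of the same eight drugs on the two lists --
# invented numbers, not the real orderings.
cipriani_rank  = [1, 2, 3, 4, 5, 6, 7, 8]
alexander_rank = [5, 7, 1, 8, 2, 4, 6, 3]

rho, p_value = spearmanr(cipriani_rank, alexander_rank)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
```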

I don’t claim anywhere near the sophistication or brilliance or level of work that Cipriani et al put in. But my list – I will argue – makes sense. Drugs with near-identical chemical structure – like venlafaxine and desvenlafaxine, or citalopram and escitalopram – are ranked similarly. Drugs with similar mechanisms of action are in the same place. We match pieces of psychiatric conventional wisdom like “Paroxetine is the worst SSRI”.

Part of the disagreement may be related to all the antidepressants being very close together on both lists. On Cipriani’s list, the difference between the 25th and 75th percentiles is OR 1.75 vs. OR 1.52; on mine, it’s a rating of 7.14 vs. 6.52. Aside from a few outliers, there’s not a lot of light between any of the antidepressants here, which makes it likely that different methodologies will come up with very different orders. And the few outliers that each of us did identify as truly distinct often didn’t make it into the other’s study – Cipriani doesn’t have MAOIs and I don’t have reboxetine. But this isn’t a good enough excuse. One of my top performers, clomipramine, is near the bottom for Cipriani. One of my bottom performers, mirtazapine, is near his top. I have to admit that these just don’t match.
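Running those two odds ratios through the same Chinn approximation sketched earlier makes the point concrete (this is my back-of-envelope computation, not a number from either paper):

```python
import math

def or_to_d(odds_ratio: float) -> float:
    # Chinn (2000) approximation, as in the earlier sketch.
    return math.log(odds_ratio) * math.sqrt(3) / math.pi

print(round(or_to_d(1.75), 2))  # 0.31 -- 25th percentile drug
print(round(or_to_d(1.52), 2))  # 0.23 -- 75th percentile drug
```

On a standardized-effect-size scale, the middle half of Cipriani’s list spans less than a tenth of a standard deviation – plenty of room for different methodologies to shuffle the order.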

And a big part of the disagreement has to be that we’re not doing the same things Cipriani did – we’re looking at a measure that combines efficacy and acceptability, whereas Cipriani looked at each separately. This could explain why my data penalizes some side-effect-heavy drugs like mirtazapine and amitriptyline. But again, this isn’t a good enough excuse. Why doesn’t my list penalize other side-effect-heavy meds like clomipramine?

In the end, these are two very different lists that can’t be easily reconciled. If you have any sense, trust a major international study before you trust me playing around with online drug ratings. But also be aware of the study’s flaws and why you might want to retain a bit of uncertainty.

OT96: Snopen Thread

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server. Also:

1. New advertisement for Altruisto, a browser extension that automatically connects you to affiliate/referral programs for online shopping and donates the money to effective charities. Endorsed by eg Steven Pinker and Peter Singer.

2. There will be a Slate Star Codex meetup at 3:33 PM on 3/3 at 3 West Circle, Berkeley CA. The numerological conjunction will be used to summon an avatar of Gwern into the material world. More on this as it develops.

Posted in Uncategorized | Tagged | 800 Comments