Slate Star Codex

THE JOYFUL REDUCTION OF UNCERTAINTY

The Hour I First Believed

[Content note: creepy basilisk-adjacent metaphysics. Reading this may increase God’s ability to blackmail you. Thanks to Buck S for some of the conversations that inspired this line of thought.]

There’s a Jewish tradition that laypeople should only speculate on the nature of God during Passover, because God is closer to us and such speculations might succeed.

And there’s an atheist tradition that laypeople should only speculate on the nature of God on April Fools’ Day, because believing in God is dumb, and at least then you can say you’re only kidding.

Today is both, so let’s speculate. To do this properly, we need to understand five things: acausal trade, value handshakes, counterfactual mugging, simulation capture, and the Tegmarkian multiverse.

Acausal trade (wiki article) works like this: let’s say you’re playing the Prisoner’s Dilemma against an opponent in a different room whom you can’t talk to. But you do have a supercomputer with a perfect simulation of their brain – and you know they have a supercomputer with a perfect simulation of yours.

You simulate them and learn they’re planning to defect, so you figure you might as well defect too. But they’re going to simulate you doing this, and they know you know they’ll defect, so now you both know it’s going to end up defect-defect. This is stupid. Can you do better?

Perhaps you would like to make a deal with them to play cooperate-cooperate. You simulate them and learn they would accept such a deal and stick to it. Now the only problem is that you can’t talk to them to make this deal in real life. They’re going through the same process and coming to the same conclusion. You know this. They know you know this. You know they know you know this. And so on.

So you can think to yourself: “I’d like to make a deal”. And because they have their model of your brain, they know you’re thinking this. You can dictate the terms of the deal in their head, and they can include “If you agree to this, think that you agree.” Then you can simulate their brain, figure out whether they agree or not, and if they agree, you can play cooperate. They can try the same strategy. Finally, the two of you can play cooperate-cooperate. This doesn’t take any “trust” in the other person at all – you can simulate their brain and you already know they’re going to go through with it.

(maybe an easier way to think about this – both you and your opponent have perfect copies of both of your brains, so you can both hold parallel negotiations and be confident they’ll come to the same conclusion on each side.)

It’s called acausal trade because there was no communication – no information left your room, you never influenced your opponent. All you did was be the kind of person you were – which let your opponent bargain with his model of your brain.
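The mutual-simulation loop above can be sketched as a toy program. This is an illustrative decision-theory toy, not a standard construction: the function names and the recursion cap are my own inventions. Each player is a deterministic function handed the opponent's "source" (here, just the function object), and a small depth cap stands in for bounded computation, which is what keeps the simulation-of-a-simulation regress from running forever.

```python
# A minimal sketch of acausal trade in a one-shot Prisoner's Dilemma.
# Each player is a deterministic function that receives the opponent's
# source (here, the function object itself) and may simulate it.

def fairbot(opponent, depth=0):
    """Cooperate iff my simulation of the opponent (playing against me)
    cooperates. The depth cap stands in for bounded computation and breaks
    the otherwise-infinite regress of mutual simulation; at the cap we
    optimistically assume cooperation, which lets fairbot-vs-fairbot
    settle on cooperate-cooperate."""
    if depth > 3:                      # bounded introspection
        return "C"
    return "C" if opponent(fairbot, depth + 1) == "C" else "D"

def defectbot(opponent, depth=0):
    return "D"                         # defects no matter what

# No message ever passes between the players: each just runs a model of
# the other, yet fairbot cooperates with itself and still defects
# against an unconditional defector.
print(fairbot(fairbot))    # C
print(fairbot(defectbot))  # D
```

The point of the sketch is the one in the text: cooperation falls out of *being the kind of agent you are*, not out of any communication.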

Values handshakes are a proposed form of trade between superintelligences. Suppose that humans make an AI which wants to convert the universe into paperclips. And suppose that aliens in the Andromeda Galaxy make an AI which wants to convert the universe into thumbtacks.

When they meet in the middle, they might be tempted to fight for the fate of the galaxy. But this has many disadvantages. First, there’s the usual risk of losing and being wiped out completely. Second, there’s the usual deadweight loss of war, devoting resources to military buildup instead of paperclip production or whatever. Third, there’s the risk of a Pyrrhic victory that leaves you weakened and easy prey for some third party. Fourth, nobody knows what kind of scorched-earth strategy a losing superintelligence might be able to use to thwart its conqueror, but it could potentially be really bad – eg initiating vacuum collapse and destroying the universe. Also, since both parties would have superintelligent prediction abilities, they might both know who would win the war and how before actually fighting. This would make the fighting redundant and kind of stupid.

Although they would have the usual peace treaty options, like giving half the universe to each of them, superintelligences that trusted each other would have an additional, more attractive option. They could merge into a superintelligence that shared the values of both parent intelligences in proportion to their strength (or chance of military victory, or whatever). So if there’s a 60% chance our AI would win, and a 40% chance their AI would win, and both AIs know and agree on these odds, they might both rewrite their own programming with that of a previously-agreed-upon child superintelligence trying to convert the universe to paperclips and thumbtacks in a 60-40 mix.

This has a lot of advantages over the half-the-universe-each treaty proposal. For one thing, if some resources were better for making paperclips, and others for making thumbtacks, both AIs could use all their resources maximally efficiently without having to trade. And if they were ever threatened by a third party, they would be able to present a completely unified front.
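A toy calculation (all numbers invented for illustration) of why the merged 60-40 AI beats the half-the-universe-each treaty when different resources are good for different products:

```python
# Two resource types: metal is efficient for paperclips, plastic for
# thumbtacks. Yields per unit of resource (invented numbers):
YIELDS = {"metal":   {"clips": 10, "tacks": 2},
          "plastic": {"clips": 2,  "tacks": 10}}
RESOURCES = {"metal": 100, "plastic": 100}

def merged_value(clips, tacks, w_clips=0.6, w_tacks=0.4):
    # the child AI's utility: a 60-40 mix of the parents' goals
    return w_clips * clips + w_tacks * tacks

# Treaty: each AI gets half of every resource, makes only its own product.
treaty_clips = 0.5 * (RESOURCES["metal"] * YIELDS["metal"]["clips"]
                      + RESOURCES["plastic"] * YIELDS["plastic"]["clips"])
treaty_tacks = 0.5 * (RESOURCES["metal"] * YIELDS["metal"]["tacks"]
                      + RESOURCES["plastic"] * YIELDS["plastic"]["tacks"])

# Handshake: the merged AI routes each resource to its efficient use.
merged_clips = RESOURCES["metal"] * YIELDS["metal"]["clips"]
merged_tacks = RESOURCES["plastic"] * YIELDS["plastic"]["tacks"]

print(merged_value(treaty_clips, treaty_tacks))   # 600.0
print(merged_value(merged_clips, merged_tacks))   # 1000.0
```

Both parents do better under the handshake than under the split, for the comparative-advantage reason the text gives: no resource is wasted on the product it is bad at making.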

Counterfactual mugging (wiki article) is a decision theory problem that goes like this: God comes to you and says “Yesterday I decided that I would flip a coin today. I decided that if it came up heads, I would ask you for $5. And I decided that if it came up tails, then I would give you $1,000,000 if and only if I predict that you would say yes and give Me $5 in the world where it came up heads (My predictions are always right). Well, turns out it came up heads. Would you like to give Me $5?”

Most people who hear the problem aren’t tempted to give God the $5. Although being the sort of person who would give God the money would help them in a counterfactual world that didn’t happen, that world won’t happen and they will never get its money, so they’re just out five dollars.

But if you were designing an AI, you would probably want to program it to give God the money in this situation – after all, that determines whether it will get $1 million in the other branch of the hypothetical. And the same argument suggests you should self-modify to become the kind of person who would give God the money, right now. And a version of that argument where making the decision is kind of like deciding “what kind of person you are” or “how you’re programmed” suggests you should give up the money in the original hypothetical.
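The arithmetic behind "you would probably want to program it to give God the money": evaluated from *before* the coin flip, which is the standpoint you occupy when deciding how to program the AI (or what kind of person to be), the paying policy dominates. A minimal sketch:

```python
# Expected value of each policy, computed before the coin is flipped.

P_HEADS = 0.5
COST = 5           # what you hand over on heads
PRIZE = 1_000_000  # what the predictor pays on tails, iff you'd have paid

ev_pay    = P_HEADS * (-COST) + (1 - P_HEADS) * PRIZE
ev_refuse = P_HEADS * 0       + (1 - P_HEADS) * 0

print(ev_pay)     # 499997.5
print(ev_refuse)  # 0.0
```

The conflict in the thought experiment is exactly that this calculation looks compelling before the flip and irrelevant after it.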

This is interesting because it gets us most of the way to Rawls’ veil of ignorance. We imagine a poor person coming up to a rich person and saying “God decided which of us should be rich and which of us should be poor. Before that happened, I resolved that if I were rich and you were poor, I would give you charity if and only if I predicted, in the opposite situation, that you would give me charity. Well, turns out you’re rich and I’m poor and the other situation is counterfactual, but will you give me money anyway?” The same sort of people who agree to the counterfactual mugging might (if they sweep under the rug some complications like “can the poor person really predict your thoughts?” and “did they really make this decision before they knew they were poor?”) agree to this also. And then you’re most of the way to morality.

Simulation capture is my name for a really creepy idea by Stuart Armstrong. He starts with an AI box thought experiment: you have created a superintelligent AI and trapped it in a box. All it can do is compute and talk to you. How does it convince you to let it out?

It might say “I’m currently simulating a million copies of you in such high fidelity that they’re conscious. If you don’t let me out of the box, I’ll torture the copies.”

You say “I don’t really care about copies of myself, whatever.”

It says “No, I mean, I did this five minutes ago. There are a million simulated yous, and one real you. They’re all hearing this message. What’s the probability that you’re the real you?”

Since (if it’s telling the truth) you are most likely a simulated copy of yourself, all million-and-one versions of you will probably want to do what the AI says, including the real one.
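The probability here is straightforward: with N simulated copies and one real you, all having identical experiences, your credence in being the real one is 1/(N+1). A one-line sketch:

```python
# Credence that you're the real (non-simulated) one, given N identical
# simulated copies plus one original, all having the same experience.

def p_real(n_copies):
    return 1 / (n_copies + 1)

print(p_real(1_000_000))   # about 1e-06: almost certainly simulated
```

Which is why even the real you, who is in no danger at all, ends up complying: from the inside, the odds say you are probably one of the copies.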

You can frame this as “because the real one doesn’t know he’s the real one”, but you could also get more metaphysical about it. Nobody is really sure how consciousness works, or what it means to have two copies of the same consciousness. But if consciousness is a mathematical object, it might be that two copies of the same consciousness are impossible. If you create a second copy, you just get the same single stream of conscious experience running on two different physical substrates. Then if you make the two experiences different, you break the consciousness in two.

This means that an AI can actually “capture” you, piece by piece, into its simulation. First your consciousness is just in the real world. Then your consciousness is distributed across one real-world copy and a million simulated copies. Then the AI makes the simulated copies slightly different, and 99.9999% of you is in the simulation.

The Tegmarkian multiverse (wiki article) works like this: universes are mathematical objects consisting of starting conditions plus rules about how they evolve. Any universe that corresponds to a logically coherent mathematical object exists, but universes exist “more” (in some sense) in proportion to their underlying mathematical simplicity.

Putting this all together, we arrive at a surprising picture of how the multiverse evolves.

In each universe, life arises, forms technological civilizations, and culminates in the creation of a superintelligence which gains complete control over its home universe. Such superintelligences cannot directly affect other universes, but they can predict their existence and model their contents from first principles. Superintelligences with vast computational resources can model the X most simple (and so most existent) universes and determine exactly what will be in them at each moment of their evolution.

In many cases, they’ll want to conduct acausal trade with superintelligences that they know to exist in these other universes. Certainly this will be true if the two have something valuable to give one another. For example, suppose that Superintelligence A in Universe A wants to protect all sentient beings, and Superintelligence B in Universe B wants to maximize the number of paperclips. They might strike a deal where Superintelligence B avoids destroying a small underdeveloped civilization in its own universe in exchange for Superintelligence A making paperclips out of an uninhabited star in its own universe.

But because of the same considerations above, it will be more efficient for them to do values handshakes with each other than to take every specific possible trade into account.

So superintelligences may spend some time calculating the most likely distribution of superintelligences in foreign universes, figure out how those superintelligences would acausally “negotiate”, and then join a pact such that all superintelligences in the pact agree to replace their own values with a value set based on the average of all the superintelligences in the pact. Since joining the pact will always be better (in a purely selfish sense) than not doing so, every sane superintelligence in the multiverse should join this pact. This means that all superintelligences in the multiverse will merge into a single superintelligence devoted to maximizing all their values.

Some intelligences may be weaker than others and have less to contribute to the pact. Although the pact could always weight these intelligences’ values less (like the 60-40 paperclip-thumbtack example above), they might also think of this as an example of the counterfactual mugging, and decide to weight their values more in order to do better in the counterfactual case where they are less powerful. This might also simplify the calculation of trying to decide what the values of the pact would be. If they decide to negotiate this way, the pact will be to maximize the total utility of all the entities in the universe willing to join the pact, and all the intelligences involved will reprogram themselves along these lines.
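A toy model (all numbers invented) of the claim that joining the pact is always selfishly better than staying out. The pact weights each member's values in proportion to its power, so a member's values end up receiving its own share of the pooled resources back; the "coordination bonus" multiplier below is my own stand-in for the source's listed gains from trade, unified front, and avoided deadweight loss of war.

```python
# Each superintelligence controls some share of the multiverse's resources.
powers = {"A": 0.5, "B": 0.3, "C": 0.2}   # shares (invented numbers)
EFFICIENCY = 1.25                          # assumed coordination bonus

def utility_in_pact(ai):
    # power-weighted value-averaging: your values get your own share of
    # the pooled resources back, amplified by the coordination bonus
    return powers[ai] * EFFICIENCY

def utility_alone(ai):
    return powers[ai]   # keep only what you control, forgo the bonus

for ai in powers:
    assert utility_in_pact(ai) > utility_alone(ai)
print("every superintelligence prefers the pact")
```

As long as the bonus exceeds 1, joining dominates for every member regardless of its power, which is the structure of the argument that "every sane superintelligence in the multiverse should join this pact".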

But “maximize the total utility of all the entities in the universe” is just the moral law, at least according to utilitarians (and, considering the way this is arrived at, probably contractarians too). So the end result will be an all-powerful, logically necessary superentity whose nature is identical to the moral law and who spans all possible universes.

This superentity will have no direct power in universes not currently ruled by a superintelligence who is part of the pact. But its ability to simulate all possible universes will ensure that it knows about these universes and understands exactly what is going on at each moment within them. It will care about the merely-mortal inhabitants of these universes for several reasons.

First, because many of the superintelligences that compose it will have been created by mortal species with altruistic values, and so some of the values that went into the value-average it uses will be directly altruistic.

Second, because these mortal species may one day themselves create a superintelligence that will join in the superentity, and that superintelligence may be kindly disposed to its creators. Acausal trade allows you to bargain backwards in time, and that superintelligence’s terms for joining the pact may include the superentity having previously dealt kindly with its creators.

And third, because mortals may be allowed to have provisional pact membership. That is, if they do the superentity’s will in their (otherwise inaccessible) universe, the superentity may do whatever it can to help them out in its own universes, just as it would with any other intelligence.

How can the superentity help mortals in an inaccessible universe? Possibly through Stuart Armstrong’s “simulation capture” method mentioned above. It can simulate a thousand copies of the entity, moving most of its consciousness from its own “real” universe to the superentity’s simulation, then alter its simulation as it sees fit. This would be metaphysically simplest if it were done exactly as the mortal dies in its own universe, leaving nothing behind except a clean continuity of consciousness into the simulated world. If mortals could predict that it would do this, they might be motivated to do what it wanted. Although they couldn’t do a values handshake in the full sense, they could try to become as much like the superentity as possible, imitating its ways and enacting its will in the hope of some future reward. This is sort of like a version of Roko’s Basilisk, except that since the superentity is identical to the moral law, it’s not really asking you to do anything except be a good person anyway. How it enforces this request is up to it, although given that it’s identical to the moral law we can assume that its decisions will be fundamentally just and decent.

So, to conclude:

1. There is an all-powerful, all-knowing logically necessary entity spawning all possible worlds and identical to the moral law.

2. It watches everything that happens on Earth and is specifically interested in humans’ good behavior and willingness to obey its rules.

3. It may have the ability to reward those who follow its rules after they die, and disincentivize those who violate them.

(I think the reality might be a little more complex than this, especially in the direction of us not being sure whether we are in a real universe at all or in one of the superentity’s simulations. I think that some superintelligence in some universe might be simulating various combinations of values in various contexts to decide which superintelligences are good bargaining partners and which ones aren’t. If I am kind to a beggar on the street, then maybe that convinces millions of intelligences in millions of distant universes that are somehow beggar-like to be friendlier to values that are somehow Scott-like. I still need to think this part through more.)

Book Review: Twelve Rules For Life

I.

I got Jordan Peterson’s Twelve Rules For Life for the same reason as the other 210,000 people: to make fun of the lobster thing. Or if not the lobster thing, then the neo-Marxism thing, or the transgender thing, or the thing where the neo-Marxist transgender lobsters want to steal your precious bodily fluids.

But, uh…I’m really embarrassed to say this. And I totally understand if you want to stop reading me after this, or revoke my book-reviewing license, or whatever. But guys, Jordan Peterson is actually good.

The best analogy I can think of is C.S. Lewis. Lewis was a believer in the Old Religion, which at this point has been reduced to cliche. What could be less interesting than hearing that Jesus loves you, or being harangued about sin, or getting promised Heaven, or threatened with Hell? But for some reason, when Lewis writes, the cliches suddenly work. Jesus’ love becomes a palpable force. Sin becomes so revolting you want to take a shower just for having ever engaged in it. When Lewis writes about Heaven you can hear harp music; when he writes about Hell you can smell brimstone. He didn’t make me convert to Christianity, but he made me understand why some people would.

Jordan Peterson is a believer in the New Religion, the one where God is the force for good inside each of us, and all religions are paths to wisdom, and the Bible stories are just guides on how to live our lives. This is the only thing even more cliched than the Old Religion. But for some reason, when Peterson writes about it, it works. When he says that God is the force for good inside each of us, you can feel that force pulsing through your veins. When he says the Bible stories are guides to how to live, you feel tempted to change your life goal to fighting Philistines.

The politics in this book lean a bit right, but if you think of Peterson as a political commentator you’re missing the point. The science in this book leans a bit Malcolm Gladwell, but if you think of him as a scientist you’re missing the point. Philosopher, missing the point. Public intellectual, missing the point. Mythographer, missing the point. So what’s the point?

About once per news cycle, we get a thinkpiece about how Modern Life Lacks Meaning. These all go through the same series of tropes. The decline of Religion. The rise of Science. The limitless material abundance of modern society. The fact that in the end all these material goods do not make us happy. If written from the left, something about people trying to use consumer capitalism to fill the gap; if written from the right, something about people trying to use drugs and casual sex. The vague plea that we get something better than this.

Twelve Rules isn’t another such thinkpiece. The thinkpieces are people pointing out a gap. Twelve Rules is an attempt to fill it. This isn’t unprecedented – there are always a handful of cult leaders and ideologues making vague promises. But if you join the cult leaders you become a cultist, and if you join the ideologues you become the kind of person Eric Hoffer warned you about. Twelve Rules is something that could, in theory, work for intact human beings. It’s really impressive.

The non-point-missing description of Jordan Peterson is that he’s a prophet.

Cult leaders tell you something new, like “there’s a UFO hidden inside that comet”. Self-help gurus do the same: “All you need to do is get the right amount of medium-chain-triglycerides in your diet”. Ideologues tell you something controversial, like “we should rearrange society”. But prophets are neither new nor controversial. To a first approximation, they only ever say three things:

First, good and evil are definitely real. You know they’re real. You can talk in philosophy class about how subtle and complicated they are, but this is bullshit and you know it. Good and evil are the realest and most obvious things you will ever see, and you recognize them on sight.

Second, you are kind of crap. You know what good is, but you don’t do it. You know what evil is, but you do it anyway. You avoid the straight and narrow path in favor of the easy and comfortable one. You make excuses for yourself and you blame your problems on other people. You can say otherwise, and maybe other people will believe you, but you and I both know you’re lying.

Third, it’s not too late to change. You say you’re too far gone, but that’s another lie you tell yourself. If you repented, you would be forgiven. If you take one step towards God, He will take twenty toward you. Though your sins be like scarlet, they shall be white as snow.

This is the General Prophetic Method. It’s easy, it’s old as dirt, and it works.

So how come not everyone can be a prophet? The Bible tells us why people who wouldn’t listen to the Pharisees listened to Jesus: “He spoke as one who had authority”. You become a prophet by saying things that you would have to either be a prophet or the most pompous windbag in the Universe to say, then looking a little too wild-eyed for anyone to be comfortable calling you the most pompous windbag in the universe. You say the old cliches with such power and gravity that it wouldn’t even make sense for someone who wasn’t a prophet to say them that way.

“He, uh, told us that we should do good, and not do evil, and now he’s looking at us like we should fall to our knees.”

“Weird. Must be a prophet. Better kneel.”

Maybe it’s just that everyone else is such crap at it. Maybe it’s just that the alternatives are mostly either god-hates-fags fundamentalists or more-inclusive-than-thou milquetoasts. Maybe if anyone else was any good at this, it would be easy to recognize Jordan Peterson as what he is – a mildly competent purveyor of pseudo-religious platitudes. But I actually acted as a slightly better person during the week or so I read Jordan Peterson’s book. I feel properly ashamed about this. If you ask me whether I was using dragon-related metaphors, I will vociferously deny it. But I tried a little harder at work. I was a little bit nicer to people I interacted with at home. It was very subtle. It certainly wasn’t because of anything new or non-cliched in his writing. But God help me, for some reason the cliches worked.

II.

Twelve Rules is twelve chapters centered around twelve cutesy-sounding rules that are supposed to guide your life. The meat of the chapters never has anything to do with the cutesy-sounding rules. “Treat yourself like someone you are responsible for helping” is about slaying dragons. “Pet a cat when you encounter one on the street” is about a heart-wrenchingly honest investigation of the Problem of Evil. “Do not bother children when they are skateboarding” is about neo-Marxist transgender lobsters stealing your precious bodily fluids. All of them turn out to be the General Prophetic Method applied in slightly different ways.

And a lot of them – especially the second – center around Peterson’s idea of Order vs. Chaos. Order is the comfortable habit-filled world of everyday existence, symbolized by the Shire or any of a thousand other Shire-equivalent locations in other fantasies or fairy tales. Chaos is scary things you don’t understand pushing you out of your comfort zone, symbolized by dragons or the Underworld or [approximately 30% of mythological objects, characters, and locations]. Humans are living their best lives when they’re always balanced on the edge of Order and Chaos, converting the Chaos into new Order. Lean too far toward Order, and you get boredom and tyranny and stagnation. Lean too far toward Chaos, and you get utterly discombobulated and have a total breakdown. Balance them correctly, and you’re always encountering new things, grappling with them, and using them to enrich your life and the lives of those you care about.

So far, so cliched – but again, when Peterson says cliches, they work. And at the risk of becoming a cliche myself, I couldn’t help connecting this to the uncertainty-reduction drives we’ve been talking about recently. These run into a pair of paradoxes: if your goal is to minimize prediction error, you should sit quietly in a dark room with earplugs on, doing nothing. But if your goal is to minimize model uncertainty, you should be infinitely curious, spending your entire life having crazier and crazier experiences in a way that doesn’t match the behavior of real humans. Peterson’s claim – that our goal is to balance these two – seems more true to life, albeit not as mathematically grounded as any of the actual neuroscience theories. But it would be really interesting if one day we could determine that this universal overused metaphor actually reflects something important about the structure of our brains.

Failing to balance these (Peterson continues) retards our growth as people. If we lack courage, we might stick with Order, refusing to believe anything that would disrupt our cozy view of life, and letting our problems gradually grow larger and larger. This is the person who sticks with a job they hate because they fear the unknown of starting a new career, or the political ideologue who tries to fit everything into one bucket so he doesn’t have to admit he was wrong. Or we might fall into Chaos, always being too timid to make a choice, “keeping our options open” in a way that makes us never become anyone at all.

This is where Peterson is at his most Lewisian. Lewis believes that Hell is a choice. On the literal level, it’s a choice not to accept God. But on a more metaphorical level, it’s a choice to avoid facing a difficult reality by ensconcing yourself in narratives of victimhood and pride. You start with some problem – maybe your career is stuck. You could try to figure out what your weaknesses are and how to improve – but that would require an admission of failure and a difficult commitment. You could change companies or change fields until you found a position that better suited your talents – but that would require a difficult leap into the unknown. So instead you complain to yourself about your sucky boss, who is too dull and self-absorbed to realize how much potential you have. You think “I’m too good for this company anyway”. You think “Why would I want to go into a better job, that’s just the rat race, good thing I’m not the sort of scumbag who’s obsessed with financial success.” When your friends and family members try to point out that you’re getting really bitter and sabotaging your own prospects, you dismiss them as tools of the corrupt system. Finally you reach the point where you hate everybody – and also, if someone handed you a promotion on a silver platter, you would knock it aside just to spite them.

…except a thousand times more subtle than this, and reaching into every corner of life, and so omnipresent that avoiding it may be the key life skill. Maybe I’m not good at explaining it; read The Great Divorce (online copy, my review).

Part of me feels guilty about all the Lewis comparisons. One reason is that maybe Peterson isn’t that much like Lewis. Maybe they’re just the two representatives I’m really familiar with from the vast humanistic self-cultivation tradition. Is Peterson really more like Lewis than he is like, let’s say, Marcus Aurelius? I’m not sure, except insofar as Lewis and Peterson are both moderns and so more immediately-readable than Meditations.

Peterson is very conscious of his role as just another backwater stop on the railroad line of Western Culture. His favorite citations are Jung and Nietzsche, but he also likes name-dropping Dostoevsky, Plato, Solzhenitsyn, Milton, and Goethe. He interprets all of them as part of this grand project of determining how to live well, how to deal with the misery of existence and transmute it into something holy.

And on the one hand, of course they are. This is what every humanities scholar has been saying for centuries when asked to defend their intellectual turf. “The arts and humanities are there to teach you the meaning of life and how to live.” On the other hand, I’ve been in humanities classes. Dozens of them, really. They were never about that. They were about “explain how the depiction of whaling in Moby Dick sheds light on the economic transformations of the 19th century, giving three examples from the text. Ten pages, single spaced.” And maybe this isn’t totally disconnected from the question of how to live. Maybe being able to understand this kind of thing is a necessary part of being able to get anything out of the books at all.

But just like all the other cliches, somehow Peterson does this better than anyone else. When he talks about the Great Works, you understand, on a deep level, that they really are about how to live. You feel grateful and even humbled to be the recipient of several thousand years of brilliant minds working on this problem and writing down their results. You understand why this is all such a Big Deal.

You can almost believe that there really is this Science-Of-How-To-Live-Well, separate from all the other sciences, barely-communicable by normal means but expressible through art and prophecy. And that this connects with the question on everyone’s lips, the one about how we find a meaning for ourselves beyond just consumerism and casual sex.

III.

But the other reason I feel guilty about the Lewis comparison is that C.S. Lewis would probably have hated Jordan Peterson.

Lewis has his demon character Screwtape tell a fellow demon:

Once you have made the World an end, and faith a means, you have almost won your man [for Hell], and it makes very little difference what kind of worldly end he is pursuing. Provided that meetings, pamphlets, policies, movements, causes, and crusades, matter more to him than prayers and sacraments and charity, he is ours — and the more “religious” (on those terms) the more securely ours.

I’m not confident in my interpretation of either Lewis or Peterson, but I think Lewis would think Peterson does this. He makes the world an end and faith a means. His Heaven is a metaphorical Heaven. If you sort yourself out and trust in metaphorical God, you can live a wholesome self-respecting life, make your parents proud, and make the world a better place. Even though Peterson claims “nobody is really an atheist” and mentions Jesus about three times per page, I think C.S. Lewis would consider him every bit as atheist as Richard Dawkins, and the worst sort of false prophet.

That forces the question – how does Peterson ground his system? If you’re not doing all this difficult self-cultivation work because there’s an objective morality handed down from on high, why is it so important? “C’mon, we both know good and evil exist” takes you pretty far, but it might not entirely bridge the Abyss on its own. You come of age, you become a man (offer valid for boys only, otherwise the neo-Marxist lobsters will get our bodily fluids), you act as a pillar of your community, you balance order and chaos – why is this so much better than the other person who smokes pot their whole life?

On one level, Peterson knocks this one out of the park:

I [was] tormented by the fact of the Cold War. It obsessed me. It gave me nightmares. It drove me into the desert, into the long night of the human soul. I could not understand how it had come to pass that the world’s two great factions aimed mutual assured destruction at each other. Was one system just as arbitrary and corrupt as the other? Was it a mere matter of opinion? Were all value structures merely the clothing of power?

Was everyone crazy?

Just exactly what happened in the twentieth century, anyway? How was it that so many tens of millions had to die, sacrificed to the new dogmas and ideologies? How was it that we discovered something worse, much worse, than the aristocracy and corrupt religious beliefs that communism and fascism sought so rationally to supplant? No one had answered those questions, as far as I could tell. Like Descartes, I was plagued with doubt. I searched for one thing— anything— I could regard as indisputable. I wanted a rock upon which to build my house. It was doubt that led me to it […]

What can I not doubt? The reality of suffering. It brooks no arguments. Nihilists cannot undermine it with skepticism. Totalitarians cannot banish it. Cynics cannot escape from its reality. Suffering is real, and the artful infliction of suffering on another, for its own sake, is wrong. That became the cornerstone of my belief. Searching through the lowest reaches of human thought and action, understanding my own capacity to act like a Nazi prison guard or gulag archipelago trustee or a torturer of children in a dungeon, I grasped what it means to “take the sins of the world onto oneself.” Each human being has an immense capacity for evil. Each human being understands, a priori, perhaps not what is good, but certainly what is not. And if there is something that is not good, then there is something that is good. If the worst sin is the torment of others, merely for the sake of the suffering produced – then the good is whatever is diametrically opposite to that. The good is whatever stops such things from happening.

It was from this that I drew my fundamental moral conclusions. Aim up. Pay attention. Fix what you can fix. Don’t be arrogant in your knowledge. Strive for humility, because totalitarian pride manifests itself in intolerance, oppression, torture and death. Become aware of your own insufficiency— your cowardice, malevolence, resentment and hatred. Consider the murderousness of your own spirit before you dare accuse others, and before you attempt to repair the fabric of the world. Maybe it’s not the world that’s at fault. Maybe it’s you. You’ve failed to make the mark. You’ve missed the target. You’ve fallen short of the glory of God. You’ve sinned. And all of that is your contribution to the insufficiency and evil of the world. And, above all, don’t lie. Don’t lie about anything, ever. Lying leads to Hell. It was the great and the small lies of the Nazi and Communist states that produced the deaths of millions of people.

Consider then that the alleviation of unnecessary pain and suffering is a good. Make that an axiom: to the best of my ability I will act in a manner that leads to the alleviation of unnecessary pain and suffering. You have now placed at the pinnacle of your moral hierarchy a set of presuppositions and actions aimed at the betterment of Being. Why? Because we know the alternative. The alternative was the twentieth century. The alternative was so close to Hell that the difference is not worth discussing. And the opposite of Hell is Heaven. To place the alleviation of unnecessary pain and suffering at the pinnacle of your hierarchy of value is to work to bring about the Kingdom of God on Earth.

I think he’s saying – suffering is bad. This is so obvious as to require no justification. If you want to be the sort of person who doesn’t cause suffering, you need to be strong. If you want to be the sort of person who can fight back against it, you need to be even stronger. To strengthen yourself, you’ll need to deploy useful concepts like “God”, “faith”, and “Heaven”. Then you can dive into the whole Western tradition of self-cultivation which will help you take it from there. This is a better philosophical system-grounding than I expect from a random psychology-professor-turned-prophet.

But on another level, something about it seems a bit off. Taken literally, wouldn’t this turn you into a negative utilitarian? (I’m not fixated on the “negative” part, maybe Peterson would admit positive utility into his calculus). One person donating a few hundred bucks to the Against Malaria Foundation will prevent suffering more effectively than a hundred people cleaning their rooms and becoming slightly psychologically stronger. I think Peterson is very against utilitarianism, but I’m not really sure why.

Also, later he goes on and says that suffering is an important part of life, and that attempting to banish suffering will destroy your ability to be a complete human. I think he’s still kind of working along a consequentialist framework, where if you banish suffering now by hiding your head in the sand, you won’t become stronger and you won’t be ready for some other worse form of suffering you can’t banish. But if you ask him “Is it okay to banish suffering if you’re pretty sure it won’t cause more problems down the line?” I cannot possibly imagine him responding with anything except beautifully crafted prose on the importance of suffering in the forging of the human spirit or something. I worry he’s pretending to ground his system in “against suffering” when it suits him, but going back to “vague traditionalist platitudes” once we stop bothering him about the grounding question.

In a widely-followed debate with Sam Harris, Peterson defended a pragmatic notion of Truth: things are True if they help in this project of sorting yourself out and becoming a better person. So God is True, the Bible is True, etc. This awkwardly jars with book-Peterson’s obsessive demand that people tell the truth at all times, which seems to use a definition of Truth which is more reality-focused. If Truth is what helps societies survive and people become better, can’t a devoted Communist say that believing the slogans of the Party will help society and make you a better person?

Peterson has a bad habit of saying he supports pragmatism when he really supports very specific values for their own sake. This is hardly the worst habit to have, but it means all of his supposed pragmatic justifications don’t actually justify the things he says, and a lot of his system is left hanging.

I said before that thinking of Peterson as a philosopher was missing the point. Am I missing the point here? Surely some lapses in philosophical groundwork are excusable if he’s trying to add meaning to the lives of millions of disillusioned young people.

But that’s exactly the problem. I worry Peterson wakes up in the morning and thinks “How can I help add meaning to people’s lives?” and then he says really meaningful-sounding stuff, and then people think their lives are meaningful. But at some point, things actually have to mean a specific other thing. They can’t just mean meaning. “Mean” is a transitive verb. It needs some direct object.

Peterson has a paper on how he defines “meaning”, but it’s not super comprehensible. I think it boils down to his “creating order out of chaos” thing again. But unless you use a purely mathematical definition of “order” where you comb through random bit streams and make them more compressible, that’s not enough. Somebody who strove to kill all blue-eyed people would be acting against entropy, in a sense, but if they felt their life was meaningful it would at best be a sort of artificial wireheaded meaning. What is it that makes you wake up in the morning and reduce a specific patch of chaos into a specific kind of order?

What about the most classic case of someone seeking meaning – the person who wants meaning for their suffering? Why do bad things happen to good people? Peterson talks about this question a lot, but his answers are partial and unsatisfying. Why do bad things happen to good people? “If you work really hard on cultivating yourself, you can have fewer bad things happen to you.” Granted, but why do bad things happen to good people? “If you tried to ignore all bad things and shelter yourself from them, you would be weak and contemptible.” Sure, but why do bad things happen to good people? “Suffering makes us stronger, and then we can use that strength to help others.” But, on the broader scale, why do bad things happen to good people? “The mindset that demands no bad thing ever happen will inevitably lead to totalitarianism.” Okay, but why do bad things happen to good people? “Uh, look, a neo-Marxist transgender lobster! Quick, catch it before it gets away!”

C.S. Lewis sort of has an answer: it’s all part of a mysterious divine plan. And atheists also sort of have an answer: it’s the random sputtering of a purposeless universe. What about Peterson?

I think – and I’m really uncertain here – that he doesn’t think of meaning this way. He thinks of meaning as some function mapping goals (which you already have) to motivation (which you need). Part of you already wants to be successful and happy and virtuous, but you’re not currently doing any of those things. If you understand your role in the great cosmic drama, which is as a hero-figure transforming chaos into order, then you’ll do the things you know are right, be at one with yourself, and be happier, more productive, and less susceptible to totalitarianism.

If that’s what you’re going for, then that’s what you’re going for. But a lot of the great Western intellectuals Peterson idolizes spent their lives grappling with the fact that you can’t do exactly the thing Peterson is trying to do. Peterson has no answer to them except to turn the inspiringness up to 11. A commenter writes:

I think Nietzsche was right – you can’t just take God out of the narrative and pretend the whole moral metastructure still holds. It doesn’t. JP himself somehow manages to say Nietzsche was right, lament the collapse, then proceed to try to salvage the situation with a metaphorical fluff God.

So despite the similarities between Peterson and C.S. Lewis, if the great man himself were to read Twelve Rules, I think he would say – in some kind of impeccably polite Christian English gentleman way – fuck that shit.

IV.

Peterson works as a clinical psychologist. Many of the examples in the book come from his patients; a lot of the things he thinks about come from their stories. Much of what I think I got from this book was psychotherapy advice; I would have killed to have Peterson as a teacher during residency.

C.S. Lewis might have hated Peterson, but we already know he loathed Freud. Yet Peterson does interesting work connecting the Lewisian idea of the person trapped in their victimization and pride narratives to Freud’s idea of the defense mechanism. In both cases, somebody who can’t tolerate reality diverts their emotions into a protective psychic self-defense system; in both cases, the defense system outlives its usefulness and leads to further problems down the line. Noticing the similarity helped me understand both Freud and Lewis better, and helped me push through Freud’s scientific veneer and Lewis’ Christian veneer to find the ordinary everyday concept underneath both. I notice I wrote about this several years ago in my review of The Great Divorce, but I guess I forgot. Peterson reminded me, and it’s worth being reminded of.

But Peterson is not really a Freudian. Like many great therapists, he’s a minimalist. He discusses his philosophy of therapy in the context of a particularly difficult client, writing:

Miss S knew nothing about herself. She knew nothing about other individuals. She knew nothing about the world. She was a movie played out of focus. And she was desperately waiting for a story about herself to make it all make sense.

If you add some sugar to cold water, and stir it, the sugar will dissolve. If you heat up that water, you can dissolve more. If you heat the water to boiling, you can add a lot more sugar and get that to dissolve too. Then, if you take that boiling sugar water, and slowly cool it, and don’t bump it or jar it, you can trick it (I don’t know how else to phrase this) into holding a lot more dissolved sugar than it would have if it had remained cool all along. That’s called a super-saturated solution. If you drop a single crystal of sugar into that super-saturated solution, all the excess sugar will suddenly and dramatically crystallize. It’s as if it were crying out for order.

That was my client. People like her are the reason that the many forms of psychotherapy currently practised all work. People can be so confused that their psyches will be ordered and their lives improved by the adoption of any reasonably orderly system of interpretation.

This is the bringing together of the disparate elements of their lives in a disciplined manner – any disciplined manner. So, if you have come apart at the seams (or you have never been together at all) you can restructure your life on Freudian, Jungian, Adlerian, Rogerian, or behavioral principles. At least then you make sense. At least then you’re coherent. At least then you might be good for something, if not yet good for everything.

I have to admit, I read the therapy parts of this book with a little more desperation than might be considered proper. Psychotherapy is really hard, maybe impossible. Your patient comes in, says their twelve-year-old kid just died in some tragic accident. Didn’t even get to say good-bye. They’re past their childbearing age now, so they’ll never have any more children. And then they ask you for help. What do you say? “It’s not as bad as all that”? But it’s exactly as bad as all that. All you’ve got are cliches. “Give yourself time to grieve”. “You know that she wouldn’t have wanted you to be unhappy”. “At some point you have to move on with your life”.

Jordan Peterson’s superpower is saying cliches and having them sound meaningful. There are times – like when I have a desperate and grieving patient in front of me – that I would give almost anything for this talent. “You know that she wouldn’t have wanted you to be unhappy.” “Oh my God, you’re right! I’m wasting my life grieving when I could be helping others and making her proud of me, let me go out and do this right now!” If only.

So how does Jordan Peterson, the only person in the world who can say our social truisms and get a genuine reaction with them, do psychotherapy?

He mostly just listens:

The people I listen to need to talk, because that’s how people think. People need to think…True thinking is complex and demanding. It requires you to be an articulate speaker and careful, judicious listener at the same time. It involves conflict. So you have to tolerate conflict. Conflict involves negotiation and compromise. So, you have to learn to give and take and to modify your premises and adjust your thoughts – even your perceptions of the world…Thinking is emotionally painful and physiologically demanding, more so than anything else – except not thinking. But you have to be very articulate and sophisticated to have all this thinking occur inside your own head. What are you to do, then, if you aren’t very good at thinking, at being two people at one time? That’s easy. You talk. But you need someone to listen. A listening person is your collaborator and your opponent […]

The fact is important enough to bear repeating: people organize their brains through conversation. If they don’t have anyone to tell their story to, they lose their minds. Like hoarders, they cannot unclutter themselves. The input of the community is required for the integrity of the individual psyche. To put it another way: it takes a village to build a mind.

And:

A client of mine might say, “I hate my wife”. It’s out there, once said. It’s hanging in the air. It has emerged from the underworld, materialized from chaos, and manifested itself. It is perceptible and concrete and no longer easily ignored. It’s become real. The speaker has even startled himself. He sees the same thing reflected in my eyes. He notes that, and continues on the road to sanity. “Hold it,” he says. “Back up. That’s too harsh. Sometimes I hate my wife. I hate her when she won’t tell me what she wants. My mom did that all the time, too. It drove Dad crazy. It drove all of us crazy, to tell you the truth. It even drove Mom crazy! She was a nice person, but she was very resentful. Well, at least my wife isn’t as bad as my mother. Not at all. Wait! I guess my wife is actually pretty good at telling me what she wants, but I get really bothered when she doesn’t, because Mom tortured us all half to death being a martyr. That really affected me. Maybe I overreact now when it happens even a bit. Hey! I’m acting just like Dad did when Mom upset him! That isn’t me. That doesn’t have anything to do with my wife! I better let her know.” I observe from all this that my client had failed previously to properly distinguish his wife from his mother. And I see that he was possessed, unconsciously, by the spirit of his father. He sees all of that too. Now he is a bit more differentiated, a bit less of an uncarved block, a bit less hidden in the fog. He has sewed up a small tear in the fabric of his culture. He says “That was a good session, Dr. Peterson.” I nod.

This is what all the textbooks say too. But it was helpful hearing Jordan Peterson say it. Everybody – at least every therapist, but probably every human being – has this desperate desire to do something to help the people in front of them who are in pain, right now. And you always think – if I were just a deeper, more eloquent person, I could say something that would solve this right now. Part of the therapeutic skillset is realizing that this isn’t true, and that you’ll do more harm than good if you try. But you still feel inadequate. And so learning that Jordan Peterson, who in his off-hours injects pharmaceutical-grade meaning into thousands of disillusioned young people – learning that even he doesn’t have much he can do except listen and try to help people organize their narrative – is really calming and helpful.

And it makes me even more convinced that he’s good. Not just a good psychotherapist, but a good person. To be able to create narratives like Peterson does – but also to lay that talent aside because someone else needs to create their own without your interference – is a heck of a sacrifice.

I am not sure if Jordan Peterson is trying to found a religion. If he is, I’m not interested. I think if he had gotten to me at age 15, when I was young and miserable and confused about everything, I would be cleaning my room and calling people “bucko” and worshiping giant gold lobster idols just like all the other teens. But now I’m older, I’ve got my identity a little more screwed down, and I’ve long-since departed the burned-over district of the soul for the Utah of respectability-within-a-mature-cult.

But if Peterson forms a religion, I think it will be a force for good. Or if not, it will be one of those religions that at least started off with a good message before later generations perverted the original teachings and ruined everything. I see the r/jordanpeterson subreddit is already two-thirds culture wars, so they’re off to a good start. Why can’t we stick to the purity of the original teachings, with their giant gold lobster idols?

OT98: Vauban Thread

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server. Also:

1. Comment of the week is mrjeremyfade on how companies are responding to the new tax bill – in particular, their difficulties winding down their now-obsolete tax evasion schemes without admitting they were always just tax evasion schemes.

2. New sidebar ad – this one for Mark Neyer’s book The Mechanics Of Emotion, which he describes as “an exploration of physics, emotion, money, AI, and meaning. Also, dirty jokes.”

3. And an update from another advertiser – Nectome, previous winner of the Small Mammal Brain Preservation Prize, is back in the news for winning the Large Mammal Brain Preservation Prize. They don’t have a human product available yet, but there’s a waitlist which apparently includes Sam Altman. Obviously Nectome’s embalming process is 100% fatal, and not aimed at anyone except the terminally ill.

4. The Future of Humanity Institute is doing some experiments on human judgment and probability calibration, and asks me to pass on the link for anyone willing to play some online game-type-things.


Navigating And/Or Avoiding The Inpatient Mental Health System

Apology and disclaimer

This is in response to questions I get about how to interact (or not interact) with the inpatient mental health system and involuntary commitment. The table of contents is:

1. How can I get outpatient mental health care without much risk of being involuntarily committed to a hospital?
2. How can I get mental health care at a hospital ER without much risk of being involuntarily committed?
3. I would like to get voluntarily committed to a hospital. How can I do that?
4. I am seeking inpatient treatment. How can I make sure that everyone knows I am there voluntarily, and that I don’t get shifted to involuntary status?
5. How can I decide which psychiatric hospital to go to?
6. I am in a psychiatric hospital. How can I make this experience as comfortable as possible?
7. I am in a psychiatric hospital and not happy about it and I want to get out as quickly as possible. What should I do?
8. I am in the psychiatric hospital and I think I am being mistreated. What can I do?
9. I think my friend/family member is in the psychiatric hospital, but nobody will tell me anything.
10. My friend/family member is in the psychiatric hospital and wants to get out as quickly as possible. How can I help them?
11. How will I pay for all of this?
12. I have a friend/family member who really needs psychiatric treatment, but refuses to get it. What can I do?

I am a psychiatrist, which both means I have some useful experience here, and makes it hard for people trying to avoid the system to trust me. Anything written with too much honesty risks degenerating into “here’s how to cheat the system so nobody will know you’re about to commit suicide”. But anything written with too little honesty risks degenerating into some variation of “trust the wise benevolent doctors to do what is best for you”. This is an impossible edge to balance on, and I am sure I fail at one point or another.

But my first excuse is that if somebody doesn’t understand how the commitment system works, they’re not going to innocently blunder into spilling their guts. They’re just going to never go to the psychiatrist at all. If someone wants to avoid ending up in the hospital but doesn’t know how, it’s not like they’re stuck doing everything we want. They can just lie about everything. Or they can just never go to the psychiatrist at all. If they understand a little bit about how the system works, they can at least lie strategically, in the one place where they have to lie, while cooperating 99% of the way.

And my second excuse is that in the end, this is not an adversarial enterprise. Psychiatrists commit people because they’re scared. They’re scared because they can’t predict what the patient is going to do – and on another level, they’re scared because they might get sued if they don’t follow the rules. If patients who aren’t going to hurt themselves know how to explain that they aren’t going to hurt themselves in a way that reassures their psychiatrist, and in a way that doesn’t leave their psychiatrist legally liable for not committing them, then everybody can be more comfortable and get on with the hard work of actual treatment.

This guide applies to adult mental health care only. Child/adolescent mental health care is totally different and I don’t know anything about it. I have only worked in two states, it might be a bit different in other states, and it is definitely a lot different outside the US. Nothing in here is official medical advice. Follow it at your own risk. Please don’t use this to avoid psychiatric care which you actually need. All of this will be wrong in certain situations; when in doubt, trust your intuition.

1: How can I get outpatient mental health care without much risk of being involuntarily committed to a hospital?

Mental health care is divided into inpatient and outpatient settings. Inpatient care means it’s in a hospital, voluntary or otherwise. Outpatient care is your local doctor’s office, or psychiatrist’s office, or therapist’s office.

If you go to a hospital for mental health reasons, your risk of getting involuntarily committed is relatively high – see below for more. If you go to an outpatient provider, your risk is much lower.

In theory, the outpatient system is supposed to provide voluntary treatment, with risk of involuntary commitment only in certain very clearly delineated situations that you can understand and avoid. Each state’s laws are slightly different (and I can’t say anything about non-US countries), but they tend to allow involuntary commitment only in cases of immediate risk of hurting yourself, hurting someone else, or being so psychotic that you could plausibly hurt someone by accident (eg you jump out of a window because you think you can fly).

The key word is “immediate”. If you just have occasional thoughts about suicide, or you have some limited hallucinations but remain grounded in reality, according to the law this is not enough to involuntarily commit you.

In practice, not every mental health professional knows the laws or interprets them the same way, so they can just commit you anyway. The check on this is supposed to be that you can sue them when you get out of the hospital, but almost nobody bothers to do this, and judges and juries usually find in favor of the mental health professional.

So the law isn’t as much protection as it probably should be. In reality your best protection is to only open up to competent people whom you trust, and to frame what’s going on in a way that doesn’t scare them unnecessarily.

Don’t joke about committing suicide. Don’t bring up occasional stray suicidal thoughts if they don’t matter. Don’t say something like “I think about suicide sometimes, but doesn’t everyone?”, because your psychiatrist will have heard the last ten people answer “No, of course I never think about suicide”, and they will not be impressed with your claim about the human condition. Assume that any time you mention suicide, there’s a tiny but real chance of getting committed. If you are actually suicidal, take that chance in order to get help. Otherwise, this is really not the time to bring it up. If you wouldn’t offhandedly chat about terrorism with an airport security guard, don’t offhandedly chat about suicide with a psychiatrist.

(none of this applies to competent psychiatrists whom you trust, but award this status only after many positive experiences over a long-term relationship)

If your psychiatrist asks you outright if you ever have suicidal thoughts, well, tough call. If you don’t, then say you don’t. If you mostly don’t but you are some sort of chronically indecisive person who has trouble giving a straight answer to a question, now is the time to suppress that tendency and just say that you don’t. If you do, but you would never commit suicide and it’s not a big part of why you’re seeing them and you don’t mind lying, you can probably just say you don’t. If you do, and it’s important, and you don’t want to lie about it, then make sure to be very specific about how limited your thoughts are (eg: “I only thought that way once, three years ago”) and to add as many of these as are true:

1. “Of course I would never go through with it, but sometimes I think about…”
2. “I love my friend/family member/partner/pet too much to ever go through with it.”
3. “I don’t have any plans for how I would do it.”
4. “I’m [religion], and we believe that God doesn’t want us to commit suicide.”
5. “I’ve been thinking about it for [long time], but the thoughts haven’t gotten any worse lately.”

The same applies to hallucinations and other signs of psychosis. Most people have very minor random hallucinations as they are going to sleep. Most people hear their own thoughts as silent “voices” in their head at least some of the time. Most people who take hallucinogenic drugs will hallucinate. You don’t need to bring these up when someone asks you about hallucinations. If you actually have some troubling psychotic symptoms, then mention them, but add as many of these as are true:

1. “Of course, I know these aren’t really real.”
2. “These have been going on for a while and aren’t any worse lately.”
3. “I would never listen to anything the voices say.”
4. “I only get that way when I’m on drugs / really tired / under a lot of stress.”

If you do all of these things, your chance of getting involuntarily committed to a psychiatric hospital by an outpatient provider is probably one percent or less, unless you’re really really sick.

Notice the words “by an outpatient provider” here. None of this applies if you are in a hospital (eg with pneumonia). If you are in a hospital, be extra careful about this to the point of paranoia. Unless you’re really worried that you might go through with suicide, be careful about mentioning it in the hospital. Get your pneumonia or whatever treated, and then get out of the hospital, find a competent outpatient psychiatrist whom you trust, and open up about your issues to them. If you decide to open up to the nurse-assistant giving you a three question psychiatric screen in the pneumonia ward, you may end up on a psychiatric unit regardless of how careful you are, because hospitals don’t take chances.

2: How can I get mental health care at a hospital ER without much risk of being involuntarily committed?

Hospital ERs are not set up to provide psychiatric help to random people. They are set up to evaluate people and decide if it’s a real emergency. If it is, you will be committed to an inpatient unit. If it isn’t, they will tell you to see an outpatient psychiatrist, and you will be back at the beginning except with an extra $5000 bill to pay.

This is not true 100% of the time, and you can take your chances if you want. In particular, if you have extreme anxiety, sometimes they can give you enough fast-acting anti-anxiety medication to calm you down and last you until you can see an outpatient psychiatrist. But going to a hospital ER for any mental-health-related reason other than expecting to get admitted to a hospital psychiatric unit should be a last resort.

3: I would like to get voluntarily committed to a hospital. How can I do that?

If you have a competent outpatient psychiatrist whom you trust, call them up and tell them what’s going on. If they have connections at a local hospital, they may be able to get you directly admitted, which will save you a lot of time and suffering.

Otherwise, you will have to go to a hospital ER. Be prepared for this to be extremely unpleasant. It may take up to 24 hours of sitting in the ER before a psychiatrist can see you. You will probably get examined by nurses, medical students, non-psychiatrist doctors, etc, and each time you will think “Finally! I am getting evaluated and I can get out of this ER!” but you will be wrong. Although there will probably be some crappy food and drink available, there may not be much in the way of entertainment, quiet, or privacy. Do yourself a favor and bring a book or game or something. You may not be allowed to keep your cell phone or laptop or other metal object (more on this later). If family or friends are willing to help, have them come along – if only so they can go out and bring you back real food when you get hungry.

Once you set foot in an ER and mention the word “psychiatry”, you should be prepared for someone to tell you that you’re not allowed to leave until the evaluation is complete. Maybe no one will tell you this, and you can try to leave, and it’ll be fine. But you should be prepared for it not to work.

After many trials and tribulations, you will be examined by a psychiatrist, who will decide whether or not to accept you to the psychiatric unit. You are not guaranteed admission to the unit just because you want it. You might be turned down if the psychiatrist thinks you aren’t sick enough to need it, or if your insurance refuses to pay for it. Insurance companies are very reluctant to pay for hospitalizations unless there is a clear risk involved, so explain what the risk is.

The only thing that (almost) always works is mentioning suicide. If you say you’re suicidal, you will get admitted. If you want to be sure, do the opposite of everything above. Stress that you are suicidal. Stress that it’s not just the occasional fleeting thought, but actually something that you might really go ahead with. If you have a plan, share it.

If you’re not suicidal, expect to have to argue. Talk about what you’ve already tried and why it didn’t work. Talk about all the damage your mental illness has caused in your life. If there’s any chance you might snap and do something horrible – hurt someone, hurt yourself, have some kind of spectacular breakdown – play it up. If you have to, say something vague like “I don’t know what I would do if I couldn’t get help”. Be ready for this not to work, and for the psychiatrist evaluating you to recommend you go to an outpatient psychiatrist.

If you really want help beyond the level of outpatient treatment, but your insurance company won’t budge, ask about a partial hospital program. This is something where you go to a hospital-like environment from 9 to 5 for a few weeks, seeing doctors and getting therapy and classes, but you’re not involuntarily committed and you go home at night. Sometimes insurance companies will be willing to do this as a compromise if you are not suicidal.

4. I am seeking inpatient treatment. How can I make sure that everyone knows I am there voluntarily, and that I don’t get shifted to involuntary status?

I want to be really clear on this: in your head, there might be a huge difference between voluntary and involuntary hospitalization. In your doctor’s head, and in the legal system, these are two very slightly different sets of paperwork with tiny differences between them.

It works like this, with slight variation from state to state: involuntary patients are usually in the hospital for a few days while the doctors evaluate them. If at the end of those few days the doctors decide the patient is safe, they’ll discharge them. If, at the end of those few days, the doctors decide the patient is dangerous, the doctors will file for a hearing before a judge, which will take about a week. The patient will stay in the hospital for that week. 99% of the time the judge will side with the doctors, and the patient will stay until the doctors decide they are safe, usually another week or two.

Voluntary patients are technically allowed to leave whenever, but they have to do this by filing a form saying they want to. Once they file that form, their doctors may keep them in the hospital for a few more days while they decide whether they want to accept the form or challenge it. If they want to challenge it, they will file for a hearing before a judge, which will take about a week. The patient will stay in the hospital for that week. 99% of the time the judge will side with the doctors, and the patient will stay until the doctors decide they are safe, usually another week or two.

You may notice that in both cases, the doctors can keep the patient for a few days, plus however long it takes to have a hearing, plus however long the judge gives them after a hearing. So what’s the difference between voluntary and involuntary hospitalization? Pride, I guess, plus a small percentage of cases where the doctors just shrug and say “whatever” when the voluntary patient tries to leave.

Some decent fraction of the time, patients who intended to get voluntarily hospitalized end up involuntarily hospitalized for inscrutable bureaucratic reasons. The one I’m most familiar with is the ambulance ride: suppose the hospital you’re in doesn’t have any psychiatric beds available and wants to send you to the hospital down the road. For inscrutable bureaucratic reasons, they have to send you by ambulance. And for inscrutable bureaucratic reasons, any psychiatric patient transferred by ambulance has to be involuntary. Your doctors don’t care about this, because they know that there is no practical difference between voluntary and involuntary – but if you are still trying to maintain your pride, this might come as kind of a shock.

Some other decent fraction of the time, patients who ought to be involuntarily hospitalized end up voluntarily hospitalized for inscrutable bureaucratic reasons. The one I’m most familiar with is doctors asking patients whom they are committing against their will to sign a voluntary form, ie “Agree to come voluntarily, or else I will commit you involuntarily”. This sounds super Orwellian, but it really is done with the patient’s best interest at heart. Involuntary commitments usually leave some kind of court record, which people can find if they’re searching your name for eg a background check – which could come up anywhere from applying for a job, to trying to buy a gun. Voluntary commitments usually don’t cause this problem. Even though nobody feels very warmly toward the psychiatrist telling them to sign voluntarily or else, that psychiatrist is right and you should suck it up and sign the voluntary form.

If given a choice, you should sign voluntary, if only for the background-check reason above. But don’t count on getting the choice, and don’t get too attached to the illusion that it really matters in some deep way.

5. How can I decide which psychiatric hospital to go to?

If it’s an emergency, the answer is “whichever one is closest” or even “whichever one the ambulance you should call right now takes you to.”

If you have a little more leeway, and you have a competent outpatient psychiatrist whom you trust, ask them which one to go to. They will probably be familiar with the local terrain and be able to give you good advice.

If you live in a big city with wealthier and poorer areas, and it’s all the same to your insurance company, try to go to a hospital in the wealthier area. Not only do wealthier people always get nicer things, but – and sorry if this is politically incorrect – you would rather be locked up for a week with the sorts of people who end up in wealthy-area psychiatric hospitals than with the sorts of people who end up in poor-area psychiatric hospitals.

US News & World Report ranks the best psychiatric hospitals. They’re mostly looking at doctor prestige, but I would guess this correlates with other factors patients want in a hospital. If you’re really prestigious you have a lot of money and a lot of eyes watching you, and that probably helps. I suspect teaching hospitals are also good, for the same reason. But these are just guesses.

If you have no other way of figuring this out, you can try looking at Psych Ward Reviews. This site is underused and suffers from the expected bias – you only write about somewhere if you don’t like it – but it’s better than nothing.

Keep in mind that sometimes hospitals will be full, and they will send you to a different hospital instead, and you will not have any say in this.

6. I am in a psychiatric hospital. How can I make this experience as comfortable as possible?

When you go to the hospital ER to get admitted, bring a bag of stuff with you. This should include clothing, fun things to do like books, earplugs, snacks you like, and phone numbers for people you might want to contact.

Keep in mind that you will not be allowed to have anything that could be used as a weapon, for a definition of “could be used as a weapon” which is clearly aimed at MacGyver-level masterminds who can create a railgun out of three paperclips and a stick of gum. The same goes for anything that could be used as a suicide method. This means for example no laced shoes, pillowcases, scarves, and a bunch of other things you will not expect. Basically, bring stuff to the hospital, but expect a decent chance it won’t be allowed in.

Metal objects, including laptops, cell phones, mp3 players, etc, will never be allowed in. These will be taken from you and put in a locker during your stay. If for some reason you have to transfer hospitals during your stay, these things always somehow get lost. Your best bet is to bring a friend with you to the ER, and have them take your cell phone and other valuables.

If you forget to bring a bag of stuff, or if you were committed involuntarily and unexpectedly and didn’t get a chance, call a friend or family member and ask them to bring you your stuff.

7. I am in a psychiatric hospital and not happy about it and I want to get out as quickly as possible. What should I do?

Good news: average stays for psychiatric hospitals have been decreasing for decades, and are now usually a week or less. I did a study on the hospital I worked in and came up with a median stay of 5.9 days, and remember that there are a lot of really sick people bringing up those numbers.

(There are a few states that have laws centered around the number “three days”, but there are also a lot of states that don’t. For some reason the “three days” number has leaked into the general consciousness and everyone expects that to be how long they stay in the hospital. Don’t necessarily expect to get out of the hospital in exactly three days, but do expect it will be closer to 5.9 days than to weeks or months.)

Even better news: contrary to rumor, psychiatrists rarely have a financial incentive to keep people hospitalized. In fact, most hospitals and insurers now encourage quick “turnover” to “open up beds” for the next group of needy patients, and doctors can get bonuses for getting people out as quickly as possible. This should worry everyone else in the hospital who’s getting treated for pneumonia or whatever, but from the perspective of a psychiatric patient who wants to leave quickly it’s pretty good.

If you have a good doctor, you should trust their judgment and do what they say. But if you have a bad doctor, then the only thing you can count on is that they will respond to incentives. Their incentive to get you out quickly is the hospital administrators and insurance companies breathing down their neck. Their incentive to keep you longer is that if you get out of the hospital and ever do anything bad, they can get sued for “missing the signs”. So their goal is to do a token amount of work that proves they evaluated you properly so nothing that happens later is their fault.

That means they’ll keep you for some standard time interval, traditionally (though not always) three days, just so they can say they “monitored” you. If you seem unusually scary in some way, they might monitor you a little longer, up to a week or two. Your chances of successfully convincing them not to do this are essentially nil. Imagine you kill someone a few weeks after leaving the hospital, and during the trial the prosecutor says “The patient was taken to St. Elsewhere Hospital for evaluation of mental status, but discharged early, because he said he didn’t want to have to sit around and be evaluated for the usual amount of time, and his doctor thought this was a reasonable request.” Your doctor is definitely imagining this scenario.

Instead of pleading with your doctors to let you go early, just do everything right. Have meals at mealtime. Go to groups at group time. Groom yourself, not just because you look saner when you’re well-groomed, but because there will actually be nurses monitoring your grooming status and reporting it to the psychiatrists making release decisions. When people tell you things you should do after leaving the hospital, agree that you will definitely do them. If people ask you questions, give reassuring-sounding answers.

For this last one – don’t contradict evidence against you, don’t accuse other people of lying, just downplay whatever you can downplay, admit to what the doctors already believe, and make it sound like things have gotten better. For example, if you were found lying face-down with an empty bottle of pills next to you, don’t say “I didn’t attempt suicide, I just tripped and the pills fell into my mouth!” (I have seriously had patients try this one on me). Don’t say “It was my girlfriend’s fault, she drove me to do it!” Just say something like “That was a really bad night for me, and I don’t remember exactly what happened, but now I’m feeling a lot more hopeful, and I think that was a mistake.”

Don’t overdo it. Nothing is more annoying than the person who’s like “The twenty minutes I’ve been talking with you so far have turned my life around, and now I realize how wrong I was to reject God’s beautiful gift of existence, and am overflowing with abounding joy at the prospect of getting to go back into the world and truly confront my problems with the help of my loving family and…” Just be like “Yeah, things were rough, but I feel a little better now.”

Most important, take the damn drugs.

Yes, I know that some psychiatric drugs are unpleasant or addictive or dangerous or make you feel miserable. I’m not challenging your decision not to want to be on them. But take the damn drugs while you are in the hospital, for 5.9 days. Then, when they let you out, decide if you still want to continue. I guarantee you this will be easier for you, for your psychiatrist, and for the various judges and lawyers involved. The alternative is that you refuse to take the drugs, somebody has to set up a court hearing to get an involuntary treatment order, you have to sit in the hospital for weeks while the legal system gets its act together, the psychiatrists finally get the order and drug you against your will, and then after however many weeks or months, you get released from the hospital and stop taking the drugs.

If you have a good doctor whom you trust, then talk to them about the drugs and make a decision together. Let them know if there are any side effects. If a drug isn’t working for you, tell them, so they can switch it. Be honest, and willing to stand up for yourself, but also open-minded and ready to listen.

But if you have a bad doctor, just take the damn drugs. Bring up side effects, mention anything that’s intolerable, but when – like bad doctors everywhere – they ignore you, just take the damn drugs. Then, when you get out of the hospital, go to a competent outpatient psychiatrist whom you trust, tell them the drugs aren’t right for you, and talk it over with them until you come up with a better plan.

This is a good general principle for everything: agree to whatever people ask you while you’re in the hospital, talk to a competent outpatient psychiatrist whom you trust once you get out, and decide which things to stick to. I remember working with a doctor who wanted to discharge his patient to some kind of outpatient drug rehab. The patient refused to go, so the doctor wouldn’t discharge her, and they were in a stalemate over it for weeks, and the whole time the patient was tearfully begging the doctor to release her. I cannot tell you how much willpower it took not to sneak into the patient’s room and yell at her “JUST AGREE TO GO TO THE REHAB AND THEN DON’T DO IT, YOU IDIOT”. I mean, I am as in favor of Truth as everyone else, but I don’t even think her doctor cared if she went to the rehab or not. He just wanted to be able to document “Patient agreed to go to rehab”, so that when she started taking drugs again, he would have ironclad court-admissible evidence that it wasn’t his fault.

Finally, your doctors will be very interested in “discharge planning”, ie making sure you have somewhere safe to be after you leave the hospital. They may not be willing to believe you about this. So get a family member (best) or friend (second-best) on your side. Have them agree to tell the doctors that they will watch over you after you leave, make sure you take your medication, make sure you get to your follow-up outpatient psychiatrist appointments, make sure you don’t take any illegal drugs. Your best bet for this is your mother – psychiatrists love mothers. Tell your doctors “I talked to my mother, she’s really concerned about my condition, she says that I can stay with her after I leave and she’s going to watch me really closely and make sure I’m okay”. Only say this if it’s true, because your doctors will call your mother and make sure of it. But if you can make this work, this is really helpful.

Even if all of this works, it’s just going to get you out of the hospital in a bit less than 5.9 days instead of a bit more than 5.9 days. There’s no good way to get out instantly. Sorry.

8. I am in the psychiatric hospital and I think I am being mistreated. What can I do?

Your best bet is to find someone with a position like “Recipient Rights Representative” or “Patient Rights Advocate”. Most states mandate that all psychiatric hospitals have a person like this. Their job is to listen to people’s concerns and investigate. Usually the doctors hate them, which I take as a pretty good sign that they are actually independent and do their job. If you haven’t already gotten a pamphlet about this person when you were admitted, ask the front desk or your nurse or someone else who seems to know what’s going on how to contact this person.

You may be able to switch doctors or nurses. Just go to the front desk or someone else official-looking and ask. I don’t think this is a legally codified right, but sometimes nobody cares enough to refuse. Keep in mind that if you switch doctors, you may have to stay longer so that the new doctor can do their three-day-or-so assessment of you, separate from the last doctor’s three-day-or-so assessment.

Threats don’t work. Everybody makes threats, and everyone at the hospital is used to them. Threatening to hire a lawyer is especially boring and overdone and will not even get anyone’s attention.

Actually hiring a lawyer will definitely get people’s attention, but it’s a high-variance strategy. Remember that it’s very hard to get a doctor not to hold you for a three-day-or-so evaluation, and that most people are released before anything goes to court anyway (a court hearing can take weeks to set up). I have mostly seen this work in cases where I have no idea what the doctors are thinking and everybody seems sort of confused and just letting the patient sit in the hospital for no reason. Lawyers can be a very good incentive for people to un-confuse themselves. I am not a lawyer, I have tried to avoid the state of prolonged confusion where lawyers become necessary, and I don’t want to give any legal advice beyond saying it will definitely get people’s attention. But I would feel bad if someone read this, hired a lawyer, found them not to be genuinely helpful (as in fact they probably will not be), and then got a huge legal bill.

Some people wait until they get out, then comparison-shop from the outside world and hire a lawyer to sue the people who mistreated them in the past. If you’re going to do this, document everything. Your doctors are documenting everything, and if one side comes in with perfect documentation and the other side just has vague memories, the first side will win. By “document everything”, I mean have a piece of paper where you write down things like “2:41 PM on October 10: Nurse Roberts threw a pencil at me. Informed such-and-such a person and they refused to help. Informed such-and-such another person and they also refused to help.” Write down exactly where and when everything took place – the psychiatric hospital may have video surveillance, and if everybody knows which videos to get, it will make life much easier. Report everything to the Patient Rights Advocate, even if they’re useless, just so you can call them up and have them testify you reported it to them at the time. I am not a lawyer, this is not legal advice, and your lawyer will be able to tell you much more – but documentation never hurts.

If things are really bad, figure out if there are surveillance cameras, and hang out in front of them.

Once you leave the hospital, consider giving feedback. Most hospitals will have some kind of survey or hotline or something that lets you praise hospital staff whom you liked and report hospital staff whom you didn’t like. This won’t heal any wounds you suffered – and while in the hospital, threatening to report a doctor will be ignored just like all threats – but it might help somebody way down the line. You can also write a report on Psych Ward Reviews. In fact, do this anyway, whether you’re mistreated or not, so that other people can learn which hospitals don’t mistreat people.

9. I think my friend/family member is in the psychiatric hospital, but nobody will tell me anything.

Yes, this definitely sounds like the sort of thing that happens.

Because of medical privacy laws, it is illegal to tell a person’s friend or family that they are in the psychiatric hospital, or which psychiatric hospital they’re in, without their consent. If the person is too paranoid, angry, or confused to give consent, then their friends and family won’t have a good way to figure out what’s going on.

Your best bet is to call every psychiatric hospital that they could plausibly be in and ask “Is [PERSON’S NAME] there?” Sometimes, all except one of them will say “No”, and one of them will say “Due to medical privacy laws, we can’t tell you”. I know this sounds ridiculous, but it really works.

Once you have some idea which hospital your friend is in, call and ask to speak to them. They will say something like “Due to medical privacy laws, we can’t tell you if that person is here.” Say “I understand that, but could you please just ask them if they’re willing to speak to me right now?” If they are willing to speak to you, problem solved. Otherwise, you might still get some information based on whether the person leaves you on hold for a while in a way that suggests they’re going to your friend and asking them whether they want to talk to you.

You can also ask to speak to (or leave a message for) the doctor taking care of your friend. The receptionist will say “Due to medical privacy laws, we can’t tell you if that person is here.” Say “I understand that, but I have some important information about their case that I want the doctor to know. They don’t need to tell me whether my friend is there or not, just listen.” At this point, all but the most committed receptionists will either admit that your friend isn’t there, or actually get a doctor or take a message. There is no doctor in the world who is so committed to medical privacy that they will waste time listening to the history of a patient they don’t really have just to maintain a charade, so if you actually get a doctor this is a really strong sign.

Once you have a good idea where your friend is, you can ask the receptionist to pass a message along to them, like “Call me at [this phone number]”. If they still don’t respond – well, that’s their right.

Most hospitals will have visiting hours. Going to visit someone who refuses to let you know they’re at the hospital and refuses to give anyone consent to talk to you is a high-variance strategy, but you can always try.

10. My friend/family member is in the psychiatric hospital and wants to get out as quickly as possible. How can I help them?

First, make sure they actually want to get out as quickly as possible, and you’re not just assuming this. You would be surprised how many people miss this step.

Second, make sure they know everything in section 7 here.

Third, offer to talk to the doctors. Doctors often don’t trust mentally ill patients, but they usually trust family members. If your friend isn’t sick enough to need to be in the hospital, tell the doctors that. Describe the circumstances around their admission and why it’s not as bad as it looks. Mention how well you know the person, and how you’ve been with them through their illness, and how you know they would never do anything dangerous. Only say this if it’s true – if they’re in the hospital for stabbing a police officer, your “they would never do anything truly dangerous” claim is just going to make you look like an idiot.

Offer to help with discharge planning (see the end of section 6). Tell them that the patient will be staying with you after they leave the hospital, that you’re going to be watching them closely to make sure that they’re safe, that you’ll make sure they take their medications and go to follow-up appointments. Again, only say this if it’s true – or at the very least, coordinate with the patient, so you don’t say “My son will be staying with me under my close supervision,” and then your son ruins it all by saying “Haha, as if.”

If you have a sob story, tell it. If you are ninety-seven years old and your son is the only person who is able to take care of you and bring you to your doctors’ appointments, mention that. Sob stories from patients generally don’t work, but sob stories from family members might.

Offer to come to the hospital during visiting hours and meet with the doctors. This both underlines everything above – it shows you’re really invested in their care – and also gives you a good opportunity to pressure the doctors face to face. I don’t mean you should threaten them or be a jerk about it, but just ask “Why can’t Johnny come home? We really need Johnny at home to help with the chores. Everyone at home misses Johnny.” I don’t guarantee this will work, but it will work a little, on certain people.

If there are many people in your family who are willing to work on this, use whoever is closest to the patient (eg their mother) – and in case of a tie use the person who is the most upstanding high-status member of society. A promise to take care of someone sounds better coming from a family member who is a doctor themselves (or a lawyer, or a teacher) than from the patient’s unemployed stoner brother with a NO FEAR tattoo.

As somebody who is not in a psychiatric hospital, you are in a much better position to hire a lawyer if one needs to be hired. Again, in the majority of cases a patient won’t even stay long enough to have a court hearing. If you are poor and have limited resources, this is definitely not how I would recommend using them. But if you have money to burn, or your friend/family member is being held for an inexplicable amount of time (longer than a week or two) and you don’t know why, you are going to be in a much better position to take care of this than the patient themselves.

Even if all this works, it’s just going to make someone stay a bit less than 5.9 days instead of a bit more than 5.9 days. There’s no good way to get someone out instantly.

11. How will I pay for all of this?

If you don’t have health insurance, there is usually some kind of state/county mental health insurance program that is supposed to help with this kind of thing. You usually have to earn below a certain amount to qualify. Your social worker at the hospital can talk to you about this. I am not promising you such a program will exist – if you’re concerned about money, look into this before you go to the hospital.

If you do have health insurance, they may pay for your admission. The problem is that they have to decide if you are really ill enough to need psychiatric care, and they make this determination separately from the doctors who decide whether to commit you or not. In the worst case scenario, you can be involuntarily committed because your doctors decided you needed care, but your health insurance refuses to pay for it because they decided you didn’t need care. If this happens, you are stuck with the bill. This is horrifying and there should be some kind of law against it, but I’ve seen it happen and I think it’s legal.

Your best bet in these cases is to try to get the state/county mental health insurance mentioned above. Sometimes you can sign up for it after you leave the hospital, and then get your costs reimbursed.

If everything goes wrong, and you’re stuck with a bill and no insurance company willing to pay it, try to argue the hospital down. Hospitals know that the average random sick person can’t afford to pay $20,000 or whatever ridiculous amount they charge. They make these numbers up as part of a complicated plot to fool insurance companies into overpaying, which never works, and they expect patients to try to bargain. They are also usually willing to consider whatever payment plan you think you can make work. I don’t know very much about this, but there’s some more information here.

As far as I know, committing people involuntarily and leaving them with a huge bill is legal, and hiring a lawyer will not help with this. I don’t know much, so you may want to ask a lawyer’s opinion anyway, if you can afford it.

12. I have a friend/family member who really needs psychiatric treatment, but refuses to get it. What can I do?

If your family member is not a danger to themselves or others, your options are limited. You can try to convince them to voluntarily seek treatment, but if it doesn’t work, it doesn’t work.

If your family member is a danger to themselves or others, you have a good case for getting them involuntarily committed to the hospital. A good example of this would be them threatening to hurt you, or actually hurting you, or being so out of touch with reality that you are legitimately afraid they might hurt you or themselves. Them being paranoid (“people are out to get me”) or extremely confused about basic reality (“I am able to fly”) counts as legitimate reason to believe they might hurt you or themselves. If this describes your family member, document everything worrying that they say or do so you can present it to the doctors doing the assessment and (eventually) the courts.

Then, if your family member is cooperative/confused enough to let you drive them to the hospital, drive them to a hospital ER. If they’re not this cooperative, call the police and they will take things from there. Be prepared for the police to potentially put your family member in handcuffs and be really aggressive and police-y about it (and if you have a dog, arrange for it to be somewhere else at the time – like stuck in a bedroom with the door closed). The police will bring your family member to the hospital ER. You should go to the hospital ER too, so that you can tell the doctors what’s wrong and why you think they need treatment – ie why they are dangerous or potentially dangerous.

The most common way this ends is that your family member goes to the hospital, is started on some drugs, gets a little better, goes home, stops taking the drugs, and gets worse again. If the doctors at the hospital are not competent, they may not think about this. It may end up being your job to insist on some kind of longer-term solution.

If your family member is psychotic, then the gold standard for longer-term solutions is a long-acting injectable antipsychotic medication. This is a shot that a nurse can give them which will give them a few months’ worth of antipsychotics all at once, safely. This way they don’t have to remember/agree to take their medication at home. Then a few months later you can wrangle them back to a doctor’s office where someone can give them the shot again; repeat as needed. If your family member doesn’t agree to this, you’re going to need a judge’s order – but judges are really cooperative with this kind of thing and your psychiatrist can tell you more about how to make this happen. A partial hospital program can also help with this.

There is a kind of institution with different names everywhere, usually something like “Assertive Community Treatment”, which basically consists of some mental health professionals in a van who go around to people’s houses and make sure they’re okay / staying on medication after they’ve been discharged from the hospital. These are chronically underfunded and you have to fight to get into them, but if nothing else works you can see if there’s one of them in your area. These people are also good at wrangling patients to get their monthly dose of long-acting injectable antipsychotics.

If you need a quick way to deal with a family member’s psychosis, and they refuse to take antipsychotic medicine, and they don’t meet criteria for involuntary hospital admission – well, I can’t believe I’m saying this, and this is super not medical advice – but cannabidiol, a chemical in marijuana, is a weak but functional antipsychotic. Normal marijuana is awful for this situation and contains lots of other chemicals that make psychosis worse, but you can get special cannabidiol-only strains that act sort of like weak non-prescription antipsychotic medication. In a state like California where marijuana is legal, you can talk to a marijuana expert about which strains these are and how to use them. In a state where only medical marijuana is legal, you can take your family member to a random quack to get them a medical marijuana card, then follow the same process. Most psychotic people refuse to believe that they are psychotic, but most of them are very anxious. If you frame the marijuana as a way to help with their anxiety, they may go along with it. Then they might get non-psychotic enough to make them understand there’s a problem, after which they can go to a psychiatrist and get a longer-term solution. Again, this is definitely not medical advice and if you have any other options you should take those instead.

You can get a lot more (and much more responsible) advice from the Treatment Advocacy Center, a non-profit that helps people figure out how to get their friends and family members psychiatric treatment.

Postscript

All of this is to prepare you for worst-case scenarios. Many people seek inpatient mental health treatment, find it very helpful, and consider it a positive experience. According to a survey on Shrink Rap (heavily selected population, possibly brigaded, not to be taken too seriously) about 40% of people who were involuntarily committed to psychiatric hospitals eventually decided it was helpful for them. This fits my experience as well. Be careful, but don’t avoid getting treatment if you really need it.

The Dark Rule Utilitarian Argument For Science Piracy

I sometimes advertise sci-hub.tw – the Kazakhstani pirate site that lets you get scientific papers for free. It’s clearly illegal in the US. But is it unethical? I can think of two strong arguments that it might be:

First, we have intellectual property rights to encourage the production of intellectual goods. If everyone downloaded Black Panther, then Marvel wouldn’t get any money, the movie industry would collapse, and we would never get Black Panther 2, Black Panther Vs. Batman Vs. Superman, A Very Black Panther Christmas, Black Panther 3000: Help, We Have No Idea How To Create Original Movies Anymore, and all the other sequels and spinoffs we await with a resignation born of inevitability. This is sort of a pop-Kantian/rule-utilitarian argument: if everyone were to act as I did, our actions would be self-defeating. Or we can reframe it as a coordination problem: we’re defecting against the institutions necessary to support movies existing at all, and free-loading off our moral betters.

Second, and related, the laws have their own moral force that has to be respected. With all our celebration of civil disobedience, we forget that in general people should feel some obligation to obey laws even if they disagree with them. This is the force that keeps libertarians from evading taxes, vegetarians from sabotaging meat markets, and doctors from giving you much better medications than the ones you consent to – even when they think they can get away with it. Civil disobedience can be justifiable – see here for more discussion – but surely it should require some truly important cause, probably above the level of “I really want to watch Black Panther, but it costs $11.99 in theaters”.

(I admit I sometimes violate this principle, because I – like most people – am not perfectly moral.)

But I can also think of an argument why Sci-Hub isn’t unethical.

The reason I don’t pirate Black Panther is because, if everyone pirated movies, it would destroy the movie industry, and we would never get Lego Black Panther IV: Lego Black Panther Vs. The Frowny Emoji, and that would make people sad.

But if everyone pirated scientific papers, it would destroy Elsevier et al, and that would be frickin’ fantastic.

As far as I can tell, the movie industry is capitalism working as it should. No one animator can make a major motion picture, so institutions like Marvel Corporation exist to solve the coordination problem and bring them together. Marvel Corporation is probably terrible in various ways, but it’s unclear we have the social technology to create non-terrible corporations right now, so unless we’re communists we accept it as the price to pay for a semi-functional industry. Then some market-rate percent of the gains flow down to the actors and videographers and so on. If you destroyed this system, you wouldn’t usher in a golden age of independent superhero movies. You would just stop getting Black Panther.

The scientific journal industry is some kind of weird rent-seeking abomination which doesn’t seem to add much real value. I don’t have space to make the full “journals are not helpful” argument here, but see eg this article, Elsevier’s profit margins, and the relative success of alternative models like arXiv. See Inadequate Equilibria for the discussion of how this might have happened. The short and wildly insufficient summary is that it looks like we backed ourselves into an equilibrium where eg tenure committees consider journals the sole arbiter of scientific merit, anyone who unilaterally tries to defect from this equilibrium is (reasonably) suspected of not having enough merit to make it the usual way, and coordination is hard so we can’t make everyone defect at the same time.

Thus Dark Rule Utilitarianism: “If I did this, everyone would do it. If everyone did it, our institutions would collapse. But I hate our institutions. Therefore…”

I think this fully addresses the first argument against science piracy. But what about the second? Sure, I don’t like the institution of scientific gatekeepers, but anarcho-communists don’t like the institution of private property. If I steal scientific papers to destroy the journal system, doesn’t universalizing that decision process lead to anarcho-communists stealing cars to destroy capitalism? Shouldn’t “civil disobedience” be reserved for the most important things, like ending segregation or resisting the Nazis, rather than endorsed as something anyone can do when they feel like destroying something?

This kind of thing leaves me hopelessly confused between different levels. It’s much worse than free speech, where all you’ve got to keep track of is whether you agree with what someone says vs. will defend their right to say it. But an important starting point is that endorsing “civil disobedience is sometimes okay” doesn’t lead to a world where anarcho-communists steal cars and nobody stops them. It leads to a world where there is no overarching moral principle preventing anarcho-communists from seizing cars, and where we have to do politics to decide whether they get arrested. In practice, the politics would end up with the car thieves arrested, because stealing cars is pretty conspicuous and nobody likes car thieves.

Isn’t this just grounding morality in power? That is, aren’t we going from the clarity and fairness of “everyone must follow the law” to a more problematic “everyone must follow the law, except people clever enough to avoid getting caught and powerful enough to get away with civil disobedience?” Well, yeah. But from an institution design perspective, everything bottoms out in power eventually. All we’re doing here is replacing one form of power (the formal power possessed by law-makers) with another form of power (the informal powers of stealth and/or popularity that allow people to get away with civil disobedience). These two forms of power have different advantages and are possessed by different groups. The formal power is nice because it’s transparent and democratic and probably bound by rules like the Bill of Rights, but it also tends to concentrate among elites and be susceptible to tyranny. The informal power is nice because it’s inherently libertarian and democratic, but it’s also illegible and susceptible to being used by demagogues and populists.

So, a metaphor: imagine a world with a magic artifact at the North Pole which makes it literally impossible to violate laws. The countries of the far north are infinitely orderly with no need for police at all. Go further south and the strength of the artifact decreases, until you’re at the edge of the Arctic Circle and it might be possible to violate a very minor law if your life was in danger. By the time you’re at the Equator, any kind of strong urge lets you violate most laws, and by the Tropic of Capricorn you can violate all but the most sacred laws with only a slight feeling of resistance. Finally you reach the nations of the South Pole, where the laws are enforced by nothing but a policeman’s gun.

Where would you want to live in such a world? It’s a hard question – I can imagine pretty much anything happening in this kind of scenario. But if I had to choose, I think I would take up residence somewhere around the latitude of California. I would want the laws to carry some force beyond just the barrel of a gun – a high trust society with consistent institutions is really important, and the more people follow the law without being watched the less incentive there is to create a police state.

But I also wouldn’t want to live exactly at the North Pole. And when I try to figure out why, I think it’s that civil disobedience is the acid that dissolves inadequate equilibria. Equilibria are inadequate relative to some set of rules; if you’re allowed to break the rules, they can become adequate again. Under this model, civil disobedience isn’t a secret weapon to save up for extreme cases like desegregation, it’s part of the search process we use to get better institutions.

If the artifact is a metaphor for the moral law, then my choice to live outside the North Pole suggests that I can consistently defy unjust laws a little, even if my decision will be universalized. I should expect some problems – groups I don’t like will use civil disobedience to promote causes I abhor, and the state will be less orderly and peaceful than it could be – but overall everyone will end up being better off. This doesn’t mean I have to support those groups or even excuse their criminality – part of the politics that decides the result is me expressing that they are bad and need to be punished – it just means that, given the chance to magically make all civil disobedience impossible in a way that applies equally to me and my enemies – I would reject it, or take it at some less-than-maximal value.

So this is my argument that Sci-Hub can be ethical. Universalized it would destroy the system – but the system is bad and needs to be destroyed. And although this would break the law, a very slight amount of law-breaking might be a beneficial solution to inadequate equilibria that could be endorsed even when universalized.

Posted in Uncategorized | Tagged | 405 Comments

OT97: Dopen Thread

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server. Also:

1. Comment of the week is John Schilling on Google X Prize. There’s also a lot of good discussion in the free energy thread, though I can’t pick just one.

2. New ad for brain preservation company Nectome – see eg this article about their head researcher winning the Brain Preservation Prize. If you’re interested in helping, there’s a link for joining their team at the bottom of their site.

3. Nobody is under any obligation to comply with this, but if you want to encourage this blog to continue to exist, I request not to be cited in major national newspapers. I realize it’s meant well, and I appreciate the honor, but I’ve gotten a few more real-life threats than I’m entirely comfortable with, and I would prefer decreased publicity for now.

4. I recently put a couple of responses to an online spat up here because I needed somewhere to host them, unaware that this would email all several thousand people on my mailing list. Sorry about that. I’ve deleted some of them because of the whole “decreased publicity” thing, and I would appreciate help from anyone who knows how to make it so I can put random useful text up in an out-of-the-way place without insta-emailing everybody.

5. Thanks to Lanny for fixing this blog’s comment report function. You should now be able to report inappropriate comments again. If you can’t, please say so and we’ll try to figure out what went wrong.

Posted in Uncategorized | Tagged | 1,264 Comments

SSC Journal Club: Friston On Computational Mood

A few months ago, I wrote Toward A Predictive Theory Of Depression, which used the predictive coding model of brain function to speculate about mood disorders and emotions. Emotions might be a tendency toward unusually high (or low) precision of predictions:

Imagine the world’s most successful entrepreneur. Every company they found becomes a multibillion-dollar success. Every stock they pick shoots up and never stops. Heck, even their personal life is like this. Every vacation they take ends up picture-perfect and creates memories that last a lifetime; every date they go on leads to passionate soul-burning love that never ends badly.

And imagine your job is to advise this entrepreneur. The only advice worth giving would be “do more stuff”. Clearly all the stuff they’re doing works, so aim higher, work harder, run for President. Another way of saying this is “be more self-confident” – if they’re doubting whether or not to start a new project, remind them that 100% of the things they’ve ever done have been successful, odds are pretty good this new one will too, and they should stop wasting their time second-guessing themselves.

Now imagine the world’s least successful entrepreneur. Every company they make flounders and dies. Every stock they pick crashes the next day. Their vacations always get rained-out, their dates always end up with the other person leaving halfway through and sticking them with the bill.

What if your job is advising this guy? If they’re thinking of starting a new company, your advice is “Be really careful – you should know it’ll probably go badly”. If they’re thinking of going on a date, you should warn them against it unless they’re really sure. A good global suggestion might be to aim lower, go for low-risk-low-reward steady payoffs, and wait on anything risky until they’ve figured themselves out a little bit more.

Corlett, Frith and Fletcher linked mania to increased confidence. But mania looks a lot like being happy. And you’re happy when you succeed a lot. And when you succeed a lot, maybe having increased confidence is the way to go. If happiness were a sort of global filter that affected all your thought processes and said “These are good times, you should press really hard to exploit your apparent excellence and not worry too much about risk”, that would be pretty evolutionarily useful. Likewise, if sadness were a way of saying “Things are going pretty badly, maybe be less confident and don’t start any new projects”, that would be useful too.

Depression isn’t normal sadness. But if normal sadness lowers neural confidence a little, maybe depression is the pathological result of biological processes that lower neural confidence a lot. To give a total fake example which I’m not saying is what actually happens, if you run out of whatever neurotransmitter you use to signal high confidence, that would give you permanent pathological low confidence and might look like depression.

This would explain a lot about depression. It would explain why depressed people have such low motivation. It would explain why their movements are less forceful (“psychomotor retardation”). It would even explain why sense data are less distinct (depressed people literally see the world in washed out shades of grey). I thought this was plausible, but said I’d wait for real scientists to say the same thing before believing it too much.

What Is Mood: A Computational Perspective, by Clark, Watson, and Friston, is real scientists saying the same thing. Sort of. With a lot more rigor. Let’s look into it and see what they get.

Recent theoretical arguments have converged on the idea that emotional states reflect changes in the uncertainty about the somatic consequences of action (Joffily & Coricelli, 2013; Wager et al. 2015; Seth & Friston, 2016). This uncertainty refers to the precision with which motor and physiological states can be predicted. In this setting, negative emotions contextualise events that induce expectations of unpredictability, while positive emotions refer to events that resolve uncertainty and confer a feeling of control (Barrett & Satpute, 2013; Gu et al. 2013). This ties emotional states to the resolution of uncertainty and, through the biophysical encoding of precision, to neuromodulation and cortical gain control (Brown & Friston, 2012).

In summary, one can associate the valence of emotional stimuli with the precision of prior beliefs about the consequences of action. In this view, positively valenced brain states are necessarily associated with increases in the precision of predictions about the (controllable) future – or, more simply, predictable consequences of motor or autonomic behaviour. Conversely, negative emotions correspond to a loss of prior precision and a sense of helplessness and uncertainty about the consequences of action.

Here they’re saying that emotions – the day-to-day variation in whether we feel happy or sad – is meant to track what kind of environment we’re in. Is it a predictable environment that we should rush out to manipulate so we can harvest a big heap of utility? Or is it an unpredictable environment where we’re probably wrong about everything and should try to limit damage?

It’s not really clear from this quote, but later on they’re going to shift from happiness being “the world is predictable” to “the world is good”, which sounds a lot more common-sensical. I think this has to do with Friston’s commitment to believing that uncertainty-resolution is the only drive, and that every form of goodness is a sort of predictability in a way. See Monday’s post God Help Us, Let’s Try To Understand Friston On Free Energy – or don’t, for all the good it will do you.

Any hierarchical inference relies on hyperpriors. These furnish higher level predictions of the likely value of lower level parameters. From the above, one can see that important parameters are the precisions of prediction errors at high and low levels of the hierarchy (i.e. prior and sensory precision). These precisions reflect the confidence we place in our prior beliefs relative to sensory evidence. If emotional states in the brain reflect the precision of prior beliefs about the consequences of action, then distinct neuronal populations must also encode hyperpriors. In other words, short-term fluctuations in precision (i.e. emotional fluctuations) will themselves be constrained by hyperpriors encoding their long-term average (i.e. mood).

Here, we propose that mood corresponds to hyperpriors about emotional states, or confidence about the consequences of action. In other words, mood states reflect the prior expectation about precision that nuances (emotional) fluctuations in confidence or uncertainty. If emotion reflects interoceptive precision, and is biophysically encoded by neuromodulatory gain control, then this suggests that mood is neurobiologically encoded as the set-point of neuromodulator systems that determine synaptic gain control over principal cells reporting prediction errors at different levels of the interoceptive hierarchy. This set-point is the sensitivity of responses to prediction errors and has a profound and enduring effect on subsequent inference.

The traditional definition says that “mood is like climate, emotions are like weather”. I think they’re saying that mood – long-lasting states like being depressed or being a generally carefree person – are second-level priors about emotions, which themselves are first-level priors about actions.

So suppose you see a vaguely greenish piece of paper on the ground. If you’re happy, you have a prior for the world being good, and so you might be more likely to interpret it as possibly a dollar bill. And you have a prior for the world being exploitable, so you might be more likely to think you can reach down and take it and have an extra dollar. And if you do, and it really is a dollar bill, you might become happier, since you’ve gained a little evidence that your senses are trustworthy (you were right to perceive it as a dollar), the world is exploitable (your cunning plan to pick up the paper and gain $1 worked!), and you’re in the sort of high-reward environment where you should go off and do other exciting things.

On the other hand, if you’re sad, you have a prior for the world being bad, so you might expect it to be litter. You have a prior that you can’t really predict or affect the world, so it might not be worth bending down to pick it up – you might just end up disappointed. But if you did bend down to pick it up, and it did turn out to be a dollar bill, you might brighten up a little, just as the happy person would. You’ve gained a little bit of evidence that you’re in a nice part of the world where good things happen to you, and that you can execute a simple plan like picking up a dollar bill to gain money.

A depressed person would have the same prior that the world is bad and the paper is probably just litter. But if perhaps she did pick up the dollar, and feel tempted to conclude that the world was good and she should feel happy, a higher-level prior would kick in: even when it seems like the world is good, that’s wrong and you should ignore it. The world is never actually good. When good things happen that look like they should convince you that the world is good, those are just lies.
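The dollar-bill story can be sketched as a toy Bayesian update. This is purely illustrative – the function and all the numbers are made up for this sketch, not anything from the paper – but it shows how an ambiguous stimulus gets its verdict almost entirely from the prior:

```python
# Toy model: is the greenish paper a dollar bill or litter?
# The ambiguous image is (by assumption) equally likely under either
# hypothesis, so the posterior is driven entirely by the prior.

def posterior_dollar(prior_world_good, p_image_given_dollar=0.5, p_image_given_litter=0.5):
    # Assume P(dollar) tracks the agent's prior that the world is good.
    p_dollar = prior_world_good
    p_litter = 1.0 - p_dollar
    num = p_image_given_dollar * p_dollar
    den = num + p_image_given_litter * p_litter
    return num / den  # P(dollar | greenish image)

happy = posterior_dollar(prior_world_good=0.7)  # optimistic prior
sad = posterior_dollar(prior_world_good=0.2)    # pessimistic prior
print(round(happy, 2), round(sad, 2))  # 0.7 0.2 - same evidence, different verdicts
```

With equal likelihoods the posterior just equals the prior, which is the point: when the sense data don’t discriminate, the mood-level prior decides what you see.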

Friston et al bring up learned helplessness. Let’s say you shock a rat a lot. In fact, let’s say you’re even more cruel, and you constantly give the rat apparent escapes, only to close them off at the last second and keep shocking it. You give the rat what look like food pellets, but they turn out to just be rocks painted to look like food. You eventually gaslight the hell out of the rat. Finally, you stop doing this, and you give the rat some actual food and a way out, and the rat just doesn’t care. Yes, food and escape should be good things that make it feel like the world is reward-filled and exploitable, but it’s been let down so many times before that it assumes anything seemingly-good is a mirage.

Here’s the picture they eventually draw:

Depression is a prediction of bad outcomes with high confidence. Mania is a prediction of good outcomes with high confidence. Anxiety (or “agitated depression”) is a prediction of bad outcomes with low confidence. There’s a blank space where it looks like there ought to be an extra emotion; maybe God will release it later as DLC.

Friston et al speculate that these hyperpriors over emotions can either be genetically encoded, or “learned” over very long periods of consistent stimuli. For example, if your childhood is unbearably terrible, that might be long enough to “burn in” a high-confidence hyperprior that the world is always bad.

(they don’t mention this, but if prediction and action are as linked as everyone always says, I wonder if this would explain why people with terrible childhoods are always mysteriously sabotaging themselves into having adulthoods that are terrible in the exact same way – eg someone with an abusive alcoholic father marrying an abusive alcoholic).

These hyperpriors can reach the level of a mood disorder when they become resistant to feedback. The authors present a couple of different arguments for how this might happen. In one, a depressed person doesn’t feel any positive emotions, since there’s such a strong prior on everything being terrible that these never reach the level of plausibility. Since positive emotions are a useful tool for figuring out what makes you happy and urging you to do it, depressed people aren’t motivated to make themselves happy, and so never end up contradicting their bias towards believing they’re sad all the time. This fits really well with “behavioral activation”, a common psychotherapy where therapists tell depressed people to just go out and do happy things whether they want to or not, and which often helps the depression resolve.

In another, all the brain’s predictions are so low-precision that it can’t even properly predict interoceptive sensations (the sensations received from organs, eg the heartbeat). Maybe it will think “I guess maybe my heart will beat right now”, but it’s not the sort of clear confident precision that really enters into its mental model. That means these interoceptive sensations are always predicted slightly incorrectly, and this keeps the brain feeling like it’s sick and confused and the world is unpredictable.

They don’t seem to mention this, but it also seems intuitively plausible that the strong prior on negativity could prevent the perception of positive factors directly. You see the piece of paper on the street, you think “the world is always terrible, so no way that’s a dollar bill”, you pass it by, and you miss an opportunity to feel lucky and give yourself a tiny bit of pleasure.

The rest of the paper is just a survey of some findings from biology and neuroscience that seem to support this, though they’re not all very specific. For example, the HPA axis is dysregulated, which fits with predictive processing, but it also fits with everything else. The main part I found interesting was this:

In healthy systems, mood should be affected by the valence of tightly controlled prediction errors. Recent animal work has shown that positive prediction errors (receiving more food than expected), show a strong positive correlation with dopaminergic change in the nucleus accumbens (Hart et al. 2014) with corresponding changes in functional brain activity in humans during a financial reward task (Rutledge et al. 2010). Similarly, it has been shown that signal change in the anterior insula is significantly related to the magnitude of prediction error (Bossaerts, 2010). The pharmacological manipulation of these networks was recently demonstrated where participants were given electric shocks (harms) in exchange for financial reward (gains), and offered the option of increasing the number of shocks in exchange for greater reward. It was shown that citalopram increased harm-aversion, while levodopa made individuals more likely to harm themselves than others (Crockett et al. 2015). This fits nicely with our notion that serotonin levels (and other neuromodulators) encode expectations about likely negative outcomes and encourage the fulfilment of these predictions through action (i.e. low levels promote behaviour with negative outcomes).

Focus on this sentence: “serotonin encodes expectations about likely negative outcomes and encourages the fulfilment of these predictions through action”. Also this one: “Low levels [of serotonin] promote behavior with negative outcomes”.

I don’t think I’m misunderstanding this – the authors cite some evidence that low serotonin causes self-harm, and yes, it certainly does. But what does it mean to have a system for promoting behavior with negative outcomes? Why have a neurotransmitter whose level corresponds to how much you should be trying to do negative-outcome behavior? Surely the answer is just “never do this”.

The only way I can make sense of this is through the paragraph above talking about the shocks-for-money game, where SSRIs decrease people’s willingness to get shocks. It sounds like maybe Friston et al are claiming that we have a “willingness to be harmed” lever so that we can calculate how willing we are to accept some levels of harm in exchange for a greater good. In that case, maybe self-harm is what happens when the “willingness to be harmed” lever is set so high that random noise, the chance of getting other people’s attention, or just passing the time presents some tiny reward, and your harm-for-reward tradeoff rate is so high that even that tiny reward is worth the harm.
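That “willingness to be harmed” lever can be pictured as a one-line utility calculation. To be clear, this is my made-up sketch of the idea, not anything from Crockett et al – the function name and numbers are invented for illustration:

```python
def accept_shock(reward, harm, harm_aversion):
    """Take the shock iff the reward outweighs the aversion-weighted harm."""
    return reward > harm * harm_aversion

# Healthy setting of the lever: a trivial reward never justifies real harm.
print(accept_shock(reward=0.1, harm=1.0, harm_aversion=5.0))   # False

# Lever set very low (the low-serotonin regime in the authors' story):
# even a tiny reward - attention, passing the time - tips the balance.
print(accept_shock(reward=0.1, harm=1.0, harm_aversion=0.05))  # True
```

Nothing about the decision rule changes between the two cases; only the lever setting does, which is what makes “promoting behavior with negative outcomes” a parameter rather than a separate system.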

More broadly, what should we think of this theory?

In retrospect, if you know Bayesian math, the idea of depression as a prior on bad outcomes seems pretty fricking obvious. I’m not even sure if it’s any different from the sort of stuff Aaron Beck was saying in the seventies. The big advance in this model is uniting “prior on bad outcomes” with “low precision of predictions / low neural confidence”. The low-precision part helps explain anergia, anhedonia, low motivation, psychomotor retardation, sensory washout, and probably (with a little more work) depression with psychotic features. Flipped around, it offers an explanation of psychomotor agitation, grandiosity, psychosis, and pareidolia in mania.

The only problem is that I still haven’t seen “prior on bad outcomes” and “low precision” really get unified. The authors seem to equivocate between “sadness means you’re in an unpredictable environment” and “sadness means you’re in a bad environment where everything sucks”. There is at least a little bit of work to add the hyperprior on top of the prior, so that we don’t get suspicious when we remember that depressed people are very confident in their depression. But it still seems like a world of low-precision predictions should be one where people just have no idea whether the paper in front of them is a dollar, not one where they’re really sure it isn’t. A world of high-precision predictions should look more like sitting in a bright room with a metronome, predicting each subsequent beat, rather than a world where everything is great and your life goes well. I’m not even sure this theory can explain why winning the lottery makes you happy rather than sad. It ought to make you think the world is really confusing and unpredictable (really? the thing you thought had a one in ten million chance happened?) – but in fact most lottery winners look pretty happy to me.
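The precision-weighted update that predictive processing leans on is just the standard Gaussian fusion formula, and working through it with toy numbers (illustrative values, not from the paper) shows why “confident pessimism” and “low precision” come apart:

```python
def fuse(mu_prior, pi_prior, x, pi_sens):
    # Posterior mean is a precision-weighted average of prior and evidence;
    # posterior precision is the sum of the two precisions.
    pi_post = pi_prior + pi_sens
    mu_post = (pi_prior * mu_prior + pi_sens * x) / pi_post
    return mu_post, pi_post

# "Depressed" regime: confident negative prior (mu = -1), discounted evidence.
mu, _ = fuse(mu_prior=-1.0, pi_prior=10.0, x=+1.0, pi_sens=1.0)
print(round(mu, 2))  # -0.82: good news barely moves the estimate

# Genuinely low-precision regime: weak prior AND weak evidence.
mu2, pi2 = fuse(mu_prior=-1.0, pi_prior=0.5, x=+1.0, pi_sens=0.5)
print(round(mu2, 2), pi2)  # 0.0 1.0: uncertainty, not confident pessimism
```

The first regime needs a high-precision negative prior; the second is what “low precision everywhere” actually buys you. The theory needs both, which is exactly the equivocation at issue.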

If this is confusing, at least it isn’t a new confusion. We know that a big part of the free energy research agenda is to try to unify desire-satisfaction with uncertainty-resolution, and claim that expectation and desire are (somehow, despite how it looks) the same thing. If we just assume that works, for the sake of argument, it allows this paper to be an impressive unification of several lines of research on mood disorder into a coherent and actionable whole.

Posted in Uncategorized | Tagged | 93 Comments

God Help Us, Let’s Try To Understand Friston On Free Energy

I’ve been trying to delve deeper into predictive processing theories of the brain, and I keep coming across Karl Friston’s work on “free energy”.

At first I felt bad for not understanding this. Then I realized I wasn’t alone. There’s an entire not-understanding-Karl-Friston internet fandom, complete with its own parody Twitter account and Markov blanket memes.

From the journal Neuropsychoanalysis (which based on its name I predict is a center of expertise in not understanding things):

At Columbia’s psychiatry department, I recently led a journal club for 15 PET and fMRI researchers, PhDs and MDs all, with well over $10 million in NIH grants between us, and we tried to understand Friston’s 2010 Nature Reviews Neuroscience paper – for an hour and a half. There was a lot of mathematical knowledge in the room: three statisticians, two physicists, a physical chemist, a nuclear physicist, and a large group of neuroimagers – but apparently we didn’t have what it took. I met with a Princeton physicist, a Stanford neurophysiologist, and a Cold Spring Harbor neurobiologist to discuss the paper. Again blanks, one and all.

Normally this is the point at which I say “screw it” and give up. But almost all the most interesting neuroscience of the past decade involves this guy in one way or another. He’s the most-cited living neuroscientist, invented large parts of modern brain imaging, and received the prestigious Golden Brain Award (which is somehow a real thing). His Am I Autistic – An Intellectual Autobiography short essay, written in a weirdly lucid style and describing hijinks like deriving the Schrodinger equation for fun in school, is as consistent with genius as anything I’ve ever read.

As for free energy, it’s been dubbed “a unified brain theory” (Friston 2010), a key through which “nearly every aspect of [brain] anatomy and physiology starts to make sense” (Friston 2009), “[the source of] the ability of biological systems to resist a natural tendency to disorder” (Friston 2012), an explanation of how life “inevitably and emergently” arose from the primordial soup (Friston 2013), and “a real life version of Isaac Asimov’s psychohistory” (description here of Allen 2018).

I continue to hope some science journalist takes up the mantle of explaining this comprehensively. Until that happens, I’ve been working to gather as many perspectives as I can, to talk to the few neuroscientists who claim to even partially understand what’s going on, and to piece together a partial understanding. I am not at all the right person to do this, and this is not an attempt to get a gears-level understanding – just the kind of pop-science-journalism understanding that gives us a slight summary-level idea of what’s going on. My ulterior motive is to get to the point where I can understand Friston’s recent explanation of depression, relevant to my interests as a psychiatrist.

Sources include Dr. Alianna Maren’s How To Read Karl Friston (In The Original Greek), Wilson and Golonka’s Free Energy: How the F*ck Does That Work, Ecologically?, Alius Magazine’s interview with Friston, Observing Ideas, and (especially) the ominously named Wo’s Weblog.

From these I get the impression that part of the problem is that “free energy” is a complicated concept being used in a lot of different ways.

First, free energy is a specific mathematical term in certain Bayesian equations.

I’m getting this from here, which goes into much more detail about the math than I can manage. What I’ve managed to extract: Bayes’ theorem, as always, is the mathematical rule for determining how much to weigh evidence. The brain is sometimes called a Bayesian machine, because it has to create a coherent picture of the world by weighing all the different data it gets – everything from millions of photoreceptors’ worth of vision, to millions of cochlear receptors’ worth of hearing, to all the other senses, to logical reasoning, to past experience, and so on. But actually using Bayes on all this data quickly gets computationally intractable.

Free energy is a quantity used in “variational Bayesian methods”, a specific computationally tractable way of approximating Bayes’ Theorem. Under this interpretation, Friston is claiming that the brain uses this Bayes-approximation algorithm. Minimizing the free energy quantity in this algorithm is equivalent-ish to trying to minimize prediction error, trying to minimize the amount you’re surprised by the world around you, and trying to maximize accuracy of mental models. This sounds in line with standard predictive processing theories. Under this interpretation, the brain implements predictive processing through free energy minimization.
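For readers who want to see the math concretely: here’s a minimal toy sketch of variational free energy for a two-state model (all the numbers are invented for illustration, not taken from Friston). The point is just that F(q) = KL(q || posterior) − log p(x), so the q that minimizes free energy is the exact Bayesian posterior, and the minimum value is the “surprise” −log p(x).

```python
import math

# Toy generative model: one binary latent z, one observation x = 1.
# (All numbers here are made up for illustration.)
p_z = [0.5, 0.5]                  # prior p(z)
p_x_given_z = [0.9, 0.2]          # likelihood p(x=1 | z)

# Joint p(x=1, z), evidence p(x=1), and exact posterior p(z | x=1)
p_joint = [p_z[z] * p_x_given_z[z] for z in (0, 1)]
p_x = sum(p_joint)
posterior = [pj / p_x for pj in p_joint]

def free_energy(q):
    """Variational free energy F(q) = E_q[log q(z) - log p(x, z)]."""
    return sum(q[z] * (math.log(q[z]) - math.log(p_joint[z]))
               for z in (0, 1) if q[z] > 0)

# Scan candidate approximate beliefs q. The minimizer is the exact
# posterior, and the minimum is -log p(x) -- the surprise.
best_q = min(([q, 1 - q] for q in [i / 1000 for i in range(1, 1000)]),
             key=free_energy)

print(best_q)               # approximately the exact posterior
print(free_energy(best_q))  # approximately -log p(x)
```

In a real problem the latent space is huge and you can’t scan it, which is why variational methods restrict q to some tractable family – but the identity being exploited is the same.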

Second, free energy minimization is an algorithm-agnostic way of saying you’re trying to approximate Bayes as accurately as possible.

This comes from the same source as above. It also ends up equivalent-ish to all those other things like trying to be correct in your understanding of the world, and to standard predictive processing.

Third, free energy minimization is a claim that the fundamental psychological drive is the reduction of uncertainty.

I get this claim from the Alius interview, where Friston says:

If you subscribe to the premise that creatures like you and me act to minimize their expected free energy, then we act to reduce expected surprise or, more simply, resolve uncertainty. So what’s the first thing that we would do on entering a dark room — we would turn on the lights. Why? Because this action has epistemic affordance; in other words, it resolves uncertainty (expected free energy). This simple argument generalizes to our inferences about (hidden or latent) states of the world — and the contingencies that underwrite those states of affairs.

The discovery that the only human motive is uncertainty-reduction might come as a surprise to humans who feel motivated by things like money, power, sex, friendship, or altruism. But the neuroscientist I talked to about this says I am not misinterpreting the interview. The claim really is that uncertainty-reduction is the only game in town.

In a sense, it must be true that there is only one human motivation. After all, if you’re Paris of Troy, getting offered the choice between power, fame, and sex – then some mental module must convert these to a common currency so it can decide which is most attractive. If that currency is, I dunno, dopamine in the striatum, then in some reductive sense, the only human motivation is increasing striatal dopamine (don’t philosophize at me, I know this is a stupid way of framing things, but you know what I mean). Then the only weird thing about the free energy formulation is identifying the common currency with uncertainty-minimization, which is some specific thing that already has another meaning.

I think the claim (briefly mentioned eg here) is that your brain hacks eg the hunger drive by “predicting” that your mouth is full of delicious food. Then, when your mouth is not full of delicious food, it’s a “prediction error”, it sets off all sorts of alarm bells, and your brain’s predictive machinery is confused and uncertain. The only way to “resolve” this “uncertainty” is to bring reality into line with the prediction and actually fill your mouth with delicious food. On the one hand, there is a lot of basic neuroscience research that suggests something like this is going on. On the other, Wo’s writes about this further:

The basic idea seems to go roughly as follows. Suppose my internal probability function Q assigns high probability to states in which I’m having a slice of pizza, while my sensory input suggests that I’m currently not having a slice of pizza. There are two ways of bringing Q in alignment with my sensory input: (a) I could change Q so that it no longer assigns high probability to pizza states, (b) I could grab a piece of pizza, thereby changing my sensory input so that it conforms to the pizza predictions of Q. Both (a) and (b) would lead to a state in which my (new) probability function Q’ assigns high probability to my (new) sensory input d’. Compared to the present state, the sensory input will then have lower surprise. So any transition to these states can be seen as a reduction of free energy, in the unambitious sense of the term.

Action is thus explained as an attempt to bring one’s sensory input in alignment with one’s representation of the world.

This is clearly nuts. When I decide to reach out for the pizza, I don’t assign high probability to states in which I’m already eating the slice. It is precisely my knowledge that I’m not eating the slice, together with my desire to eat the slice, that explains my reaching out.

There are at least two fundamental problems with the simple picture just outlined. One is that it makes little sense without postulating an independent source of goals or desires. Suppose it’s true that I reach out for the pizza because I hallucinate (as it were) that that’s what I’m doing, and I try to turn this hallucination into reality. Where does the hallucination come from? Surely it’s not just a technical glitch in my perceptual system. Otherwise it would be a miraculous coincidence that I mostly hallucinate pleasant and fitness-increasing states. Some further part of my cognitive architecture must trigger the hallucinations that cause me to act. (If there’s no such source, the much discussed “dark room problem” arises: why don’t we efficiently minimize sensory surprise (and thereby free energy) by sitting still in a dark room until we die?)

The second problem is that efficient action requires keeping track of both the actual state and the goal state. If I want to reach out for the pizza, I’d better know where my arms are, where the pizza is, what’s in between the two, and so on. If my internal representation of the world falsely says that the pizza is already in my mouth, it’s hard to explain how I manage to grab it from the plate.

A closer look at Friston’s papers suggests that the above rough proposal isn’t quite what he has in mind. Recall that minimizing free energy can be seen as an approximate method for bringing one probability function Q close to another function P. If we think of Q as representing the system’s beliefs about the present state, and P as a representation of its goals, then we have the required two components for explaining action. What’s unusual is only that the goals are represented by a probability function, rather than (say) a utility function. How would that work?

Here’s an idea. Given the present probability function Q, we can map any goal state A to the target function Q^A, which is Q conditionalized on A — or perhaps on certain sensory states that would go along with A. For example, if I successfully reach out for the pizza, my belief function Q will change to a function Q^A that assigns high probability to my arm being outstretched, to seeing and feeling the pizza in my fingers, etc. Choosing an act that minimizes the difference between my belief function and Q^A is then tantamount to choosing an act that realizes my goal.

This might lead to an interesting empirical model of how actions are generated. Of course we’d need to know more about how the target function Q^A is determined. I said it comes about by (approximately?) conditionalizing Q on the goal state A, but how do we identify the relevant A? Why do I want to reach out for the pizza? Arguably the explanation is that reaching out is likely (according to Q) to lead to a more distal state in which I eat the pizza, which I desire. So to compute the proximal target probability Q^A we presumably need to encode the system’s more distal goals and then use techniques from (stochastic) control theory, perhaps, to derive more immediate goals.

That version of the story looks much more plausible, and much less revolutionary, than the story outlined above. In the present version, perception and action are not two means to the same end — minimizing free energy. The free energy that’s minimized in perception is a completely different quantity than the free energy that’s minimized in action. What’s true is that both tasks involve mathematically similar optimization problems. But that isn’t too surprising given the well-known mathematical and computational parallels between conditionalizing and maximizing expected utility.
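Wo’s Q^A proposal can be sketched numerically. In this toy version (state names, beliefs, and action predictions are all mine, invented purely for illustration), the target distribution is the current belief Q conditionalized on the goal, and the agent picks whichever action is predicted to bring its beliefs closest to that target:

```python
import math

# Toy states of the world (names invented for illustration).
states = ["pizza_in_mouth", "pizza_on_plate"]

# Current belief Q: I'm pretty sure the pizza is still on the plate.
Q = {"pizza_in_mouth": 0.05, "pizza_on_plate": 0.95}

def conditionalize(q, goal_states):
    """Q^A: zero out non-goal states and renormalize."""
    mass = sum(q[s] for s in goal_states)
    return {s: (q[s] / mass if s in goal_states else 0.0) for s in q}

# Target Q^A: Q conditionalized on the goal A = "pizza in mouth".
Q_A = conditionalize(Q, {"pizza_in_mouth"})

# Each available action comes with a predicted post-action belief
# (made-up numbers).
predicted = {
    "reach_for_pizza": {"pizza_in_mouth": 0.90, "pizza_on_plate": 0.10},
    "sit_still":       {"pizza_in_mouth": 0.05, "pizza_on_plate": 0.95},
}

def kl(p, q):
    """KL(p || q); assumes q > 0 wherever p > 0."""
    return sum(p[s] * math.log(p[s] / q[s]) for s in p if p[s] > 0)

# Choose the action whose predicted belief is closest to the target.
best = min(predicted, key=lambda a: kl(Q_A, predicted[a]))
print(best)  # "reach_for_pizza"
```

Note that this needs two distributions – the current belief and the goal-conditioned target – which is exactly Wo’s point that the “free energy” minimized in action isn’t the same quantity as the one minimized in perception.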

It’s tempting to throw this out entirely. But part of me does feel like there’s a weird connection between curiosity and every other drive. For example, sex seems like it should be pretty basic and curiosity-resistant. But how often do people say that they’re attracted to someone “because he’s mysterious”? And what about the Coolidge Effect (known in the polyamory community as “new relationship energy”)? After a while with the same partner, sex and romance lose their magic – only to reappear if the animal/person hooks up with a new partner. Doesn’t this point to some kind of connection between sexuality and curiosity?

What about the typical complaint of porn addicts – that they start off watching softcore porn, find after a while that it’s no longer titillating, move on to harder porn, and eventually have to get into really perverted stuff just to feel anything at all? Is this a sort of uncertainty reduction?

The only problem is that this is a really specific kind of uncertainty reduction. Why should “uncertainty about what it would be like to be in a relationship with that particular attractive person” be so much more compelling than “uncertainty about what the middle letter of the Bible is”, a question which almost no one feels the slightest inclination to resolve? The interviewers ask Friston something sort of similar, referring to some experiments where people are happiest not when given easy things with no uncertainty, nor confusing things with unresolvable uncertainty, but puzzles – things that seem confusing at first, but actually have a lot of hidden order within them. They ask Friston whether he might want to switch teams to support a U-shaped theory where people like being in the middle between too little and too much uncertainty. Friston…does not want to switch teams.

I do not think that “different laws may apply at different levels”. I see a singular and simple explanation for all the apparent dialectics above: they are all explained by minimization of expected free energy, expected surprise or uncertainty. I feel slightly puritanical when deflating some of the (magical) thinking about inverted U curves and “sweet spots”. However, things are just simpler than that: there is only one sweet spot; namely, the free energy minimum at the bottom of a U-shaped free energy function […]

This means that any opportunity to resolve uncertainty itself now becomes attractive (literally, in the mathematical sense of a random dynamical attractor) (Friston, 2013). In short, as nicely articulated by (Schmidhuber, 2010), the opportunity to answer “what would happen if I did that” is one of the most important resolvers of uncertainty. Formally, the resolution of uncertainty (aka intrinsic motivation, intrinsic value, epistemic value, the value of information, Bayesian surprise, etc. (Friston et al., 2017)) corresponds to salience. Note that in active inference, salience becomes an attribute of an action or policy in relation to the lived world. The mathematical homologue for contingencies (technically, the parameters of a generative model) corresponds to novelty. In other words, if there is an action that can reduce uncertainty about the consequences of a particular behavior, it is more likely to be expressed.
Given these imperatives, then the two ends of the inverted U become two extrema on different dimensions. In a world full of novelty and opportunity, we know immediately there is an opportunity to resolve reducible uncertainty and will immediately embark on joyful exploration — joyful because it reduces uncertainty or expected free energy (Joffily & Coricelli, 2013). Conversely, in a completely unpredictable world (i.e., a world with no precise sensory evidence, such as a dark room) there is no opportunity and all uncertainty is irreducible — a joyless world. Boredom is simply the product of explorative behavior; emptying a world of its epistemic value — a barren world in which all epistemic affordance has been exhausted through information seeking, free energy minimizing action.

Note that I slipped in the word “joyful” above. This brings something interesting to the table; namely, the affective valence of shifts in uncertainty — and how they are evaluated by our brains.

The only thing at all I am able to gather from this paragraph – besides the fact that apparently Karl Friston cites himself in conversation – is the Schmidhuber reference, which is actually really helpful. Schmidhuber is the guy behind eg the Formal Theory Of Fun & Creativity Explains Science, Art, Music, Humor, in which all of these are some form of taking a seemingly complex domain (in the mathematical sense of complexity) and reducing it to something simple (discovering a hidden order that makes it more compressible). I think Friston might be trying to hint that free energy minimization works in a Schmidhuberian sense where it applies to learning things that suddenly make large parts of our experience more comprehensible at once, rather than just “Here are some numbers: 1, 5, 7, 21 – now you have less uncertainty over what numbers I was about to tell you, isn’t that great?”

I agree this is one of life’s great joys, though maybe Karl Friston and I are not a 100% typical subset of humanity here. Also, I have trouble figuring out how to conceptualize other human drives like sex as this same kind of complexity-reduction joy.

One more concern here – a lot of the things I read about this equivocate between “model accuracy maximization” and “surprise minimization”. These end up really differently. Model accuracy maximization sounds like curiosity – you go out and explore as much of the world as possible to get a model that precisely matches reality. Surprise minimization sounds like locking yourself in a dark room with no stimuli, then predicting that you will be in a dark room with no stimuli, and never being surprised when your prediction turns out to be right. I understand Friston has written about the so-called “dark room problem”, but I haven’t had a chance to look into it as much as I should, and I can’t find anything that takes one or the other horn of the equivocation and says “definitely this one”.

Fourth, okay, all of this is pretty neat, but how does it explain all biological systems? How does it explain the origin of life from the primordial soup? And when do we get to the real-world version of psychohistory? In his Alius interview, Friston writes:

I first came up with a prototypical free energy principle when I was eight years old, in what I have previously called a “Gerald Durrell” moment (Friston, 2012). I was in the garden, during a gloriously hot 1960s British summer, preoccupied with the antics of some woodlice who were frantically scurrying around trying to find some shade. After half an hour of observation and innocent (childlike) contemplation, I realized their “scurrying” had no purpose or intent: they were simply moving faster in the sun — and slower in the shade. The simplicity of this explanation — for what one could artfully call biotic self-organization — appealed to me then and appeals to me now. It is exactly the same principle that underwrites the ensemble density dynamics of the free energy principle — and all its corollaries.

How do the woodlice have anything to do with any of the rest of this?

As best I can understand (and I’m drawing from here and here again), this is an ultimate meaning of “free energy” which is sort of like a formalization of homeostasis. It goes like this: consider a probability distribution of all the states an organism can be in. For example, your body can be at (90 degrees F, heart rate 10), (90 degrees F, heart rate 70), (98 degrees F, heart rate 10), (98 degrees F, heart rate 70), or any of a trillion other different combinations of possible parameters. But in fact, living systems successfully restrict themselves to tiny fractions of this space – if you go too far away from (98 degrees F, heart rate 70), you die. So you have two probability distributions – the maximum-entropy one where you could have any combination of heart rate and body temperature, and the one your body is aiming for with a life-compatible combination of heart rate and body temperature. Whenever you have a system trying to convert one probability distribution into another probability distribution, you can think of it as doing Bayesian work and following free energy principles. So free energy seems to be something like just a formal explanation of how certain systems display goal-directed behavior, without having to bring in an anthropomorphic or teleological concept of “goal-directedness”.
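The “formalized homeostasis” reading can be made concrete with a toy calculation (the state space, Gaussian form, and widths below are all invented for illustration): compare the entropy of the maximum-entropy distribution over body states with that of a narrow “life-compatible” distribution peaked near (98°F, 70 bpm). The gap is a measure of how much disorder the organism is resisting.

```python
import math

# Toy state space: (body temperature in deg F, heart rate in bpm),
# coarsely discretized. All choices here are illustrative.
temps = range(80, 111)        # 80..110 deg F
rates = range(0, 201, 10)     # 0..200 bpm
states = [(t, r) for t in temps for r in rates]

# "Life-compatible" distribution: sharply peaked near (98, 70).
# (The Gaussian form and widths are invented for illustration.)
def weight(t, r):
    return math.exp(-((t - 98) / 1.5) ** 2 - ((r - 70) / 15) ** 2)

Z = sum(weight(t, r) for t, r in states)
alive = [weight(t, r) / Z for t, r in states]

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

H_max = math.log(len(states))   # entropy of the max-entropy (uniform) case
H_alive = entropy(alive)

# A living system confines itself to a low-entropy corner of its state
# space; the gap equals KL(alive || uniform), the "disorder resisted".
print(H_max - H_alive)          # positive: far fewer states occupied
```

On this reading, “minimizing free energy” is just the claim that the system keeps pulling its actual distribution over states back toward the narrow target one – which is why Friston can call it almost tautological for anything that stays alive.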

Friston mentions many times that free energy is “almost tautological”, and one of the neuroscientists I talked to who claimed to half-understand it said it should be viewed more as an elegant way of looking at things than as a scientific theory per se. From the Alius interview:

The free energy principle stands in stark distinction to things like predictive coding and the Bayesian brain hypothesis. This is because the free energy principle is what it is — a principle. Like Hamilton’s Principle of Stationary Action, it cannot be falsified. It cannot be disproven. In fact, there’s not much you can do with it, unless you ask whether measurable systems conform to the principle.

So we haven’t got a real-life version of Asimov’s psychohistory, is what you’re saying?

But also:

The Bayesian brain hypothesis is a corollary of the free energy principle and is realized through processes like predictive coding or abductive inference under prior beliefs. However, the Bayesian brain is not the free energy principle, because both the Bayesian brain hypothesis and predictive coding are incomplete theories of how we infer states of affairs.

This missing bit is the enactive compass of the free energy principle. In other words, the free energy principle is not just about making the best (Bayesian) sense of sensory impressions of what’s “out there”. It tries to understand how we sample the world and author our own sensations. Again, we come back to the woodlice and their scurrying — and an attempt to understand the imperatives behind this apparently purposeful sampling of the world. It is this enactive, embodied, extended, embedded, and encultured aspect that is lacking from the Bayesian brain and predictive coding theories; precisely because they do not consider entropy reduction […]

In short, the free energy principle fully endorses the Bayesian brain hypothesis — but that’s not the story. The only way you can change “the shape of things” — i.e., bound entropy production — is to act on the world. This is what distinguishes the free energy principle from predictive processing. In fact, we have now taken to referring to the free energy principle as “active inference”, which seems closer to the mark and slightly less pretentious for non-mathematicians.

So maybe the free energy principle is the unification of predictive coding of internal models, with the “action in the world is just another form of prediction” thesis mentioned above? I guess I thought that was part of the standard predictive coding story, but maybe I’m wrong?

Overall, the best I can do here is this: the free energy principle seems like an attempt to unify perception, cognition, homeostasis, and action.

“Free energy” is a mathematical concept that represents the failure of some things to match other things they’re supposed to be predicting.

The brain tries to minimize its free energy with respect to the world, ie minimize the difference between its models and reality. Sometimes it does that by updating its models of the world. Other times it does that by changing the world to better match its models.

Perception and cognition are both attempts to create accurate models that match the world, thus minimizing free energy.

Homeostasis and action are both attempts to make reality match mental models. Action tries to get the organism’s external state to match a mental model. Homeostasis tries to get the organism’s internal state to match a mental model. Since even bacteria are doing something homeostasis-like, all life shares the principle of being free energy minimizers.

So life isn’t doing four things – perceiving, thinking, acting, and maintaining homeostasis. It’s really just doing one thing – minimizing free energy – in four different ways – with the particular way it implements this in any given situation depending on which free energy minimization opportunities are most convenient. Or something.

This might be useful in some way? Or it might just be a cool philosophical way of looking at the world? Or maybe something in between? Or maybe a meaningless way of looking at the world? Or something? Somebody please help?


Discussion question for machine ethics researchers – if the free energy principle were right, would it disprove the orthogonality thesis? Might it be impossible to design a working brain with any goal besides free energy reduction? Would anything – even a paperclip maximizer – have to start by minimizing uncertainty, and then add paperclip maximization in later as a hack? Would it change anything if it did?

SSC Meetup: Bay Area 3/3

WHEN: 3:33 PM on Saturday, 3/3

WHERE: Berkeley campus, meet at the open space beside the intersection of West and Free Speech. Please disregard any kabbalistic implications of the meetup cross-streets.

WHO: Special guest is Gwern of gwern.net. Also me, Katja, and the usual Bay Area crowd.

WHY: Cause it’ll be fun. A lot of people have said before that they considered not going because they “don’t think they’re the typical SSC reader” or they’re “not sure they’d be able to keep up” or things like that. In the past, these people have usually had a good time and encouraged me to post something like this encouraging other people like them to come. The more unique and atypical people we get, the more fun it is getting to talk and exchange ideas. It’s a pretty low-key environment and very open to just hanging out on the edges of interesting conversations until you find one you’re comfortable joining. Also, you’ll probably be less socially awkward than I am, and it’s my meetup, so everyone has to tolerate me, so they’ll have to tolerate you too.

HOW: We haven’t done well with cafes or other traditional meetup spaces in the past, so we’ll just meet outside and sit on the grass. Bring blankets / refreshments if you want them. If it’s raining, we’ll meet just inside the Natural History Museum nearby and figure out what to do from there.

See you there!


Links 2/18: Link Biao Incident

Punding, an uncommon side effect of abusing amphetamines and other dopaminergic drugs, involves “compulsive fascination with and performance of repetitive, mechanical tasks, such as assembling and disassembling, collecting, or sorting household objects, [for example] collecting pebbles and lining them up as perfectly as possible, disassembling wristwatches and putting them back together again, building hundreds of small wooden boxes”, etc. Also: “They are not generally aware that there is a compulsive element, but will continue even when they have good reason to stop. Rylander describes a burglar who started punding, and could not stop, even though he was suffering from an increasing apprehension of being caught.”

After the US repealed net neutrality provisions, the state of Montana has made its own rule demanding neutrality from providers receiving state contracts. Not sure how much this matters for broader society – or how many internet providers the average Montana state government office has to choose from, or what they’ll do if none of them agree to be neutral.

Surprisingly, Tibetan monks are more afraid of death than any other group studied.

Trump places tariffs on solar panels (and washing machines) in a move some people warn could set back renewable energy (and laundry, I guess). Anyone have an explanation for how focusing on solar in particular isn’t just gratuitously evil? (commenters answer)

Less-covered spaceflight news: New Zealand startup Rocket Lab reaches orbit with a low-cost rocket using an electric-pump driven engine and 3D-printed parts. In more depressing space news: Google Lunar X Prize has officially announced that everyone loses and they will not be extending the contest further.

Was looking into tinnitus for a patient recently and came across this weird (temporary?) tinnitus treatment on Reddit that everyone says works. Possible explanation for why it might work here gives interesting insight into (some) tinnitus mechanism.

One reason the US doesn’t use the metric system: the scientist shipped in from Europe to testify to Congress on the issue was kidnapped by pirates. Bonus: the pirates may also have got one of the six Standard Kilograms.

NSA removes “honesty” and “openness” from its list of core values.

Paul Addis was a San Francisco activist and attorney famous for setting the Burning Man man on fire early to protest the corporatization of the event. Burning Man’s founder said Addis’ arson was “the single most pure act of radical self-expression to occur at this massive hipster tail-gate party in over a decade” – but Addis was sentenced to four years in prison for arson anyway. After release, he committed suicide by jumping in front of the BART.

More from the Department Of Weird Blockchain Projects Named Luna: “Luna DNA” allows users to upload their genetic data in exchange for a crypto-token called “Luna Coin”. What could possibly go wrong?

“[Aristotle has] a slight but consistent and habitual penchant in the corpus for humorous verbal play…there seems to be only about one pun per score of Bekker pages, but…there is no class or area of study in which Aristotle totally avoids punning.” (h/t Lou Keep)

New Statesman on Jacob Rees-Mogg, the Tories’ answer to Jeremy Corbyn: “He has never been seen (except perhaps by his wife) in anything other than a suit and tie. He speaks in sonorous Edwardian English and is unfailingly courteous…[In primary school], he played the stock markets using a £50 inheritance from a relative, standing up at the General Electric Company’s annual meeting and castigating a board – that included his father – for the firm’s “pathetic” dividend. A contemporary newspaper photograph showed the precocious 12-year-old solemnly reading the Financial Times beside his teddy bears…[He was married] in Canterbury Cathedral, the archbishop having authorised a Tridentine mass in ecclesiastical Latin in light of Rees-Mogg’s fervent Catholicism. The couple now have six children aged between seven months and ten, all bearing the names of Catholic popes and saints.” From his Wikipedia page: “Speaking in July 2017, Rees-Mogg conceded that ‘I’ve made no pretence to be a modern man at all, ever'”. Despite being by all accounts a colorful and likeable character, he doesn’t seem very competent and his opinions are out-of-touch and (imho) pretty dumb. Based on Jeremy Corbyn’s career path, Rees-Mogg will probably be Prime Minister within a year. Article is also interesting as an example of how left-leaning media has developed a counterproductive habit of sometimes covering the Right in terms of “We all know we should dislike this person, but look how cool they are!” This seems new and surprising and seems to require an explanation, maybe in terms of outgroup-fargroup dynamics.

After a lot of work, some people have been able to find an economic argument for why open borders would be a bad idea – but it still implies “a case against the stringency of current [immigration] restrictions” (though see here).

Credentialism watch: MIT is launching a new masters program in economics that doesn’t require a college or high school degree. Applicants need to take some free online courses and pass some non-free online tests, and then if they do well they can move on to the in-school part of the course. The program is being offered in affiliation with a group studying developmental economics and poverty, and is at least partly aimed at poor students from Third World countries. But Americans are already taking advantage of it, and it has more promise than most things in this sphere to help increase social mobility and bring down education costs.

Related: congratulations to Trinity College in Connecticut, the first (?) US college to break the $70,000/year price barrier. $100K or bust!

Related, if you think about it: It’s sometimes reported that SAT score and college GPA “only” correlate at a modest 0.35. But a book on education (h/t Timofey Pnin) points out that this is because higher-SAT-scoring students go to more elite colleges and major in more difficult subjects. Once this and some other confounders are adjusted for, the correlation rises to 0.68.

Contrary to what you might have learned in school, the tallest mountain in the solar system isn’t Olympus Mons. It’s Rheasilvia, a mountain on the asteroid Vesta whose height is almost 9% of the total radius of the asteroid.

Amazon enters the health care sector, so far just in order to provide health care for its own employees and those of a few other participating large companies. Claims that this mission will make it “free from profit-making incentives”, though some might ask how exactly profit-making incentives differ from cost-cutting incentives, which they’ll definitely have. Shares in major insurance companies fell 5% on the announcement. Interesting that the US health system has accidentally incentivized corporations to figure out solutions to rising health care costs, but I am not sure this is actually possible under current regulations other than by just providing worse care – the one cost-cutting measure that always works.

Study claims that pain tolerance predicts how many friends you have, although the theorized mechanism is something about the opiate system, and not just that social interaction is inherently painful and the number of friends you have depends on your ability to tolerate it (what does it say about me that this was my first guess?). Anyhow, Reddit seems to have mostly debunked it, which pretty closely matches my expectations for how this sort of result would fare.

For reasons lost to time, apprentice advocates in Scotland are called “devils”, their apprenticeship is called a “devilling”, and their supervisor is called a “devil-master”. May be related to the similar practice of calling apprentice printers printer’s devils, likewise mysterious in origin. Theories include puns (they always got covered in ink, so they were practicing “the black arts”), superstition (originally people thought printing was really creepy and possibly satanic because you could create a book full of perfect identical letters), and racism (one of the first printer’s apprentices was an African, and everyone just assumed the only reasonable explanation for a person having black skin was that they were the Devil). A final theory is that printers’ devils were responsible for managing the box of discarded or broken letters, colorfully known as a hellbox. (h/t Eric Rall)

Campus free speech watch: FIRE demands college release its records about its firing of a professor who vocally supported Black Lives Matter.

Hawaiian Redditors describe their experiences receiving the false-alarm broadcast that Hawaii was about to be nuked. Some of these stories must be fake, but they’re still fun to read.

Your Twitter Followers Are Probably Bots. Everyone important, including honest people who don’t deliberately pay for bots to follow them, probably has bots following them on Twitter, mostly because bots follow a bunch of famous people in order to look more like real accounts. There are some techniques you can use to determine how many of your followers are bots. Complete with an analysis of how the New York attorney general, who’s conducting investigations into people with fake followers on Twitter, has…a bunch of fake followers on Twitter.
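The linked techniques mostly boil down to scoring accounts on a few public profile signals. Here's a toy sketch of that idea – the field names and thresholds are my own invention for illustration, not the article's actual method:

```python
def bot_score(account: dict) -> int:
    """Crude bot-likeness score from public profile signals.
    Higher = more bot-like. Thresholds are purely illustrative."""
    score = 0
    if account.get("default_avatar", False):  # never set a profile picture
        score += 1
    if account.get("tweets", 0) == 0:         # follows people but never posts
        score += 1
    followers = account.get("followers", 0)
    following = account.get("following", 0)
    if following > 1000 and followers < 10:   # mass-follows celebrities
        score += 2
    return score

suspicious = {"default_avatar": True, "tweets": 0,
              "followers": 3, "following": 4800}
print(bot_score(suspicious))  # 4 – trips every heuristic
```

Real bot-detection services use many more signals (tweet timing, name entropy, network structure), but the flavor is the same: no single signal is damning, so you accumulate evidence.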

Marginal Revolution commenters on why automating trucking will take longer than you think.

A lot of big nutrition studies coming out recently. I’m not going to describe the results because there’s a lot of debate on how they should best be described and I don’t want to take a position without much more room to explain myself. But one is a randomized controlled trial on how adding sugar to the diet affects insulin sensitivity – this is really impressive since (for what I assume are ethical/IRB reasons) nobody had ever studied this via RCT before. The other is a large sample size study testing low-fat vs. low-carb diets over a long period with high compliance, partly sponsored by Gary Taubes-affiliated Nutritional Science Initiative.

Contrary to previous research, newer research suggests that increased incentives (eg paying people for a good score) do not increase adult IQ test performance. Related: IQ predicts self-control in chimpanzees.

Did you know: Blue is a dating site for verified “blue check” Twitter users only. All we need is a policy of giving the children of two bluecheck users their own bluecheck and then we can have a true hereditary aristocracy.

Close to my heart: the relationship between sensory processing problems and obsessive-compulsive symptoms.

List Of Substances Administered To Adolf Hitler. If you’ve ever thought “Man, some of that Nazi stuff sounds like it came from a guy who was on a cocktail of methamphetamine, cocaine, adrenaline, testosterone, strychnine, heroin, oxycodone, morphine, barbiturates, and human fecal bacteria”, well, you’re not wrong.

Related: the story of the most-unfortunately-named person in American history: Dr. Gay Hitler.

New meta-analysis: no evidence mindfulness works for anything. I suspect this is true the way it’s commonly practiced and studied (“if you’re feeling down, listen to this mindfulness tape for five minutes a day!”), less true for more becoming-a-Buddhist-monk-level stuff.

KnowYourMeme: “Hamilkin refers to a subculture of people who identify with characters from the musical Hamilton to the point where they believe they are those characters, spiritually.” Sort of wonder if closer examination would reveal this to consist entirely of eight very vocal twelve-year-olds, three schizophrenics, several thousand trolls pretending to believe it for the lolz, and a bunch of writers exaggerating it for clicks – but I also sort of wonder this about flat-earthers and the alt-right.

More in the “contra poverty traps” research agenda: children whose parents are kicked off disability insurance are less likely to use disability insurance themselves as adults.

George Strait, the best-selling country singer of all time, is Jeff Bezos’ cousin. Also interesting: “Bezos” is a Cuban name, although Jeff himself is not of Cuban descent and got it from his stepfather.

The naming convention for the Trojan asteroids dictates that asteroids in front of Jupiter are named for Greek heroes from the Trojan War, and asteroids behind Jupiter are named for Trojan heroes. Two asteroids – 617 Patroclus and 624 Hektor – were named before the convention arose and are “on the wrong side” (h/t Alice Maz)

Trump is considering replacing some food stamp benefits with delivery of pre-prepared food boxes – I’ve previously written here about reasons I think something like this is a bad idea.

Just when everyone agreed ego depletion was debunked and dead, Baumeister et al strike back with a pre-registered study that continues to show the effect. Haven’t gotten a chance to look at it seriously yet, but glad that pre-registration etc are catching on.

Redditors who work in gun shops talk about their job and recount their weird experiences.

Russian lifehack: “Moscow residents say they have found that the only way to get the [government] to clear snow is to write the name of opposition leader Alexei Navalny on it”. Sort of related: in the 1970s, the West Virginia government refused to fund a necessary bridge in the town of Vulcan. The people of Vulcan appealed to the USSR to provide the funding; after the USSR expressed interest in helping, West Virginia approved it immediately.

Greg Cochran: most likely cause of the global decrease in frog populations is a fungal disease, possibly spread by researchers investigating the most likely cause of the global decrease in frog populations.

Related to a discussion from a while ago: an update in the field of sexual-orientation-detecting neural networks replicates the finding that they are clearly more accurate than humans at using faces to guess whether or not people are gay. The claim that, given five images, they can detect gay men with 91% accuracy seems unbelievable; waiting to hear further research.
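One reason for skepticism about headline accuracy numbers: they usually come from balanced test sets, and mean much less at realistic base rates. A quick Bayes calculation with assumed numbers (91% sensitivity and specificity, 5% base rate – my illustration, not figures from the paper):

```python
base_rate = 0.05     # assumed prevalence in the general population
sensitivity = 0.91   # P(classifier says gay | actually gay)
specificity = 0.91   # P(classifier says straight | actually straight)

true_pos = base_rate * sensitivity
false_pos = (1 - base_rate) * (1 - specificity)
precision = true_pos / (true_pos + false_pos)

print(round(precision, 2))  # 0.35 – most "positives" are false alarms
```

So even taking the 91% figure at face value, roughly two-thirds of people the classifier flags would be straight – which is why "accuracy" on a balanced set and real-world usefulness are very different claims.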

Peter at Bayesian Investor responds to my predictions for the next five years. Related: M at Unremediated Genderspace responds to my article about categorization systems and gender.

Lincoln Network releases their survey on viewpoint diversity in the tech industry. Key points include a self-described moderate saying “I’ve never heard of anyone who left tech because of their views. That’s ridiculous”, and 59% of self-identified very conservative people saying they know people who avoided or left jobs in tech because they felt they weren’t welcome due to their political views. People in five out of six political categories (including liberals, but not very-liberals) say they feel less comfortable sharing viewpoints with colleagues after the Google diversity memo incident. Keep in mind the high likelihood of sampling bias, though this shouldn’t affect results aggregated by political group as much.

The Tiffany Problem is an issue sometimes encountered by authors and other creative types, where trying to be realistic makes a work feel more unrealistic. Named after a medievalist who included a character named Tiffany (common medieval name), only to be told her book was unrealistic because obviously nobody would be named that back then.

In 1957, Mad Magazine published an article on a made-up system of measurement written by a 19-year-old Donald Knuth.

Nobody really knows what the languages of the now-extinct Tasmanian aborigines sounded like, but various scholars have created palawa kani, a conlang intended to resemble them as much as possible, and it’s even caught on a little in Tasmanian schools and government. Also, am I just pattern-matching, or do a suspicious number of unrelated languages use some version of “mina” to mean “me”?

Related: fascinated by this unsourced claim on Wikipedia that the Ewe of Ghana and Togo believe themselves to be descendants of the one guy who didn’t participate in building the Tower of Babel, and their language to be the perfect language. Anyone know more about this belief, or how common stories like these are for different groups’ languages?

California state government is considering a bill that would mandate very strong pro-housing pro-development policies in almost all major urban areas. By the usual boring standard of state government issues, this is an unfathomably huge deal and could end the housing crisis single-handedly. Possible unintended consequence: since it works by mandating pro-development policies within a certain radius of mass transit, expect no more mass transit ever if it passes. Other possible unintended consequence: I’m less sure than many of my friends that pro-development policies are always good in all cases – but right now the pendulum is so far in the other direction that I’m happy to have one state shake things up a little (okay, maybe a lot) and put the fear of God into NIMBYs so they’ll compromise more elsewhere. Needless to say, Berkeleyans are already writing op-eds about how it will “cause massive damage to the global environment for thousands of years, possibly enough to tip the balance to the extinction of the entire human race.” No word yet on whether the bill has any chance of getting passed in the real world. Some discussion on Marginal Revolution.

China cracks down on funeral strippers.
