Slate Star Codex

THE JOYFUL REDUCTION OF UNCERTAINTY

Sentimental Cartography

A long time ago, I made a map of the rationalist community.

This is in the same geographic-map-of-something-non-geographic tradition as the Greater Ribbonfarm Cultural Region or xkcd’s map of the Internet. There’s even some sort of therapy program that seems to involve making a map like this of your life, though I don’t know how seriously they take it.

There’s no good name for this art and it’s really hard to Google. If you try “map of abstract concept” you just get a bunch of concept maps. It seems the old name, from back when this was a popular Renaissance amusement, is “sentimental cartography”, since it was usually applied to sentiments like love or sorrow. This isn’t great – the Internet’s not a sentiment – but it’s what we’ve got and I’ll do what I can to try to make it catch on.

Here are some of the best (ie, only) works of modern sentimental cartography I’ve been able to find. Sorry if this ends up a little clickbaity, but I’m annoyed that these haven’t been gathered together in one place before. I am very limited by some of them being offline and copyrighted, so some of these will be teasers rather than the full map. Others will be thumbnails that you can click through to get to the full map or an approximation.

The best modern sentimental cartography I can find comes from illustrator James Turner, who made the Map of Humanity.

You can find more samples at this site. Turner also has a map of love and relationships, but good images of it are even harder to find.

This is all I’ve got

You can buy poster versions of both maps at the SLG Publishing Store, and you really should. This is one of my favorite things in the whole world, and it’s sold a grand total of seven copies so far (I think I might be two of the seven).


There are some similar works in the Atlas Of Experience, available in part as a website and in full as a book on Amazon. This seems to be their main offering:

And this is a magnification of one of their peninsulae:


Martin Vargic has a bunch of really good sentimental cartography. The most easily accessible is his Map Of The Internet.

He also has a Map of Literature, not particularly accessible except for teasers:

Not a thumbnail, sorry – this is the biggest I have

The full version of this map and several others with no online presence are available in Vargic’s book Miscellany Of Curious Maps.


This is a road map of songs whose names sound like things that should be on a road map:

The poster version is available for purchase here, as are a similar TV map, book map, game map, and film map.

And speaking of film maps, a random Reddit commenter made one:

That’s it. These are literally all the good modern sentimental cartography maps I know. This is a good art form, it shouldn’t have died with the Renaissance, and people should do more work to resurrect it.


The Whole City Is Center

Related to yesterday’s post on people being too quick to assume value differences: some of the simplest fake value differences are where people make a big deal about routing around a certain word. And some of the most complicated real value differences are where some people follow a strategy explicitly and other people follow heuristics that approximate that strategy.

There’s a popular mental health mantra that “there’s no such thing as laziness” (here are ten different articles with approximately that title: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10). They all make the same basically good point. We shame people who don’t work very hard as “lazy”, and think they should have lower status than the rest of us. But actually, these people don’t just randomly choose not to work. Some of them have psychological issues, like anxiety and trauma that are constantly distracting them from their work, or a fear of success, or self-defeating beliefs about how nothing they do matters anyway. Others have biological issues – maybe hypothyroidism, or vitamin deficiencies, or ADHD, or other things we don’t understand that lower their energy and motivation. Still others just don’t want to do the specific thing we are asking them to do right now and can’t force themselves to work uphill against that gradient. When we call people “lazy”, we’re ignorantly dismissing all these possibilities in favor of a moralistic judgment.

A dialogue:

Sophisticus: I don’t believe in laziness.

Simplicio: What about my cousin Larry? He keeps promising to do important errands for his friends and family, and then he never does them. Instead he just plays video games all the time. This has happened consistently over the past few years, every time he’s promised to do something. One time my aunt asked him to go to the DMV to get some paperwork filled out, he promised he would do it, and then he kept putting it off for a month until it was past the deadline and she almost lost her car. He didn’t forget about it or anything, he just couldn’t bring himself to go out and do it. And he’s been fired from his last three jobs for not showing up, and…

Sophisticus: Yes, yes, I’m sure there are people like this. But he probably has some self-defeating beliefs, or vitamin deficiencies, or mental health issues.

Simplicio: Okay. Well, my mother is going to be away for the next week, and she needs someone to dog-sit for her. Her dog is old and sick and requires a lot of care each day. She’s terrified that if he doesn’t get his food and medication and daily walk on time, something terrible will happen to him. She’s willing to pay a lot of money. Do you think I should recommend she ask my cousin Larry?

Sophisticus: No, of course not.

Simplicio: Why not?

Sophisticus: He probably won’t do it. He’ll just play video games instead.

Simplicio: Why do you think so?

Sophisticus: Because he has a long history of playing video games instead of doing important tasks.

Simplicio: If only there were a word for the sort of person who does that!

Sophisticus: Oh, I see. Now you’re making fun of me. But I’m not saying everyone is equally reliable. I’m saying that instead of denouncing someone as “lazy”, we should look for the cause and try to help them.

Simplicio: Hey, we did try to help him. Larry’s family has taken him to the doctor loads of times. They didn’t find anything on the lab tests, but the psychiatrist thought he might have ADHD and gave him some Adderall. I would say now he pulls through on like 20% of the things we ask him to do instead of zero percent. We also tried to get him to go to therapy, but the therapist demurred because ADHD has a very low therapy response rate. His parents tried to change the way they asked him to do things to make it easier for him, or to let him choose a different set of tasks that were more to his liking, but that only worked a little, if at all. Probably there’s some cause we don’t understand, but it’s beyond the reach of medical science, incentive design, or the understanding that exists between loving family members to identify.

Sophisticus: See! The Adderall helped! And letting him choose his own tasks helped a little too!

Simplicio: I agree it helped a little. So should I recommend him to my mother as a dog-sitter?

Sophisticus: No, of course not.

Simplicio: Then I still don’t see what the difference between us is. I agree it was worth having him go to the doctor and the therapist to rule out any obvious biological or psychological issues, and to test different ways of interacting with him in case our interaction style was making things worse. You agree that since this still hasn’t made him reliably fulfill his responsibilities and we don’t have any better ideas, he’s a bad choice for a dog-sitter. Why can’t I communicate the state of affairs we both agree on to my mother using the word “lazy”?

I imagine Sophisticus believing he has a fundamental value difference with people who use the term “lazy”. They think that some people are just bad and should be condemned, whereas he wisely believes that everything has a cause and people who have issues with motivation should be helped. But it’s not clear to me that this is a real difference. I can imagine someone signaling hard-headedness and strictness by insisting that they were against laziness, and someone else signaling compassion by insisting that they don’t believe in laziness, but it’s pretty hazy exactly where their maps of the world diverge.

But back to the dialogue:

Sophisticus: Because “lazy” is laden with the idea that lazy people should be punished. You should yell at them to get off their ass and do some work.

Simplicio: I mean, I’m not sure that’s wrong? When my aunt and uncle tried to take Larry to the psychiatrist, he didn’t want to go. My uncle started screaming at him that if he didn’t make the appointment he would never amount to anything, and he would be a loser his entire life, and they would disown him – and I guess it freaked Larry out enough that he made the appointment. And it seems like if that kind of thing makes people do important stuff for their own good – whether it’s make appointments or hold down a job – then it might be reasonable, at least coming from people with whom the lazy person has entered into some kind of relationship.

Sophisticus: I think that kind of strategy might occasionally work in the short-term, but that in the long-term it makes things much worse.

Simplicio: I agree that’s possible, but it seems like we have a factual disagreement here. And I think that factual disagreement is best expressed by the question “Does laziness respond to social shaming or not?”, not a claim that laziness doesn’t exist. It certainly doesn’t seem like we have a value difference unrelated to any purely-factual beliefs.

Maybe both participants are wrong here. My impression is that some forms of laziness respond to incentives and others don’t. I know many people who will start work on a project they’ve been putting off if they know it’s due the next day and worth half their grade. I also know other people who won’t. But continuing:

Sophisticus: I can imagine some cases in which it’s useful to use external rewards and punishments to encourage people with low motivation to do something. But the word “lazy” doesn’t just mean “can be motivated by external reinforcement”. It’s an attempt to judge somebody, to say they’re lesser, to lower their social status.

Simplicio: You just said that my mother should avoid hiring Larry for a lucrative job. Surely that’s a judgment, and surely keeping him unemployed forever lowers his social status.

Sophisticus: I’m judging him as bad at one thing, not as a Universally Bad Person.

Simplicio: Do you think Larry would be a good pilot?

Sophisticus: Well, no…

Simplicio: Nuclear engineer?

Sophisticus: No, but…

Simplicio: Lieutenant colonel in the army?

Sophisticus: I agree there are many things Larry would not be good at.

Simplicio: And surely the person who thinks he is lazy agrees there are some things he might be good at – for example, he might be handsome, or intelligent. Indeed, the “lazy but bright” student is a stock cultural figure. The main judgment that “lazy” represents is that he’s not a very hard worker – a judgment you seem to share.

Sophisticus: I think that for them it’s a moral judgment, and for me it isn’t.

Simplicio: A moral judgment? I don’t think of a lazy person as more likely to rob or murder. Do you think others do?

Sophisticus: No, I don’t think so. It’s not a judgment that they’re bad at a specific field we both agree is moral. It’s a judgment that they should be considered less moral just because they’re lazy.

Simplicio: But how does that cash out? Both you and they want them to not get certain jobs. Both you and they believe some level of reinforcement might make them more motivated, though we can debate the factual specifics. Is there anything that a moralist would do that you wouldn’t?

Sophisticus: I’m not sure the belief would cash itself out in some specific way, but they would have it.

Simplicio: If you both give him the same jobs and treat him the same, what’s the difference? Just give him the Heartstone and call it a day!

Sophisticus: You’re mocking me again.

Simplicio: I think we’re treating the word “laziness” differently. I’m thinking of “lazy” as a way to communicate a true fact about the world. You agree that the true fact should be communicated by some word, but you’re interpreting “lazy” to mean some sort of awful concept like “a person who avoids responsibilities in a way not caused by anything whatsoever except being bad, and so we should hurt them and make them suffer”. Are you sure this isn’t kind of dumb? Given that we need a word for the first thing, and everyone currently uses “lazy” for it, and we don’t need a word for the second thing because it’s awful, and most people would deny that “lazy” means that, why don’t we just use “lazy” for the very useful purpose it’s served thus far?

Sophisticus: I think…

Simplicio: And it’s the same with “judgment”. I’m using it to mean a reasonable thing that everyone does and has to do. You’re demanding we reserve it for some kind of ultimate judgment about everything that doesn’t really make sense and probably should never happen.

Sophisticus: I think you’re wrong about common usage. I think a lot of people – maybe not you, but a lot of people – really do use “lazy” to mean the second thing. And that even for good people like yourself, “lazy” has a bit of a connotation of the second thing which you can’t avoid letting slip into your mind.

Simplicio: If you’re right, I worry you’re going up against the euphemism treadmill. If we invent another word to communicate the true fact, like “work-rarely-doer”, then anyone who believes that people who play video games instead of working deserve to suffer will quickly conclude that work-rarely-doers deserve to suffer.

Sophisticus: Then let’s not invent something like “work-rarely-doer”. Let’s just say things like “You shouldn’t have Larry as a dog-sitter, because due to some social or psychological issue he usually plays video games instead of doing difficult tasks.”

Simplicio: I think people are naturally going to try to compress that concept. You can try to stop them, but I think you’ll fail. And I think insofar as you can communicate the concept at all, people are going to think less of Larry because of it. It’s possible you can slightly decrease the degree to which people think less of Larry, but only by slightly decreasing their ability to communicate useful information.

Sophisticus: Well, that’s a risk I’m willing to take.

Simplicio: If there were such a thing as laziness, but it was rare, then it would make sense to argue “most people aren’t lazy”, since lazy would be pointing at a particular quality that most people don’t have. But if you say there’s no such thing as laziness, then it sounds like maybe you’re kind of weird to insist on defining “laziness” to refer to a quality that nobody has, yet refuse to use any word to refer to the quality that many people do have. It would be like wanting our language to have a word for “unicorn” but not for “horse”.

I think Simplicio is working off one of these kinds of models (from How An Algorithm Feels From The Inside, but see also here):

…where “lazy” is the node at the center of the second one. He’s saying that if he and Sophisticus both agree on all of the outside nodes, why exactly are they holding a debate on the status of the center, when the center is just a way to help us predict the values of the others? He feels like Sophisticus is insisting on designing the structure such that the central node has some deep metaphysical meaning that he can’t explain but which is very bad, whereas he just wants it to be a perfectly ordinary predictor, something like the sketch below.
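Here is a minimal toy sketch of that reading in Python (my own illustration, not anything from the original post or the linked diagram; every trait name and number here is an invented assumption):

```python
# A toy "central node": the label "lazy" is treated not as a metaphysical
# fact but as a summary statistic that lets observed traits predict
# unobserved ones. All trait names and probabilities are made up.

OBSERVED = {                      # things Simplicio actually knows about Larry
    "breaks_promises": True,
    "fired_for_no_shows": True,
    "games_instead_of_errands": True,
}

def central_node(observed):
    """Crude probability that the 'lazy' cluster applies, from observations."""
    return sum(observed.values()) / len(observed)

def predict(trait, observed):
    """Predictions flow *through* the central node to unobserved traits."""
    p_lazy = central_node(observed)
    rates_given_lazy = {"reliable_dog_sitter": 0.1, "handsome": 0.5}
    rate = rates_given_lazy.get(trait, 0.5)
    return rate * p_lazy + 0.5 * (1 - p_lazy)   # fall back to 50/50

print(predict("reliable_dog_sitter", OBSERVED))  # low: don't hire Larry
print(predict("handsome", OBSERVED))             # ~0.5: the label is silent
```

If both debaters agree on every input and output of this function, a further argument over whether p_lazy “really exists” changes no prediction – which is exactly Simplicio’s complaint.

Moving on: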

Sophisticus: What about this? I think that people with low motivation sometimes can be helped by reinforcement – including negative reinforcement. But other people think they should be punished. There’s a big difference between simple negative reinforcement and punishment. If you’re just using negative reinforcement, you’re trying to use as little as possible to get the result you want. But when you’re judgmental and you divide people into good and bad, you usually add that the bad people deserve to suffer, regardless of the effect.

Simplicio: This is a strange distinction. Suppose I beat up my wife and threaten to do it again. Shouldn’t I go to jail?

Sophisticus: I think we shouldn’t be excessive about it, and I don’t support mass incarceration, but I don’t want you to get off scot-free, because it seems like that would encourage future domestic violence.

Simplicio: If only there were a word for the sort of thing where we made sure people didn’t get off scot-free in a way that encouraged future crime!

Sophisticus: No no no, you still don’t get it. There’s a difference between a principled consequentialist view of discouraging actions, and wanting people to suffer.

Simplicio: Look, I happen to know one of those Hogwarts wizards everyone keeps writing books about, and he’s offered to let me take a magic Unbreakable Vow that I won’t assault anyone ever again. Now that there’s no point in discouraging me, I don’t need to deal with this jail thing, right?

Sophisticus: It’s not just about discouraging you personally. It’s about making an example of you to discourage everyone else. Also, there’s a Parfit’s Hitch-Hiker type element – the threat of punishment now could have prevented you from committing the crime in the past, and the threat couldn’t be credible unless we agreed to actually punish you.

Simplicio: Then I question whether the “principled consequentialist view” ever differs from the “believing in punishment and wanting bad people to suffer” view in terms of what actions it recommends.

Sophisticus: The people who believe in punishment often say things like “I hope that person rots in jail” or “Let’s make the conditions in jail extra bad”. Whereas I want domestic violence discouraged by nice Scandinavian-style prisons and – when possible – community service.

Simplicio: If you learned that having nice jails actually…

Sophisticus: Oh, I know what you’re going to say. You’re going to say this is just a factual difference between me and the pro-punishment faction. They believe, as a matter of fact, that bad conditions discourage crime extra effectively, since some criminals who would be willing to take the risk of a nice Scandinavian-style prison would be scared off by a dark overheated cage. And I agree this is a possible axis on which people can differ, and that if you proved to me that this was true I could be persuaded to reconsider my views. But I have talked to people who have literally said the words “I don’t care how much it discourages crime or not, I want criminals to suffer.”

Simplicio: Okay. I agree that’s good evidence for your view.

Sophisticus: You…do? Really? I won one of these? REALLY?!

Simplicio: I guess.

Sophisticus: So you admit sometimes there are fundamental value differences?

Simplicio: Sometimes, yeah, I guess. But I want to be really careful with this. Humans are adaptation-executors, not fitness-maximizers. Only one person in a thousand could give the principled consequentialist defense of criminal justice that you’re giving here. The game theory necessary to understand the defense is only a few decades or centuries old, depending on how exactly you define it – but even chimpanzees need to discourage defectors. Since evolution couldn’t cram the whole principled consequentialist defense into a chimpanzee brain, it just gave us the urge to punish.

Sophisticus: I agree that’s a plausible scientific account of the genesis of the urge to punish. But that doesn’t mean that I have to agree with it. After all, evolution gave us an urge to eat sugary food, but I can ignore that urge when I don’t think it’s the healthy thing for me to do at the moment.

Simplicio: Thanks to modern medical science, you’re smarter than your urge telling you to eat sugar. I’m not sure how many people are smarter than their urges to punish. If you miss the Parfit’s Hitch-Hiker angle, you punish the wrong people. If you miss the angle where you have to adjust for probability of catching the crime, you punish people the wrong amount. But the person just following their evolutionary urges would get both of those right – more or less. Imagine that, using physics, you are able to approximate the ball-trajectory-predicting power of the world’s best golfer – but the golfer still does a little bit better. Would you pooh-pooh him for merely following his base evolutionary urges?

Sophisticus: If it harmed people, yes! You’re trying to reduce this to factual differences again, but you already admitted that’s not going to work. We’re not debating the effectiveness of different punishment levels here. For all I know, evolutionary urges are more effective at the goal of keeping me alive – which is notably different from the goal of being just. But that’s not the point. The point is that I think there are people who, even if God handed them a stone tablet saying “YOU ONLY NEED TO PUNISH THIS PERSON X AMOUNT TO EFFECTIVELY DISCOURAGE FUTURE ACTIONS”, would still punish them X+1 amount just to make them suffer.

Simplicio: Okay, I didn’t want to re-open the factual differences thing. I agree they are not aiming at the same thing you are. My point is just that the only difference between you and the pro-punishment faction is that you are following an explicitly-calculated version of the principled consequentialist defense of punishment, and they are following a heuristic approximating the principled consequentialist defense of punishment, and their heuristic might actually be more accurate than your explicit calculation.

Sophisticus: So what? Again, they outright say they would deviate from the principled consequentialist position.

Simplicio: Yes. Adaptation-execution rather than fitness-maximization again. Evolution can’t quite cram the entire principled consequentialist position into our heads, so it just gives us an urge, and sometimes the urge does weird stuff that the principles wouldn’t.

Sophisticus: Again, so what? I agree there’s a biological/psychological cause for other people being wrong about punishment – just as there is a biological/psychological cause for other people being bad at fulfilling responsibilities – “lazy”, you would say – but that doesn’t mean I can’t continue to disagree with them.

Simplicio: My point is that, if you squint, this is sort of a factual disagreement. It’s not a factual disagreement between you and them. It’s a factual disagreement between you and the evolutionary/biochemical process that created their sense of justice. The evolution/biochemistry is trying to instantiate a view of punishment that does the best job of protecting them and their loved ones without expending unnecessary resources. But it’s getting it wrong – probably erring on the side of caution, as you would expect these sorts of processes to do. I agree you have a value disagreement with them, but this is less them having some value totally foreign to you, and more them attempting to implement your values but not doing a very good job.

Sophisticus: You’re screwing up levels again! If what you’re saying is true, then I have a mere “factual disagreement” with the evolutionary/biological process that produced their values, insofar as you can have a factual disagreement with a blind impersonal force. But I still don’t agree with the people who are the end result of the process! They should be abandoning their evolutionary/biochemical process in favor of what’s actually right.

Simplicio: Nobody has a coherent theory of when to abandon their evolutionary/biochemical processes, though. I have the urge to care about my children more than I care about some random people somewhere else. That’s clearly an evolutionary/biochemical process. I cannot justify it based on pure reason. But I choose, in reflective equilibrium, to keep that urge. What moral law can you tell me that allows me to ditch the irrational consequences of my excessive-punishment-urge, but keep the irrational consequences of my love-children-urge?

Sophisticus: Hmmm…what about “the excessive-punishment urge is wrong, even by the standards of the evolutionary/biochemical process that produced it, but the love-children urge is right”?

Simplicio: I’ve been told that we love kittens based on a misfiring of our evolutionary urge to love children. Should I abandon that one? I’ve been told I love beauty and nature and high mountains and deep forests based on evolutionary heuristics about what kind of places will have a good food supply – now that I can order take-out, should I ditch that too? Huge chunks of our hopes and dreams are the godshatter of misfired evolutionary processes. Tell me what principled decision procedure lets us judge among them, rejecting one as evil but accepting the other as good?

Sophisticus: I cannot. I make no claim that I can. I only say that, by my arbitrary choice of methods of reaching reflective equilibrium, natural beauty is good but punishment is bad. And that if someone else’s arbitrary choice of methods of reaching reflective equilibrium pronounces the opposite, they have a fundamental value difference from me, and I won’t shrink from saying so.

Simplicio: Then all I am saying is to be understanding. They’re not people who are coming from some sort of alien ideology of suffering being good for its own sake. They’re people who are taking the same godshatter you are, and applying a different process of arbitrary reflective equilibrium to it, in a world where none of us really understand or control the process of reflective equilibrium we go through. That gives me a different and more understanding perspective on them. It may not make me agree with them, but it makes me more willing to think of them as an odd but sympathetic potential negotiating partner rather than some sort of hostile villain.

I need to admit here that I personally am neither as saintly as Sophisticus nor as reasonable as Simplicio. A while ago, I learned that my great-grandfather was murdered, and my great-grandmother – normally a deeply kind and compassionate woman – demanded the death penalty rather than life imprisonment for his murderers. When the jury went with life imprisonment anyway, she yelled at them that she hoped someone killed their loved ones so they knew how it felt. This story had a pretty big impact on me and made me try to generate examples of things that could happen such that I would really want the perpetrators to suffer, even more than consequentialism demanded. I may have turned some very nasty and imaginative parts of my brain, the ones that wrote the Broadcast interlude in Unsong, to imagining crimes perfectly calculated to enrage me. And in the end I did it. I broke my brain to the point where I can very much imagine certain things that would happen and make me want the perpetrator to suffer – not infinitely, but not zero either. I am not going to claim that this is just some misfiring of evolutionary urges which I obviously denounce. I think I stick to them the same way I stick to liking kittens. I’m not sure I would promote them as policy – in the same kind of second-level way where I can think of some people who would make good dictators but still don’t actually want them to set up a dictatorship – but I don’t renounce them entirely either. I guess reflective equilibrium is easier to disturb than I thought.

Sophisticus: I’ve been thinking, Simplicio – doesn’t your philosophy hoist itself on its own petard?

Simplicio: What do you mean?

Sophisticus: You insist that much of what people consider value differences is actually a difference in what words they are willing to use while describing basically the same values. And that if a term seems unsavory to us, we should use it to describe the closest useful concept, rather than condemning it for applying to a bad concept we shouldn’t have. If we hear “laziness”, we should assume it stands for the way your cousin Larry is, rather than some package of moral and metaphysical assumptions. If we hear “judgment”, we should assume it stands for assessing someone’s ability as a dog-walker, rather than some package of moral evaluations. If we hear “punishment”, we should assume it stands for some kind of consequentialist negative reinforcement, rather than the belief that some people deserve to suffer.

Simplicio: I’m not sure that’s exactly how I would describe my position, but go on.

Sophisticus: What about the term “value difference” itself? It seems like you’re being a hypocrite here. After all, there are plenty of things that look like value differences to us – I disagree with people on moral questions about a thousand times a day. But you insist that none of those are real value differences, and instead we must reserve the concept of “value difference” for some Platonic perfect value difference that doesn’t exist in real life.

Simplicio: I agree these rarely exist in practice. I think the difference is that they can exist in theory. Imagine a paperclip maximizer robot vs. a paperclip minimizer robot. These have a true value difference. They’re not doing the same thing and applying different words to it. If I were a paperclip minimizer, I could never get a paperclip maximizer to say it wanted to do a specific thing, and then sarcastically say “If only there was a word for that thing!”, and then it would have to admit that word was “paperclip minimization”.

Sophisticus: Okay. But there are no paperclip maximizers or minimizers in real life. So I still think you’re denying real-life use of a term in favor of some Platonic version that doesn’t exist.

Simplicio: I just don’t like the connotations of “value difference”. I think they suggest the non-existent thing.

Sophisticus: That’s exactly what I’ve been trying to tell you is true of “laziness” or “judgment”, and you never let me get away with it!

Simplicio: I just realized I have to, uh, wash my toaster. I’ll be back in a minute.

Ten years go by. Sophisticus is never able to find Simplicio again. He seems to have disappeared. Sophisticus knows it’s bizarre to think somebody would skip town and change identities just to avoid a philosophical debate, but he cannot think of any other explanation. One day, Sophisticus goes on a vacation to a city very far away, and becomes hopelessly lost. He notices a stranger in glasses and a mustache, who looks familiar in a way he cannot quite place, and asks him for directions.

Sophisticus: Excuse me, do you know the way to city center?

Stranger: Don’t worry, good sir! You’re in city center right now!

Sophisticus: But…this whole area looks suburban. And the edge of the city is right there – past that street there’s only rolling fields as far as the eye can see. How can this be city center?

Stranger: The whole city is the city center!

Sophisticus: What?

Stranger: That’s right. We decided that it was pretty stigmatizing to say that certain parts of the city were non-central. You know, it implied that the people there were just a bunch of yokels who weren’t real citizens the same way everyone else was. So we held a referendum, and everyone agreed that the whole city would be classified as the city center.

Sophisticus: That’s pretty weird, but…look, I need to get to the tourist office, and I know it’s in city center, so if you’re not going to direct me to city center…can you just tell me what part of town the tourist office is in?

Stranger: It’s in the center. The whole city is center.

Sophisticus: Let’s try this again. Please point me in the direction of the Tourist Office.

Stranger: Perhaps you think the Tourist Office is some kind of mystical place that will answer all of your tourist-related questions and give you a perfect vacation, but that everywhere-not-the-Tourist-Office is some kind of hellscape with nothing of any value to visitors? In that case, I reject your Tourist-Office vs. Non-Tourist-Office distinction. There is no such thing as the Tourist Office.

Sophisticus: By “Tourist Office”, I just mean an ordinary non-perfect building with a greater-than-average propensity to give tourist information!

Stranger: Well, if you mean “building” to mean something 100% artificial without even natural materials which is hermetically insulated from the outside, then really there aren’t any buildings here. There are just –

Sophisticus: Wait, I know you! You’re my old friend Simplicio, who skipped town so he didn’t have to answer my challenge about his theory of value differences.

Simplicio: Guilty as charged. But now I hope you better understand what I mean. There is a sense in which you’re right, and a sense in which I’m right. Words both convey useful information and shape our connotations and perceptions. While we can’t completely ignore the latter role, it’s also dangerous to posit fundamental value differences between people who use words one way and people who use them another. My concern is that I’ve seen people say “I am the kind of person who doesn’t believe in laziness, or in punishment, or in judging others. But that guy over there accuses people of being lazy, wants people to suffer, and does judge others. Clearly we have fundamental value differences and must be enemies.” All I’m trying to do is say that those people may have differing factual beliefs on how to balance the information-bearing content of words versus their potential connotations. If we understand the degree to which other people’s differences from us are based on factual rather than fundamental value differences, we can be humbler and more understanding when we have to interact with them.

Sophisticus: Okay, but seriously, I need to get to city center.

Simplicio: The whole city is the city center.

Sophisticus: Screw you.

Simplicio: Hey, don’t be so judgmental.


Fundamental Value Differences Are Not That Fundamental

I.

Ozy (and others) talk about fundamental value differences as a barrier to cooperation.

On their model (as I understand it) there are at least two kinds of disagreement. In the first, people share values but disagree about facts. For example, you and I may both want to help the Third World. But you believe foreign aid helps the Third World, and I believe it props up corrupt governments and discourages economic self-sufficiency. We should remain allies while investigating the true effect of foreign aid, after which our disagreement will disappear.

In the second, you and I have fundamentally different values. Perhaps you want to help the Third World, but I believe that a country should only look after its own citizens. In this case there’s nothing to be done. You consider me a heartless monster who wants foreigners to starve, and I consider you a heartless monster who wants to steal from my neighbors to support random people halfway across the world. While we can agree not to have a civil war for pragmatic reasons, we shouldn’t mince words and pretend not to be enemies. Ozy writes (liberally edited, read the original):

From a conservative perspective, I am an incomprehensible moral mutant…however, from my perspective, conservatives are perfectly willing to sacrifice things that actually matter in the world– justice, equality, happiness, an end to suffering– in order to suck up to unjust authority or help the wealthy and undeserving or keep people from having sex lives they think are gross.

There is, I feel, opportunity for compromise. An outright war would be unpleasant for everyone…And yet, fundamentally… it’s not true that conservatives as a group are working for the same goals as I am but simply have different ideas of how to pursue it…my read of the psychological evidence is that, from my value system, about half the country is evil and it is in my self-interest to shame the expression of their values, indoctrinate their children, and work for a future where their values are no longer represented on this Earth. So it goes.

And from the subreddit comment by GCUPokeItWithAStick:

I do think that at a minimum, if you believe that one person’s interests are intrinsically more important than another’s (or as the more sophisticated versions play out, that ethics is agent-relative), then something has gone fundamentally wrong, and this, I think, is the core of the distinction between left and right. Being a rightist in this sense is totally indefensible, and a sign that yes, you should give up on attempting to ascertain any sort of moral truth, because you can’t do it.

I will give this position its due: I agree with the fact/value distinction. I agree it’s conceptually very clear what we’re doing when we try to convince someone who shares our values of a factual truth, and that it’s confusing and maybe impossible to change someone’s values.

But I think the arguments above are overly simplistic. I think rationalists might be especially susceptible to this kind of thing, because we often use economic models where an agent (or AI) has a given value function (eg “produce paperclips”) which generates its actions. This kind of agent really does lack common ground with another agent whose goal function is different. But humans rarely work like this. And even when they do, it’s rarely in the ways we think. We are far too quick to imagine binary value differences that line up exactly between Us and Them, and far too slow to recognize the complicated and many-scaled pattern of value differences all around us.

Eliezer Yudkowsky writes, in Are Your Enemies Innately Evil?:

On September 11th, 2001, nineteen Muslim males hijacked four jet airliners in a deliberately suicidal effort to hurt the United States of America. Now why do you suppose they might have done that? Because they saw the USA as a beacon of freedom to the world, but were born with a mutant disposition that made them hate freedom?

Realistically, most people don’t construct their life stories with themselves as the villains. Everyone is the hero of their own story. The Enemy’s story, as seen by the Enemy, is not going to make the Enemy look bad. If you try to construe motivations that would make the Enemy look bad, you’ll end up flat wrong about what actually goes on in the Enemy’s mind.

So what was going through the 9/11 hijackers’ minds? How many value differences did they have from us?

It seems totally possible that the hijackers had no value differences from me at all. If I believed in the literal truth of Wahhabi Islam – a factual belief – I might be pretty worried about the sinful atheist West. If I believed that the West’s sinful ways were destroying my religion, and that my religion encoded a uniquely socially beneficial way of life – both factual beliefs – I might want to stop it. And if I believed that a sufficiently spectacular terrorist attack would cause people all around the world to rise up and throw off the shackles of Western oppression – another factual belief – I might be prepared to sacrifice myself for the greater good. If I thought complicated Platonic contracts of cooperation and nonviolence didn’t work – sort of a factual belief – then my morals would no longer restrain me.

But of course maybe the hijackers had a bunch of value differences. Maybe they believed that American lives are worth nothing. Maybe they believed that striking a blow for their homeland is a terminal good, whether or not their homeland is any good or its religion is true. Maybe they believed any act you do in the name of God is automatically okay.

I have no idea how many of these are true. But I would hate to jump to conclusions, and infer from the fact that they crashed two planes that they believe crashing planes is a terminal good. Or infer from someone opposing abortion that they just think oppressing women is a terminal value. Or infer from people committing murder that they believe in murderism, the philosophy that says that murder is good. I think most people err on the side of being too quick to dismiss others as fundamentally different, and that a little charity in assessing their motives can go a long way.

II.

But that’s too easy. What about people who didn’t die in self-inflicted plane crashes, and who can just tell us their values? Consider the original example – foreign aid. I’ve heard many isolationists say in no uncertain terms that they believe we should not send money to foreign countries, and that this is a basic principle and not just a consequence of some factual belief like that foreign countries would waste it. Meanwhile, I know other people who argue that we should treat foreigners exactly the same as our fellow citizens – indeed, that it would be an affront to basic compassion and to the unity of the human race not to do so. Surely this is a strong case for actual value differences?

My only counter to this line of argument is that almost nobody, me included, ever takes it seriously or to its logical conclusion. I have never heard any cosmopolitans seriously endorse the idea that the Medicaid budget should be mostly redirected from the American poor (who are already plenty healthy by world standards) and used to fund clinics in Africa, where a dollar goes much further. Perhaps this is just political expediency, and some would talk more about such a plan if they thought it could pass. But in that case, they should realize that they are very few in number, and that their value difference isn’t just with conservatives but with the overwhelming majority of their friends and their own side.

And if nativist conservatives are laughing right now, I know that some of them have given money to foreign countries affected by natural disasters. Some have even suggested the government do so – when the US government sent resources to Japan to help rescue survivors of the devastating Fukushima tsunami, I didn’t hear anyone talk about how those dollars could better be used at home.

Very few people have consistent values on questions like these. That’s because nobody naturally has principles. People take the unprincipled mishmash of their real opinions, extract principles out of it, and follow those principles. But the average person only does this very weakly, to the point of having principles like “it’s bad when you lie to me, so maybe lying is wrong in general” – and even moral philosophers do it less than a hundred percent and apply their principles inconsistently.

(this isn’t to say those who have consistent principles are necessarily any better grounded. I’ve talked a lot about shifting views of federalism: when the national government was against gay marriage, conservatives supported top-down decision-making at the federal level, and liberals protested for states’ rights. Then when the national government came out in support, conservatives switched to wanting states’ rights and liberals switched to wanting top-down federal decisions. We can imagine some principled liberal who, in 1995, said “It seems to me right now that state rights are good, so I will support them forevermore, even when it hurts my side”. But her belief still would have ended up basically determined by random happenstance; in a world where the government started out supporting gay marriage but switched to opposing it, she would have held – and stuck to – the opposite principle)

But I’m saying that what principle you verbalize (“I believe we must treat foreigners exactly as our own citizens!”) isn’t actually that interesting. In reality, there’s a wide spectrum of what people will do with foreigners. If we imagine it as a bell curve, the far right end has a tiny number of hyper-consistent people who oppose any government money going abroad unless it directly helps domestic citizens. A little further towards the center we get the people who say they believe this, but will support heroic efforts to rescue Japanese civilians from a tsunami. The bulge in the middle is people who want something like the current level of foreign aid, as long as it goes to sufficiently photogenic children. Further to the left, we get the people I’m having this discussion with, who usually support something like a bit more aid and open borders. And on the far left, we get another handful of hyper-consistent people, who think the US government should redirect the Medicaid budget to Africa.

If you’re at Point N in some bell curve, how far do you have to go before you come to someone with “fundamental value differences” from you? How far do you have to go before someone is inherently your enemy, cannot be debated with, and must be crushed in some kind of fight? If the answer is “any difference at all”, I regret to inform you that the bell curve is continuous; there may not be anyone with exactly the same position as you.

And that’s just the one issue of foreign aid. Imagine a hundred or a thousand such issues, all equally fraught. God help GCU, who goes further and says you’re “indefensible” if you believe any human’s interests are more important than any other’s. Does he (I’ll assume it’s a he) do more to help his wife when she’s sick than he would to help a random stranger? This isn’t meant to be a gotcha, it’s meant to be an example of how we formulate our morality. Person A cares more about his wife than a random person, and also donates some token amount to help the poor in Africa. He dismisses caring about his wife as noise, then extrapolates from the Africa donation to say “we must help all people equally”. Person B also cares more about his wife than a random person, and also donates some token amount to Africa. He dismisses the Africa donation as noise, then extrapolates from his wife to “we must care most about those closest to us”. I’m not saying that how each person frames his moral principle won’t have effects later down the line, but those effects will be the tail wagging the dog. If A and B look at each other and say “I am an everyone-equally-er, you are a people-close-to-you-first-er, we can never truly understand one another, we must be sworn enemies”, they’re putting a whole lot more emphasis on which string of syllables they use to describe their mental processes than really seems warranted.

Why am I making such a big deal of this? Isn’t a gradual continuous value difference still a value difference?

Yes. But I expect that (contra the Moral Foundations idea) both the supposed-nativist and the supposed-cosmopolitan have at least a tiny bit of the instinct toward nativism and the instinct toward cosmopolitanism. They may be suppressing one or the other in order to fit their principles. The nativist might be afraid that if he admitted any instinct toward cosmopolitanism, people could force him to stop volunteering at his community center, because his neighbor’s children are less important than starving Ethiopians and he should be helping them somehow instead. The cosmopolitan might be afraid that if he admitted any instinct toward preferring people close to him, it would justify a jingoistic I’ve-got-mine attitude that thinks of foreigners as subhuman.

But the idea that they’re inherently different, and neither can understand the other’s appeals or debate each other, is balderdash. A lot of the our-values-are-just-inherently-different talk I’ve heard centers around immigration. Surely liberals must have some sort of strong commitment to the inherent moral value of foreigners if they’re so interested in letting them into the country? Surely conservatives must have some sort of innate natives-first mentality to think they can just lock people out? But…

Okay. I admit this is a question about hard work and talents, which is a factual question. But we both know that you would get basically the same results if you asked “IMMIGRATION GOOD OR BAD?” or “DO IMMIGRANTS HAVE THE SAME RIGHTS TO BE IN THIS COUNTRY AS THE NATIVE BORN?” or whatever. And what we see is that this is totally contingent and dependent on the politics of the moment. Of all those liberals talking about how they can’t possibly comprehend conservatives because being against immigration would just require completely alien values, half of them were anti-immigrant ten years ago. Of all those conservatives talking about how liberals can never be convinced by mere debate because debate can’t cut across fundamental differences, they should try to figure out why their own party was half again as immigrant-friendly in 2002 as in 2010.

I don’t think anyone switched because of anything they learned in a philosophy class. They switched because it became mildly convenient to switch, and they had a bunch of pro-immigrant instincts and anti-immigrant instincts the whole time, so it was easy to switch which words came out of their mouths as soon as it became convenient to do so.

So if the 9/11 hijackers told me they truly placed zero value on American lives, I would at least reserve the possibility that sure, this is something you say when you want to impress your terrorist friends, but that in a crunch – if they saw an anvil about to drop on an American kid and had only a second to push him out of the way – they would end up having some of the same instincts as the rest of us.

III.

Is there anyone at all whom I am willing to admit definitely, 100%, in the most real possible way, has different values than I do?

I think so. I remember a debate I had with my ex-girlfriend. Both of us are atheist materialist-computationalist utilitarian rationalist effective altruist liberal-tarians with 99% similar views on every political and social question. On the other hand, it seemed axiomatic to me that it wasn’t morally good/obligatory to create extra happy people (eg have a duty to increase the population from 10,000 to 100,000 people in a way that might eventually create the Repugnant Conclusion), and it seemed equally axiomatic to her that it was morally good/obligatory to do that. We debated this maybe a dozen times throughout our relationship, and although we probably came to understand each other’s position a little more, and came to agree it was a hard problem with some intuitions on both sides, we didn’t come an inch closer to agreement.

I’ve had a few other conversations that ended with me feeling the same way. I may not be the typical Sierra Club member, but I consider myself an environmentalist in the sense of liking the environment and wanting it to be preserved. But I don’t think I value biodiversity for its own sake – if you offered me something useful in exchange for half of all species going extinct – promising that they would all be random snails, or sponges, or some squirrel species that looked exactly like other squirrel species, or otherwise not anything we cared about – I’d take it. If you offered me all charismatic megafauna being relegated to zoos in exchange for lots of well-preserved beautiful forests that people could enjoy whenever they wanted, I would take that one too. I know other people who consider themselves environmentalists who are horrified by this. Some of them agree with me on every single political issue that real people actually debate.

I think these kinds of things are probably real fundamental value differences. But if I’m not sure I have any fundamental value differences with the 9/11 hijackers, and I am sure I have one with one of the people I’m closest to in the entire world, how big a deal is it, exactly? The world isn’t made of Our Tribe with our fundamental values and That Tribe There with their fundamental values. It’s made of a giant mishmash of provisional things that solidify into values at some point but can be unsolidified by random chance or temporary advantage, and everyone probably has a couple unexplored value differences and unexpected value similarities with everyone else.

This means that trying to use shaming and indoctrination to settle value differences is going to be harder than you think. Successfully defeat the people on the other side of the One Great Binary Value Divide That Separates Us Into Two Clear Groups, and you’re going to notice you still have some value differences with your allies (if you don’t now, you will in ten years, when the political calculus changes slightly and their deepest ethical beliefs become totally different). Beat your allies, and you and the subset of remaining allies will still have value differences. It’s value differences all the way down. You will have an infinite number of fights, and you’re sure to lose some of them. Have you considered getting principles and using asymmetric weapons?

I’m not saying you don’t have to fight for your values. The foreign aid budget still has to be some specific number, and if your explicitly-endorsed principles disagree with someone else’s explicitly-endorsed principles, then you’ve got to fight them to determine what it is.

But “remember, liberals and conservatives have fundamental value differences, so they are two tribes that can’t coexist” is the wrong message. “Remember, everyone has weak and malleable value differences with everyone else, and maybe a few more fundamental ones though it’s hard to tell, and neither type necessarily lines up with tribes at all, so they had damn well better learn to coexist” is more like it.


OT106: Alexios I Commentos

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server. Also:

1. I had originally asked teams in the Adversarial Collaboration Contest to be done by today. I would like each of the fifteen teams who originally signed up to check in (as a reply to the first comment on this thread) and tell me whether you’re done, whether you need more time, or whether you’ve given up. If done, please send your finished product to scott[at]shireroth[dot]org.

2. I’m going to write some posts soon that reference Conflict vs. Mistake, but I’m not entirely happy with it as some people said they thought it was wrong in important ways. I tried talking to those people and didn’t get a good feel for what they disliked, especially whether they rejected the idea that there was a dichotomy at all or just thought my post misrepresented one side of it. I would be interested in having someone who does think there is a dichotomy but thinks I misrepresented it rewrite the post, changing it as little as possible except to correct what they thought the misrepresentation was. If anyone does a good enough job of this I’ll post it on here as a new post and link the original to it.

3. Comments of the week are by bbeck, a drug patent lawyer who explains how a melatonin patent could incentivize supplement companies to sell the wrong dose, and how drug dosing patents work more generally.


Did A Melatonin Patent Inspire Current Dose Confusion?

Yesterday I wrote about melatonin, mentioning that most drugstore melatonin supplements were 10x or more the recommended dose. A commenter on Facebook pointed me to an interesting explanation of why.

Dr. Richard Wurtman, an MIT scientist who helped discover melatonin’s role in the body and pioneer its use as a sleep aid, writes:

MIT was so excited about our research team’s melatonin-sleep connection discovery that they decided to patent the use of reasonable doses of melatonin—up to 1 mg—for promoting sleep.

But they made a big mistake. They assumed that the FDA would want to regulate the hormone and its use as a sleep therapy. They also thought the FDA wouldn’t allow companies to sell melatonin in doses 3-times, 10-times, even 15-times more than what’s necessary to promote sound sleep.

Much to MIT’s surprise, however, the FDA took a pass on melatonin. At that time, the FDA was focusing on other issues, like nicotine addiction, and they may have felt they had bigger fish to fry.

Also, the FDA knew that the research on melatonin showed it to be non-toxic, even at extremely high doses, so they probably weren’t too worried about how consumers might use it. In the end, and as a way of getting melatonin on to the market, the FDA chose to label it a dietary supplement, which does not require FDA regulation. Clearly, this was wrong because melatonin is a hormone, not a dietary supplement.

Quickly, supplement manufacturers saw the huge potential in selling melatonin to promote good sleep. After all, millions of Americans struggled to get to sleep and stay asleep, and were desperate for safe alternatives to anti-anxiety medicines and sleeping pills that rarely worked well and came with plenty of side effects.

Also, manufacturers must have realized that they could avoid paying royalties to MIT for melatonin doses over the 1 mg measure. So, they produced doses of 3 mg, 5 mg, 10 mg and more! Their thinking–like so much else in our American society–was likely, “bigger is better!” But, they couldn’t be more wrong.

So he’s saying that…in order to get around a patent on using the correct dose of melatonin…supplement manufacturers…used the wrong dose of melatonin? I enjoy collecting stories of all the crazy perversities created by our current pharmaceutical system, but this one really takes the cake.

Assuming it’s true, that is. Commenter Rodrigo brings up some reasons to be suspicious:

1. Who would patent a drug only up to a certain dose? Isn’t this really dumb?

2. To avoid the patent on the correct dose, drugstores just have to sell more than 1 mg – for example, 2 mg. But they actually sell up to 10 mg.

To these I would add:

3. Lots of supplements are very high dose. When I Google Vitamin C, the first product that comes up advertises that it has 1111% of the recommended daily allowance, which seems better optimized for numerological purposes than medical ones.

4. A few companies do sell melatonin at the right dose range, and MIT hasn’t sued them yet.

Normally I would find these considerations pretty persuasive, but I feel like the guy who discovered melatonin and ran a pharmaceutical company for a while knows more about the history of melatonin and pharmaceutical regulations than I do.

From last week:

This kind of thing is the endless drudgery of rationality training…questions like “How much should you discount a compelling-sounding theory based on the bias of its inventor?” And “How much does someone being a famous expert count in their favor?” And “How concerned should we be if a theory seems to violate efficient market assumptions?” And “How do we balance arguments based on what rationally has to be true, vs. someone’s empirical but fallible data sets?”

Here I’m just really skeptical of the MIT patent story. Wurtman seems to admit that “bigger is better” played a role. Maybe the patent thing was a very small issue, around the beginning of melatonin sales, and was soon forgotten – but the tradition of expecting melatonin to be very high dose stuck around forever, mostly for other reasons?

EDIT: Commenters, including a patent lawyer, have filled in the rest of the story. Because melatonin is a natural hormone and not an invention, patents can only cover specific uses of it. The MIT patent covered the proper way to use it for sleep; a broader patent might not have been granted. The patent probably guided supplement companies, but expired about five years ago. It’s now legal to produce melatonin 0.3 mg pills, but people are so used to higher doses that few people do.

Melatonin: Much More Than You Wanted To Know

[I am not a sleep specialist. Please consult with one before making any drastic changes or trying to treat anything serious.]

Van Geijlswijk et al describe supplemental melatonin as “a chronobiotic drug with hypnotic properties”. Using it as a pure hypnotic – a sleeping pill – is like using an AK-47 as a club to bash your enemies’ heads in. It might work, but you’re failing to appreciate the full power and subtlety available to you.

Melatonin is a neurohormone produced by the pineal gland. In a normal circadian cycle, it’s lowest (undetectable, less than 1 pg/ml of blood) around the time you wake up, and stays low throughout the day. Around fifteen hours after waking, your melatonin suddenly shoots up to 10 pg/ml – a process called “dim light melatonin onset”. For the next few hours, melatonin continues to increase, maybe as high as 60 or 70 pg/ml, making you sleepier and sleepier, and presumably at some point you go to bed. Melatonin peaks around 3 AM, then declines until it’s undetectably low again around early morning.
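
If it helps to see that as a curve: here’s a crude Python sketch of the cycle just described (the numbers come from the paragraph above; the piecewise interpolation is my own invention, not anything from the literature):

    def melatonin_pg_ml(h):
        # h = hours since waking; wake at hour 0
        if h < 15:
            return 0.0                          # undetectable all day
        if h < 20:
            return 10 + (h - 15) * 12           # DLMO at ~10 pg/ml, climbing toward ~70
        return max(0.0, 70 - (h - 20) * 17.5)   # peak around "3 AM", back to ~0 by morning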

Is this what makes you sleepy? Yes and no. Sleepiness is a combination of the circadian cycle and the so-called “Process S”. This is an unnecessarily sinister-sounding name for the fact that the longer you’ve been awake, the sleepier you’ll be. It seems to be partly regulated by a molecule called adenosine. While you’re awake, the body produces adenosine, which makes you tired; as you sleep, the body clears adenosine away, making you feel well-rested again.

In healthy people these processes work together. Circadian rhythm tells you to feel sleepy at night and awake during the day. Process S tells you to feel awake when you’ve just risen from sleep (naturally the morning), and tired when you haven’t slept in a long time (naturally the night). Both processes agree that you should feel awake during the day and tired at night, so you do.

When these processes disagree for some reason – night shifts, jet lag, drugs, genetics, playing Civilization until 5 AM – the system fails. One process tells you to go to sleep, the other to wake up. You’re never quite awake enough to feel energized, or quite tired enough to get restful sleep. You find yourself lying in bed tossing and turning, or waking up while it’s still dark and not being able to get back to sleep.

Melatonin works on both systems. It has a weak “hypnotic” effect on Process S, making you immediately sleepier when you take it. It also has a stronger “chronobiotic” effect on the circadian rhythm, shifting what time of day your body considers sleep to be a good idea. Effective use of melatonin comes from understanding both these effects and using each where appropriate.

1. Is melatonin an effective hypnotic?

Yes.

That is, taking melatonin just before you want to get to sleep does help you get to sleep. The evidence on this is pretty unanimous. For primary insomnia, two meta-analyses – one by Brzezinski in 2005 and another by Ferracioli-Oda in 2013 – both find it safe and effective. For jet lag, a meta-analysis by the usually-skeptical Cochrane Collaboration pronounces melatonin “remarkably effective”. For a wide range of primary and secondary sleep disorders, Buscemi et al say in their abstract that it doesn’t work, but a quick glance at the study shows it absolutely does and they are incorrectly under-reporting their own results. The Psychiatric Times agrees with me on this: “Results from another study reported as negative actually demonstrated a statistically significant positive result of a decrease in sleep latency by an average of 7.2 minutes for melatonin”.

Expert consensus generally follows the meta-analyses: melatonin works. I find cautious endorsements by the Mayo Clinic and Johns Hopkins less impressive than its less-than-completely-negative review on Science-Based Medicine, a blog I can usually count on for a hit job on any dietary supplement.

The consensus stresses that melatonin is a very weak hypnotic. The Buscemi meta-analysis cites this as their reason for declaring negative results despite a statistically significant effect – the supplement only made people get to sleep about ten minutes faster. “Ten minutes” sounds pretty pathetic, but we need to think of this in context. Even the strongest sleep medications, like Ambien, only show up in studies as getting you to sleep ten or twenty minutes faster; this New York Times article says that “viewed as a group, [newer sleeping pills like Ambien, Lunesta, and Sonata] reduced the average time to go to sleep 12.8 minutes compared with fake pills, and increased total sleep time 11.4 minutes.” I don’t know of any statistically-principled comparison between melatonin and Ambien, but the difference is hardly (pun not intended) day and night.

Rather than say “melatonin is crap”, I would argue that all sleeping pills have measurable effects that vastly underperform their subjective effects. The linked article speculates on one reason this might be: people have low awareness around the time they get to sleep, and a lot of people’s perception of whether they’re insomniac or not is more anxiety (or sometimes literally a dream) than reality. This is possible, but I also think of this in terms of antidepressant studies, which find similarly weak objective effects despite patients (and doctors) who swear by them and say they changed their lives. If I had to guess, I would say that the studies include an awkward combination of sick and less-sick people and confuse responders and non-responders. Maybe this is special pleading. I don’t know. But if you think any sleeping pill works well, melatonin doesn’t necessarily work much worse than that.

Sleep latency statistics are hard to compare to one another because they’re so dependent on the study population. If your subjects take an hour to fall asleep, perhaps melatonin could shave off thirty-four minutes. But if your subjects take twenty minutes to fall asleep, then no sleeping pill will ever take off thirty-four minutes, and even an amazing sleeping pill might struggle to make fifteen. I cannot directly compare the people who say melatonin gives back ten minutes to the people who say melatonin gives back thirty-four minutes to the people who say Ambien gives back twelve, but my totally unprincipled guess is that melatonin is about a third as strong as Ambien. It also has about a hundred times fewer side effects, so there’s definitely a place for it in sleep medicine.

2. What is the right dose of melatonin?

0.3 mg.

“But my local drugstore sells 10 mg pills! When I asked if they had anything lower, they looked through their stockroom and were eventually able to find 3 mg pills! And you’re saying the correct dose is a third of a milligram?!”

Yes. Most existing melatonin tablets are around ten to thirty times the correct dose.

Many early studies were done on elderly people, who produce less endogenous melatonin than young people and so are considered especially responsive to the drug. Several lines of evidence determined that 0.3 mg was the best dose for this population. Elderly people given doses around 0.3 mg slept better than those given 3 mg or more and had fewer side effects (Zhdanova et al 2001). A meta-analysis of dose-response relationships concurred, finding a plateau effect around 0.3 mg, with doses after that having no more efficacy, but worse side effects (Brzezinski et al, 2005). And doses around 0.3 mg cause blood melatonin spikes most similar in magnitude and duration to the spikes seen in healthy young people with normal sleep (Vural et al, 2014).

Other studies were done on blind people, who are especially sensitive to melatonin since they lack light cues to entrain their circadian rhythms. This is a little bit of a different indication, since it’s being used more as a chronobiotic than a sleeping pill, but the results were very similar: lower doses worked better than higher doses. For example, in Lewy et al 2002, nightly doses of 0.5 mg worked to get a blind subject sleeping normally at night; doses of 20 mg didn’t. They reasonably conclude that 20 mg is such a high dose that it stays in the body all day, defeating the point of a hormone whose job is to signal nighttime. Other studies on the blind have generally confirmed that doses of around 0.3 to 0.5 mg are optimal.

There have been disappointingly few studies on sighted young people. One such study, Attenburrow et al 1996, finds that 1 mg works but 0.3 mg doesn’t, suggesting these people may need slightly higher doses, but this study is a bit of an outlier. Another Zhdanova study on 25 year olds found both doses to work equally well. And Pires et al, studying 22-24 year olds, found that 0.3 mg worked better than 1.0 mg. I am less interested in judging the 0.3 mg vs. 1.0 mg debate than in pointing out that both numbers are much lower than the 3 – 10 mg doses found in the melatonin tablets sold in drugstores.

UpToDate, the gold standard research database used by doctors, agrees with these low doses. “We suggest the use of low, physiologic doses (0.1 to 0.5 mg) for insomnia or jet lag (Grade 2B). High-dose preparations raise plasma melatonin concentrations to a supraphysiologic level and alter normal day/night melatonin rhythms.” Mayo Clinic makes a similar recommendation of 0.5 mg. Johns Hopkins’ experts almost agree: they say “less is more” but end up chickening out and recommending 1 to 3 mg, which is well above what the studies would suggest.

Based on a bunch of studies that either favor the lower dose or show no difference between doses, plus clear evidence that 0.3 mg produces an effect closest to natural melatonin spikes in healthy people, plus UpToDate usually having the best recommendations, I’m in favor of the 0.3 mg number. I think you could make an argument for anything up to 1 mg. Anything beyond that and you’re definitely too high. Excess melatonin isn’t grossly dangerous, but tends to produce tolerance and might mess up your chronobiology in other ways. Based on anecdotal reports and the implausibility of becoming tolerant to a natural hormone at the dose you naturally have it, I would guess sufficiently low doses are safe and effective long term, but this is just a guess, and most guidelines are cautious about saying anything beyond three months or so of use.

3. What are circadian rhythm disorders? How do I use melatonin for them?

Circadian rhythm disorders are when your circadian rhythm doesn’t match the normal cycle where you want to sleep at night and wake up in the morning.

The most popular circadian rhythm disorder is “being a teenager”. Teenagers’ melatonin cycle is naturally shifted later, so that they don’t want to go to bed until midnight or later, and don’t want to wake up until eight or later. This is an obvious mismatch with school starting times, leading to teenagers either not getting enough sleep, or getting their sleep at times their body doesn’t want to be asleep and isn’t able to use it properly. This is why every reputable sleep scientist and relevant scientific body keeps telling the public school system to start later.

When this kind of late sleep schedule persists into adulthood or becomes too distressing, we call it Delayed Sleep Phase Disorder. People with DSPD don’t get tired until very late, and will naturally sleep late if given the chance. The weak version of this is “being a night owl” or “not being a morning person”. The strong version just looks like insomnia: you go to bed at 11 PM, toss and turn until 2 AM, wake up when your alarm goes off at 7, and complain you “can’t sleep”. But if you can sleep at 2 AM, consistently, regardless of when you wake up, and you would fall asleep as soon as your head hit the pillow if you first got into bed at 2, then this isn’t insomnia – it’s DSPD.

The opposite of this pattern is Advanced Sleep Phase Disorder. This is most common in the elderly, and I remember my grandfather having this. He would get tired around 6 PM, go to bed by 7, wake around 1 or 2 AM, and start his day feeling fresh and alert. But the weak version of this is the person who wakes up at 5 each morning even though their alarm doesn’t go off until 8 and they could really use the extra two hours’ sleep. These people would probably do fine if they just went to bed at 8 or 9, but the demands of work and a social life make them feel like they “ought” to stay up as late as everyone else. So they go to bed at 11, wake up at 5, and complain of “terminal insomnia”.

Finally, there’s Non-24-Hour Sleep Disorder, where somehow your biological clock ended up deeply and unshakeably convinced that days on Earth are twenty-five (or whatever) hours long, and decides this is the hill it wants to die on. So if you naturally sleep 11 – 7 one night, you’ll naturally sleep 12 – 8 the next night, 1 to 9 the night after that, and so on until either you make a complete 24-hour cycle or (more likely) you get so tired and confused that you stay up 24+ hours and break the cycle. This is most common in blind people, who don’t have the visual cues they need to remind themselves of the 24 hour day, but it happens in a few sighted people also; Eliezer Yudkowsky has written about his struggles with this condition.

Melatonin effectively treats these conditions, but you’ve got to use it right.

The general heuristic is that melatonin drags your sleep time in the direction of the time you take it.

So if you want to go to sleep (and wake up) earlier, you want to take melatonin early in the day. How early? Van Geijlswijk et al sums up the research as saying it is most effective “5 hours prior to both the traditionally determined [dim light melatonin onset] (circadian time 9)”. If you don’t know your own melatonin cycle, your best bet is to take it 9 hours after you wake up (which is presumably about seven hours before you go to sleep).
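
To make the arithmetic concrete, here’s a minimal Python sketch of that heuristic (the nine-hours-after-waking rule is from the paragraph above; the function itself is just my illustration):

    from datetime import datetime, timedelta

    def phase_advance_dose_time(wake_time):
        # Heuristic from above: dose ~9 hours after waking,
        # ie ~5 hours before typical DLMO and ~7 hours before sleep.
        wake = datetime.strptime(wake_time, "%H:%M")
        return (wake + timedelta(hours=9)).strftime("%H:%M")

    print(phase_advance_dose_time("08:00"))  # "17:00", ie 5 PM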

What if you want to go to sleep (and wake up) later? Our understanding of the melatonin cycle strongly suggests melatonin taken first thing upon waking up would work for this, but as far as I know this has never been formally investigated. The best I can find is researchers saying that they think it would happen and being confused why no other researcher has investigated this.

And what about non-24-hour sleep disorders? I think the goal in treatment here is to advance your phase each day by taking melatonin at the same time, so that your sleep schedule is more dependent on your own supplemental melatonin than your (screwed up) natural melatonin. I see conflicting advice about how to do this, with some people saying to use melatonin as a hypnotic (ie just before you go to bed) and others saying to use it on a typical phase advance schedule (ie nine hours after waking and seven before sleeping, plausibly about 5 PM). I think this one might be complicated, and a qualified sleep doctor who understands your personal rhythm might be able to tell you which schedule is best for you. Eliezer says the latter regimen had very impressive effects for him (search “Last but not least” here). I’m interested in hearing from the MetaMed researcher who gave him that recommendation on how they knew he needed a phase advance schedule.

Does melatonin used this way cause drowsiness (eg at 5 PM)? I think it might, but probably such a minimal amount compared to the non-sleep-conduciveness of the hour that it doesn’t register.

Melatonin isn’t the only way to advance or delay sleep phase. Here is a handy cheat sheet of research findings and theoretical predictions:

TO TREAT DELAYED SLEEP PHASE DISORDER (ie you go to bed too late and wake up too late, and you want it to be earlier)
– Take melatonin 9 hours after wake and 7 before sleep, eg 5 PM
– Block blue light (eg with blue-blocker sunglasses or f.lux) after sunset
– Expose yourself to bright blue light (sunlight if possible, dawn simulator or light boxes if not) early in the morning
– Get early morning exercise
– Beta-blockers early in the morning (not generally recommended, but if you’re taking beta-blockers, take them in the morning)

TO TREAT ADVANCED SLEEP PHASE DISORDER (ie you go to bed too early and wake up too early, and you want it to be later)
– Take melatonin immediately after waking
– Block blue light (eg with blue-blocker sunglasses or f.lux) early in the morning
– Expose yourself to bright blue light (sunlight if possible, light boxes if not) in the evening.
– Get late evening exercise
– Beta-blockers in the evening (not generally recommended, but if you’re taking beta-blockers, take them in the evening)

These don’t “cure” the condition permanently; you have to keep doing them every day, or your circadian rhythm will snap back to its natural pattern.

What is the correct dose for these indications? Here there is a lot more controversy than over the hypnotic dose. Of the nine studies van Geijlswijk describes, seven have doses of 5 mg, which suggests this is something of a standard for this purpose. But the only study to compare different doses directly (Mundey et al 2005) found no difference between a 0.3 and 3.0 mg dose. The Cochrane Review on jet lag, which we’ll see is the same process, similarly finds no difference between 0.5 and 5.0.

Van Geijlswijk makes the important point that if you take 0.3 mg seven hours before bedtime, none of it will still be in your system at bedtime, so it’s unclear how this even works. But – well, it is pretty unclear how this works. In particular, I don’t think there’s a great well-understood physiological explanation for how melatonin taken early in the day manages to shift sleep that doesn’t happen until seven hours later.

So I think the evidence points to 0.3 mg being a pretty good dose here too, but I wouldn’t blame you if you wanted to try taking more.

4. How do I use melatonin for jet lag?

Most studies say to take a dose of 0.3 mg just before (your new time zone’s) bedtime.

This doesn’t make a lot of sense to me. It seems like you should be able to model jet lag as a circadian rhythm disorder. That is, if you move to a time zone that’s five hours earlier, you’re in the exact same position as a teenager whose circadian rhythm is set five hours later than the rest of the world’s. This suggests you should use the DSPD protocol of taking melatonin nine hours after waking / five hours before DLMO / seven hours before sleep.

My guess is that for most people, the new time zone’s bedtime is a couple of hours before their old bedtime, so taking melatonin then captures most of the phase-shifting effect, plus the hypnotic effect. But I’m not sure. Maybe taking it earlier would work better. But given that the new light schedule is already working in your favor, I think most people find that taking it at bedtime is more than good enough for them.
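
If you did want to treat jet lag as a pure phase-shift problem, a toy calculation might look like this (entirely my own back-of-the-envelope, not a protocol from any of the studies):

    from datetime import datetime, timedelta

    def jetlag_dose_time(home_wake="07:00", zones_east=5):
        # Eastward travel leaves you phase-delayed relative to local time.
        # Apply the DSPD rule on your body clock (dose 9 hours after waking),
        # then convert to the new local time.
        wake = datetime.strptime(home_wake, "%H:%M")
        body_dose = wake + timedelta(hours=9)
        return (body_dose + timedelta(hours=zones_east)).strftime("%H:%M")

    print(jetlag_dose_time("07:00", 5))  # body-clock 4 PM = "21:00" local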

5. I try to use melatonin for sleep, but it just gives me weird dreams and makes me wake up very early

This is my experience too. When I use melatonin, I find I wake the next morning with a jolt of energy. Although I usually have to grudgingly pull myself out of bed, melatonin makes me wake up bright-eyed, smiling, and ready to face the day ahead of me…

…at 4 AM, invariably. This is why despite my interest in this substance I never take melatonin myself anymore.

There are many people like me. What’s going on with us, and can we find a way to make melatonin work for us?

This bro-science site has an uncited theory. Melatonin is known to suppress cortisol production. And cortisol is inversely correlated with adrenaline. So if you’re naturally very low cortisol, melatonin spikes your adrenaline too high, producing the “wake with a jolt” phenomenon that I and some other people experience. I like the way these people think. They understand individual variability, their model is biologically plausible, and it makes sense. It’s also probably wrong; it has too many steps, and nothing in biology is ever this elegant or sensible.

I think a more parsimonious theory would have to involve circadian rhythm in some way. Even a 0.3 mg dose of melatonin gives your body the absolute maximum amount of melatonin it would ever have during a natural circadian cycle. So suppose I want to go to bed at 11, and take 0.3 mg melatonin. Now my body has a melatonin peak (usually associated with the very middle of the night, like 3 AM) at 11. If it assumes that means it’s really 3 AM, then it might decide to wake up 5 hours later, at what it thinks is 8 AM, but which is actually 4.
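
Spelled out as arithmetic (a toy model of my own, not anything from the literature):

    def predicted_wake(dose_hour=23, natural_peak=3, natural_wake=8):
        # If the dose creates a "peak" at dose_hour, and the body normally
        # wakes (natural_wake - natural_peak) hours after its peak...
        return (dose_hour + (natural_wake - natural_peak)) % 24

    print(predicted_wake())  # 23 + 5 = 28 -> 4, ie 4 AM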

I think I have a much weaker circadian rhythm than most people – at least, I take a lot of naps during the day, and fall asleep about equally well whenever. If that’s true, maybe melatonin acts as a superstimulus for me. The normal tendency to wake up feeling refreshed and alert gets exaggerated into a sudden irresistible jolt of awakeness.

I don’t know if this is any closer to the truth than the adrenaline theory, but it at least fits what we know about circadian rhythms. I’m going to try to put some questions about melatonin response on the SSC survey this year, so start trying melatonin now so you can provide useful data.

What about the weird dreams?

From a HuffPo article:

Dr. Rafael Pelayo, a Stanford University professor of sleep medicine, said he doesn’t think melatonin causes vivid dreams on its own. “Who takes melatonin? Someone who’s having trouble sleeping. And once you take anything for your sleep, once you start sleeping more or better, you have what’s called ‘REM rebound,’” he said.

This means your body “catches up” on the sleep phase known as rapid eye movement, which is characterized by high levels of brain-wave activity.

Normal subjects who take melatonin supplements in the controlled setting of a sleep lab do not spend more time dreaming or in REM sleep, Pelayo added. This suggests that there is no inherent property of melatonin that leads to more or weirder dreams.

Okay, but I usually have normal sleep. I take melatonin sometimes because I like experimenting with psychotropic substances. And I still get some really weird dreams. A Slate journalist says he’s been taking melatonin for nine years and still gets crazy dreams.

We know that REM sleep is most common towards the end of sleep in the early morning. And we know that some parts of sleep structure are responsive to melatonin directly. There’s a lot of debate over exactly what melatonin does to REM sleep, but given all the reports of altered dreaming, I think you could pull together a case that it has some role in sleep architecture that promotes or intensifies REM.

6. Does this relate to any other psychiatric conditions?

Probably, but this is all still speculative.

Seasonal affective disorder is the clearest suspect. We know that the seasonal mood changes don’t have anything to do with temperature; they seem to be based entirely on winter having shorter (vs. summer having longer) days.

There’s some evidence that there are two separate kinds of winter depression. In one, the late sunrises train people to a late circadian rhythm and they end up phase-delayed. In the other, the early sunsets train people to an early circadian rhythm and they end up phase-advanced. Plausibly SAD also involves some combination of the two where the circadian rhythm doesn’t know what it’s doing. In either case, this can make sleep non-circadian-rhythm-congruent and so less effective at doing whatever it is sleep does, which causes mood problems.

How does sunrise time affect the average person, who is rarely awake for the sunrise anyway and usually sleeps in a dark room? I think your brain subconsciously “notices” the time of the dawn even if you are asleep. There are some weird pathways leading from the eyes to the suprachiasmatic nucleus, which governs circadian rhythm, that seem independent of any other kind of vision; these might be keeping tabs on the sunrise if even a little outside light is able to leak into your room. I’m basing this also on the claim that dawn simulators work even if you sleep through them. I don’t know if people get seasonal affective disorder if they sleep in a completely enclosed spot (eg underground) where there’s no conceivable way for them to monitor sunrise times.

Bright light is the standard treatment for SAD for the same reason it’s the standard treatment for any other circadian phase delay, but shouldn’t melatonin work also? Yes, and there are some preliminary studies (paper, article) showing it does. You have to be a bit careful, because some people are phase-delayed and others phase-advanced, and if you use melatonin the wrong way it will make things worse. But for the standard phase-delay type of SAD, normal phase advancing melatonin protocol seems to go well with bright light as an additional treatment.

This model also explains the otherwise confusing tendency of some SAD sufferers to get depressed in the summer. The problem isn’t amount of light, it’s circadian rhythm disruption – which summer can do just as well as winter can.

I’m also very suspicious there’s a strong circadian component to depression, based on a few lines of evidence.

First, one of the most classic symptoms of depression is awakening in the very early morning and not being able to get back to sleep. This is confusing for depressed people, who usually think of themselves as very tired and needing to sleep more, but it definitely happens. This fits the profile for a circadian rhythm issue.

Second, agomelatine, a melatonin analogue, is an effective (ish) antidepressant.

Third, for some reason staying awake for 24+ hours is a very effective depression treatment (albeit temporary; you’ll go back to normal after sleeping). This seems to sort of be a way of telling your circadian rhythm “You can’t fire me, I quit”, and there are some complicated sleep deprivation / circadian shift protocols that try to leverage it into a longer-lasting cure. I don’t know anything about this, but it seems pretty interesting.

Fourth, we checked and depressed people definitely have weird circadian rhythms.

Last of all, bipolar has a very strong circadian component. There aren’t a whole lot of lifestyle changes that really work for preventing bipolar mood episodes, but one of the big ones is keeping a steady bed and wake time. Social rhythms therapy, a rare effective psychotherapy for bipolar disorder, revolves around training bipolar people to control their circadian rhythms.

Theories of why circadian rhythms matter so much revolve either around the idea of pro-circadian sleep – that sleep is more restorative and effective when it matches the circadian cycle – or the idea of multiple circadian rhythms, with the body functioning better when all of them are in sync.

7. How can I know what the best melatonin supplement is?

Labdoor has done purity tests on various brands and has ranked them for you. All the ones they highlight are still ten to thirty times the appropriate dose (also, stop calling them things like “Triple Strength!” You don’t want your medications to be too strong!). As usual, I trust NootropicsDepot for things like this – and sure enough their melatonin (available on Amazon) is exactly 0.3 mg. God bless them.

The Craft And The Codex

The rationalist community started with the idea of rationality as a martial art – a set of skills you could train in and get better at. Later the metaphor switched to a craft. Art or craft, parts of it did get developed: I remain very impressed with Eliezer’s work on how to change your mind and everything presaging Tetlock on prediction.

But there’s a widespread feeling in the rationalist community these days that this is the area where we’ve made the least progress. AI alignment has grown into a developing scientific field. Effective altruism is big, professionalized, and cash-rich. It’s just the art of rationality itself that remains (outside the usual cognitive scientists who have nothing to do with us and are working on a slightly different project) a couple of people writing blog posts.

Part of this is that the low-hanging fruit has been picked. But I think another part was a shift in emphasis.

Martial arts does involve theory – for example, beginning fencers have to learn the classical parries – but it’s a little bit of theory and a lot of practice. Most of becoming a good fencer involves either practicing the same lunge a thousand times in ideal conditions until you could do it in your sleep, or fighting people on the strip.

I’ve been thinking about what role this blog plays in the rationalist project. One possible answer is “none” – I’m not enough of a mathematician to talk much about the decision theory and machine learning work that’s really important, and I rarely touch upon the nuts and bolts of the epistemic rationality craft. I freely admit that (like many people) I tend to get distracted by the latest Outrageous Controversy, and so spend way too much time discussing things like Piketty’s theory of inequality which get more attention from the chattering classes but are maybe less important to the very-long-run future of the world.

Any argument in my own defense is entirely post hoc. But if I can advance such an argument anyway, it would be that this kind of thing is the endless drudgery of rationality training, the equivalent of fighting a thousand bouts and honing your reflexes. Controversial things are, at least, hard problems. There’s a lot of misinformation and conflicting interpretations and differing heuristics and compelling arguments on both sides. Figuring out what’s going on with Piketty is good practice for figuring out what’s going on with deworming etc.

Looking back on the Piketty discussion, people brought up questions like “How much should you discount a compelling-sounding theory based on the bias of its inventor?” And “How much does someone being a famous expert count in their favor?” And “How concerned should we be if a theory seems to violate efficient market assumptions?” And “How do we balance arguments based on what rationally has to be true, vs. someone’s empirical but fallible data sets?”

And in the end, I think we made a lot of progress on those questions. With the help of some very expert commenters, I resolved a lot of my confusions and changed some of my conclusions. That not only gives me a different view of Piketty, but – I hope – long-term trains my thought processes to better understand which heuristics and generators-of-heuristics are reliable in which situations.

Last year, I had a conversation with a friend over how we should think about the latest round of scientific results. I said over the past few years I’d learned to trust science more; he said he’d learned to trust science less. We argued it for a while, and in the end I think we basically had the same insights and perspectives – there are certain situations where science is very definitely trustworthy, and others where it is very definitely untrustworthy. Although I could provide heuristics about which is which, they would be preliminary and much worse than the intuitions that generated them. I live in fear of someone asking something like “So, since all the prominent scientists were wrong about social priming, isn’t it plausible that all the prominent scientists are wrong about homeopathy?” I can come up with some reasons this isn’t the right way to look at things, but my real answer would have to sound more like “After years of looking into this kind of thing, I think I have some pretty-good-though-illegible intuitions about when science can be wrong, and homeopathy isn’t one of those times.”

I think by looking at a lot of complicated cases, and checking back on them after they’re solved (which sometimes happens! Just look at the Fermi Paradox paper from earlier this week!) we can refine those intuitions and get a better idea of how to use the explicit-textbook-rationality-techniques. If this blog still has value to the rationalist project, it’s as a dojo where we do this a couple of times a week and absorb the relevant results.

This is one reason I’m so grateful for everyone’s comments. I only post a Comments Highlights thread every so often, but I’m constantly making updates based on things I read there and getting a chance to double-check which of the things I think are right or wrong. This isn’t just good individual rationality practice, it’s also community rationality practice, and so far I’m pretty happy with how it’s going.

SSC Journal Club: Dissolving The Fermi Paradox

I’m late to posting this, but it’s important enough to be worth sharing anyway: Sandberg, Drexler, and Ord on Dissolving the Fermi Paradox.

(You may recognize these names: Toby Ord founded the effective altruism movement; Eric Drexler kindled interest in nanotechnology; Anders Sandberg helped pioneer the academic study of x-risk, and wrote what might be my favorite Unsong fanfic.)

The Fermi Paradox asks: given the immense number of stars in our galaxy, shouldn’t even a very tiny chance of aliens per star add up to thousands of nearby alien civilizations? But any alien civilization that arose millions of years ago would have had ample time to colonize the galaxy or do something equally dramatic that would leave no doubt as to its existence. So where are they?

This is sometimes formalized as the Drake Equation: think up all the parameters you would need for an alien civilization to contact us, multiply our best estimates for all of them together, and see how many alien civilizations we predict. So for example if we think there’s a 10% chance of each star having planets, a 10% chance of each planet being habitable to life, and a 10% chance of a life-habitable planet spawning an alien civilization by now, one in a thousand stars should have civilization. The actual Drake Equation is much more complicated, but most people agree that our best-guess values for most parameters suggest a vanishingly small chance of the empty galaxy we observe.
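
For concreteness, the toy calculation in Python (just the made-up 10% figures above, not the real Drake parameters):

    p_planets, p_habitable, p_civilization = 0.10, 0.10, 0.10
    p_per_star = p_planets * p_habitable * p_civilization
    print(p_per_star)  # 0.001, ie one civilization per thousand stars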

SDO’s contribution is to point out this is the wrong way to think about it. Sniffnoy’s comment on the subreddit helped me understand exactly what was going on, which I think is something like this:

Imagine we knew God flipped a coin. If it came up heads, He made 10 billion alien civilizations. If it came up tails, He made none besides Earth. Using our one-parameter Drake Equation, we determine that on average there should be 5 billion alien civilizations. Since we see zero, that’s quite the paradox, isn’t it?

No. In this case the mean is meaningless. It’s not at all surprising that we see zero alien civilizations, it just means the coin must have landed tails.

SDO say that relying on the Drake Equation is the same kind of error. We’re not interested in the average number of alien civilizations, we’re interested in the distribution of probability over number of alien civilizations. In particular, what is the probability of few-to-none?

SDO solve this with a “synthetic point estimate” model, where they choose random points from the distribution of possible estimates suggested by the research community, run the simulation a bunch of times, and see how often it returns different values.

According to their calculations, a standard Drake Equation multiplying our best estimates for every parameter together yields a probability of less than one in a million billion billion billion that we’re alone in our galaxy – making such an observation pretty paradoxical. SDO’s own method, taking parameter uncertainty into account, yields a probability of one in three.
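
Here’s a minimal sketch of that point-estimate-vs-distribution argument (the log-uniform ranges are invented for illustration; SDO fit their distributions to estimates from the actual literature):

    import random

    def sample_N(n_stars=1e11):
        # Drake-style product; each uncertain factor drawn log-uniformly
        # across several orders of magnitude (illustrative ranges only).
        N = n_stars
        for lo, hi in [(-3, 0), (-3, 0), (-11, 0)]:
            N *= 10 ** random.uniform(lo, hi)
        return N

    runs = [sample_N() for _ in range(100_000)]
    print(sum(runs) / len(runs))                 # mean: tens of millions of civilizations
    print(sum(N < 1 for N in runs) / len(runs))  # yet P(empty galaxy) comes out around 0.27

The mean is enormous, but it’s driven by rare lucky draws; the typical draw is close to empty, which is the whole point.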

They try their hand at doing a Drake calculation of their own, using their preferred values, and find:

[Figure: their resulting probability distribution for N, the average number of civilizations per galaxy]

If this is right – and we can debate exact parameter values forever, but it’s hard to argue with their point-estimate-vs-distribution logic – then there’s no Fermi Paradox. It’s done, solved, kaput. Their title, “Dissolving The Fermi Paradox”, is a strong claim, but as far as I can tell they totally deserve it.

“Why didn’t anyone think of this before?” is the question I am only slightly embarrassed to ask given that I didn’t think of it before. I don’t know. Maybe people thought of it before, but didn’t publish it, or published it somewhere I don’t know about? Maybe people intuitively figured out what was up (one of the parameters of the Drake Equation must be much lower than our estimate) but stopped there and didn’t bother explaining the formal probability argument. Maybe nobody took the Drake Equation seriously anyway, and it’s just used as a starting point to discuss the probability of life forming?

But any explanation of the “oh, everyone knew this in some sense already” sort has to deal with the fact that a lot of very smart and well-credentialed experts treated the Fermi Paradox very seriously and came up with all sorts of weird explanations. There’s no need for sci-fi theories any more (though you should still read the Dark Forest trilogy). It’s just that there aren’t very many aliens. I think my past speculations on this, though very incomplete and much inferior to the recent paper, come out pretty well here.

(some more discussion here on Less Wrong)

One other highlight hidden in the supplement: in the midst of a long discussion on the various ways intelligent life can fail to form, starting on page 6 the authors speculate on “alternative genetic systems”. If a planet gets life with a slightly different way of encoding genes than our own, it might be too unstable to allow complex life, or too stable to allow a reasonable rate of mutation by natural selection. It may be that abiogenesis can only create very weak genetic codes, and life needs to go through several “genetic-genetic transitions” before it can reach anything capable of complex evolution. If this is path-dependent – ie there are branches that are local improvements but close off access to other better genetic systems – this could permanently arrest the development of life, or freeze it at an evolutionary rate so low that the history of the universe so far is too short a time to see complex organisms.

I don’t claim to understand all of this, but the parts I do understand are fascinating and could easily be their own paper.

OT105: Ethelthread The Unthready

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server. Also:

1. Comment of the week is by AlesZiegler, answering the question “What parts of Piketty’s book have stood the test of time?”

2. But also, see the discussion about the border in the last Open Thread, where people on every part of the political spectrum hash out their differences about Trump’s border policy with an emphasis on “if we’re going to enforce immigration laws, how can we do it more humanely than the current system?”. Especially interesting to me was this comment questioning the idea of “enforcing” vs “not enforcing” immigration law. And also this thread arguing border walls are ineffective at stopping migration, that even “successful” walls like the Israeli border wall and the Berlin Wall mostly relied on guards, and that the bare minimum requirement for a wall being even slightly useful – protection against ladders – is not in Trump’s requirements (suggesting he’s not serious about anything except the symbolism). But I don’t know how to square this with other people’s claims that the Bush-era fence did decrease immigration.

3. I went back, read the last month of comment reports, and banned several people who deserved it. I want to make this explicit so people don’t think bad behavior here isn’t punished. It is – it just takes me a long time to get around to it. Thanks to everyone who uses the report button to report comments to me.

4. I’ll probably be at the South Bay SSC meetup, 2 PM on Saturday July 7, at 3806 Williams Rd, San Jose. If you’re coming, consider emailing David Friedman (address at link) so he knows how many to expect.

Posted in Uncategorized | Tagged | 1,013 Comments

Highlights From The Comments On Piketty

Chris Stucchio recommended Matt Rognlie’s criticisms of Piketty (paper, summary, Voxsplainer).

Rognlie starts by saying that Piketty didn’t correctly account for capital depreciation (ie capital losing value over time) in his calculations. This surprises me, because Piketty says he does in his book (p. 55) but apparently there are technical details I don’t understand. When depreciation is accounted for, capital’s share of income decreases, and it becomes clear that 100% of recent capital-share growth comes from one source: housing.

I can’t find anyone arguing that Rognlie is wrong. I do see many people arguing about the implications, all the way from “this disproves Piketty” to “this is just saying the same thing Piketty was”.

I think it’s saying the same thing Piketty was in that housing is a real thing, and if there’s inequality in housing, then that’s real inequality. And landlords are a classic example of the rentiers Piketty is warning against.

But it’s saying a different thing in that most homeowners use their homes by living in them, not by renting them out. That means they’re not part of Piketty’s rentier class, and so using the amount of capital to represent the power of rentiers is misleading. Rentiers are not clearly increasing and there is no clear upward trend in rentier-vs-laborer inequality. I think this does disprove Piketty’s most shocking thesis.

Rognlie also makes an argument for why increasing the amount of capital will decrease the returns on capital, leading to stable or decreasing income from capital. Piketty argues against this on page 277 of his book, but re-reading it Piketty’s argument now looks kind of weak, especially with the evidence from housing affecting some of his key points.


Grendel Khan highlights the role of housing with an interesting metaphor:

Did someone say housing?

As an illustration, the median homeowner in about half of the largest metros made more off the appreciation of their home than a full-time minimum-wage job. It’s worst in California, of course; in San Jose, the median homeowner made just shy of $100 per working hour.

See also Richard Florida’s commentary. See also everything about how the housing crisis plays out in micro; it is precisely rentier capitalism.


In the original post, I questioned Piketty’s claim that rich people and very-well-endowed colleges got higher rates of return on their investment than ordinary people or less-well-endowed colleges. After all, why can’t poorer people pool their money together, mutual-fund-style, to become an effective rich person who can get higher rate of return? Many people tried to answer this, not always successfully.

brberg points out that Bill Gates – one example of a rich person who’s gotten 10%+ returns per year – has a very specific advantage:

Not sure about Harvard’s endowment, but it’s worth noting that the reason Gates, Bezos, Zuckerberg, and other self-made billionaires have seen their fortunes grow so quickly is that each of them has the vast majority of their wealth invested in a single high-growth company.

This is an extremely high-risk investment strategy that has the potential to pay off fantastically well in a tiny percentage of cases, but it’s not really dependent on the size of the starting stake. Anyone who invested in Microsoft’s IPO would have seen the same rate of return as Gates.

This is a good point, but most of Piketty’s data focuses on college endowments. How do they do it?

Briefling writes:

I’m not sure you can take the wealth management thing at face value. The stock market since 1980 has 10% annualized returns. Instead of trying to replicate whatever Harvard and Yale are doing, why don’t you just put your money in the stock market?

Also a good point, but colleges seem to do this with less volatility than the stock market, which still requires some explanation.

Tyrathalis, a financial planner, adds more information:

One of the things that having /any/ major financial planner does for you, though, is it opens up access to private equity funds that are only advertised to sufficiently high-net-worth individuals and businesses. The primary asset class that super high gains come from is private equity, generally meaning investments in angel funds and off-market startups. The way these funds operate involves you pledging a certain amount of money that they can invest as they choose, but they only call up parts of it periodically. This means that dealing with a few really rich people is much easier than dealing with a ton of poor people, in particular because it is really, really bad if they can’t manage to get all of the money. Their current business model requires only dealing with people who will definitely be able to make their payments when they need to, and since the funds are so large, that means they need to have a few very rich investors. Investment advisors known to advise large fortunes are where they go to find those people.

Also, any given private equity fund is still likely to make a negative return, which is a much bigger deal if you don’t have a lot of money in the first place, so very few people would recommend that you invest in a private equity fund instead of something safer if you aren’t already rich. Higher returns implies higher standard deviation. That’s also why a long time horizon is so significant. The basic activity of asset class investing is to diversify to balance out high variability without diminishing returns too much, but over a long enough time frame the variability matters much less and you can afford to make riskier investments.

Although, getting 10% returns doesn’t require any special connections. The stock market grows at 11% a year, it just has very high variability, so you need to be able to be in the market for several decades to ensure those gains with an all-stock portfolio. A 60/40 split of stocks and bonds will get around 8%, while not requiring more than a few dollars to invest. You can do it on Schwab with only a bit of research. The reason why super rich people and organizations /only/ get 10% returns is that despite private equity managing 20% or more, even they don’t have enough capital and long enough time horizons to stay fully invested in such risky markets. They diversify heavily too, cutting returns in favor of making those returns basically guaranteed.

My main point is that financial planners do things besides stock picking, but one of the things they do is get you into private equity funds, which are the main source of the better returns that rich people can get. However, for reasons of risk management, this isn’t something people who aren’t super rich necessarily ought to imitate. There are only slightly less effective strategies that anyone could imitate, but it’s not smart for everyone to have the same amount of risk. Realistically, most people ought to do something like the 60/40 split I mentioned, and the difference between that and what most people end up getting is due to people being bad at performing optimal strategies even when they know what they are.
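
To put rough numbers on the gap Tyrathalis describes, here’s a quick compound-interest check (my own arithmetic, using his 8%, the ~11% stock figure, and the 20% private equity figure):

    def grow(principal, rate, years):
        return principal * (1 + rate) ** years

    for rate in (0.08, 0.11, 0.20):   # 60/40 split, all-stock, private equity
        print(rate, round(grow(10_000, rate, 30)))
    # $10,000 over 30 years: roughly $100k, $230k, and $2.4M respectively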

And Vaniver adds:

There’s a mutual fund called the Magellan Fund, which was famous for its extreme performance (I believe it was annual growth of 15-20% per year) for about 20 years.

At the end of that streak, someone ran the numbers and discovered that most of the people who had invested in the fund had lost money, because they bought in when the market was high and sold when the market was low.

The problem that mutual funds have is that they don’t know how much money they’re going to have tomorrow, because there are thousands upon thousands of customers who might want some of their money back, or might want to add in some more money, and as a result there are lots of unplanned trades they’ll have to make that only benefit their customers, not them. Many of the best managers insist on terms of the form “you give me money and then can’t take it out for N years” so that they don’t have to deal with this kind of thing (in the short term, at least).

For private equity servicing one large customer, there are far fewer moves of that form, and they’re much easier to predict, and averaging across many small customers still doesn’t duplicate that effect.

I still don’t feel like this explains everything; surely a college with $500 million has about the same risk tolerance and ability to give money on the right time scale as a college with $1 billion? Maybe all of this is just false? J Mann writes:

Is it consensus that Harvard and Yale consistently get better returns than other endowments and than the market? It looks like Harvard at least has had a number of recent bad years, and that some people are suggesting that its results may be based on taking on more risk.

And Anon256 adds:

Indeed; Harvard has done badly enough in the years since Piketty’s book was published that it’s now considering switching to just using index funds.

And Will4071 says:

Just to note, I don’t think large endowments/the very rich really do anything special. This analysis suggests that they actually underperform a levered 60/40 portfolio (which is fairly standard, and something you could easily set up yourself).


Chris Stucchio has a different perspective on rich people making higher rates of return:

It’s also worth reflecting on a point which Piketty makes mathematically, but literally never says in words. If rich people are the best investors, then the best way to create economic growth is to ensure that rich people are the ones controlling investment decisions. Intuitively this makes a lot of sense; Travis Kalanick (and now Dara “the D” Khosrowshahi) are a lot better at transportation than the average autowale. Bezos is a lot better at logistics than my local cell phone store.

See also Paul’s answer to one of my objections to this. Right now it looks like (assuming Piketty is right about this at all), Chris has a point. Does anyone want to try to convince me otherwise?


Phillip Magness, himself an economic history professor, writes:

I’d also urge you to look more skeptically on his income distribution stats (the figure 1.1 above). Several economists, myself included, have been working on the measurement problems that arise from attempting to determine income shares from tax data in recent years. The aforementioned figure comes from a 2003 study by Piketty and his coauthor Emmanuel Saez. While it represented an innovative contribution to the literature, this paper gives generally insufficient treatment to the effect of changes to the tax code itself upon data that derive from income tax reporting.

To put it another way, taxpayers – both wealthy and poor – respond to the way that income tax laws are structured so as to minimize their own tax burdens. They take advantage of incentives and loopholes to lower what they owe. They engage in wealth planning strategies to legally shelter income from high rates of taxation. And some even illegally evade their obligations by misreporting income.

Tax avoidance and evasion rates vary substantially over time and in response to tax code changes, and so do the statistics they generate with the IRS. A major problem in Piketty-Saez is that they do very little to account for this issue over time, and instead simply treat tax-generated stats as if they are representative. Doing so yields a relatively sound measurement of income distributions, provided that the tax code remains relatively stable over long periods of time (e.g. what the U.S. experienced between roughly 1946 and 1980). When the tax code undergoes frequent and major changes though, tax-generated stats become less reliable. And it just so happens that the two periods of “high” inequality on the Piketty-Saez U-curve are also periods of volatility in the tax code: 1913-1945 and 1980-present.

The 1913-45 period is marred by both frequent tax rate swings and an initially small tax base that was rapidly expanded during WWII, combined with the introduction of automatic payroll withholding in 1943. When you account for these and related issues, the extreme inequality of the early 20th century and especially the severe drop it undergoes between 1941-45 become much more subdued. The period from 1980-present is similarly marred by Piketty and Saez’s failure to fully account for the effects of the Tax Reform Act of 1986, which induced substantial income shifting at the top of the distribution to take advantage of differences between the personal and corporate tax rates. Adjusting for that has a similar effect of lowering the depicted rebound.

Taken together, what we’re probably experiencing is a much flatter trend across the 20th century – one that resembles a tea saucer rather than a pronounced U. And that has profound implications for Piketty’s larger prescriptive argument in favor of highly progressive tax rates.

Magness also recommends his 2014 paper and Richard Sutch’s 2017 conceptual replication questioning Piketty’s data. It’s inherently hard to find good data on inequality over the last few centuries, but Magness finds that of the many datasets available, Piketty cherry-picked the ones that best fit the u-shaped curve he wanted to show, estimated some missing data points kind of out of thin air, and made some other questionable decisions. The result is a much less pronounced change in inequality, especially in the US.

The paper is pretty confrontational (on his own blog, Magness’ co-author describes Piketty as making “no-brainers…boneheaded historical errors [that] would be shocking if contained in a high school term paper”). Piketty sort of says a few words in his own defense in this article. But one thing I notice is that it looks like, aside from these authors, everyone is working together on this – the author of one of the pro-Piketty datasets was also a co-author of one of the anti-Piketty datasets, and the author of one of the anti-Piketty datasets has worked with Piketty in the past. This suggests to me that a lot of this is legitimately hard and that the same people, working from different methods, get different results. My main takeaway is that there are many different inequality datasets and Piketty used the most dramatic.


Tlaloc on the Discord provides the European log GDP graphs I wanted:

I think it’s fair to ask – what the heck? Taken literally, doesn’t this suggest WWII was long-run good for Europe – that its “recovery” brought it well above trend?

Eyeballing the Maddison Project data elsewhere shows France, Germany, and the US all posting very similar growth of about 200% between 1960 and 2016.

I need to look into this more, but right now I’m not really buying it.
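
For anyone who wants to poke at this themselves, the exercise behind these graphs is easy to sketch. Here’s a minimal version in Python, using a synthetic series (a 2%/year trend, a wartime crash, then faster catch-up growth) purely to show the mechanics – swap in the actual Maddison series to test the “above trend” claim:

```python
# Fit a log-linear trend on prewar data, extrapolate, and compare.
# The series below is synthetic, not real European GDP.
import numpy as np

years = np.arange(1900, 1981)
log_gdp = 0.02 * (years - 1900)        # 2%/yr baseline trend
war = (years >= 1939) & (years <= 1945)
log_gdp[war] -= 0.3                    # wartime collapse
postwar = years > 1945
log_gdp[postwar] += 0.015 * (years[postwar] - 1945)  # catch-up growth

# Trend fitted on 1900-1938 only, then extrapolated forward.
pre = years < 1939
slope, intercept = np.polyfit(years[pre], log_gdp[pre], 1)
trend = slope * years + intercept

print(f"1980 level vs. prewar trend: {log_gdp[-1] - trend[-1]:+.2f} log points")
```

On these made-up numbers, the catch-up growth overshoots the prewar trend by about half a log point – which is the shape of the puzzle in the graphs above.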


VPaul doesn’t believe in straight-line GDP growth anyway:

I don’t trust inflation statistics, so I don’t trust inflation-adjusted GDP statistics. During the time period covered by Piketty’s GDP growth trend line, there have been multiple different methodologies for measuring inflation, with adjustments to fix obvious errors in previous versions of inflation adjusters. Since we know inflation statistics have been wrong, and there is good evidence they are still wrong, I think the steady GDP growth rate is an artifact.
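
For what it’s worth, the arithmetic behind VPaul’s worry is easy to check: even a small persistent error in the inflation adjuster compounds into a big error in the real GDP level. A minimal sketch, with an assumed (not measured) one-point-per-year mismeasurement:

```python
# If measured inflation overstates true inflation by delta every year,
# measured real GDP growth is understated by roughly delta, and the
# level error compounds. Both numbers below are illustrative.
delta = 0.01   # 1 percentage point per year of mismeasurement
years = 50
level_error = (1 + delta) ** years - 1
print(f"real-GDP level error after {years} years: {level_error:.0%}")  # ~64%
```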


Several people point out that “increasing number of rentiers” is not necessarily bad; after all, this is what the post-scarcity robot future should look like. For example, from Virriman:

A world where 1% of people can avoid drudgery seems preferable to a world where only 0.1% can do that, holding everything else equal. Isn’t the techno-utopian ideal a world where almost everyone is a “rentier”?

Sounds like we need to figure out how to get back to the Gilded Age, then figure out how to turn that 1% of rentiers into 2%, and keep trying to expand that number.

This could maybe make sense for the number of rentiers, but the amount of money per rentier could cut in the opposite direction, and Piketty’s numbers awkwardly combine both.
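
Here’s the ambiguity in miniature (all figures invented): the same aggregate flow of capital income is compatible with a broad class of modest rentiers or a tiny class of enormous ones, and a statistic that tracks only the aggregate can’t tell the two apart:

```python
# Same aggregate rentier income, very different rentier classes.
population = 1_000_000
aggregate_rents = 5_000_000_000   # $5B/yr flowing to rentiers either way

for share in (0.001, 0.01):       # 0.1% vs 1% of people are rentiers
    n = int(population * share)
    print(f"{share:.1%} rentiers: {n:,} people at "
          f"${aggregate_rents / n:,.0f}/yr each")
```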


Paul Christiano on some of Piketty’s other statistics:

The extrapolation in figure 10.11 looks pretty wild. It takes a special something to draw a graph that’s been pretty smooth/predictable historically, then insert a stark+unprecedented regime change exactly at the current moment for no apparent reason. Does he give some justification for the sharp discontinuity?

Claiming that economic growth is always 1-1.5% also seems pretty dubious. According to Maddison’s estimates, which I don’t think are under dispute, worldwide per capita growth first reached 1% around 1900, continued increasing to 2-3% by 1960, and then fell back down to 1% in the great stagnation. You could say “A century is a long time, that’s basically always, the mid-century spike was just a deviation”, but elsewhere Piketty seems willing to write off that same chunk of history as an aberration. Or maybe his argument is supposed to apply only to the US? (Or maybe he includes Europe and then can cite steady growth for 150 years instead of 100? I don’t even think that’s true though – in 1875, I think per capita GDP growth in Europe was not yet 1%.)

I’m not sure in what sense rentiers can be said to be winning. We can just look directly and see that rents are significantly smaller than wages; the capital share of income is staying around 1/3 – it’s grown, but only a tiny bit. If 1/3 of GDP is rents that get allocated inequitably, then maybe you can increase median income by 25% with perfect redistribution, but that just doesn’t seem that promising compared to efficiency effects, unless you are super concerned about inequality per se (rather than regarding it as an opportunity to benefit poorer people). Even that benefit would shrink as savings rates fall.

If in fact the rentiers grow their fortunes at r, then they will get wealthier and wealthier until r = g – that’s basically an accounting identity. That seems to be a reductio of the concern that r>>g can continue indefinitely while rentiers’ wealth grows at the rate r.

From an efficiency standpoint it seems like the main implication of r>>g is that we could spend 1% of GDP today to make our descendants several percent richer, which sounds like a good deal and suggests that we ought to invest more. It’s pretty wild to respond to r>>g by considering massively disincentivizing investment. If you want to push for equality and think that r>>g, maybe support a sovereign wealth fund? Or else we’d need to decide collectively whether the problem with inequality is that some people are rich, or that other people are poor – I can see how a wealth tax (vs a similarly large consumption or income tax) would help with one of those problems, but not the other. I think it’s just a really bad policy for a lot of reasons with very little to recommend it other than leveling down.
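
To spell out the accounting identity Christiano is gesturing at (my notation, not his): if rentiers reinvest everything, their wealth $W$ grows at $r$ while national income $Y$ grows at $g$, so

$$\frac{W_{t+1}}{Y_{t+1}} = \frac{(1+r)\,W_t}{(1+g)\,Y_t} \approx \frac{W_t}{Y_t}\,e^{r-g}$$

With $r > g$ this ratio grows without bound. But capital income can’t exceed total income – $rW_t \le Y_t$ implies $r \le Y_t/W_t$ – and that bound tightens as $W/Y$ explodes. So something has to give: either consumption eats into accumulation, or $r$ falls toward $g$.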


Swami brings up an IGM poll of economists on r>>g:

ADifferentAnonymous counters with a Matt Yglesias article arguing that this isn’t really disproving anything Piketty is saying.


Overall, it looks like the claim that the super-rich get much better returns on investment than everyone else doesn’t really hold up, except in obvious, predictable ways – eg, they can take more risks.

The claim that there is a rising rentier class who will dominate the 21st century doesn’t really hold up.

I’m not qualified to say whether Piketty’s empirical data holds up, but there seems to be significant academic debate over it.

And although Piketty’s rules of thumb for growth (g = 1-1.5%, r = 4-5%) hold up better than I would have expected before reading him, they still don’t hold up that well.

Now taking recommendations on whether anything from Piketty is still worth keeping.