Open Thread 71.5

This is the twice-weekly hidden open thread. Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server.

This is the once-every-two-weeks thread where we try to avoid culture war topics.


646 Responses to Open Thread 71.5

  1. Tibor says:

    Do you recycle typos?

    Ok, let me explain what I mean. Sometimes I write something, then I decide to rewrite it a bit or I write a typo. One can either delete the word and rewrite it or “recycle it” by using the letters in it. It might sound ridiculous to some (probably most) of you, but I sometimes feel sorry for the letters and try to use them elsewhere instead of deleting them (I prefer cut and paste, since it does not “kill them”). I was wondering if anyone else has this weird habit. (I only do it when it is still reasonably convenient and does not require too much work, but it would still probably be faster to just retype everything, since I type quite fast.)

    • Anonymous says:

      I do this. Not because I feel sorry for the electrons, but because it’s a pretty reliable way to avoid making the same typo again. This sort of word modification is by necessity not trained, you have to do it manually by thinking about what you’re doing, which helps prevent falling into the unconscious and well-trained pattern that resulted in the typo in the first place.

      That’s my rationalization, anyway.

    • Well... says:

      I do this, but when I do it I lie to myself that it will save me time by saving me keystrokes (it probably loses me time because I have to remember to skip the keystrokes).

    • xXxanonxXx says:

      I do not, but had to reply anyway because that is absolutely adorable.

    • Machina ex Deus says:

      I should have realized I’m not the only one who does this. And yes, I do feel empathy for the little letters, as if they were ants or something, and I don’t want to squash them. And copy-paste is no worse than sending them through a Star Trek transporter.

      Related: when my depression was worse, I’d feel the need to rescue ants in my kitchen instead of killing them. It seems in retrospect to have been a combination of unselective empathy, obsessive-compulsive urges, and out-of-control conscientiousness.

      Speaking of conscientiousness: how do you evaluate this for an entire person? Just looking at work: I’m fanatical about keeping code clean and simple, but my desk is a mess. I correct typos and bad fontography even in documents that I receive and will not be sending on to anyone else, but I get in late around once a week. So am I conscientious, or not?

      And it seems ridiculous to give me a middle-range score, considering there’s no area where I’m middling-conscientious.

  2. Anonymous says:

    I need a sanity check on some nutrition data. Suppose I eat 5 hard boiled eggs (medium-large) and 5 apples (medium-small). Am I really eating less than half the standard 2000 calorie diet?

    I’m getting this result from multiple sources, and it seems extraordinarily strange, because it doesn’t feel like cutting my intake that drastically.

    • Eltargrim says:

      That sounds plausible to me. While I’d always recommend doing your nutrition by mass, as it avoids the uncertainty over “is this a medium apple or a small apple?”, the numbers check out.

      Keep in mind that an apple has no protein, no fat, and is about 10% sugar by weight. Eggs are more calorie-dense, being about 10% protein and 10% fat by weight, but they’re still pretty small.

      Ballparking a medium apple as 180 g, and a large egg as 70 g, you’re looking at 90 g of sugar, about 35 g of protein, and 35 g of fat. Using the 4-4-9 Calorie estimate, that’s 360 Calories from sugar, 140 Calories from protein, and 315 Calories from fat, summing to about 815 Calories. EDIT: This is probably an underestimate, but I’d put error bars on this of no more than 25%, assuming the masses are about right.
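
      If anyone wants to check the arithmetic, here’s a minimal sketch of the same 4-4-9 estimate in Python (the masses and composition percentages are just the ballpark figures above, not measured values):

      apples_g = 5 * 180            # five medium apples, ~180 g each (assumed)
      eggs_g = 5 * 70               # five large eggs, ~70 g each (assumed)

      sugar_g = 0.10 * apples_g     # apples ~10% sugar by weight
      protein_g = 0.10 * eggs_g     # eggs ~10% protein by weight
      fat_g = 0.10 * eggs_g         # eggs ~10% fat by weight

      # 4-4-9: kcal per gram of carbohydrate, protein, and fat
      total_kcal = 4 * sugar_g + 4 * protein_g + 9 * fat_g
      print(total_kcal)             # 360 + 140 + 315 = 815 kcal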

      Both eggs and apples are generally considered to be high satiety foods, meaning that they are good at making you feel full. However, barring extreme circumstances, someone on the proposed diet is almost certainly consuming far fewer Calories than is sustainable or healthy.

      Second edit: I fucked up the math. Corrected above. Previous version was 630, due to me somehow thinking 70*5=25.

      • Anonymous says:

        Thank you.

        Both eggs and apples are generally considered to be high satiety foods, meaning that they are good at making you feel full. However, barring extreme circumstances, someone on the proposed diet is almost certainly consuming far fewer Calories than is sustainable or healthy.

        Right. This is just for Lent, and it’s just an example day (when I’m feeling exceptionally lazy concerning food preparation); my diet is actually more varied than that, and I do take a multivitamin pill to supplement it. I’ve lost like 5kg since Ash Wednesday, so I can’t complain about being metabolically disprivileged.

        • Eltargrim says:

          Note that I fucked up my math somewhat. Revised numbers are above, but the conclusion is the same: 5 apples and 5 eggs is not a lot of calories.

      • dodrian says:

        Note: 70g is a very big egg – it’s at the top end of XL by US standards, and top-end of L by EU.

        • Eltargrim says:

          I rarely consume eggs, so I appreciate the insight!

          • dodrian says:

            I only know because of baking – a helpful tip for some recipes is to set the weight of everything else relative to the weight of the egg(s). A typical US ‘large egg’ is 55g. Related – 110g is the weight of an average cup of sifted flour, and also a stick of butter.

    • random832 says:

      Apples are reasonably high in fiber, and eggs in protein, which means you could probably do worse nutritionally for the number of calories, but if that’s really all you’re eating that is a very drastic cut.

      (My numbers – I’ve always understood an egg to be 70 calories, and google says an apple is 95 calories which sounds about right, which makes the total 825 calories)

  3. onyomi says:

    They say baby animals are cute because they look like human babies, which are cute because we needed to take care of them for evolution’s sake (big eyes, head, feet, helplessness, etc.).

    But maybe animal husbandry is also in our DNA?

    In summary, since they let other people have babies, there’s a good bioevolutionary reason why my apartment should let me have pets for my mental health.

    • AnonYEmous says:

      They say baby animals are cute because they look like human babies, which are cute because we needed to take care of them for evolution’s sake (big eyes, head, feet, helplessness, etc.).

      But maybe animal husbandry is also in our DNA?

      More likely that the traits that determine cuteness, both in the baby’s body and in the adult’s mind, started from one common ancestor and spread throughout all mammals, all the way down to us

      • keranih says:

        Yeah, the response to [cute baby features] is pretty well conserved across mammal species (as you can see from the not-infrequent cross-species ‘adoptions’), and it also shows up scarily early in human toddlers.

        It’s really impressive how well programmed we are, to care for infants.

    • Corey says:

      Apartments approximately never care if you have fish as pets.

    • Anonymous says:

      In summary, since they let other people have babies, there’s a good bioevolutionary reason why my apartment should let me have pets for my mental health.

      I’m pretty sure the incentives here are aligned perfectly to promote natalism in the resistant, for their own and the common good.

    • Gobbobobble says:

      In summary, since they let other people have babies, there’s a good bioevolutionary reason why my apartment should let me have pets for my mental health.

      No argument if you mean cats, but IME babies are way better apartment neighbors than dogs. Sure, they cry sometimes, but the parents actually respond to that and they grow out of it eventually. And most people tend to only have one at a time.

      But apartment dogs tend to be of the incessantly-yappy variety that will bark their fool heads off at all hours for no reason, the owners feel no need to calm them down, and they tend to come in pairs or triples that encourage each other to bark their damn fool heads off.

      And as an added bonus, babies don’t shit all over the tiny common outdoor spaces.

      Maybe I’ve just lived near conscientious parents and unconscientious dog owners, but it’s gotten to the point that I now actively seek out apartments that don’t allow dogs. Small flats are just not good for dogs and having them there anyway is being unkind to both the dog(s) and one’s neighbors.

      • Nornagest says:

        There may be some selection bias here. When I lived in apartment buildings, I wouldn’t have noticed dogs that weren’t yappy.

  4. sketerpot says:

    Why are cities more liberal? At least in the US there is a sharp political divide between urban areas that lean more toward the left and rural areas that lean right — but why?

    One hypothesis I’ve heard is that living in a city exposes you to more diversity, and that either makes people more liberal or drives off conservatives. But this doesn’t feel like a satisfying answer; the pattern holds even for pretty non-diverse towns, and none of that has anything obvious to do with the other left-right differences, like religiosity or views on welfare.

    Another explanation some people have proposed is that living in a city promotes a more cooperative mindset, because there are so many more people underfoot, various city services like parks and buses are shared, et cetera. But small towns often feel more collectivist than cities, and this still feels like it ascribes way too much conscious thought to people’s political choices. And it completely fails to explain why people in cities are less religious (on average) than people in suburbs, who are in turn less religious than people in rural areas.

    It might be partly a matter of age. The higher the population density, in general, the younger the adult population. But the differences don’t look big enough to be the explanation, unless some smaller difference got amplified over time by people moving to be with others like them — like, if there were a moderate urban/rural difference and people tended to move to where they felt more comfortable, it could increase the divide.

    I’m still confused. Is there something obvious I’m overlooking?

    • Machina ex Deus says:

      I think in a city, you find yourself surrounded by a lot of people [citation needed]. This has two effects:

      1) It’s easier to find a bunch of people similar to yourself to have most of your social interactions with.

      2) It’s harder to avoid people who are not similar to yourself, including the sounds they make, the smell of their cooking, and the sights of their odd fashion sense.

      As a result of (1), you have less incentive to try to make people who are not similar to you more similar to you. As a result of (2), you have incentive to get used to people who are not similar to you. Both of these make you care less about (a) making other people more similar to you, and (b) making yourself more similar to other people. (And then, of course, there’s the evaporative effect that everyone who can’t stand salsa music, garlic, or punk hairstyles leaves the city.)

      What begins as an adaptation, like tolerance to alcohol, slowly transforms into a virtue, like Tolerance. (But only for other people you might find in a city.) Yogurt!

      • Aapje says:

        And then, of course, there’s the evaporative effect that everyone who can’t stand salsa music, garlic, or punk hairstyles leaves the city.

        The opposite effect is probably stronger, where people who can’t stand the rural norms move to the city.

        The high percentage of gay people living around Castro street wasn’t because there was something in the local water supply that turned people gay.

        • Dissonant Cognizance says:

          My understanding is that San Francisco was the final processing point for the Navy in WW2, for people being discharged on psychological grounds. That ended up with a lot of homosexual ex-sailors being given the choice between a bus ticket back to rural Nebraska, or building a life in the Bay Area.

          That may be a Navy myth, but if true, it would literally be something in the water turning the population gay.

          • Nornagest says:

            I’ve heard that SF was the first port of call for servicemen returning from overseas at the end of WWII, and that its leather culture developed out of recently discharged gay sailors and Marines with no particular reason to head back where they came from — hence the military trappings. Both could be true, of course.

            I’ve also heard similar stories about similarly militaristic heterosexual subcultures that came out of the West Coast in the mid-20th century, like the Hell’s Angels.

          • Douglas Knight says:

            Cognizance, you’re probably thinking of Blue ticket discharge, which started in WWI.

    • AnonYEmous says:

      perhaps self-sorting – living in the city is about freedom, or something, whereas the country – though it could be thought freer – is generally associated with more conservative values (and has been for a very long time, so it’s not just a function of this particular divide), and probably also that you have to work for yourself. Stuff like that. Something to think about.

    • suntzuanime says:

      People who live in cities are selected for not being sensitive to the ugliness of modernity, and also grow desensitized to it over time. Things that country bumpkins find horrifying, like terrorist bombings, are just part of everyday life in a big city. You get used to it, and it freaks you out less.

      • Aapje says:

        @suntzuanime

        I think this goes both ways, really. City folks tend to be horrified by many things that rural people consider fairly normal (guns, hunting, strict social norms, etc) and vice versa.

        Although there is also an element of appreciation (city farming, hipsters).

      • Protagoras says:

        Terrorist bombings are part of everyday life in a big city? So far as I can tell, the last terrorist bombing in Chicago was in 1980, and the last time a terrorist bombing killed anybody in Chicago was in the 19th century. New York has had a few high profile incidents, but it doesn’t come close to being a yearly event even there, much less an everyday one. And New York seems to be the favorite terrorist target; no other big city comes close.

        • Randy M says:

          I believe he is referencing a quote from the mayor of London to the effect of “we need to get used to this sort of thing.” Prior to the most recent thing of this sort, though.

    • John Schilling says:

      As a result of (1), you have less incentive to try to make people who are not similar to you more similar to you.

      I have not noticed liberalism to be terribly shy about trying to make people who are not liberals, similar to liberals. IIRC, they invented public schooling for approximately that purpose, and rolled it out in cities.

    • hoghoghoghoghog says:

      I’d love to know how widespread this is throughout history. Was it true in the European middle ages, when cities were centers of religious orthodoxy? Was it true in the Soviet Union? Alas, “liberal” and “conservative” are probably not well-defined enough to be applied across cultures like that.

      • Protagoras says:

        In ancient Athens, the pro-democracy faction was heavily urban and the oligarchic faction more rural.

        • The original Mr. X says:

          On the other hand, the main source of wealth back then was land-owning. Since people with large country estates would both be more likely to support oligarchic government (which would give them more power) and more likely to live in the country (where their big estates were), this seems like an obvious confounder.

          • Protagoras says:

            I was just providing a data point, not intending to imply any reason for it. Though your summary is misleading; the urban elites in Athens during the golden age were immensely wealthy, more so than the rural elites. But the democratic assembly had an urban bias (it was easier for city dwellers to show up to vote), and the urban elites had more interests in common with the urban citizenry in general than they did with the rural elites. Since the rural elites tended to be more old-money, the oligarchic remnant institutions tended to be dominated by the rural elites (even though the urban elites were wealthier). The urban elites thus found it to be in their interests to strengthen the democratic institutions and oppose the oligarchic institutions.

      • JulieK says:

        The linked article calls it a recent trend:
        “This divide between blue city and red countryside has been growing for some time. Since 1984, more and more of America’s major cities have voted blue each year, culminating in 2012, when 27 out of the nation’s 30 most populous cities voted Democratic.”
        On the other hand, I think cities with large immigrant enclaves have been Democratic for much longer than that. But on the third hand, despite what’s been said about cities making people more tolerant, there was plenty of strife between the different ethnic groups.

        • Nornagest says:

          This is confounded a bit by the fact that the Democratic and Republican coalitions themselves have been shifting over time, and largely to target urban and rural values respectively. 1984 is just about when the Moral Majority would have been taking off, and it predates Bork, who’s often cited as the first step towards today’s divisive politics.

    • keranih says:

      I wonder if it might be so simple as that cities are wealthier than the countryside. Wealth gives one the opportunity to embrace change and take risk that poverty does not.

      • John Schilling says:

        Wealth gives you more to lose, and the traditional stereotype is that rich people are conservative. However, you may be on to something in that, at least traditionally, cities are where aspiring not-wealthy people went to make their fortunes, which means that in the interim at least they are open to risk and change.

        • hlynkacg says:

          Wealth also gives you more to signal with, and much of politics is signaling games.

        • Anonymous says:

          Wealth gives you more to lose, and the traditional stereotype is that rich people are conservative.

          I think the way this works is that cities give you more opportunities, and a greater chance for attaining great wealth, but the vast majority of city dwellers are transient and not wealthy. So you get a situation where the wealthy urbanites may be conservative, but most of their local inferiors are liberal.

      • Nancy Lebovitz says:

        It might not be so much a matter of wealth as that being dependent on farming gives you less slack for making mistakes.

    • ChetC3 says:

      Rural areas are poorer and more dependent on graft and graft related industries for their basic survival. So they approach politics as a ruthless zero-sum fight. City folk are more likely to have skills with at least some free market value, so they can afford to take the high road now and then.

      • The Nybbler says:

        Rural areas are poorer and more dependent on graft and graft related industries for their basic survival.

        Yeah, I’m going to look in the general direction of Miami, Chicago, Philadelphia, and New York and doubt this one VERY strongly.

        It makes sense for cities to be more collective and authoritarian; when you cram people together like that they can’t just leave each other alone. The normal activities of some citizens will interfere with the activities of others, so someone has to stomp on one to benefit the other. But why more _liberal_ I don’t know.

        • Fossegrimen says:

          I’m going to look in the general direction of Miami, Chicago, Philadelphia, and New York

          You should probably see your doctor about that strabismus if that’s one direction.

        • Nancy Lebovitz says:

          This is something I’ve heard from one source, but I don’t have a cite.

          (NPR interview) A man took a tour (probably in the 80s) of the US looking at attitudes towards gay men, and found there was a sharp divide between farming regions which were accepting and small towns which weren’t. Among farmers, the crucial thing was whether people were good neighbors. In small towns, people presumably had enough slack to worry about other things.

          There’s a book if anyone wants to try tracking it down.

          • Aapje says:

            @Nancy

            I think that the reason is not ‘slack,’ but rather that the farmers are highly self-sufficient, while rural towns depend strongly on the townies pulling their weight for the community.

            In my country, people who live in small towns seem to volunteer for local clubs & societies at extremely high rates. As a result, they have options for how to spend their free time that are highly disproportionate to the number of citizens. If this culture changes and people become less willing to spend their free time for the community, the town would have drastically fewer clubs & societies.

            So there is a huge incentive to shame people who refuse to structure their lives this way. Furthermore, they can’t afford to splinter their efforts too much, so there is a strong incentive to push people into the clubs & societies that exist and, in general, to shame people into being ‘default.’

          • Nancy Lebovitz says:

            I suppose it’s different kinds of slack/urgency.

            The small towns need social harmony in a way that prioritizes conformity, though I wonder if it’s also a matter of having enough free time that they can use for harassing each other.

            Farmers need the farms to be taken care of, so being a good neighbor (I’m thinking this is a combination of being willing to help and being competent enough that one doesn’t need excess amounts of help due to making more mistakes than one’s neighbors) is more important.

          • Aapje says:

            I don’t think that farmers depend on each other day in, day out, like the townies do. There is a big difference between a social norm where you mind your own business and fix your own shit, but if you really do need help people are obliged to come help you, versus a social norm where you are not supposed to mind your own business, but instead have to work together by default.

            In general, I would argue that cultures that place higher demands on the level of cooperation result in enforcing conformity, to ensure that the cooperation is more mutually beneficial.

        • ChetC3 says:

          > Yeah, I’m going to look in the general direction of Miami, Chicago, Philadelphia, and New York and doubt this one VERY strongly.

          Homo economicus will of course engage in graft wherever and whenever possible; but major urban centers have enough other sources of revenue that they aren’t dependent on it the way Podunk, Mississippi is on its Naval Arctic Operations Center or whatever.

      • Protagoras says:

        This seems to be the reverse of what I complained about below, and so perhaps I should comment on it as well; I doubt that “I believe conservatives have this flaw; what is it about rural people that would encourage this flaw?” is a good starting point for deepening one’s understanding of conservatives or rural people.

    • onyomi says:

      This may unavoidably touch on culture wars a bit, but:

      Cities:culture::oceans:marine evolution, while rural areas:culture::isolated rivers and lakes:marine evolution. Cultural changes, like evolutionary changes, usually only happen in the context of changing circumstances, a big part of which is the presence of new species/cultures.

      In the ocean/big city, “mixing” of all kind happens much faster, resulting also in a faster evolutionary drive toward wherever evolution was already going. Same is true of language. Rural areas take their cultural and linguistic cues from nearby urban centers, but hang onto whatever they learn much longer than the people in the city. Cities invent culture, rural areas preserve it.

      A whole region of cities+towns, however, may turn itself into a culturally isolated mountain lake relative to other countries by restricting contact with the outside world: see, e.g., Choson Korea and Edo Japan. This tends to result in a culture with fewer truly novel advances and more refinements of what already exists. (That is, open nations:closed nations::cities:rural areas).

      Put another way, whichever way Cthulhu is swimming, he swims faster in oceans than in small lakes and rivers–that is, whichever way the cultural winds are blowing, expect cities to get there faster. That is, I think cities will always be more “liberal” in the sense of “open to change.” But if, by liberal, you mean “supporting free speech, supporting free trade, supporting free movement of people, etc. etc.” then the cities will not necessarily be more liberal in that sense if that is not the direction the broader cultural winds are blowing. If the broad new intellectual trend is to be against free speech or in favor of theocracy or what have you then I expect cities to get there faster as well.

      • onyomi says:

        I’ll add that I think this perspective may be helpful in understanding what “universal culture” as Scott describes really is: it isn’t necessarily Western or “the best” in any objective sense; rather, it’s “ocean culture”–what survives when many cultures are thrown together. It doesn’t mean that rabbits are “better” than potoroos, or kudzu better than a kadupul flower, just that rabbits and kudzu are better suited to a wider range of currently existing environments.

        To survive culturally means to be widely and readily accepted by a lot of people, which doesn’t make a Big Mac “better” than some exotic delicacy, except insofar as you measure food “goodness” in terms of how likely people are to like its taste. Of course, acceptability correlates somewhat to evolutionary usefulness: everyone agrees cookies taste better than dirt because cookies have calories you can use.

        But I don’t think art and culture can be reduced to the extent to which they do or do not enhance evolutionary fitness, and, even if they could, there are cases where, what was good in the evolutionary past (such as seeking sugar) is bad in the current environment, and when what is good, culturally, for the individual, is bad for the society (arguably promiscuous sexual behavior might be an example). So, when sugary food or promiscuous sexual norms come into contact with blander food or restrictive sexual norms, the former are probably more likely to win out. Which does not mean it will be a net gain for the group as a whole.

    • hoghoghoghoghog says:

      If the liberal/conservative distinction is mostly about which virtues are emphasized, conservative areas should be ones where conservative virtues offer big returns.

      And indeed isolated places ask for personal responsibility. Rural areas are home to older industries where best practices have already been established, which favors learning the best practices and applying them consistently; it is tempting to write “tradition” for “best practices.” Farming (and industrial wage-slavery) both ask for reliability and conscientiousness.

      • Protagoras says:

        Reliability and conscientiousness are not the virtues on which liberals and conservatives are alleged to differ markedly (it’s purity and respect for authority, and of course the whole thesis is controversial anyway). I fear your biases about liberals are showing pretty strongly here.

        • The original Mr. X says:

          Reliability and conscientiousness are not the virtues on which liberals and conservatives are alleged to differ markedly

          Maybe not personally, but liberals seem less inclined to demand reliability and conscientiousness in other people. Just try dropping a reference to the undeserving poor in your next conversation with one of your liberal friends, and see how they react.

        • Douglas Knight says:

          Empirically, Conscientiousness predicts conservatism. It appears to mainly be driven by the Order facet predicting the disgust reaction, and thence concern for purity.

        • hoghoghoghoghog says:

          To clarify, I didn’t mean the personality trait, just “what virtues does this group aspire to.”

        • Nancy Lebovitz says:

          Has anyone checked on whether there’s a difference in personal conscientiousness between liberals and conservatives?

          I’m not sure how you’d measure this since people aren’t necessarily equally conscientious about everything in their lives.

    • hlynkacg says:

      My personal theory is that it is tied to Scott’s Thrive vs. Survive theory of politics.

      It makes sense for cities to be more authoritarian seeing as “live and let live” ceases to be an option when you have lots of people living on top of each other. At the same time, though, the threats and responsibilities that face city-dwellers tend to be more diffuse, which encourages a lackadaisical mindset (if you don’t do your job, there’s someone else who will). On the flip side, while rural life affords much more freedom, existential risks are ever-present and more readily apparent when they aren’t hidden in the anonymity of a crowd.

      When coupled with the fact that cities are generally wealthier than rural areas, it’s no surprise to me that they would gravitate towards opposite ends of the spectrum.

      • Aapje says:

        My observation is that there is a limit to the amount of tolerance, nice manners, etc that people want to engage in.

        If you live in a sparse area and greet everyone you pass, you will probably greet a few people a day. If you do so in NYC, you are greeting way, way more people. The quantity changes it from a nice way to brighten up your and their life to a huge chore.

        Similarly, if you allow irritating behavior, it will impact you only rarely if you live in a sparsely populated area. If the same percentage of people engage in that behavior in NYC, you are way more likely to experience it (and it is harder to get away from it). IMO, people don’t care so much about absolute strictness of laws, but rather how the laws impact their lives. The value of restrictive laws is way higher in more dense areas, while the costs are somewhat lower, since the environment is more restrictive anyway.

        Finally, higher density makes it far more efficient to offer central/government services, so it is much more natural to depend on the state or such in the big city.

      • John Nerst says:

        So, the point has come where we officially use “authoritarian” and “liberal” to refer to the same things?

        Not ragging on you specifically, but this drives me up the wall.

      • Kevin C. says:

        It makes sense for cities to be more authoritarian seeing as “live and let live” ceases to be an option when you have lots of people living on top of each other.

        A point that, IME, too many libertarian seasteading-proponents seem to miss in their plans for achieving Libertopia by cramming people into floating sardine-cans.

        But this does expand into an interesting test for your theory, hlynkacg. Namely, what about places like seasteads, or even more strongly, space colonies, where you have high densities — lots of people “living on top of each other” in tiny habitats — but a lot more ever-present existential risks, not hidden by “urban” anonymity, and without the “diffuse” responsibilities? Which factor would dominate, if these point in opposite directions? Or is there a third position that would emerge in these conditions?

        • Anonymous says:

          Might well select for people who like cramped conditions, or at least tolerate them well. People who don’t think privacy is important. So far as I know, these traits are at least partly genetic, by the example of Chinese conditions.

        • JulieK says:

          Sounds like the Free Traders in Heinlein’s Citizen of the Galaxy – their ship freely roamed the galaxy, but they had very strict social rules inside it.

        • Nancy Lebovitz says:

          See also “The Claustrophile” by Theodore Sturgeon, as fine a geek wish fulfillment fantasy as has ever been written.

          I don’t believe the nice libertarian Trader space ships in A Fire Upon the Deep — closed environments make bullying much too easy. There at least should have been visible social norms enforcing decent treatment.

    • Mark says:

      Transience.

      People are more likely to be liberal when the thing they are being liberal about doesn’t seem to affect them.
      Gay marriage? None of my business.
      Drugs? Personal choice.

      The key difference between cities, especially large cities, and rural areas is transience. People move to the cities, they are younger, they are busy with their lives, and they aren’t especially interested in what is going on around them.
      They form their own communities, not based on geographical area, but personal interest. The area itself is unimportant to them. They have an interest in the general good, but not so much the local minute particulars.
      Conservatives are more likely to have a concern for the community in which they find themselves embedded – because they are older, because they have children, because they have to use the local services.
      Maybe just because they have bought property and care about their property prices/ the fact that they can’t escape easily.

      So the area with more transient people will tend to be more liberal.

      • Protagoras says:

        You are more polite about it than some in this thread, but still this is an example of reasoning that really annoys me. “I believe liberals have this flaw; what about city dwelling would encourage this flaw?” is probably not the best way to deepen your understanding of either liberals or cities.

        • Anonymous says:

          You seem to regard transience as a flaw.

          • Protagoras says:

            No, but the tone, for example “people are more likely to be liberal when the thing they are being liberal about doesn’t seem to affect them,” struck me as dismissive of liberals. So while “transient” is not itself a flaw, Mark’s comment as a whole seemed to be focused on what he seemed to see as a tendency toward non-seriousness in liberals.

          • Anonymous says:

            But is it true? I mean, is it true that people tend to be more liberal towards things that they don’t know about and don’t care about? Per my intuition, this is simply the corollary to “people are most conservative about things they have expertise in”.

    • The original Mr. X says:

      I wonder if living in an artificial environment might have something to do with it. In a big city, everything around you is man-made (even the parks are only there because somebody deliberately planted stuff there), so perhaps that makes you more in favour of social engineering projects and the like.

    • Jiro says:

      This doesn’t seem to have been corrected for race. Lots of minorities in inner-city areas.

      • Nornagest says:

        That’s a factor, but I’m pretty sure if you went to, say, Minneapolis, and talked to a hundred random white people there, you’d end up with a different distribution of political leanings than if you talked to a hundred random white people from the whole state of Minnesota.

    • Iain says:

      This looks like a question where responses are likely to say more about the person answering than anything else. In an attempt to circumvent that: there are, broadly, two different kinds of explanations — either urban/rural living affects your political leanings, or your political leanings affect where you choose to live. This seems like the sort of thing that gets researched. Does anybody have a source that examines how much political opinions change when people move from urban to rural, or vice versa?

    • JulieK says:

      1. Higher education correlates with liberalism, and jobs for educated people tend to be in the city.

      2. Married people with children tend to be more conservative, and they tend to move to the suburbs to have space for their kids.

    • Chalid says:

      I wouldn’t say that this is “the” explanation – there are probably lots of contributing factors – but one thing no one has mentioned is that income and wealth inequality is much more obvious in most cities than it is in less densely populated areas. And I’d guess that that would increase support for redistributive policies.

    • Jaskologist says:

      Scott: This is the one OT every 2 weeks where we try to avoid culture war topics.
      Us

      • HeelBearCub says:

        Yeah, I don’t feel like I can reply to this thread.

      • Whatever Happened To Anonymous says:

        I feel like it started not very culture-warry, and got progressively more so in a way that made it hard to find a point to put a stop to it, except we can now all agree that it’s definitely way over the line.

    • Salem says:

      Conservatives believe in tradition and family values, liberals believe in self-creation and discovery. Cities have extremely high land values, so Conservatives move to the ‘burbs so they can afford the space to raise kids, whereas liberals stay and experience the amenities.

  5. Gazeboist says:

    I found a typo in a 7th circuit opinion and would like to report it. Any suggestions as to how? On one prior occasion I was able to contact the writing judge because he happened to also be a professor at a law school, and had an email listed on that school’s site, but I’ve not been so lucky this time.

    • Brad says:

      Send a letter to chambers. Everything gets read, usually by an intern or something, and something like that will probably be passed up the chain to one of the clerks.

  6. Nancy Lebovitz says:

    So far as I know (not very far) agriculture typically involves restoring nutrients to soil, but not minerals.

    Is this true? If so, is much known about adding minerals?

    • Douglas Knight says:

      Wikipedia lists 6 (soil) macronutrients and 9-14 micronutrients. Macronutrients are things plants use a lot of, so they can be exhausted from the soil, so they are a constant concern. Micronutrients are, by definition, necessary, but not in large quantities. So simple agriculture won’t exhaust them and thus they aren’t a common topic. But many soils are naturally deficient in specific micronutrients, especially zinc, so professionals pay attention to them. And of course they would be very important in the Sahara or on Mars.

    • Fossegrimen says:

      I can’t vouch for practices elsewhere, but I’m required to take soil samples and send in for analysis every 5 years and get a fertiliser plan in return. This is also good sense because adding the correct amounts gives the highest possible yield for the minimum cost so I would probably do it even without the regulations being in place.

      For micronutrients, some of my fields need added sodium every couple decades due to water load. Zinc deficiency in soil is common in the midwest, but not around where I live.

    • Eric Rall says:

      Adding minerals is definitely a thing in the context of personal vegetable gardens. I’ve seen it come up in three ways:

      1. A common recommendation to add compost to your garden (preferably a mix of a few different types of compost) either once a year or every time you harvest and replant (it’s common to grow 2-3 crops per year, especially in climates with long growing seasons, taking advantage of the fact that some veggies sprout in early spring and are ready to harvest by late spring/early summer, some sprout in late spring and are ready to harvest by autumn, and some can be planted in the fall and allowed to grow over the course of a mild winter). Compost is specifically recommended over synthetic fertilizer for this, partly because it releases nutrients more gradually so you can put a big chunk in all at once to last you all year, and partly because compost replaces a broader spectrum of nutrients compared with the big three macronutrients (nitrogen, potassium, phosphorus) emphasized in most synthetic fertilizers. And mixing or cycling between different kinds of compost broadens the spectrum of nutrients you’re replacing, since different composts have different nutrient profiles.

      2. Troubleshooting guides, where you notice your plants with certain types of growth problems and look up possible causes (usually parasites, insect damage, over/underwatering, or nutrient deficiencies). For example, leaves with green veins but a pale yellow color between the veins often indicates an iron deficiency.

      3. General recommendations for specific plants that are particularly susceptible to certain mineral deficiencies. For example, tomatoes need relatively high levels of calcium and often have problems blamed on calcium deficiency (most noticeably blossom-end rot, where half-ripe fruits start rotting from the bottom, which can be caused by not enough calcium in the soil or by insufficient calcium absorption due to underwatering), so there’s a lot of advice recommending prophylactically supplementing tomatoes with calcium by amending the soil with crumpled eggshells, powdered milk, gypsum, or crushed limestone.

    • keranih says:

      To add to what has been already mentioned:

      The differentiation between “nutrients” and “minerals” is a relatively inaccurate one, when talking of soils. It makes more sense to differentiate between “minerals” and “vitamins” and between those and more complex life-feeding blocks like “protein” “water” “carbohydrates” and “fats” for complex animals, but not so much for plants.

      Where it becomes more complicated is in the interplay between plants, animals, fungi, and the whole of soil as “a living organism”. The presence/absence of different individuals can make particular nutrients more or less available to other individuals – the classic being legumes that can temporarily fix nitrogen into the soil for the use of other plants, and close on that the root-nodule bacteria that do the actual fixing for the beans/clover/what have you.

      In terms of agricultural practice – one ought to remember that every action is made in terms of trade-offs, and the concept of “first limiting nutrient” is very important. In terms of plant health, the first nutrient is nearly always sunlight, then water – and where it isn’t, the first is water, then sunlight. The “big three” (NPK) are necessary in quite large amounts, but not in equal amount, and both the soil pH and the crop involved impact how much is needed of each for the “ideal” growth of that particular crop. For each of these – starting with sunlight – it is possible to have too much, just as it is possible to have too little.

      Soil acidifiers and sweeteners (sulfur and lime) are used for specific crops, and shifting the pH does more to make specific nutrients available than is generally feasible to do by adding that nutrient.

      Adding animal or green manures won’t fix the problem, either – firstly, because those aren’t perfectly balanced for all crops either, and secondly because the increased cost of labor/time wouldn’t pay for itself.

      As for the idea that western civilization is missing out on vital nutrition due to soils exhausted of micronutrients due to excess fertilizing with “chemical fertilizers” – firstly, the first limiting principle applies here, too. The Chinese exhausted the hell out of their soils for centuries, but what hurt them was lack of calories, not micronutrients (*). Secondly, the use of green manures and plowing under covercrops has jumped dramatically in recent years due to the use of roundup with no till technology, but you won’t hear that celebrated. Thirdly, the ratio of nutrients varies widely by crop. There is no reason for omnivores like humans to be short on any one nutrient – and where they are, there are more issues than just the use of fossil fuel fertilizers.

      Having said all that – I fully support the use of home gardens, supplemented with household compost, as a way to manage micro-nutrient shortages, exercise shortages, self-agency shortages, landfill space shortages, and deficiencies in community involvement.

      But I’m not about to legislate them.

      (*) Thiamine and iodine are specific cases long recognized and compensated for across human history.

  7. deluks917 says:

    There was a fascinating discussion of perfect pitch on the discord server. I organized some of the posts by Anatoly into a single comment (with his permission):

    “I read a monograph by a Russian psychologist Teplov called “Psychology of musical ability” that has a chapter devoted to perfect pitch (no English translation alas). He goes into depth on what different kinds of perfect pitch he/other psychologists recognize and how they differ from each other. The most important difference he makes is between people who can instantaneously and w/o thinking tell you the note once they hear it, w/o invoking any memory of other notes, and people who can name notes, but compare it to their memory of other notes and apparently relying on relative pitch. This latter ability he calls pseudo-perfect pitch, and it always takes some thinking and at least 1-2 seconds of time. It isn’t as accurate as the instant-response kind.

    The former ability, actual perfect pitch as he recognizes it, is pretty similar to how we can recognize the color red when we see it – we don’t need to compare it to anything in our memory. Now perfect pitch, acting without comparing to memory, he divides further into “passive perfect pitch” and “active perfect pitch”, where the passive one is when you can recognize but cannot sing a given tone without reference (either in reality or in your head from memory), and active is when you can both recognize and sing. You’re told “D” and you just sing it w/o thinking at all.

    The passive perfect pitchers also seem to depend on timbre a lot – they often recognize piano, but not strings or (other people’s) voices, for example. So Teplov has a fascinating explanation for this (I don’t know how well it’s supported by other research). It has to do with how every sound actually generated in real life is never pure, but has its primary overtone and then its timbre, which is usually quite unique for that tone on that instrument (i.e. different notes on the piano have very different timbres, not just their common piano-timbre).

    Suppose you’re a “normal” person, without perfect pitch. When you hear a sound, Teplov says, you hear its timbre “absolutely”, as something of its own that you can train to recognize (though it’s hard). But the actual pitch of the sound, the primary overtone, you can only hear “relatively” to some other sound. This means, staying with us non-perfect-pitchers for now, that you can actually learn to recognize a note if you practice really hard, but what you’ll learn to do, without maybe knowing it, is recognize the timbre. He describes several experiments where psychologists were able to get non-perfect-pitch people to decently recognize notes within two octaves, but they would describe their efforts at recognition as looking at how “light” or “heavy” or “bright” or “gloomy” the note is – metaphors associated with timbre. Unfortunately this ability, though it can be learned, never gets as good as perfect pitchers’ ability, and decays with time.

    Now perfect pitchers, according to Teplov, have the neural networks trained (not his words) to recognize the actual pitch, the primary overtone, as a thing on its own and not relatively. But the difference between passive and active perfect pitch is that the passive ones still also use the timbre in identification – they don’t separate the pitch from the timbre in their perception, and so their neural network is kind of trained on both simultaneously. That’s why they get confused by other timbres and that’s why they can’t sing on request (because voice timbre is very different). Whereas active perfect pitch, the real “aliens” as far as I’m concerned separate the pitch from everything else and recognize that. And it seems to be completely inborn and can’t be learned (though you do need some early exposure to music to activate it).

    The metaphor I made for myself trying to make sense of it is like this. It’s easy for us to recognize colors immediately. But suppose I gave you a list of 50 gradations of gray, from white to black, on the grayscale. It’s pretty easy to have “relative pitch” – to say which one is darker than the other, and even to “identify intervals” with some training. But even if you gave a separate name to each of the 50 grey shades, would you be able to recognize it? Having perfect pitch is like having each gray shade a separate thing in your perception, clearly identifiable, just as “red” and “green” are for us. Teplov goes into a lot of detail about how children with perfect pitch don’t need to “train” to improve it, seemingly they just need to know the names. They “see” the pitches as perfectly separate nameable entities from birth, they just need guidance with naming.”

    Another poster made the interesting comment:

    “Thanks so much for the summary, it was fascinating. I don’t think I fit cleanly into the categories, though. I think my timbre-trained neural network does the heavy lifting, but the primary-overtone-trained one exists and always works as a failsafe.”

    • Skivverus says:

      As a linguist (by training, not profession), I wonder if words vs. phonemes would be a useful metaphor here: I expect I could, with practice, learn which tone corresponds with middle C reliably enough; but you’d have better luck asking me to hum Aeris’ theme or Beethoven’s 5th in the same key as the orchestra.

    • One Name May Hide Another says:

      Thank you for posting this! I have been thinking a lot about perfect pitch lately following my discovery of the work of Diana Deutsch. I’ve never read Teplov and I’m now going to look him up. Anyway, Deutsch is a researcher studying perfect pitch and musical paradoxes. Similarly to Teplov, she has compared the lack of perfect pitch to color anomia (when a person can tell the difference between colors but cannot name them.) She has also discovered what she called The Tritone Paradox, which proves that people without the standard absolute pitch actually do have what she calls “implicit perfect pitch.” (More below.)

      Her work is fascinating because it implies that we can teach our children perfect pitch. (Parents of kids aged 4 and below, get in here! =)) That was what initially got me interested in it. Now I’m thinking of either making an app or at least a video illustrating her ideas. (Anyone up for a collaboration?)

      I’m going to try to post a link to Deutsch’s website in a separate comment. (Scott’s spam filter usually doesn’t let me post links.) Here’s an excerpt from her article on the Tritone Paradox:

      The Tritone Paradox provides a good example of implicit absolute pitch. The basic pattern that produces this illusion consists of two successively presented tones that are related by a half-octave (an interval known as a tritone). The composition of the tones is such that their note names are clearly defined, but they are ambiguous with respect to which octave they are in. So, for example, one tone might clearly be a C, but in principle it could be Middle C, or the C an octave above, or the C an octave below. So when listeners are asked to judge whether a pair of such tones forms an ascending or descending pattern, there is literally no correct answer.
      I found that, surprisingly, when one such tone pair is played (say, D followed by G#) some listeners clearly hear an ascending pattern, whereas other listeners clearly hear a descending one. Yet when a different tone pair is played (say, F# followed by C) the first group of listeners now hears a descending pattern whereas the second group now hears an ascending one. Furthermore, for any given listener, the pitch classes generally arrange themselves perceptually with respect to height in a systematic way: Tones in one region of the pitch class circle are heard as higher, and those in the opposite region as lower.
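
      For anyone curious how such octave-ambiguous tones can be constructed, here’s a minimal sketch in Python (my own illustration, not code from Deutsch’s materials): sum octave-spaced sine partials of a single pitch class under a fixed bell-shaped spectral envelope, so the note name is well defined but the octave is not.

      import numpy as np

      SAMPLE_RATE = 44100

      def ambiguous_tone(pitch_class_hz, duration=0.5, n_octaves=6):
          # Octave-spaced partials of one pitch class under a fixed log-frequency envelope.
          t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
          centre = np.log2(1000.0)  # envelope centre in log-frequency (arbitrary assumption)
          tone = np.zeros_like(t)
          for k in range(n_octaves):
              f = pitch_class_hz * (2 ** k)
              weight = np.exp(-((np.log2(f) - centre) ** 2))  # bell-shaped weight
              tone += weight * np.sin(2 * np.pi * f * t)
          return tone / np.max(np.abs(tone))

      # A D followed by a G# (a tritone apart); per the excerpt, listeners disagree
      # on whether such a pair ascends or descends.
      d_tone = ambiguous_tone(146.83)    # pitch class D
      gs_tone = ambiguous_tone(207.65)   # pitch class G#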

      • perrinwalker says:

        I’d be interested in collaborating on this app/video idea of yours. Sounds like a great project. I have perfect pitch, I’m musically trained, and I’m good at sound design and audio engineering 😉 I’ll give Deutsch’s paper a read tonight.

        • One Name May Hide Another says:

          Fantastic! Let’s do it. I’m thinking we could start with a video because it’d have a faster timeline and would overall be a smaller project, so we’d be much more likely to actually get it done.

          Sounds like a great project. I have perfect pitch, I’m musically trained, and I’m good at sound design and audio engineering

          Seems like I’m in luck then because I have none of these skills. 😉 I can make funny videos, though.

          Broadly, what I’m thinking I’d like to do is make an entertaining explainer-type video showing what the Tritone Paradox is, and giving some other interesting info related to perfect pitch. Intended audience: people who might not have background in music, but who like reading about random interesting stuff. =)

          So, anyway, let me know what you think of the Deutsch write-up. (If you are interested in reading more, the footnotes have a link to a Scientific American article that goes into a bit more detail on the subject.)

    • Winter Shaker says:

      Several people, including my singing teacher back at school, are of the opinion that perfect pitch can be learned, and a while ago, I decided to see if I could find a ‘teach yourself perfect pitch’ app for my phone.

      I tried several, and literally none of them had a system for re-setting your lack of orientation – i.e. once you hear a note, say what you think it is, and the app tells you whether you were right, then when you hear the next note there’s no way it can strengthen your perfect pitch, because you can, and inevitably do (if you have any decent level of musical training at all), just use the previous note as a reference.

      I presume that, if the app plays a note, I guess a C, say, and it tells me it’s a D, rather than just straight ahead playing an A next (which I can identify as an A because it’s a fifth above the D that I can still remember), it should first play a confusing scramble of many notes in quick succession, not just the 12 chromatic notes but also lots of finer subdivisions in between, which should hopefully distract your brain from holding the memory of the previous test note so as to come at the next one with fresh ears. I suppose ideally it would be a sequence freshly randomised each time so you don’t get used to it.

      This seems like a glaringly obvious thing to do, without which no one could possibly expect a perfect pitch training app to work, and yet none of them had it. What am I missing? I don’t think my ability to detect intervals is likely to be much better than the average musician, and I don’t think the random sequence of notes would be difficult to code, by comparison with the difficulty of coding the whole app in the first place.
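
      For what it’s worth, here’s a minimal sketch of what that “reset” step might look like in Python (standard library only; the note count, pitch range, and durations are arbitrary assumptions, and it just writes a WAV file rather than playing audio):

      import math
      import random
      import struct
      import wave

      SAMPLE_RATE = 44100

      def sine(freq, duration):
          n = int(SAMPLE_RATE * duration)
          return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

      def scramble(n_notes=20, low=220.0, high=880.0, note_len=0.12):
          # Pitches drawn log-uniformly, so most fall between the chromatic steps.
          samples = []
          for _ in range(n_notes):
              freq = low * (high / low) ** random.random()
              samples.extend(sine(freq, note_len))
          return samples

      def write_wav(path, samples):
          with wave.open(path, "w") as f:
              f.setnchannels(1)
              f.setsampwidth(2)
              f.setframerate(SAMPLE_RATE)
              f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

      # Between trials: play the scramble, then the next randomly chosen test note.
      write_wav("scramble.wav", scramble())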

      • perrinwalker says:

        One thing I’ve seen that works marginally well is to always jump by two or three octaves. That makes it harder to identify the interval, though you’re right, once you’re oriented, you’re oriented.

  8. Randy M says:

    What would be easier, making some portion of an extra-terrestrial object minimally habitable (which likely entails terraforming the entire atmosphere, at least) or making the Sahara fertile land without radically altering the climate elsewhere?
    My expectation is that terraforming Mars or Venus is really, really hard but maybe not impossible, while changing the climate of just one portion of our planet may be impossible. Perhaps an artificial mountain range along the coast could trap moist air currents inland? Or large barriers offshore, etc. It’s allowable here to change rainfall or temperature offshore, so long as you don’t send more hurricanes towards the Caribbean or something like that.

    • Dr Dealgood says:

      Terraforming probably sounds much easier than it would be because we’re very bad at conceptualizing the scales involved.

      The Sahara has an area of roughly 9.4E6 km^2, while the Martian surface is about 1.448E8 km^2. So whatever we’d have to do to get the Sahara habitable, we’d have to do roughly fifteen times as much of it to Mars. That’s not counting the fact that the Sahara already has a breathable atmosphere, as well as being able to support native vegetation and animal life. Nor the fact that getting to Mars is a one-way trip by rocket, while you can cross the Sahara on a camel.
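
      If anyone wants to check that ratio, it’s a one-liner (a quick back-of-the-envelope in Python, using the rounded figures above):

      # Rough surface-area comparison, rounded figures.
      sahara_km2 = 9.4e6   # approximate area of the Sahara
      mars_km2 = 1.448e8   # approximate total surface area of Mars
      print(round(mars_km2 / sahara_km2, 1))  # about 15.4, i.e. roughly fifteen Saharas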

      I just don’t see what the advantages are for Mars.

      • rlms says:

        I think the purported advantage is that you don’t have to worry about controlling side effects, since you’re operating on the whole planet.

        • Randy M says:

          Yep, exactly. Plus, the fail case on Mars is a still-uninhabitable planet, while the failure case for localized geo-engineering is something like non-stop monsoons throughout Asia or perpetual drought in Europe or the like.

          But I agree with the good Dr: if it can be done without significant drawbacks, the Sahara would take much less work.

          • Isn’t Los Angeles mostly desert by nature?

            Given the resources, why couldn’t you just irrigate the Sahara? If you have lots of power for desalination, there is water conveniently close. And, speaking of power, the Sahara has a lot of sunlight.

          • Randy M says:

            Well perhaps fertile was the wrong word. I was asking about changing local climate permanently, so things like fertilizer & irrigation aren’t needed to make it lush.

      • John Schilling says:

        I just don’t see what the advantages are for Mars.

        A planet’s worth of natural resources that haven’t been picked over by a few trillion man-years of prospectors and strip-miners, and a shallower gravity well offering more energetically convenient access to the rest of the resources of the inner solar system.

        Also, the hundred million or so kilometers of hard vacuum between it and everything and everyone that has made most of the green and fertile Sahara-adjacent regions of Earth relatively unpleasant places to live doesn’t hurt.

        What are the advantages of the Sahara, over the places people live now? It’s not like we’re running out of room to put people (or farms) in any absolute sense. If we’re going to terraform anyplace, it needs to offer some advantage over e.g. Libya.

        • Dr Dealgood says:

          (I didn’t pick the Sahara as an example, it’s not exactly my idea of prime real estate.)

          The question “Mars or the Sahara?” boils down to which obstacle to colonization you think is more easily overcome, political or engineering.

          Getting enough people to Mars, with both a plausible plan for how they’re going to survive there and enough equipment to make sure they do, would be the single most impressive technological feat ever performed by human beings. You almost certainly understand the challenges there much better than I do, so I won’t belabor the point.

          Making an artificial oasis within the Sahara and expanding it, by contrast, would still be technically difficult but isn’t literally rocket science. The real make-or-break challenge would be, as you say, balancing the need to chase off any local Berber and Arab militias with the need to avoid pissing off Washington and Paris.

          I don’t know what sort of economy you could make out in the middle of the Sahara to justify the costs of colonization. But would-be Mars colonizers need to answer the same question, plus the cost of rocket fuel to connect that economy back to Earth.

          • aNeopuritan says:

            The place already having people in it is a mere “political problem”. The people who suspect a Right bias in the comments are of course crazy.

      • Dissonant Cognizance says:

        The advantage for Mars, and, I think, the reason people romanticize Martian colonization, is that no extant political entities claim any part of it.

        That might also be part of why extant political entities are hesitant to commit to manned exploration.

        • Wrong Species says:

          Seasteading would be far more viable, then.

          • beleester says:

            Interestingly, I once read a book that proposed making underwater colonies as a stepping stone for space colonies. Both of them require you to make a self-sustaining sealed environment, so a lot of the tech for one is applicable to the other, and the underwater habitat could actually be productive in the meantime so you can afford your space program.

          • houseboatonstyxb says:

            Seasteading might be more free politically than any land location, but floating-in-space colonies (‘Spacesteading’?) would potentially be more free — especially if designed to move fast.

    • Douglas Knight says:

      My understanding is that it would be pretty easy to irrigate the Sahara: just use solar power to desalinate water and pump it around. The cost of desalination is mainly the cost of the energy, so if you install lots of solar panels for existing people to use, the excess power at noon is basically free for tasks like desalination.

      What negative effects would this have? I don’t think it would have climatic effects. This (open access) paper warns that the dust from the Sahara currently fertilizes the Atlantic and the Amazon. And locusts.

      • Eric Rall says:

        There’s at least one good-sized chunk of the Sahara that’s below sea level, and it would be expensive but technologically feasible to dig a series of sea-level canals allowing water from the Mediterranean to flow into it. The sun would evaporate the water (leaving a hypersaline lake), leading to increased precipitation downwind of the lake. The flow of water through the canals could also be used to generate hydroelectric power.

        The project was seriously considered in the 1960s-1970s, but abandoned because of cost, the need to evacuate the present population of the depression (about 25,000 people in the area that would be flooded, plus about 60,000 people in oases that would have their groundwater contaminated with salt from the new sea), and concern over the side effects of using underground nuclear explosions to cut down on excavation costs.

        Wiki article

        • skef says:

          Seems likely that a present-day project would use siphons rather than canals. Never mind, likely too much elevation in between.

        • Randy M says:

          Thanks, that’s interesting. My knowledge of geology is lacking.

      • Douglas Knight says:

        For completeness, the energy of desalination is about 10 megajoules per ton. That’s enough energy to lift the water a kilometer. About half of the Sahara is at 200m.
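
        A quick back-of-the-envelope check of that equivalence in Python, using round numbers (g ≈ 9.81 m/s^2):

        # How high would 10 MJ lift one tonne of water against gravity?
        g = 9.81                 # m/s^2
        energy_per_tonne = 10e6  # J, rough desalination energy per tonne
        mass = 1000.0            # kg in a tonne
        print(round(energy_per_tonne / (mass * g)))  # about 1019 m, i.e. roughly a kilometer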

    • Reasoner says:

      Here’s a scheme for terraforming the Sahara, should help with global warming too: http://www.superchimney.org/

  9. The Nybbler says:

    It seems to me that telling people to avoid culture war topics is like telling them not to think about a hippopotamus.

  10. xXxanonxXx says:

    I recently finished Our Mathematical Universe, and there’s something that’s been gnawing at me. When Tegmark describes the quantum suicide experiment he mentions how terrible it would be for the experimenter to personally know the Everett interpretation of QM is correct, but not be able to convince anyone else of this. I don’t understand the reasoning at all.

    The experiment works because the parameters are such that the only universe you will experience anything in is the one where you beat all the odds and survive. Doesn’t it then follow that the only universe you will experience anything in is also the one where everyone else in the lab watched you beat the odds and survive the experiment? In which case they are also convinced the Everett interpretation is correct?

    • suntzuanime says:

      No, it would just be a freak accident from their perspective. The theory underlying quantum suicide suggests that they should see you blow your head off. Seeing you survive is unlikely under any theory, so it’s not evidence that any theory is right.

      The theory underlying quantum suicide is pretty dumb, fwiw.

      • xXxanonxXx says:

        Why would it be a freak accident from their perspective and not a freak accident from your (the subject’s) perspective (maybe that’s what you mean when you say you think the whole thing is dumb)? This is the heart of my confusion on Tegmark’s position.

        Again, the experiment only works because it’s set up so that by definition you only experience the universe where you survive (i.e. if you could somehow know an atom had decayed prior to it triggering the weapon, or continue to experience life even a second after having been shot, the experiment would be ruined). The only thing privileging the subject’s perspective is that they’ll necessarily “end up” in the improbable outcome. There’s nothing about that which entails that other observers who “end up” there with you won’t also realize what just happened is wildly, absurdly improbable, to the point of making “freak accident” the understatement of the universe. It would be so much of a freak accident that the building losing power immediately prior to the experiment starting, an asteroid striking and destroying the suicide device, or Jesus manifesting and stopping the bullet like Neo were all more probable outcomes.

        • suntzuanime says:

          Yes, this is why it’s dumb. But in theory it wouldn’t be improbable to you, only to the observers. To you it was the only possible thing that could have happened, because it was the only thing compatible with you continuing to observe. It’s basically a grotesque abuse of anthropics.

          • xXxanonxXx says:

            I think this largely comes down to one’s tolerance for anthropic arguments. Mine is pretty high. Do we agree though that the only “you” who experiences anything would be incredibly impressed by what just happened, as would everyone else in the lab (where “you” find yourself)?

          • suntzuanime says:

            Again, if the you that experiences things believes the theory, they should only be impressed in the swinging pendulum sense of “I have predicted this outcome infallibly with Science, but it’s still impressive to watch it play out”. The others in the lab would be impressed at an actual freak occurrence.

    • rahien.din says:

      I think Niels Bohr was correct when he located the observation event (and the source of the waveform collapse) within the Geiger counter, rather than within the scientists deciding when to open the box. Similarly, Einstein said “Nobody really doubts that the presence or absence of the cat is something independent of the act of observation.” Waveform collapse occurs when the physical system is physically disturbed by the physical measurement apparatus, not the instant a physicist decides to ponder it. In other words, the thoughts of the guy in the box aren’t participating in the persistence or collapse of the wave function. That, instead, is determined by the interaction between the radioactive substance and the radioactivity detector.

      From that perspective, Schroedinger’s cat is just a bad thought experiment, because it uses the target phenomenon within the construction of the thought experiment. It’s like using a fish to metaphorically describe a fish – of course it’s weird and confusing.

      Doesn’t it then follow that the only universe you will experience anything in is also the one where everyone else in the lab watched you beat the odds and survive the experiment?

      Let’s call the man in the box the subject. Iteration 0 is the live subject before any decay takes place. In each iteration, there is a new dead subject and the live one persists, each in separate worlds. Thus, there are exactly N+1 subjects, where N is the number of iterations. There is 1 world in which the subject survives, and N worlds in which they do not. We can determine the odds that they will find themselves in a world in which they survive: 1/N.

      It might help to think of all the worlds existing simultaneously at the start of the experiment (instead of branching off from a single tree). There are N iterations, and thus, there are N+1 worlds. All subjects start off alive, and with each iteration one of them dies. At the end, only one subject survives.

      Thus it’s not correct to say that the subject definitely survives, because that implies that there is only one subject and only one consciousness, but the many-worlds premise of the experiment explicitly entails many subjects, all of whom experience something and most of whom die. One can only say that every instance of this experiment entails a single world in which the subject survives, though no one can predict which world we will find ourselves in.

      To go out on a limb, I think this demonstrates that the Copenhagen and Everett interpretations don’t actually conflict. The Copenhagen interpretation accurately describes the decaying survival chances of a single subject within a single world. The Everett interpretation describes what it would look like if each potential outcome was represented within its own distinct world and we could see all those worlds.

      CMWIW.

      • suntzuanime says:

        The issue is that a lot of people take a magical view of their own consciousness, so they think there’s some “self” that gets concentrated in the living branch when the guy in the other branch dies.

        So instead of “All subjects start off alive, and with each iteration one of them dies. At the end, only one subject survives.” it’s “The subject starts off alive. With each iteration, a world with a corpse is split off, but the subject remains alive in the other world. At the end, the subject is still alive.”

        • rahien.din says:

          Agreed!

          I can see why these thought experiments have the subject die (to avoid an exponentialization of these binary worlds, which would shift the focus away from the question at hand) but the natural reaction to death distorts the thinking.

          Maybe, instead of a canister of cyanide gas, we hit the subject with a water balloon filled with permanent blue dye. No death, irreversible binary variable, consciousness preserved.

          • xXxanonxXx says:

            Again, I think this destroys the entire point of the thought experiment. If consciousness is preserved in any of the worlds other than the wildly improbable one, then you’d expect to be hit with the water balloon and/or shot. Terminating consciousness before it can occur in any of the probable worlds is the only way you ensure consciousness ends up in the improbable one.

            I realize even using phrases like “ends up in” hopelessly confuses things, but I lack a better way to describe it. I agree with Suntzuanime that this is an anthropic argument.

          • suntzuanime says:

            The hopelessly confused idea of consciousness “ending up in” a branch is the core of the hopelessly confused idea of quantum suicide. If you stopped treating consciousness as magic you’d see that the quantum suicide experiment just ends up with a lot of splattered brains and very few intact ones.

            Treating consciousness as magic is where anthropic reasoning goes predictably astray every time.

          • rahien.din says:

            Terminating consciousness before it can occur in any of the probable worlds ensures consciousness ends up in the improbable one.

            That may be the aim of the thought experiment, but I don’t agree that it happens. I think the subject’s consciousness is present in each of these worlds, and in most of them (at odds of N:1) consciousness ends.

            It may be clarifying to consider the subject’s co-researcher. Let’s say that in addition to the subject, there is also a single observer, and it is their job to open the box after N iterations. The observers must be just as numerous as the subjects, meaning, N+1. One of the observers will consciously observe a live subject emerging from the box. N of them will find their colleague dead.

            Critically, all N+1 observers observed their live colleague entering the box. Their timelines exist, and according to their consciousnesses, their experiences prior to the start of the experiment are identical. They watched a scientist enter a box.

            In order for there to be a single stem of the subject’s consciousness, it is necessary that the N subjects who died were never conscious – even when they entered the box. Therefore, N observers watched a p-zombie enter the box. This necessary logical conclusion, of course, is preposterous.

            It is even more preposterous when one interprets the general principle : if a person dies when they otherwise might not have, then that person who died never had consciousness, and was therefore a p-zombie.

            The resolution is in realizing that the many worlds interpretation of quantum physics is not merely an exponentialized version of the Omphalos hypothesis. It truly entails distinct, self-consistent worlds.

            I agree with Suntzuanime

            Suntzuanime can speak for himself, but AFAICT I don’t think you do.

            {Edit: slight clarification}

          • xXxanonxXx says:

            Thank you very much, but I’m capable of seeing you’d end up with a lot of splattered brains.

            Now, the you’s with splattered brains experience what exactly? If you say it’s nothing, then I think you are the one with some weird concept of magic consciousness (how does one experience nothing exactly?). If you say something, then you’re changing the parameters of the thought experiment.

          • xXxanonxXx says:

            And Rahien you cut the quote off mid sentence. I said it’s my belief that I agree with Suntzuanime it’s an anthropic argument. Near as I can tell that’s the only thing we agree on.

          • rahien.din says:

            xXxanonxXx,

            You gave me a binary choice between “magical thinking” and “not playing along.”

            {Edit : Didn’t realize this wasn’t directed at me. No worries. Thread is getting confusing.}

          • xXxanonxXx says:

            Rahien, that was directed at Suntzuanime and his comments about magical thinking. Apologies if you took offense.

          • suntzuanime says:

            Now, the you’s with splattered brains experience what exactly? If you say it’s nothing, then I think you are the one with some weird concept of magic consciousness (how does one experience nothing exactly?). If you say something, then you’re changing the parameters of the thought experiment.

            I’m curious as to what your answer is. Apparently it’s not nothing and it’s not something, so I’m excited to see how you square that circle.

          • xXxanonxXx says:

            Why do I need to answer that when you’re the one who brought up the splattered brains as relevant in the first place? I don’t see how asking questions about the worlds with splattered brains from a subjective point of view is coherent. From a God’s-eye view things will change, sure. We could see the grisly results, but the experiment never claimed it would be valuable to God, or to the people watching for that matter.

            I’m sticking with my position that the point of contention here is similar to the one between people who are bothered by anthropic arguments and those who aren’t. It’s noteworthy that in the same book Tegmark (who obviously disagrees with you about quantum suicide) says the only thing he finds surprising about anthropic arguments is that they’re controversial at all.

          • suntzuanime says:

            Well, you say the splattered brains don’t experience nothing, because that doesn’t make sense, and they don’t experience something, because that’s the point of the experiment. I may have brought them up, but if something I brought up exposes a contradiction in your beliefs, you’re still responsible for that contradiction, you don’t get to just change the subject. There’s something underlying your beliefs that you haven’t fully examined, don’t shy away from it.

            I’m not bothered by anthropic arguments per se. I’m seriously bothered by dumb arguments, which many anthropic arguments are, because people ascribe mystical properties to consciousness and identity and tie themselves up in knots.

          • rahien.din says:

            The point of contention here is similar to the one between people who are bothered by anthropic arguments and those who aren’t

            Count me in the former camp. I genuinely find the persistence of anthropic arguments baffling. Double for consciousness-causes-collapse.

            I am fine with saying “We live in a universe in which conscious life arose.” I am not fine with saying “The universe is slave to consciousness rather than to physical laws” or “The universes in which consciousness did not arise are nonexistent.”

            The many-worlds interpretation itself is not consistent with anthropic principles. The probabilities therein would necessitate that just as dead cats outnumber live ones, consciousness-free universes outnumber those that contain consciousness. They would also necessitate that our universe will likely become consciousness-free.

          • xXxanonxXx says:

            @ Rahien

            We’re mostly on the same page. I agree it’s insane to say consciousness causes collapse, or that it somehow usurps physical law, or really that it holds any sort of privileged position. As for the idea universes without consciousness are impossible? Honestly never even heard that one.

            It will be a cold day in hell before you find me speaking favorably of the strong anthropic principle (that link about Jesus manifesting was to tease Frank Tipler). The weak version, though, seems trivially true, and like Tegmark I don’t see what the fuss is about. I’m always just reminded of the metaphor about the puddle wondering why the hole it’s in is so perfectly molded to its own form (only, to extend the metaphor, it’s a puddle that has reason to believe only puddles of its own precise form can wonder at all, and suspects there might be other holes out there).

            I know we won’t see eye to eye on this one, but you have helped clear up my thinking. There’s a strong temptation to slip back and forth between different levels of observation without stating you’re doing so. I’ll have to be more careful with my wording in the future.

            @suntzuanime

            You haven’t brought up a contradiction because you’re still treating it from an objective point of view when the entire point is that it only works subjectively. Or to put it another way, your objection only holds if you a) think this experiment works for other observers (which it specifically claims not to), or b) think it’s possible for someone to experience nothing. As in they’re somehow still there, actively experiencing nothingness (which would really call into question why you believe it’s others and not you who think consciousness is mystical). Alternatively, we’re just quibbling over what it means to be “you”, and you’re again insisting anyone who thinks it’s acceptable to say Kirk is still Kirk after the transporter reconstitutes him believes consciousness is mystical.

            Regardless, your pendulum comment comes as close to answering my original question as we’re going to get. I do appreciate it.

          • rahien.din says:

            xXxanonxXx,

            I agree, our ideas may not be the same but they have converged to some degree, I respect your arguments, and I’ve learned some cool new stuff from you. Which is exactly why I keep coming back here – reaching this point is literally my goal in every one of these conversations.

            By the way, that puddle metaphor is fantastic.

            Thanks!

          • suntzuanime says:

            The people who don’t think Kirk is still Kirk are the ones who think consciousness is mystical, they’re the ones who believe in some fundamental continuity of identity that a transporter could disrupt. We hard deterministic nihilists would say that Kirk is whoever does Kirk’s job, and there seems to be a lump of matter on the other end of the transporter who is interested in boldly going where no man has gone before.

            I have refrained from giving my own answer to the paradox, because I doubt you’ll find it satisfying and I didn’t want to distract you from your own introspections on your beliefs, but for what it’s worth, I’m a panpsychist. I think the spattered brains have a form of consciousness. It’s not going to be identical to the consciousness of your intact brain, but neither is the consciousness of your intact brain identical from moment to moment. It’s all just properties of lumps of matter. When you try to carry out the quantum suicide experiment, you end up with one lump of matter that thinks it went pretty well and a whole bunch of lumps of matter that have disorganized jumbles of qualia. If you call the first “me” and the rest “not me”, well, that’s up to you.

            I propose a less suicidal experiment: play the quantum lottery. Buy lottery tickets based on a source of quantum randomness, and only consider the ones that win the jackpot to be you. Proves the same point, but not as messy, and you get a million dollars as a bonus.

          • xXxanonxXx says:

            Ha, I’m not sure how much introspection there was. I haven’t budged an inch from the belief Tegmark is right about what makes the experiment special (compared to getting hit with balloons, or buying lottery tickets). Having my ideas called mystical by a panpsychist has made my entire day though. I’d try to explain why it amuses me so much, but we don’t even agree on who’s being more mystical in the transporter example. Perhaps better to just thank you for the answer and move on :p

          • suntzuanime says:

            Panpsychism is the position that treats consciousness the least magically. What could be more mundane than a property that everything in the world has?

          • Machina ex Deus says:

            @rahien.din:

            I genuinely find the persistence of anthropic arguments baffling.

            Think about it like this: in a world without humans (or human-equivalent consciousness), anthropic arguments are obviously pointless, and die out.

            But in a world that does have humans (or human-equivalent consciousness), anthropic arguments at least seem to make sense on their face. And they produce an odd frisson in people when they first discover them.

            Given that the world we actually live in contains plenty of humans, you should therefore not be surprised at all at the persistent presence of anthropic arguments.

            Does that convince you?

          • Trofim_Lysenko says:

            @Machina ex Deus

            Well when you put it like…

            ….heeey, wait a minute…

          • rahien.din says:

            Machina ex Deus,

            Haha, yes, I understand their persistence much better now.

  11. Chevalier Mal Fet says:

    My Google Fu isn’t helping me, so I’m asking here:

    Yudkowsky or Scott or someone once wrote an article or short story featuring a dragon lord that hoarded stocks and bonds and investments, and was generally good for the economy. It was like a libertarian fairy tale of some sort.

    I’m afraid that’s all I remember about the piece (hence why Google is proving difficult), but does anyone happen to know what I’m talking about? And could provide a link?

    • Aapje says:

      I think that the author of that article lacks perspective, so most of the suggestions are quite poor. History can be said to consist of relatively stable eras and bridge periods. IMHO, the late 19th century, the 20th century and probably part/all of the 21st century mark(ed) a transition to individualism. Voting rights for all men and women (& races), ending the hereditary class structure, feminism, the cultural norm that peoples should be allowed to self-determine (see the end of colonialism), universal culture, atheism, communism, gay rights, etc, etc, can all be understood as a rejection of the idea that people have an obligation to slot into a ‘caste’ and take on the rights and responsibilities of that caste, regardless of whether that fits their abilities.

      So my name for it is the Individualist Era.

      • Iain says:

        I think that the author of that article lacks perspective, so most of the suggestions are quite poor.

        Ada Palmer teaches in the history department at the University of Chicago. I think it is safe to say that she knows something about the topic. Did you miss that this is in the context of science fiction, and the descriptions are intended as speculative exercises, not predictions?

        • Aapje says:

          I wasn’t aware of who she was (I hadn’t even looked at the name of the author earlier), but a quick search reveals that she focuses on Early Modern European History, which is the period before the one being discussed in the article. So she seems to be writing about something that is not her area of expertise. Furthermore, it should be obvious that historians tend to disagree quite a bit and that their views tend to gain perspective over time. So my claim of a lack of sufficient perspective, currently, is not an accusation that she is necessarily a bad historian, just not an exceptional one.

          So your appeal to authority seems misapplied.

          It seems obvious to me that her article is not purely speculative, as she is addressing how the future will judge the current + ‘recent’ past. We have data on the current and the past. Furthermore, speculating can still be less wrong, as Scott shows when he tries to predict the future with probabilities, so that his predictions can be judged somewhat.

          Let me make specific criticisms:

          Palmer suggests that a reasonable name is the World War Era, but the scope that made us call these wars World Wars was a symptom of greatly improved societal wealth and logistical improvements that made countries far more capable of supplying huge armies for long periods. Some of the Napoleonic wars involved many nations, but involved relatively small armies and short engagements because the logistics couldn’t support more. So while the size of these wars is remarkable, and mankind has hopefully learned its lesson that we are too strong for major powers to fight like this, they are merely symptoms of deeper issues.

          Another suggestion is the Genocidal Age, but it seems to me that in so far that genocides became more frequent (?) and bigger, this was largely a symptom of the increased logistical ability, again.

          Then we have Space Age, which is silly, because our space endeavors consist of satellites (which offer significant value to humankind, but not enough to name an era after them) plus a bit of playing around. If we colonize space in the future, then maybe that future will be called the Space Age, but not the current age.

          ‘The Educational Revolution’ is actually significant, but IMHO, again the root cause is overlooked. Education for all came about because of individualism.

          Anyway, when reading the article I got fairly irritated because it just threw out a bunch of names without any apparent effort to even give a decent argument for or against. But perhaps I’m too demanding of an article that was probably intended as advertising.

          • JulieK says:

            How about “The Global Era”? That captures both that we have planet-scale things like world wars, and that we don’t have many off-planet things yet.

        • Iain says:

          You are completely misreading the post.

          Tor is a science fiction publisher, which just published Ada Palmer’s second novel. Ada Palmer is not trying to make predictions about what the future will call the present. She is discussing science fiction: how much world-building can we do, in an imaginary future, just by varying the name that is used for the (real-world) current day? This is not subtle: for one, the post starts with “One question a science fiction writer faces when world-building a future Earth…”

          These are not designed as perfect descriptions of the modern day. They are exercises in exploring how much interesting back-story you can imply about the changes between now and a fictional future with a single label. If this century is called the World War Era, but the story is set centuries in the future, then you are implying that wars have become a thing of the past. If this century is called the Space Age, then “either humanity has given up on space exploration, and considers it a blip in their past like the 19th century vogue for spiritualism or the 12th century obsession with Aristotle — or it means there’s some new frontier beyond space which makes the Space Age feel as quaint to this future as the Age of Sail does to us.” Those are fun speculations! They cause us to imagine all sorts of interesting things! They are not the sort of thing that can be evaluated as “right” or “wrong”.

          This is all right there in the post, and I am deeply confused by how you have managed to so completely miss the point.

          • Aapje says:

            @Iain

            I get the point that the way the future develops will determine in part how this era is viewed. I just consider it extremely unlikely that, if the future has no more genocides, this era will be considered the Genocide Era. I consider it extremely unlikely that, if the future has no more world wars, this era will be regarded as the World War Age.

            I consider these bad future world building ideas as they do not logically build on the present, as the writer is clearly trying to do, IMO.

          • houseboatonstyxb says:

            On a timeline that stays pretty mundane (no FTL, no aliens), the first impressive Lagrange colony might be a Schelling Point for start of a Lagrange Era — ie, building new lebensraum at convenient locations using technology that worked on the first. If so, our 50+ years could be the “Pre-Lagrange Period”.

          • Aapje says:

            @houseboatonstyxb

            I probably define space colonization as something more extensive, as I see a very high level of autonomy as a necessary condition. It seems to me that it’s extremely unlikely that we can achieve a space station with a fully circular economy.

            It seems more likely that we first manage to establish a pretty much autonomous colony on a planet. At that point I would say that you can talk about the real start of space colonization (as opposed to space exploration).

          • houseboatonstyxb says:

            @Aapje
            I probably define space colonization as something more extensive, as I see a very high level of autonomy as a necessary condition. It seems to me that it’s extremely unlikely that we can achieve a space station with a fully circular economy.

            It seems more likely that we first manage to establish a pretty much autonomous colony on a planet. At that point I would say that you can talk about the real start of space colonization (as opposed to space exploration).

            Since this is about fiction, you can have your history go either way.

            From the Tor article:
            Even more can be packed in if you use a historical name which—like Late Antiquity or Early Industrial Revolution—implies that our centuries are mostly important for their relation to some even more important neighboring era.

            Imo that fits our time (assuming that we will soon go into an important Age of Close Colonization or whatever). We’re nearly ready to do big things in nearby space, but haven’t quite had the will to do it (where’s JFK?). So the future’s name for us would be The Pre-CC-Era or something.

            For your view, you might use The Plymouth Crater Colony as a Schelling Point for the start of colonization. I used a Lagrange colony for better dramatic effect. I like the human interest and irony of our current rejection of the idea as being out of date, un-cool, though in fact it does turn out important. If we do really go into space soon, our 50+ year delay will look silly, un-cool, a quaint little backwater in time….

          • Aapje says:

            If we do really go into space soon, our 50+ year delay will look silly, un-cool, a quaint little backwater in time….

            Only if the future has extensive colonies or such that provide substantial benefit to humanity. That is hardly a given, even if we do manage to get some permanent presence on say, Mars.

            Our current decisions would then only look silly with a very heavy dose of 20/20 hindsight, IMO.

            But a lot of scenarios where we really need space colonies in the relative short term involve us messing up the earth, so I then would expect historians to blame us for not taking better care of earth at least as much as they would blame us for not pushing space colonization harder.

          • hlynkacg says:

            I think that you are entirely too hung up on this “substantial benefit to humanity” nonsense. If we skip forward 200 years* and find that colonies are indeed a thing, it makes perfect sense to call our current era “the pre-colonial period” or some such, especially if it’s being named by the descendants of said colonists. Sasa kennst bernata?

            *It’s highly unlikely that we’ll wreck Earth to the point of uninhabitability before that.

          • Aapje says:

            @hlynkacg

            I never disagreed with the ‘pre’ names, as those are not really descriptions for a period, but merely indicate that a period is before a named era.

            Note that Ada Palmer didn’t propose pre-Space Age, but ‘Space Age.’ I specifically took issue with that.

          • Nancy Lebovitz says:

            Aapje, there were some really big notable genocides close together in the 20th century, so I don’t think it’s crazy to call it the era of genocide. We could even go for Megadeath to include the world wars and the Spanish flu.

          • Iain says:

            I don’t know how many times I have to repeat: these names are not claiming to be realistic portrayals of likely future scenarios. They are hypotheticals: if the future did call the present “the Space Age”, what does that imply about what must have happened between then and now?

            It doesn’t matter if it’s unlikely. It matters if it is interesting. This is in the context of writing science fiction — if the world that you want to write about requires something unlikely to happen, you are well within your rights to just declare by fiat that it happened, and then explore what secondary effects that would have.

            Suppose that Charlie Stross at his most pessimistic is right, and human colonization of space is a dead end. We spend a long time and a lot of money trying to colonize Mars and the asteroid belt, only to discover that our artificial ecosystems are not stable without unsustainable resupply missions from Earth. We eventually abandon our colonies and limp home to Earth with our tails between our legs. In retrospect, we call the whole failed experiment “the Space Age”. Most scholars date the beginning to July 20, 1969, when Neil Armstrong first set foot on the moon, although arguments have been made for as early as 1903 (the first powered flight) or as late as 2041 (the first baby born on Mars).

            Plausible? Who knows! But it certainly provides an interesting backdrop for a science fiction story of some sort. Maybe it just justifies setting a far-future story on Earth. Maybe it increases the stakes of a potential ecological catastrophe. Maybe the last remaining descendants of the abandoned asteroid colonies are plotting vengeance by dropping a comet on the Earth scum who condemned them to a life of near-starvation in the outer darkness, and the denizens of Earth must piece through ancient texts to find the forbidden knowledge necessary to save humanity. There are all sorts of ways to build a world in which “the Space Age” means something interesting. You don’t get to claim that they are not logical enough, and therefore bad. That’s not how fiction works.

          • Nancy Lebovitz says:

            Iain, thank you. That’s what the article is about.

            By the way, my original comment was held for moderation. I suspect this happened because there was a three-letter sequence in the url which could have been culture war fodder.

            Still, even though people say science fiction isn’t about prediction, part of what makes it interesting is its relationship to the real world, so I’m also interested in plausible names for our present era, just so Ada Palmer isn’t blamed for taking the science fictional angle.

          • houseboatonstyxb says:

            Nancy, I agree with you and Iain that this exercise isn’t about real prediction; but thinking of scenarios that would fit under the old Hard SF rules is fun too.

            As for plausible names for our present era, you’d be on safer ground if you stuck to Post-X names rather than Pre-X.

          • houseboatonstyxb says:

            @ Aapje
            >> If we do really go into space soon, our 50+ year delay will look silly, un-cool, a quaint little backwater in time….

            > Only if the future has extensive colonies or such that provide substantial benefit to humanity. That is hardly a given, even if we do manage to get some permanent presence on say, Mars. Our current decisions would then only look silly with a very heavy dose of 20/20 hindsight, IMO.

            True. For simplicity, I was assuming a pretty long and stable colonial future.

            >But a lot of scenarios where we really need space colonies in the relative short term involve us messing up the earth, so I then would expect historians to blame us for not taking better care of earth at least as much as they would blame us for not pushing space colonization harder.

            Even if the colonies aren’t shipping home treasure, if we can move some of our dirty industries up there, their absence will be a benefit to the planet.

          • houseboatonstyxb says:

            Hm….

            Orbiting colonies are quite open to High Modernism. No existing terrain to bulldoze. No mountains or seas to build between. Blank slates in all directions, as many as you like. Wide open for pre-fab apartment-pods made of ticky-tacky. Top-down design and rule in each colony independent of the other colonies — since in space there is no top or bottom.

            Otoh, the wide open spaces up there are also open to individual creativity. Want to build a colony laid out like Venice, London, Paris? Disneyland? — The limits would be in what stresses your materials could support.

          • Aapje says:

            Even if the colonies aren’t shipping home treasure, if we can move some of our dirty industries up there, their absence will be a benefit to the planet.

            What industries are dirty and yet don’t produce physical products that would have to be shipped back to earth to benefit us?

            I have trouble coming up with a scenario in which a colony gives a decent benefit to earthlings, given existing and near-existing technology (like space elevators). Aside from solving the single-point-of-failure issue, of course.

            Orbiting colonies are quite open to High Modernism. No existing terrain to bulldoze. […]

            It seems to me that the main problem is bootstrapping/getting started (this was a huge challenge when colonizing America too, a lot more colonists would have died if native Americans hadn’t helped).

          • houseboatonstyxb says:

            @Aapje
            What industries are dirty and yet don’t produce physical products that would have to be shipped back to earth to benefit us??

            Here’s an advantage of an orbital station vs e.g. Mars: no gravity well at the station end. Just “throw rocks”, i.e. drop the shipment to Earth. With a dirty industry like mining, mine the asteroids.

            It seems to me that the main problem is bootstrapping/getting started (this was a huge challenge when colonizing America too, a lot more colonists would have died if native Americans hadn’t helped).

            There were no baby steps across the Atlantic.

          • Aapje says:

            @houseboatonstyxb

            Sure, but it seems to me that this will become increasingly economically viable as we deplete easily mined resources + as we develop better robots that can do much of the work. It seems that this was a bad business case in the past, even if you’d force the earth mining companies to clean up much of their pollution; as well as very impractical given the technology of the past.

            Your earlier claim was that future generations might look down on us for not doing this 50 years ago, but do we look down on the Greeks for not further developing the steam engine (google: Aeolipile) and/or having an industrial revolution? I think current historians generally recognize that past civilizations had limited resources and couldn’t do it all; and that breakthroughs often depend on multiple conditions being satisfied. So if I project this same mindset onto future historians, I think that they won’t be as harsh as you. Especially as the last 150 years have seen unprecedented rates of progress in many dimensions.

        • suntzuanime says:

          It’s still silly to call the modern era the Age of Genocide, when genocide has been a pretty constant feature of human social interaction since humans split into different genos. If anything, this is the Age of Feeling Bad About Genocide, how’s that for some fun sci-fi worldbuilding?

          • Iain says:

            Depends on whether we have any especially egregious genocides in our near future, doesn’t it? Or on whether future historians will have political reasons to focus on current atrocities and sweep past ones under the rug. World-building!

            (From a modern standpoint, I also don’t think it’s unreasonable to draw a distinction between the soulless industrial genocides of the twentieth century and the artisanal hand-crafted genocides of yesteryear.)

          • Gobbobobble says:

            It may be 100% organic, but does this artisanal hand-crafted genocide contain GMOs?

          • Protagoras says:

            Wow, people have a really hard time avoiding culture war topics.

          • hlynkacg says:

            The age of guilt.

            …I think I might actually use that one.

          • dndnrsn says:

            The Age of Culture Wars!

          • Aapje says:

            @Iain

            Depends on whether we have any especially egregious genocides in our near future, doesn’t it?

            At most you can argue that, just as might happen with war, we built up to a crescendo and got so upset at our destructive capabilities that we stopped doing it to that extent.

            However, it seems extremely ethnocentric to believe that these events that impacted us greatly, but have relatively little obvious effect on the course of humanity (compared to such things as the industrial revolution, steam engines, etc) will have huge meaning to future generations, especially as humans seem quite poor at appreciating the absence of bad events.

            For example, people don’t commonly talk about the Age of Crusades. Or the Age of Bad Medicine (quite a big age, that). The Age of Slavery. The Age of Serfdom. Etc.

            PS. I suggested a better name and instead of debating that and my reasoning for it, the focus is on what was basically my introduction to my actual comment…

  12. A question for some of our medically informed commenters:

    I have recently noticed that my knees hurt a little when I go up stairs. I conjecture this is a mild case of rheumatoid arthritis. Is there anything I should be doing about it, other than taking the elevator?

    There are various non-prescription pills that claim to be good for joints. Do any of them actually work?

    • James Miller says:

      I had soreness in my ankles and wrists and found that Glucosamine and infrared light therapy eliminated the pain.

    • The Nybbler says:

      There’s medications for rheumatoid arthritis, though a lot of them have a daunting list of side effects. But wouldn’t it be more likely to be osteoarthritis? Rheumatoid is the autoimmune one, osteo- the degenerative one. I have osteoarthritis in one knee. I took glucosamine for a while with no noticeable effect.

      (disclaimer: not a doctor, only occasionally play one on the net, for amusement purposes only)

    • Fish oil has mixed positives (not interested in digging up research). But the cons/risks to fish oil seem to be about zero, so might as well take it.

      Also people seem to respond very well to yoga.

    • Nancy Lebovitz says:

      You might want to look into Feldenkrais Method or other ways of improving coordination.

      My knees are in better shape than they were a few years ago, but my issue was going down stairs rather than up stairs, if that matters.

      I *think* Dragon and Tiger Qigong made the big difference, but I try enough different things that it’s hard to be sure.

    • Dog says:

      Assuming this is osteoarthritis (more likely unless you have specific reasons to suspect rheumatoid), there are different approaches:

      Lifestyle Interventions
      *Maintain a healthy weight
      *If you are pre-diabetic / diabetic, get your blood sugar under control
      *Exercise that strengthens the muscles that support the knees
      *Avoid joint injury – injured joints are much more likely to develop arthritis

      Pills
      *Omega 3s seem to work in guinea pigs: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3176911/
      *Glucosamine / chondroitin / hyaluronic acid all probably help: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3150191/
      *There seems to be some evidence for antioxidants / vitamin E / vitamin C. You need to be careful with antioxidants and vitamin E though, too much is not good for general health / longevity.

      I would recommend this review, it’s pretty thorough. If it were me, I’d be taking the omega 3s and the glucosamine / chondroitin.

      • Thanks. I already take Omega 3s, should probably take glucosamine.

        I don’t think my weight is enough to be a general health problem, but it’s substantially more than it was when I graduated from college and this looks like an argument for trying to push it a little farther down.

    • Reasoner says:

      Another thing to try: go to a massage therapist, tell them about the issue, and see what they’re able to do for you. It might be in the muscle rather than the joint. Unfortunately mainstream medicine is bad with chronic muscle problems. The result is that we have a hodgepodge of alternative medicine professionals (massage therapists, acupuncturists, yoga teachers, marijuana dealers etc.) who do the best they can (but typically much better than mainstream medical professionals do, in my experience).

  13. Montfort says:

    Calling John Schilling: do you happen to be an aerospace expert and/or an NK watcher?

    U.S. aerospace expert John Schilling, a contributor to the 38 North North Korea monitoring website, said the motor appeared too big for any ICBM North Korea has been known to be working on, but would be a good fit for the second stage of a new space rocket it is planning – or for a yet-unknown ICBM design

    (from article here)

    • John Schilling says:

      That’s me, yes, albeit thirdhand. The firsthand, long form version is here, with more along the same lines. I’ve been doing this for about a decade now; initially as a hobby but it got out of control.

  14. Art Vandelay says:

    Really awful and hard to get joke inspired by a misspelling of a logical fallacy in a previous thread:

    I was once having a debate with a friend while I was in bed. I had just finished making a point when he accused me of lying. “Argumentum ad homonym!” I countered.

  15. Mark says:

    Does ‘politically principled’ equal ‘concerned with individual liberty’?

    Obviously, with ideologies like marxism, you can have tremendously illiberal regimes that exist in the name of liberty – but I feel that now, every political system *has* to justify itself with respect to individual liberty.

    Alt-rightism is increasingly popular because people believe that the individual can only exist within a community, not because anyone fundamentally rejects the appeal of individual liberty.

    • The original Mr. X says:

      For a broad enough definition of “individual liberty”, maybe.

      As “individual liberty” is commonly used, no. I don’t think that communists, fascists, Nazis, etc. are really in favour of individual liberty as most people would use the term.

      • Art Vandelay says:

        I think it would be pretty impossible to argue that, no matter how broad your definition is, unless you also limit yourself to the Western world for the last 200 years or so.

    • Well... says:

      There are plenty of politically principled people arguing against individual liberty, and not just in the Alt Right.

      Speaking of the Alt Right though, I have noticed a dramatic decrease in political principledness there since around the time a certain moderate/apolitical NYC billionaire entered the presidential race in 2015.

    • The Element of Surprise says:

      There are ideologies that make a case for trading individual liberty for economic prosperity, safety (contra Ben Franklin), predictability of life, fairness, tradition, or divine imperative. In our times, people assume (with some empirical evidence on their side) that individual liberty goes along well with at least some of these other goals. If there were a big trade-off to be made, I’m not sure most people would value individual freedom most highly among these.

  16. Mark says:

    Are there any politically principled people who think that our current system of money creation (whereby privately created bank money is used to pay tax) is a good thing?

    I feel as if monetary reform is something that could bring everyone together.

    • suntzuanime says:

      No culture war stuff please.

      • Mark says:

        This is culture peace.

        If we look at the actual issue, rather than worry about the supposed plotting of our ideological enemies, everyone is in agreement, right?

        • rahien.din says:

          “Avoid culture war topics”

          Not

          “Bring up culture war topics within questions that encode your own opinions, and use Orwellian phraseology to preemptively imply that people who disagree and/or put up a fight are violating the suggestion to ‘avoid culture war'”

          • Mark says:

            But, is this a culture war topic?

            It’s like, if I said, ‘everyone agrees that fluffy kittens are cute, right?’

            Is that a ‘culture war topic’?

            Surely, when someone responds: ‘Actually, Mark, all left wingers *hate* kittens – they eat them.’ That’s the stage at which we get into culture war.
            As long as we can avoid that kind of thing, I think we’re fine.

            Actually, maybe you’re right. I’m tempting people to engage in culture war – but it’s kind of sad that we can’t be trusted to speak only for ourselves, or in good faith.

          • Nornagest says:

            The objective here isn’t to satisfy some platonic definition of culture wars, it’s to not get into stupid arguments. We are getting into a stupid argument. Can we stop?

          • rahien.din says:

            Nornagest,

            Well taken. Thanks.

          • Jiro says:

            “Bring up culture war topics within questions that encode your own opinions, and use Orwellian phraseology to preemptively imply that people who disagree and/or put up a fight are violating the suggestion to ‘avoid culture war’”

            The problem is that by banning culture war topics, Scott has created a superweapon. You now get to win if you can get everyone to believe that your opponent violated the suggestion to avoid culture war.

            We recently had an absurd discussion where posters were trying to prove that someone was endorsing violence (they actually seemed to be endorsing self-defense). Why? Because if you endorse violence you’ll be banned. Superweapons.

          • AnonYEmous says:

            You now get to win if you can get everyone to believe that your opponent violated the suggestion to avoid culture war.

            Well, you can only get a ban if you get Scott to believe that this suggestion was violated, not ‘everyone’. And then that guy can just repost in the next thread over. So every two weeks, you can silence someone with moderate effectiveness on one issue.

            So, uh, keep in mind that trolls might abuse newbies this way, and everyone please don’t make spurious accusations. That’s about it, lads.

          • suntzuanime says:

            You don’t win by getting a ban; you win by going into a thread that looks like it will devolve into a tedious libertarian dogma recitation (where the tedious libertarians accuse everyone else of lacking principles) and claiming it breaks the ban on culture war, so that everyone starts arguing about the ban instead of about tedious libertarian dogma.

            Or, uh, so I’ve heard.

          • rlms says:

            @Jiro
            If I had been trying to get someone banned, I’d have mentioned the words “kind” and “necessary”.

          • Gobbobobble says:

            @suntzuanime

            Another good case for the potential persistent-Hide button someone mentioned a couple threads back. I don’t want to mute commenters, I want to mute tedious subthreads. I’d take a crack at modifying the mute-commenter script but I can’t seem to find it at the moment – could someone do me a favor and remind me how to find it?

          • Jiro says:

            If I had been trying to get someone banned, I’d have mentioned the words “kind” and “necessary”.

            I’ve already objected to true/kind/necessary specifically on superweapon grounds.

          • Gobbobobble says:

            Ah, I didn’t think to check the branches. Thanks, Brad!

      • The Nybbler says:

        Is the Federal Reserve a culture war topic, or is the objection just to “politically principled”?

        I’m sure the SSC commentariat could get into a serious battle over the desirability of fractional reserve banking, but it doesn’t really seem related to the culture war; it’s about gold bugs, not mold bugs.

        • FacelessCraven says:

          I too am very confused.

          • Art Vandelay says:

            Seconded. Confusion compounded by the fact that no one’s objected to the question below of whether ‘politically principled’ equals ‘concerned with individual liberty’ on ‘no culture war’ grounds.

            But it is difficult to know where to draw the line. Does no culture wars mean we’re not allowed to debate the relative merits of country vs old-time music?

        • I think discussing the pros/cons of the Fed on the SSC comment section should be allowed — it’s just stupid.

          Central banking is very, very, very complicated, and a highly underdetermined field. Many brilliant economists and scientists have started from first principles and come up with completely different answers. Brilliant econometricians have measured the same phenomena and come to separate conclusions.

          There are stacks of journals where some of the highest-IQ people of the 20th and 21st centuries have gone back and forth – philosophically, theoretically, and you’d better believe mathematically – and consistently seem unable to converge.

          I am not one of those highest-IQ people, but I did work on a financial economic research team at the Fed for a few years. I wanted to see what it was like, because I was the type of kid who was always interested in, and debating, central banking and macro-econ throughout my undergrad and master’s.

          I don’t discuss it anymore, or debate it. It’s not worth it: it’s an underdetermined field, and humans don’t yet have enough data to come to clear conclusions. Is it true that a set of, perhaps, equally valid assumptions could justify abolishing the Fed? Yeah, maybe. Depends who you ask. It also depends on how much you think a persistent ideological bias is distorting the academic credibility of the university and Fed system (see? you can’t do gold bug without mold bug).

          Anyway, I guess meta-discussion of the phenomenon I outlined might be interesting, e.g. “Why do people online discuss central banking, when they rarely know what they are talking about, and reality seems so complex?” The actual topic itself, though – searching for an answer to the “problem of central banking” – is a pointless waste of time.

          • In case you are curious, here are my views on what the monetary system should be.

          • Mark says:

            Is this a problem for broad strokes, or just details?

            I mean, if we’re talking about details, you could probably say exactly the same about any political issue – ‘legalise cannabis? Where is your pharmacology PhD?’

            I would guess that since central banks are defined by people, and since they are operated by people, we have a pretty good idea of how they function. Question marks remain over the lower-order effects of specific policies/details.

            Anyway, I would say that the fact that the mechanics of our monetary system are unclear even to the experts counts as a fairly big point against the system.

            Too complicated, don’t bother your head, leave it to those nice fellows at Goldman Sachs.

          • Mark says:

            The biggest obstacle to developing a private currency is that any transaction conducted in a private currency still results in a tax liability that can only be settled with government approved money.

            Obviously, if *everyone* could print their own tax-money, the government would lose economic control.

            The question is whether it is necessary/good for some private institutions to be able to do it. My guess is that it’s more like one of those weird ancient rights (‘nobles excluded from tax’) more to do with patronage than economic efficiency.

            Though, sometimes patronage is necessary to get things done.

          • IrishDude says:

            @DavidFriedman

            The paper says “Starting with commodity money as the simplest case, the interest of the government has frequently been to debase it, decreasing the weight in order to make a profit on recoining heavy coins into light ones. This serves the additional function of benefiting creditors at the expense of debtors, by allowing the creditors to pay their debts in the debased money.”

            It seems like debtors and creditors should be switched in the second sentence, unless I’m misunderstanding something.

          • random832 says:

            The biggest obstacle to developing a private currency is that any transaction conducted in a private currency still results in a tax liability that can only be settled with government approved money.

            There’s no reason this should necessarily be so. One could imagine a system where tax liability can be settled with anything that the government has deemed to have value, with the “this is income” function of assessing value being inseparable from “you can pay taxes in this”.

            Basically, “the government says my income of 1000 units of privately minted currency is worth $10,000, and they want $1,000 in taxes, so they cannot refuse to accept payment in the form of 100 units of the same.”

            This isn’t the system we have in the real world, but that should be read as “the government has directly forbidden people from conducting business exclusively in private currencies” rather than as it being some philosophical obstacle to the concept.
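
            A minimal sketch of that arithmetic in Python (the $10-per-unit valuation, the 10% rate, and the function name are all hypothetical, just to make the idea concrete): the same valuation the government uses to assess the income is the one at which it accepts payment in kind.

            def tax_due_in_private_units(income_units, assessed_value_per_unit, tax_rate):
                # The government values the private-currency income in its own unit of account...
                assessed_income = income_units * assessed_value_per_unit   # e.g. $10,000
                # ...computes the tax bill in that unit of account...
                tax_bill = assessed_income * tax_rate                      # e.g. $1,000
                # ...and accepts settlement in the same private units, at the same valuation.
                return tax_bill / assessed_value_per_unit                  # e.g. 100 units

            # 1000 units assessed at $10/unit and taxed at 10% -> 100 units owed
            print(tax_due_in_private_units(1000, 10.0, 0.10))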

            The question is whether it is necessary/good for some private institutions to be able to do it. My guess is that it’s more like one of those weird ancient rights (‘nobles excluded from tax’) more to do with patronage than economic efficiency.

            I’m not sure what this means, unless you’re suggesting that the Federal Reserve is a private institution.

          • John Schilling says:

            The biggest obstacle to developing a private currency is that any transaction conducted in a private currency still results in a tax liability that can only be settled with government approved money.

            If that’s the biggest obstacle, it’s not that big.

            Money is almost by definition fungible, and markets black, white, and grey are really good at funging different sorts of currencies against one another. If the sort of money the government demands its tax payments in is poorly suited to other uses, people will use whatever does work in their private affairs and then pay a couple percent to a money-changer for their Tax Coupons.

            If a government creates and maintains a currency that is actually good at doing what people want from their currency, they can use “…and you can pay taxes in it!” as a hook to lift that currency above other competitors in the local marketplace. They can’t use it to force people to abandon a sound currency in favor of a crappy one, as witness the many nations where crap local currency(*) circulates in parallel with the $US (and if the $US becomes crap currency, something else will take over that role).

            * To be fair, most governments are at least capable of managing a crap currency that’s good for paying taxes and for paying wages and buying groceries. Often the people who have money left over after taxes and groceries will convert that to $US before stashing it in a mattress.
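
            To put rough numbers on the money-changer point (the 2% fee and the figures are hypothetical, just to show the scale of the obstacle), a minimal sketch in Python:

            def tax_cost_in_preferred_currency(tax_bill_in_gov_money, changer_fee=0.02):
                # Hold your wealth in whatever currency actually works for you, and buy
                # "Tax Coupons" (government money) only when the bill comes due.
                return tax_bill_in_gov_money * (1 + changer_fee)

            # A 1,000-unit tax bill costs about 1,020 units' worth of whatever currency you actually use.
            print(tax_cost_in_preferred_currency(1000))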

          • Mark says:

            One could imagine a system where tax liability can be settled with anything that the government has deemed to have value

            That’s true – but, in practice they won’t be able to accept my IOUs as payment for taxes simply because it would be incredibly difficult to assess the value of that IOU plus the value of any IOU that anyone could come up with. More likely is that they could accept payments in some basic commodities (rice… gold), or accept payments in the form of money created by a few preferred/regulated institutions – which is what happens.

            I’m not sure what this means, unless you’re suggesting that the Federal Reserve is a private institution.

            I pay my taxes with bank deposits – and I don’t bank with the Bank of England.
            As I understand it, most of the money used in the economy exists as bank deposits, which are created when privately owned banks issue loans.
            It’s a bit strange, because although we theoretically have a fractional reserve system, there isn’t any practical difference between the private bank deposits and the central bank money/cash that they are supposed to be backed by.

          • Mark says:

            If that’s the biggest obstacle, it’s not that big.

            Yeah, I guess – there’s a (I think full-reserve) local currency in the UK called the “Lewes Pound” – it’s supposed to have a one-to-one correspondence with Pounds Sterling, but prices are sometimes different, so I guess if the taxman wanted to be an ass about it, you’d have to do a bit of additional work when calculating your income.

            Maybe we should all just start making our own currencies and see what happens.

          • It seems like debtors and creditors should be switched

            Indeed they should, but the article was written a very long time ago and isn’t on my site. I’m not planning to try to get Cato to fix it.

          • random832 says:

            That’s true – but, in practice they won’t be able to accept my IOUs as payment for taxes simply because it would be incredibly difficult to assess the value of that IOU plus the value of any IOU that anyone could come up with.

            Since my point is that the payment is in the same form as your income, they assessed it just fine when they decided (or decided to trust your own assessment of) how much you owed.

    • Brad says:

      Are you considering the Federal Reserve as a private entity?

    • Mark says:

      To expand a bit – I think that private creation of tax money is one way in which the power of the mercantile classes is expressed, but there isn’t really any fundamental ideological or moral argument for the system.
      It’s really a product of its technological era – and as technology has advanced, it has probably become a bit obsolete.

  17. James Miller says:

    From Scott Adams:

    Kids as young as eleven have smartphones…A kid with a smartphone has access to any illegal drug in the world, as well as all the peer pressure in the world…Pills are small, cheap, odorless, widely available, and nearly impossible for a parent to find in a bedroom search. When you have this situation, the next generation is lost…My observation is that smartphones have made half of all adults mentally ill. I mean that literally, not figuratively. The business model of phones is addiction, not value. And they addict you at the expense of the things humans need in their lives to be happy and healthy…Kids have it worse. They haven’t developed any natural defenses. They are pure victims.

    What do you think? My 12-year-old very much wants a smart phone. My guess is that he is much less likely than the average kid to take drugs, but of course I recognize that most parents probably think this about their children.

    • Sounds rather moral-panic-y.

    • Siah Sargus says:

      Oh yeah, the peer pressure to “buy pills off of the darknet” was so strong in middle school and high school.

      This is moralizing, pandering garbage. The unprecedented freedom that smartphones allow should be respected, yeah, but smart phones haven’t made people any more or less mentally ill than computers, video games, movies, television, newspapers, comic books, rap, metal… I’m sure you get the picture.

      Scott reveals himself as just another bitter old man here.

      • psmith says:

        smart phones haven’t made people any more or less mentally ill than computers, video games, movies, television, newspapers, comic books, rap, metal

        implying those things haven’t made people quite severely mentally ill

        • suntzuanime says:

          Sure, but what are you gonna do? You can try raising your child in Hard Traditionalism, but if that’s a thing you want to do you probably already know you want to do it, and I wouldn’t recommend it to a random person.

          • Siah Sargus says:

            More likely than not, after being raised in such an environment, they’ll just defect to whatever they perceive to be freedom. And if they haven’t been educated on what can be dangerous, or, worse, they don’t trust the education they were given, they’ll wind up doing a lot of damaging things. So I’m not recommending it either.

        • Nornagest says:

          I’m happy not just to imply but to outright state that none of those things have made people mentally ill. It’s like the whole generational pathology thing that crops up occasionally — theoretically possible insofar as the environment informs our mental development, but totally overshadowed by rose-colored glasses in terms of explaining anything about Kids These Days.

          (I’m less sanguine about Facebook/Twitter/etc — but even there I don’t think it’s a matter of mental illness so much as bad incentives.)

      • Anon. says:

        I bought salvia off the non-dark net in HS; who knows what we would’ve done with access to darknet markets… definitely a lot more MDMA, probably hallucinogens too.

      • Machina ex Deus says:

        smart phones haven’t made people any more or less mentally ill than computers, video games, movies, television, newspapers, comic books, rap, metal…

        Woah. Woah-woah-woah. One of these things is not like the others. TV requires almost no effort (unlike reading, or listening to music), transfers deleterious memes with a waterfall’s incessant pounding, and continuously entices more viewing with unfulfilled (yet unstated) promises. The longer I go without watching it, the happier and more-importantly saner I get.

        Naturally, this makes me despondent when I see my kids using computers as TV sets….

        • Gobbobobble says:

          Since when does listening to music require more effort than watching TV? While really good music *can* have plot and cognitive complexity, most of it does not, and is just there for background noise while doing something else. Whereas even garbage reality shows have a bunch of characters and events to pay attention to.

          I can multitask with TV because I can pick up most of it through the audio and have learned the right times to look up, but with music it is completely effortless.

          Don’t get me wrong, TV is generally “junk food” media. But you’re fooling yourself if you think music is categorically superior.

          • Randy M says:

            And I’ll stick up a very little for some TV shows. I’m watching Deadwood now, and while it may well have deleterious effects on my vocabulary – vis-a-vis an inclination to use certain vulgarities as punctuation, a temptation which I mostly avoid – following the interwoven plots of the various characters is no mindless feat. There’s certainly a low ceiling to the mental growth the medium facilitates, but it can rise a bit beyond how us SOBs typically view it. (Helps that it is sans advertisements.)

      • I don’t think he’s referencing the darknet explicitly, but rather your average teenage drug dealer.

        There is probably some truth to what he’s saying. The highly networked communication systems open to teenagers can encourage drug usage.

        In fact, that’s how I got started. When I was 16, friends I made online taught me how the Percocet I had from whatever surgery could get me high, and we would mail each other drugs. We met on a video game forum. Five years later and a couple of our original forum crowd were dead from heroin overdoses.

        It’s anecdotal, but it is in some ways a new phenomenon. Adams is of course taking it way too far.

    • suntzuanime says:

      Scott Adams is a nutjob; I became mentally ill just fine without a smartphone. It might be worthwhile having the Talk with your child about Moloch before they carelessly get into gacha freemium scam games, though.

      • Siah Sargus says:

        The freemium pay-to-win shovelware is the real danger of giving a child a mobile phone.

      • James Miller says:

        It’s funny how effective advertisement gimmicks can be on children. “But dad we have to buy now because supplies are limited.” “But dad this is a great deal since it’s being offered at a huge discount.”

        • suntzuanime says:

          Funny, or horrifying, or definitely some sort of adjective.

        • roystgnr says:

          Did they buy? If not then it wasn’t an effective advertisement; hopefully it was an effective teaching moment. Better that they get exposure and develop immunity before they have their own credit cards.

          Kids can’t waste money on scams if they don’t have money to waste. Non-monetary scams are more frightening. We just had to explain to our daughter that, although the stranger pestering her to become Poptropica friends purports to be another little girl, that doesn’t mean it actually *is* another little girl.

          • Aapje says:

            @roystgnr

            I think that if the kids start appealing to their parents, then the ads were effective on the kids.

            Advertisement doesn’t always have the (immediate) goal of getting people to spend money. Quite frequently, they use a multi-stage strategy (like getting kids to beg to their parents) and/or set people up for another strategy to be used. For example, many shops have discounts to lure people to the shop. Once they have your eyeballs, they can take advantage of this by selling you other things with other ads.

    • Nancy Lebovitz says:

      I’d want to see some definitions and evidence, not to mention the possibility that teens and children might need some privacy from their parents.

      I’m very tempted by the idea that Adams is addicted to bloviating.

      • Paul Brinkley says:

        What Nancy said. Clearly, the lesson to learn here is “don’t give that kid his own blog”.

    • rlms says:

      I don’t think the potential access to drugs that smartphones provide is a good reason to keep them away from children. But there are other reasons to be careful. I’m glad I didn’t have a smartphone at that age (on the other hand I’m also glad we didn’t have a TV or games console until I was 10 or so, so I might be an outlier).

    • Nornagest says:

      The only way I can think of for smartphones to make it easier to get your hands on illegal drugs (relative to regular phones) is through darkweb markets, which require secure mail channels, money, and a fair amount of know-how. All three of those are serious obstacles to an eleven-year-old, and the first would be an obstacle even to a bright teenager.

      I’d be more worried about your kid wasting money and time on Farmville or whatever its modern equivalent is, but that doesn’t sound especially hard to police.

      • Matt M says:

        Needing a phone to buy drugs makes this seem MORE complicated for modern kids! In my day, all you needed was a $20.

    • dodrian says:

      Given the history of his blog, I’m pretty sure this post is bait for some even more convoluted political point he’s going to make later in the week about why we shouldn’t X because of Y.

      Then again, his metaphors are usually entirely transparent, but I can’t figure out what he’s actually referring to this time. The other option would be that there’s a follow-up post to come advertising a startup he’s invested in which will be able to stop the smartphone/illicit-drug epidemic.

      • oldman says:

        I’ll predict it will be why the government should regulate Twitter to stop them “picking on him” – although I have a low level of confidence in this prediction.

    • The Nybbler says:

      Just repeat to yourself “He’s just a cartoonist”.

      I have a smartphone. I have no idea how I would use it to get illegal drugs (at least, not without a good chance of getting Feds instead). On the other hand, when I was in high school (long before smartphones) I knew darn well where I could get weed or cocaine or heroin.

    • beleester says:

      Are phones addictive? Probably. Are they addictive because you can literally buy illegal drugs by phone? Ha ha ha ha, no.

      And really, peer-pressuring kids into buying drugs over the internet is like six different kinds of ridiculous. “Psst, kid, wanna try some drugs? All you’ve gotta do is install Bitcoin and Tor, find a way to turn your money into bitcoins, and go to this website which is definitely not a scam to place your order. Oh, by the way, if you do any of these steps wrong, you’ll leave a trail for the police and get arrested.” It’s a rare teenager indeed who can do good cryptographic security.

    • Rock Lobster says:

      FWIW, I hear anecdotally that social media has taken the miseries of girlhood adolescence and turned them up to 11. I really have no idea if it’s true or not.

    • AnonYEmous says:

      What do you think?

      that you should stop listening to Scott Adams

      just as a general rule, unrelated to the potential validity of this anti-smartphone sentiment

    • rahien.din says:

      The business model … is addiction
      Kids … are pure victims
      the next generation is lost

      Such things have been said about books and the act of reading.

      • Well... says:

        Each generation tries to preserve what it values. Some important values of the generations that said those things were surely lost because of books and reading. It’s easy for us to sit here with hindsight and say “nah, books and reading turned out well, look how enlightened we all are now!” but at the time people were being rationally cautious. That doesn’t mean social media and smartphones will turn out that well either.

        • James Miller says:

          Ordinary people reading the Bible certainly did cause lots of trouble, such as, perhaps, the Thirty Years’ War.

          • Well... says:

            I can’t speak to that specific example. Did religious belief among ordinary people cause the 30 Years War? Was it some specific form of religious belief only made possible by reading–i.e. something ordinary people wouldn’t have picked up some other way?

            The example I’ve heard of is Socrates (I think it was him) lamenting that reading would destroy people’s ability or willingness to memorize and recite long epics. And he was probably right. Memorizing and reciting long epics was an important part of his society and reflected his society’s values, kept them going to some degree. When people stopped memorizing and reciting long epics, the society lost something irretrievable. In retrospect we can evaluate whether the cost was worth the benefit, but without hindsight I don’t think it’s foolish to approach that trade-off with a lot of caution.

          • Trofim_Lysenko says:

            Ehhh, only in the broadest sense that “If there were no protestants, there wouldn’t have been the initial casus belli”. Even then I think the same geopolitical factors that led to the scope and length of the war would eventually have kicked it off, even if the character of the conflict might not have been quite so vicious.

          • The original Mr. X says:

            I can’t speak to that specific example. Did religious belief among ordinary people cause the 30 Years War? Was it some specific form of religious belief only made possible by reading–i.e. something ordinary people wouldn’t have picked up some other way?

            No, it started off as a revolt by the Bohemian Estates (that is, the movers and shakers of the kingdom) against their overlord the Holy Roman Emperor, and continued as long as it did because most of the rest of Europe repeatedly dogpiled the Emperor, whom they feared was becoming too powerful. Religion did play a role insofar as the Protestant German princes were more likely to fear the increasing power of the (Catholic) Emperor, but it was only ever one factor among several, and there are plenty of examples of people fighting on the “wrong side”, both on the national level (Catholic France fought against the Emperor; Protestant Saxony changed sides twice) and on the individual (it was common practice for victorious generals to conscript enemy POWs into their army, so even if both sides had had perfectly religiously homogenous forces to begin with, this would change pretty soon).

            ETA: Plus, the role of literacy and printing in the long-term success of Protestantism is debatable. Official support and political reasons seem to have been at least as important.

        • rahien.din says:

          Well…,

          Hope you don’t mind if I respond to your replies in composite.

          Each generation tries to preserve what it values. Some important values of the generations that said those things were surely lost because of books and reading. The example I’ve heard of is Socrates (I think it was him) lamenting that reading would destroy people’s ability or willingness to memorize and recite long epics. And he was probably right. Memorizing and reciting long epics was an important part of his society and reflected his society’s values, kept them going to some degree. When people stopped memorizing and reciting long epics, the society lost something irretrievable.

          It’s easy for us to sit here with hindsight and say “nah, books and reading turned out well, look how enlightened we all are now!” but at the time people were being rationally cautious.

          Your point is well-taken regarding rational caution. And I agree that the technology itself (books, smartphones, etc.) does not inherently guarantee it will be used properly or beneficently.

          But, we can and should recognize those clung-to values as having been mere obstacles.

          Consider that in the 1800s, horsemanship was an essential skill that held society together. Once people started to use cars, the value and prevalence of this skill inevitably diminished, such that the average person today has basically no idea how to properly ride a horse or drive a wagon. There were probably people who rightly saw this coming, and decried the automobile as a threat to horsemanship that should thus be resisted. But that wasn’t rational caution at all. It was merely a lack of foresight.

          It is as though a person is walking in the rain to dinner, dry under their umbrella. Upon reaching the restaurant, they consider that going inside would require them to close the umbrella that has kept them dry. “This umbrella has been essential to my health and comfort,” they think, “I would be a fool to close it.” Thus they stay out in the rain, dry and hungry.

          The resolution is acknowledging that human technological progress is continuous, and thus every point in history is some improvement on an earlier time. Thus, it is incomplete for Socrates to claim, in purported rational caution, “Text memorization is good and valuable. Books will erode the practice of text memorization. Therefore books should be resisted.” Rationally, he must locate the technology of text memorization within the timeline, with both predecessors and descendants, and he must weigh the risks and benefits of each versus the present. He, like us, would conclude that the text memorization era was superior to the prior pre-memorization era, despite whatever lost utility. And he would have to consider the potential increase in societal utility with the written word.

          Otherwise he is citing the positive utility of attained technological changes in order to utterly discount the positive utility of potential technological changes. That may be caution, but it isn’t rational caution.

          • The Element of Surprise says:

            Some very good points, but I’d argue that societal change is less transparent to human reasoning than an umbrella. The writing era was superior to the memorization era only in hindsight; it would have been very hard to predict beforehand. There is probably some survivorship bias in saying that most change has been for the better in the past; the changes that were obviously for the worse were mostly rejected after a while. Remember that e.g. communism was once seen as the logical next step in the organization of human societies, even with a similar argument: as the capitalist era was superior to the feudal era, the communist era would be superior in turn.

          • psmith says:

            societal utility

            This is either so vague a criterion as to be useless, or it smuggles in quite a lot of contentious assumptions about what we ought to value.

          • Aapje says:

            This is either so vague a criterion as to be useless, or it smuggles in quite a lot of contentious assumptions about what we ought to value.

            Especially as a lot of societal change brings along with it ideas about what ‘is right.’

          • rahien.din says:

            The Element of Surprise,

            Fair points and I don’t dispute them. One might ask, how do we overcome the opacity of such societal changes?

            psmith,

            Intentionally vague (maybe poorly worded). I only want to imply that the written word has been net beneficial to humanity. This seems fairly indisputable (especially in the context of a comment thread).

            If you believe I am smuggling in assumptions, then I am genuinely interested to know more about that. It would seem to be the whole point of the discussion!

          • Well... says:

            The restaurant analogy is no good; when one goes to a restaurant, one knows to expect a dry indoor setting. Rather, it’s like walking in the rain to meet friends at a campsite: you don’t know if there’ll be a pavilion or if you’ll be sitting around a fire in the open. You shouldn’t close your umbrella just because a pavilion is what you hope to find. And even if the roof of a pavilion comes into view, you don’t ditch your umbrella in the woods, you keep it with you and try to remember how to use it just in case.

            Anyway, my point isn’t about being anti-technology (otherwise of course you’d eventually have to go live naked in the woods or something), nor is the alternative cheering every technological advance. My point is about taking account of what stands to be lost and being intentional about whether that thing is worth giving up. If it isn’t worth giving up, then it’s probably worthwhile to at least try to get other people to go through the same process of increasing their intentionality.

            acknowledging that human technological progress is continuous

            Wait, are you sure?

          • rahien.din says:

            The point I am driving at with the umbrella analogy is that the efficacy of a technology is not an argument for its eternal primacy. I.e., it is not rational to say “X technology works well, therefore no new technology could work better.”

            But yeah, I am finding we agree on the larger, more important principles.

          • Nancy Lebovitz says:

            My example for lost knowledge that people don’t seem to care about is starting a fire with flint and steel.

            With the decline of smoking, I wonder whether as high a proportion of people even know how to use matches.

            Another example would be writing with a quill, which is a finicky process. The only people who do that now are those with a specialized interest.

          • rlms says:

            Starting a fire with flint and steel (and good tinder) isn’t hard. Or at least it isn’t much harder than starting one with matches.

          • rahien.din says:

            good tinder

            Presupposes expertise (use of jargon, critical evaluation of said tinder materials)

          • rlms says:

            I don’t think it requires that much expertise. If you gave someone today an 18th century flint and steel and tinderbox, I think they would be about as successful lighting a fire as if you gave them matches (unless flints and steels have got significantly better since then, which is possible). In comparison, presumably your average person today would be pretty bad at writing with a quill.

          • beleester says:

            I looked it up. Apparently you place the tinder in the box, then strike sparks off the steel into the box until the tinder catches, then blow on it gently to enlarge the flame, then ignite a sulfur-tipped match so you can actually use the fire. Not that complicated if someone’s already assembled the tinderbox for you, but definitely a bit trickier than matches.

            (Although you’re right that using matches is also a learned skill. I remember in high school chemistry, some kids had never lit a match before, so our teacher gave a quick explanation of how to hold and strike it before we started lighting Bunsen burners.)

          • Well... says:

            Just because you have a hammer (technological innovative capability), that doesn’t mean everything is actually a nail, even if it looks like one.

    • Well... says:

      Having worked in tech and seen the sausage get made, I’ve never agreed with Adams more (by a long shot), though I’m not as sure as he is that a top-down solution should be the first/only thing to try.

      • FacelessCraven says:

        Having worked in mobile games, I concur. That entire ecosystem is *fiendishly* Molochian.

        • Cypren says:

          Working in mobile games was probably the biggest factor in driving me away from an anarcho-capitalist, “government regulation is the enemy of the good” worldview. If someone had described to me the level of cartoonishly evil, “the only value people have is in their wallet” type personalities I would encounter, I would have laughed them off as reading Soviet anti-capitalist agitprop.

          But nope, they’re real. And the world is poorer for it.

          • cassander says:

            That people are horribly greedy is not disputed by capitalists. What I wonder is why you think that people who work for the government will stop being greedy…

          • Cypren says:

            I don’t. I fully expect government to be evil; I just think that it’s also a necessary one that offers better solutions than private markets for some problems.

          • hlynkacg says:

            Why not?

    • Art Vandelay says:

      If a teenager buys drugs on the dark web does the coolness of doing drugs cancel out the nerdiness of knowing how to use tor and bitcoin?

    • WashedOut says:

      For the most part I agree with the quoted Scott Adams paragraph. However, the connection he makes to drugs I think is irrelevant and just unnecessarily baits people into the wrong argument.

      His viewpoint re: smartphones makes a lot of smartphone-wielding people (i.e. almost everyone) very defensive, and the way he’s worded it is not going to foster a constructive discussion, but his frustration is palpable and, I think, valid.

      A smartphone is fundamentally different to a desktop/laptop computer (or even a tablet), to TV, to videogames, death metal, rap, and almost any other classic “vehicle for degeneracy” asserted by the responders to this post so far. In practice the smartphone’s demand for attention is incessant by design, as SA pointed out. Aside from the minor background level of calls, messages and alarms, the device is a continuously open portal to live media and communications, games, and status updates of various kinds. The result of this is fixation in the short term and dependence in the long term. And don’t pretend everyone shuffling around staring into their smartphone is reading MIRI articles or listening to mindfulness podcasts – mostly the content people are accessing is not criminal, contra Adams, but self-congratulation and approval-seeking from their social circles. I think this is what SA means by mental illness – pathological approval-seeking, identity-bolstering, and utter dependence on web media stimuli.

      Next time you get on a train, look down the carriage. Drive past a bus stop – look at the person waiting. Look at people standing in a queue. Listen to the person playing candy crush in the toilet next to you. Find a group of young girls sitting together at ANY EVENT. Completely shut off from their surroundings, pretending to be busy/popular/in-demand/”creating content”.

      I am a young man. I was not alive 30 years ago. But if I had been, I doubt that the behaviours people exhibit today during or in furtherance of smartphone use would be regarded favourably at all. I understand that the SSC and LW communities are extremely sensitive to unjustified claims of “moral panic”, and even more sensitive to being accused of being suckers. Well, if I were a father I might be panicking at the prospects for the future social development of my children in the electronic age, but for now the smartphone looks like it was built for suckers, serfs, and narcissists.

      • The Element of Surprise says:

        There is definitely a good argument to be made about internet addiction, and about perverse incentives for content providers related to that. The “Pills are small, cheap, odorless, widely available, and nearly impossible for a parent to find” part is quite ridiculous, however. But that part is his central point, since “1% freedom to contact anyone for anything” is all it takes, etc.; his offered solution is constantly monitoring children’s online communication. I wouldn’t say people are “baited into the wrong argument”; they are reacting to what SA is actually trying to say.

        On a different note, as a not-that-young-of-a-man, and as an introvert, I am glad that by now technology has given me the possibility to be in a location full of strangers without feeling awkward about it. Maybe mobile phones cause younger generations to have, on average, poorer social skills (related to handling the particular situation of being in a room with strangers and having nothing to talk about)? But that would only be because, and only to the extent that, they don’t need these skills any more. I’d compare that to people’s poorer mental math skills since pocket calculators came around.

      • Matt M says:

        A smartphone is fundamentally different to a desktop/laptop computer (or even a tablet), to TV, to videogames, death metal, rap, and almost any other classic “vehicle for degeneracy” asserted by the responders to this post so far.

        The issue is, this claim was made about each of those other things when they were new as well. The people who were decrying Elvis Presley swaying his hips as the end of western civilization did not say “This will destroy us just the same way that reading did!” No, they said “this is an entirely new and different threat than any such threat we have ever seen before!”

        To paraphrase another series of posts, you are crying wolf. Every other time someone has cried wolf, there has been no wolf – and when we point this out, your response is “But THIS TIME there really IS a wolf!”

        • The Element of Surprise says:

          Arguments that technological or societal innovation “hasn’t been that bad” in the past kind of bother me. Sure, it is evidence that people tend to overestimate or overstate the severity of such effects. But is the antecedent true? If the bad things (some of which, admittedly, are more believable than others) predicted about books, TV, the internet, certain trends in music etc. (feel free to also imagine your past culture war topic of choice here) had actually come to pass, would we notice? We don’t experience the counterfactual where we live in a world without a certain meme and end up being more social, having better memory, having richer inner lives, spending more of our time with things we value more. We notice even less when these effects are unevenly spread across the population – parenting by TV may be good for children who would otherwise be neglected completely, but could be bad for those children whose parents would otherwise have spent more time with them.

          Some effects are measurable: I wouldn’t be surprised if at least part of the historically high prevalence of obesity or ADHD can be ascribed to the technologies listed. Also there might be a (less measurable, apparently, going by the quality of sources I find on this) general trend of increasing social isolation caused by social media.

          I personally think that most of these innovations were a net good. I don’t think they are without drawbacks, or that people warning about drawbacks that are inherently hard to notice are “crying wolf” in the sense of the fable.

          There are other kinds of changes that happen very much as predicted, but it turns out we don’t care so much about them anyway. People probably are worse at mental math, remembering telephone numbers, and navigating cities than they were decades ago.

          That said, I do think that SA is wrong and that his point about mobile phones causing drug use is not very credible.

          • Matt M says:

            There probably exists a large enough population of people who don’t watch TV, and don’t allow their kids to watch TV, that we could study them and determine whether they are less likely to be diagnosed with ADHD or whatever. We don’t have the counterfactual “as a society” but we certainly do with certain subpopulations. We can look at the Amish as the ultimate control group, I guess?

            In any case, my main objection is the framing of the issue as “but THIS TIME it’s different!” That sort of framing implicitly concedes that in past instances of panic, the panic was overblown and no/little harm was done.

        • Kevin C. says:

          To paraphrase another series of posts, you are crying wolf. Every other time someone has cried wolf, there has been no wolf – and when we point this out, your response is “But THIS TIME there really IS a wolf!”

          And from the Luddites on, people displaced from jobs by technology have found new employment, so any and all talk of “technological unemployment” is just “crying wolf”, and all the folks here who talk about the need for UBI before computers put large fractions of the population out of work are just ignorant wolf-criers saying “But THIS TIME there really IS a wolf!”, yes?

          • suntzuanime says:

            And from the Luddites on, people displaced from jobs by technology have found new employment

            Is this really true? My understanding is that the notion of “unemployment” as this phenomenon we always have with us is a product of the industrial era. Those looms didn’t produce total unemployment, but they did produce some.

          • And from the Luddites on, people displaced from jobs by technology have found new employment,

            Or died.

  18. Siah Sargus says:

    Okay, so, most ship names, in real life and Science Fiction, are exceptionally boring, entirely too serious, and generally only one word, which really limits the number of good ship names. Well, after reading the Culture series (and, to a lesser extent, playing the Halo games), I feel like there’s only one way to go. So, for your amusement, here’s a working list of some of the best and worst ship names I’ve come up with. Feel free to suggest better names, or tell me which names are terrible. I also need to decide on the name of my main characters’ personal ship, which is a small two-seater shuttle. Anyway, I hope that some of these are at least funny:

    All-American Terrorist
    Airhorn Glissando
    As You Like It
    Baddest In the Game Right Now
    Bitches Brew
    Borne to Death
    Cool Cool
    Constant Vexing Girls with Convex Curves
    Culture War to Eternity
    Damn the Torpedoes
    Dead Presidents
    Don’t Mind the Cherenkov Radiation
    Eventually Colliding with Andromeda
    Fading Ember
    Fearful Change
    The Final Question
    The Flecks in Her Eyes
    Train of Thought
    From Frenulum to Frenulum
    From Sea to Shining Sea
    From the Top, Kitty Hawk
    Golden Grains
    A Good Book and a Long Weekend
    Heartbreaker
    Heavier Than Air
    If Looks Could Kill
    In Rainbows
    Insufficient Gravitas For a Meaningful Answer
    Last One Out, Get the Lights
    Like Poetry, but Not
    Lincoln Graine Clad
    Meditations on Moloch
    Nature to Eternity
    Now or Never
    Once More, With Feeling
    One Fathom Tall
    Pew Pew Pew
    Rayleigh Scattered After Dark
    Rivers’ Flow Into the Ocean
    Sea and Open Force
    Series of Glottal Stops
    The Shibboleth in Our Stars
    I Ship It
    Shooting Stars, Shooting Ships
    Standard Deviation
    Starchaser
    Supercritical at All Times
    Swords Beaten Into Plowshares
    Ten Paces at Dawn
    Thank Me Later
    They’re Famous, But I Can’t Remember Their Name
    Three Notes to the Trillest
    Too Many Ngons
    To Pimp a Butterfly
    To the Pain
    Trained to Go
    Twelfth Night
    Twilight of The Gods
    Will to Power
    Under Their Own Vine and Fig Tree
    Unintelligible Scrawl
    Urban Ball
    Volley and Thunder
    Volleys of Agincourt
    Volumetric Splattering

    • John Schilling says:

      Okay, so, most ship names, in real life and Science Fiction, are exceptionally boring, entirely too serious, and generally only one word,

      If the ship can’t make its own name un-boring, you’re doing it wrong.

      As for being too serious, Churchill pointed out the key issue there. If you are in the business of operating warships, you are going to be in the business of sending out telegraphs of the form,

      “Dear Mrs. Smith: I regret to inform you that your son Jack was killed in action yesterday, while serving on board HMS [X] during Operation [Y]”

      This is serious business, and “Boaty McBoatface” isn’t going to cut it. If your chosen values for X and Y don’t convey gravitas sufficient for that occasion, try harder, because grieving mothers aren’t known for their sense of whimsy.

      If you’re just naming a pleasure boat, knock yourself out. There’s lots of suitably flippant boat names at your local marina.

      • The original Mr. X says:

        Plus, there’s the morale issue of serving on a ship with a ridiculous name. People work better if they’re proud of their workplace, and nobody’s going to be proud to work on a ship called HMS Boaty McBoatFace or some other stupid title.

      • Siah Sargus says:

        So you’re saying… that the ships don’t have enough Gravitas?

        Well, to be fair, I figure that in a space station where several privately owned vessels were docked, there would also be a more colorful assortment of names, to mirror your marina example. Obviously, nearly none of these would be warship names. (And SF doesn’t have to be Battleships In Space; it can be so much more…) And the ones that would be alright for Large, Flying-Brick Capital Ships, like Lincoln Graine Clad, for instance, still follow my own personal taste of not being boring and, more often than not, not being just one word – more like a poetic phrase. Halo was the best at that, with capital ships often being three-word prepositional phrases (Forward Unto Dawn, Spirit of Fire, Shadow of Intent).

        Besides, more than anything, I’m sick of ships just named after powerful people (Presidents, Admirals, more often than not). I don’t think ships like that represent the values of a Navy.

        And, as an addendum, I recall a post in the previous thread about some of the most ridiculous ship names from the UK, many of which were the result of strict naming rules.

        “I’m sorry ma’am, your son died when the Inflexible broke in half.” I can see why Churchill thought that was a problem.

        Besides, if the telegram problem is such an issue, simply give the serial number and class of the ship: “While aboard ship NV-071 (a Twelfth Night-class assault carrier), et cetera…” Non-problem. Even if she Googled it later and found out that the ship’s callsign or operating name was different from the serial number, we would still avoid ever having to send anything funny in a telegram.

        • Nornagest says:

          Halo’s naming conventions (and those of its predecessor Marathon games) strike a good balance between gravitas, aesthetics, and interest, and I think you could adapt them to another fictional world with relatively little work. In Amber Clad would work as well in a condolence letter as any name could be expected to.

          Culture shipnames work in their context, but they rely entirely on Culture quirks that aren’t likely to be present in another setting; the Culture’s got plenty of self-conscious frivolity and plenty of contempt for traditional military virtues, and, just as importantly, the ship’s choosing the name for itself there.

        • The original Mr. X says:

          Besides, more than anything, I’m sick of ships just named after powerful people (Presidents, Admirals, more often than not). I don’t think ships like that represent the values of a Navy.

          You don’t think commemorating national heroes represents the values of a navy?

          Presumably it’s going to be a pretty unsuccessful navy, because the sort of culture that doesn’t commemorate heroism is the sort of culture unlikely to produce it.

          • Siah Sargus says:

            No. They aren’t heroes. Some rich, well off dude got himself elected or passed enough election boards (also politics) to get into that sort of position. There was no heroic act involved. It’s telling that we don’t name ships after dead petty officers.

          • The original Mr. X says:

            No. They aren’t heroes. Some rich, well off dude got himself elected or passed enough election boards (also politics) to get into that sort of position. There was no heroic act involved. It’s telling that we don’t name ships after dead petty officers.

            Well, you mentioned admirals as well, who, if they’re highly regarded enough to get ships named after them, generally have to have done something heroic at some point in their careers. Anyway, though, maybe “commemorating achievement” would be a better way of putting it, since run-of-the-mill politicians don’t get ships named after them.

          • Siah Sargus says:

            Well, just so I don’t come across as disingenuous or excessively culture-warring, I will agree that there have been plenty of heroic Admirals throughout history, and I will also concede that destroyers are not entirely infrequently named after Petty Officers. However, if the audience doesn’t know who they are or what they did, it just becomes another boring, unmemorable name to add to the lexicon of character and place names the audience is expected to remember. Science Fiction, unlike novels set in the present day, has to bring its readers up to speed on technology, culture, and place, and I don’t want to add to that expositional load any more than I need to.

            Since the heroic acts of fictional characters are, you know, fiction, and made up basically on the fly, it feels exceptionally meaningless to commemorate them, especially when they won’t be the focus of more than a couple of seconds or sentences. So ships named after people aren’t a good idea for my narrative.

            As for existing ships, I feel like the name should represent the ship; whether or not that’s some dude’s name is really a matter of taste.

          • suntzuanime says:

            Even in a fictional context, it says something about the fictional culture if they name their ships after admirals, rather than monarchs, or placenames, or intangible virtues, or fearsome beasts.

          • Siah Sargus says:

            And it says something about Americans that we keep naming at least one of our flagships Enterprise.

            What does it say, when a culture names none of its ships after any of those things?

          • ThirteenthLetter says:

            However, if the audience doesn’t know of who they are or what they did, it just becomes another boring unmemorable name to add to the lexicon of chacter and place names the audience is expected to remember.

            A ship being named after a person can be just as successful worldbuilding as a ship that’s named in any other style; it gives you a hint about the culture behind it, that this is a culture that honors people, that this is a culture whose names have a particular style and feel to them. Not to mention, you are under no obligation to explain who the person the ship is named after is; in fact, a culture with lots of names that are clearly meaningful but not explicitly explained to the reader feels a lot more real and full of hidden depths than one where every name is obvious or explained.

            Jack McDevitt’s early novels did this sort of thing pretty well; he had a lot of stuff with interestingly mundane names that told you a lot about the society.

          • Nornagest says:

            It’s interesting to trace which classes of ship have been named for US states over the years. Historically, those names always went to battleships: that goes back as early as the Maine (of “Remember the…” fame) and as late as the Iowa. Starting in the Seventies, the Ohio-class ballistic missile submarines picked it up, along with the California- and Virginia-class nuclear-powered cruisers. That looks likely to continue with the follow-on Columbia class (named after the District of Columbia), but now (post-2000) they share it with the Virginia class of nuclear fast attack submarines.

            Similarly, the Brits called their first nuclear-powered submarine Dreadnought (ours was named Nautilus, also revealing in another way), and they’re going to be using the same name for the lead boat in their new class of ballistic missile submarines.

          • AlphaGamma says:

            Similarly, the Brits called their first nuclear-powered submarine Dreadnought (ours was named Nautilus, also revealing in another way), and they’re going to be using the same name for the lead boat in their new class of ballistic missile submarines.

            To go with the gravitas point, about as close as the Royal Navy gets to a joke/pun is the motto used by the last two ships of that name.

            “Fear God and dread naught”

      • Odovacer says:

        As for being too serious, Churchill pointed out the key issue there. If you are in the business of operating warships, you are going to be in the business of sending out telegraphs of the form,

        “Dear Mrs. Smith: I regret to inform you that your son Jack was killed in action yesterday, while serving on board HMS [X] during Operation [Y]”

        This is serious business, and “Boaty McBoatface” isn’t going to cut it. If your chosen values for X and Y don’t convey gravitas sufficient for that occasion, try harder, because grieving mothers aren’t known for their sense of whimsy.

        That’s one of the reasons for the HUGO Gene Nomenclature Committee. A lot of genes were discovered in Drosophila and were given silly/inane names like Sonic Hedgehog, Clown, Cheap Date, etc. However, while some of those names still stand in humans (e.g. SHH), others have been changed, because of the implications for human disease and a doctor telling a patient, “I’m sorry Ma’am, but you’ve got a mutation in 18 Wheeler, you have 2 months to live.”

        • Machina ex Deus says:

          That’s one of the reasons for the HUGO Gene Nomenclature Committee.

          But that just leaves us with several different genes all named “No Award”!

      • Skivverus says:

        I can imagine, though, certain names or phrases acquiring (or losing) gravitas over the course of a couple centuries’ language drift. “HMS Paperclip Optimizer”, for instance, I could absolutely see as the name of a space opera warship. In-joke for the reader, terrifying near-extinction historical event for the characters.

      • Paul Brinkley says:

        I’m reminded of one example from fiction that featured a ship name that fit both whimsy and local pride. Larry Niven invented the Kzinti, a militaristic spacefaring race of tigerlike aliens that comes off like an MRA utopia. Top official is titled “Patriarch”; code of ethics borrowed from Bushido; etc. Their ship names were grandiose by human standards, e.g. Claw of the Warrior, All-Devouring Maw, Hunter’s Glory.

        Prolonged interstellar conflict forced the human race to step up their warmaking capacity, and one of their responses was a ship designed to infiltrate Kzinti space. In keeping with the human attitude typical of that setting, they dubbed it The Great Patriarchal Tool.

        • AnonYEmous says:

          MRA utopia

          you’re lucky this thread is culture war free or I would have to ask some tough questions here

          • Dissonant Cognizance says:

            Two words: nonsentient females.

          • hlynkacg says:

            your mask is slipping.

          • AnonYEmous says:

            Two words: nonsentient females.

            arguable, I guess

            your mask is slipping.

            i will never not be proud to support men’s rights, and I don’t really care if men’s rights activists have done terrible things or not. The suffragettes were literal terrorists, but are lauded today. What would bother me is if the problems they brought up were trivial, and they’re not. If that’s a slippage of my mask then I proudly display my face.

            if you have responses, just tell me so and we’ll take it up in the next thread.

          • Nancy Lebovitz says:

            Faint memory: Wasn’t there a story (possibly not by Niven, if so presumably in one of the Man Kzin anthologies) about sentient female Kzin? There was a male Kzin with a fetish for sentient females, and it turned out they existed in the past or in an alternate timeline.

          • Loquat says:

            @ Nancy Lebovitz

            I vaguely recall a story involving a sentient female Kzin who’d been in stasis for several centuries, or something. Before she went into stasis all females had been sentient, but the Patriarchs had been talking about maybe embarking on a mass genetic alteration program to change that, and apparently in the interim they went ahead and did it. I don’t recall why that was considered desirable, or even whether the story provided a reason.

          • AnonYEmous says:

            you know I may have read that book

            “brick-shitter”

            ???

          • The Nybbler says:

            It’s hinted that there is, at the time of the Man-Kzin wars and after, an underground of sentient Kzinti females.

          • Evan Þ says:

            @Nancy, yes, I remember that story. It was in one of the anthologies, and she was in stasis from the distant past. It wasn’t that our (human male) protagonist had a kzinti fetish, but he was stuck in an ancient alien zoo with a lot of different creatures in stasis, and decided to see what happened if he woke up the female kzinti. Turned out she was sapient and decently friendly.

            I don’t remember anything about a modern underground, though.

          • Nancy Lebovitz says:

            Evan Þ

            It sounds like you remember the story better than I do, and the male Kzin with the fetish is my invention.

            I think the premise of sentient female Kzins is strong enough to support an anthology. Possibly a series of anthologies.

          • LHN says:

            IIRC, Speaker-to-Animals/Chmeee encountered sentient female kzinti on the Ringworld, and at least found the experience intriguing.

          • Skivverus says:

            I almost have to wonder if there are any Kzin/Hani crossovers out there in the world of fanfiction.

          • Nornagest says:

            I’ve found very little fanfiction for literary SF. I think this is because its appeal comes mainly from exploring concepts rather than worlds or characters, so the people that would be writing it write their own SF instead.

          • Evan Þ says:

            Haven’t read Ringworld, so I can’t say anything about that.

            Since the topic of Larry Niven fanfiction’s been broached… several years ago, I encountered one fascinating but tragically unfinished piece where a small group of explorers uncovered the Pak spaceship in Antarctica around 1910, found the Tree-of-Life fruit, and turned into Protectors. Unfortunately, the website I found it on is now dead. Does this happen to ring a bell for anyone?

        • Aapje says:

          a militaristic spacefaring race of tigerlike aliens that comes off like a MRA traditionalist utopia

          An interesting discussion can be had about why the prominent progressives in the MRA movement seem to be more willing to associate with the traditionalists, and about the repercussions that this has on their image. But not in this thread.

          • Paul Brinkley says:

            I was hoping that fictional culture wars were permissible, sorry. The Kzinti in the books came off as a pretty obvious “dialed up to 11” extreme.

          • Jiro says:

            The Kzinti were written decades before the MRA movement.

          • Aapje says:

            @Paul Brinkley

            Can’t they just be ‘dialed up to 11’ traditionalists? Those existed already in 1966, so it seems sensible to assume that Niven referred to something he was familiar with.

          • Paul Brinkley says:

            @Aapje: …sure, I guess so? It’s not like I care; it wasn’t the point of my post.

          • AnonYEmous says:

            …sure, I guess so? It’s not like I care; it wasn’t the point of my post.

            but it was the point of mine

    • Dr Dealgood says:

      One idea I’ve been kicking around, if I ever foolishly decide to write science fiction, is to abandon the Navy IN SPAAAACE! conventions entirely.

      Ship names are a very naval concept, but the way I understand it, spacecraft are much more similar to manned rockets or jet aircraft than to ships. That’s especially the case for spaceplanes like your shuttle.

      So one idea to consider is that any name it has is just an in-joke between the main character and his copilot. If he was talking to Space Traffic Control he would use the official designation of “Shuttle X1089898292” or whatever, but when the two of them are flying the thing around he would call it “the Goose” or some other silly nickname.

      • Siah Sargus says:

        Not a bad idea. But this sort of thing would probably be painted on the side as well, so it would be maybe a bit more official than just a joke.

        Honestly, I’m not going for the Navy in Space aesthetic. Laying out decks horizontally from bow to stern on a rocket ship makes no sense, and “skyscrapers with docking ports” is a much cooler look.

        • Dr Dealgood says:

          Nice.

          I’m a bit partial to the Enzmann starship design. Huge balloons filled with frozen deuterium attached to gigantic skyscraper-sized rockets with rotating crew sections… they’re like the zeppelins of space travel. It brings a tear to my eye.

          Although if the EmDrive turns out to be real it’ll mess with that a bit. You can’t really have a rocket with a propellant-less drive, can you? Though the balloon full of fuel would still help.

          • Siah Sargus says:

            I suppose for the moment, rockets still serve as the only viable means of leaving the atmosphere of Earth, as everything else produces far too little thrust. I’m not holding my breath for the EmDrive, because if it doesn’t work, I’ll look like a fool for including it, and if it does work, I can always quietly slip “EM Drive goes here” into some of the technical layouts of my ships later, like Star Trek and Bussard Ramscoops (which, incidentally, didn’t really pan out anyway).

      • hlynkacg says:

        As I understand it, spacecraft are much more similar to manned rockets or jet aircraft than to ships.

        This is largely a product of our current space program’s “Sortie-style” mission structure. If we get to the point where spacecraft are regularly re-used/re-tasked and crews live onboard for extended periods I would expect that to change.

        • Siah Sargus says:

          But if you’re on a planet-type moon like Titan, and you have plenty of oxygen, it would be much easier to get into orbit using an air-breathing jet engine, especially if it can move seamlessly from turbine to ramjet. So in this use case, a shuttle design, with features standard to most transonic and supersonic aircraft, would work best.

          • hlynkacg says:

            I think you’re missing the point.

            This is not about whether you optimize for atmospheric flight or interplanetary travel; it’s about the crew’s relationship with their spacecraft. Modern spacecraft are essentially glorified ammunition and our attitude towards them reflects that. Shift to a mission structure where spacecraft have regular crews with individual personalities and extended service histories, and you’d better believe that people’s attitudes towards them will reflect that as well.

          • LHN says:

            While airliners have long service lives, my impression is most of them don’t get names. Some airlines do name their planes– Lufthansa does. But passengers don’t book tickets on the Lufthansa Herborn the way they might have on the Normandie or the Olympic; they buy a ticket on a flight number whose specific vehicle they neither know nor care about, and which might be switched without consulting or informing them.

            It’s certainly plausible that might be the case for spacecraft– whether the crew know their ship as Lucy or UNS819818DC, passengers may only know that they’re booked on the Titan run that leaves on Friday at 2000 UST.

            (Or they might be on the Coca-Cola or the Time-Warner. The recognition of the advertising value of naming rights that has transformed stadium naming in the last couple decades could plausibly be extended to spacecraft.)

            Re space, the Apollo CSMs and LMs got names (most famously Columbia/Eagle), even though they were one-shots. So did the space shuttles, which had longer service lives (but with different crews each time, so it wasn’t a matter of their developing a relationship with the ship). Have the Soviets/Russians or China given their crewed spacecraft names?

          • BBA says:

            Of course it’s the same for our most-used ships too. You don’t buy a ticket on the Molinari or the Newhouse, you buy a ticket on the Staten Island Ferry and you board whichever ship is at the dock.

            (Well, in that case you don’t buy a ticket at all because the Staten Island Ferry is free, but work with me, people.)

          • LHN says:

            Though cruise ships often make an effort to have an individual identity. Probably in part because they’re deliberately evoking the liner era.

          • Dissonant Cognizance says:

            What’s weird is that we have the opposite relationship to passenger trains. As a passenger you are absolutely sure you’re booking a trip on the Sunset Limited, but the actual hardware is just Amtrak 3948 or whatever. Maybe it’s because trains are composite entities.

          • The Nybbler says:

            @Dissonant Cognizance

            That’s only Amtrak, though, and its predecessor railroads. Commuter rail is much more boring. They have train numbers that correspond to schedule (that no one remembers), and car and engine numbers, but no names at all. Subways don’t even have passenger-visible train numbers.

            Like you say, it wouldn’t make sense to name a train; they don’t necessarily stay together for more than one trip.

          • Fossegrimen says:

            All trains should be named Blaine or Priscilla

          • AlphaGamma says:

            Re space, the Apollo CSMs and LMs got names (most famously Columbia/Eagle), even though they were one-shots.

            There’s an interesting story behind this.

            The Mercury capsules (also one-shot) were officially named by their crew, and these names were used as radio call signs. The plan was for this to continue into the Gemini program, but Gus Grissom chose a name for Gemini 3 (the first manned flight) that NASA didn’t like.

            (He named the capsule Molly Brown after the movie The Unsinkable Molly Brown– his Mercury capsule, Liberty Bell 7, had sunk to the bottom of the Atlantic after splashdown.)

            After this, the remaining Gemini capsules were unnamed. They simply used “Gemini 4” to “Gemini 12” as their call signs. This continued until Apollo 9, the first flight with two manned spacecraft. As they couldn’t both be “Apollo 9”, they got names- Gumdrop and Spider.

            As for Russian names, they don’t name their spacecraft. Radio call signs instead go with the commander- a given cosmonaut will use the same call sign for each flight they command. So, for example, all of Gennady Padalka’s 5 flights used the callsign Altair.

          • keranih says:

            @ hlynkacg –

            This is not about whether you optimize for atmospheric flight or interplanetary travel; it’s about the crew’s relationship with their spacecraft.

            I think this is absolutely correct. People name things. What they name varies by person and by culture –

            – anyone else remember the Grizzly Adams mule ‘Number 7’? –

            – but even if the passengers on the plane or the train or the ferry never know or care which vessel they are on, the crew and maintainers do, and they tend to consider the carriers as specific individuals.

            This is helped along by the tendency for any vehicle of any type to develop quirks and individual characteristics as it ages.

      • cassander says:

        Names for aircraft are quite common, especially for those crewed by more than 2 people.

      • beleester says:

        First off, NASA’s ships do have names, so I don’t see why spaceships wouldn’t keep up the trend. Anything that’s unique enough and rare enough that you can keep track of them individually will probably get a name.

        Also, while current spacecraft are similar to aircraft, in that they tend to be single-purpose, single-mission craft, any Space Empire worth the name will probably have spaceships that can stay away from home for a rather long time and can hang out in a distant star system to do things independently. That’s a paradigm that resembles naval ships more than airplanes or current NASA rockets.

        (It probably won’t descend from the Navy, since the Air Force and NASA are the ones actually building spaceships and rockets, but they’d probably take a few cues from the Navy in the “How to make hundreds of people live comfortably in a tin can for months on end” department.)

        • Cypren says:

          …but they’d probably take a few cues from the Navy in the “How to make hundreds of people live comfortably in a tin can for months on end” department.

          This has always seemed, to me, the obvious argument for why space fiction follows naval conventions: the social experience of cramming dozens, hundreds or even thousands of people on board a self-contained, mobile vehicle without access to greater civilization for weeks or months on end is a unique challenge in itself, and one that maritime conventions have adapted to address over centuries. The hierarchical command structure of “captain as god” evolved to maintain order with extremely limited resources and no obvious way of replenishing them in between ports, and no way for social dissidents to leave the society without what amounts to summary execution.

          It’s entirely possible that spacefaring vessels in the distant future will find some less-nautical means of managing their social environments, but I personally doubt it. This is a structure that is time-tested and has survived and thrived in a vast variety of human cultures and despite many, many seismic shifts in social mores and traditions over the centuries.

          • LHN says:

            Sometimes SF uses a naval structure out of inertia even though the situation isn’t really analogous. E.g., the 2009 Star Trek movie and its sequels have interstellar travel that’s faster between departure and destination than airliners (it takes seconds to warp from Earth to Vulcan), even leaving aside the interstellar transporters. Social dissidents can leave by transporter most of the time even during these short journeys. Even ships during the Next Generation era were rarely that isolated or out of touch. But it’s a navy in structure and culture, because Starfleet has always been a navy.

            (Which made sense in TOS, which was deliberately set up to have circumstances comparable to the Hornblower-era British Navy.)

          • Nornagest says:

            I liked the nuTrek movies (well, the first and third), but looking to them for tonal consistency and an overall sensical universe is probably not the smartest thing to be doing. Abrams and company aren’t going for that, they’re going for pulp.

            Which isn’t even bad on its face, but it does represent a departure from older Trek conventions, or at least from what fans wanted those conventions to be.

          • LHN says:

            Agreed on Abrams Trek. But more generally, SF writers have the freedom to define the parameters of what their travel tech is like. (Certainly once you get beyond very-near-future hard SF.) FTL drives don’t mostly take ocean-voyage lengths of time because that’s how fast FTL drives go, but because writers are writing about ships of the line or tramp steamers or aircraft carriers IN SPACE.

            I think it was SF writer Jo Walton who I first saw suggest an FTL model where travel times and conditions were more similar to the jet age, which I’m not sure anyone has done. (Though it was implied that was what intercolony travel was mostly like prior to the Cylon attack in the Battlestar Galactica reboot. E.g., Roslin’s craft carrying aircraft-like designations both before and after being renamed Colonial One, and the internal layout looking much more like a plane than a ship.)

            We have seen interstellar travel being done as surface travel via gates (Heinlein’s Tunnel in the Sky, Cherryh’s Gate of Ivrel, the Stargate movies and TV series, etc). There of course you don’t get a Navy, and don’t even necessarily have vehicles. Frequently the protagonists just walk.

            I remember a story in Analog in the 80s, Rails Across the Galaxy, that as the title implies had an interstellar train (using some sort of laser rails? it’s been thirty years). But it was a first contact story, so the protagonists weren’t train crew or passengers.

    • sandoratthezoo says:

      Here are some ship names that I came up with for the Eclipse Phase setting, Scum Flotilla Memento Partii.

      Si Yes Da
      Frigiarchy
      Der Lange Schwanz
      Se Kir! (aka Kir Kir Kir!)
      TITAN Shit
      Neko ga Kawaii yo!
      Nail Pounder (this ship is hammer-shaped)

      They are all intended to be punny, dick jokes, or absurdist.

      • Siah Sargus says:

        Those are pretty sweet! (and plenty punny…)

        On a bit of a related tangent… the space station where the main characters reside has a couple of different nicknames, but the most common one is “the minute”. Firstly, because the station, being 1 km in radius from center to the main floor, rotates about once every minute. Secondly, because, being one of the first large, normal-gravity stations ever constructed, it became one of the first places in space where sex was viable, and “minute” sounds awfully like the word for blowjob in their language. A similar theme persists with the American Station around Saturn being nicknamed “The Big O”.

        • sandoratthezoo says:

          Do the puns come through? I wanted them to feel kind of… convoluted? In-joke-y? The way that communities build jokes on jokes on jokes. But then I started to worry that they were all just incomprehensible to everyone who wasn’t me.

          Out of sheer lack of confidence, I will now explain them all:

          “Si Yes Da” is, first, “yes! yes! yes!” and is supposed to call out to the hedonistic “just say yes” qualities of the Scum, and also to be something that you might say during sex, and also of course be a pun for “siesta,” because they also embrace being lazy.

          “Frigiarchy” is intentionally-bad-pseudo-greek for “rule by cool people.”

          “Der Lange Schwanz” is a dick joke for the long tail (as in the statistical concept popularized by, er, “The Long Tail”) playing on the German word for tail, Schwanz, also being a slang word for dick.

          Se Kir! Hopefully means “three dicks” in maybe-dodgy-farsi, and also sounds like “Three cheers” in English. It was also supposed to be the third ship in the fleet with a dick-joke name (after Der Lange Schwanz and Nail Pounder).

          TITAN Shit is a straightforward Eclipse Phase reference, not really much of a joke.

          Neko ga Kawaii yo! Just means “cats are cute” in Japanese and is intended to be absurdist in the general vein of “very little gravitas indeed.”

          Nail Pounder is just a dick joke and a reference to the ship’s physical frame.

    • Civilis says:

      As always, the question is “who’s choosing the name?”, which often comes down to “who’s paying for the ship?” If the ship is big enough that it’s a government project, that’s almost certainly going to be something boring like a prominent person, place or event, or at best a martial virtue or appropriate symbolic animal. If you did want to justify an occasional oddball name, you could always have an individual ship financed by a kickstar… I mean, bond campaign. I’ve seen it historically mostly in reference to things like ‘this tank paid for by the People’s 21st Tractor Factory’. If whatever online troll community exists in the far future raised the money to build the ship, then USS Shippy McShipface might reasonably end up being what it’s named. Still, best to keep the truly ‘creative’ names rare and give the ship a less degrading nickname for the crew to use.

      For smaller ships, those produced in bulk with a small crew, take a look at the names given to bombers in the second world war. I couldn’t find a good comprehensive list in a quick google search, but I did find as an example the planes known to be used by one of the bomb groups (http://www.angelfire.com/ne2/b17sunriseserenade/452ndnames.html). Most are named after women (either girlfriends, daughters, or theoretical girlfriends). There are a couple of patriotic entries, a couple based on the crew’s peculiarities and a couple of song or movie titles. Based on the list of historical bomber names, and allowing for the cultural drift, the list you gave doesn’t seem that outlandish. Just be prepared to have a background explanation for any ship name that features prominently in the plot.

    • cassander says:

      A pet issue of mine is that I think the US should name aircraft carriers after places on the moon. There are a number of excellent names, Ocean of Storms, Sea of Tranquility, but the real reason is so that every ship carries the implicit name “USS Somewhere We’ve Been That You Haven’t”.

      • Siah Sargus says:

        That’s a brilliant idea! And there are a number of really good names there; Tycho, Sea of The Edge, Lake of Death, just to name a few more.

    • Dissonant Cognizance says:

      These are all going into a custom Stellaris name list.

    • hlynkacg says:

      So I had a fairly long reply, but the internet and/or spam filter seems to have eaten it and civilis has stolen the rest, so I’m going to skip straight to the TL;DR version.

      In addition to thinking about who’s naming the ship, think about what the name says about the culture doing the naming. Who are the major players in your setting? What are their philosophical differences? Are most of the vessels in your setting owned/operated by large government-like organizations, or are we looking at large numbers of small independent operators?

      By way of example, in my own setting my “orbital coast guard/LE vessels” are named after geologic features, e.g. Kilimanjaro and the Sea of Tranquility. Ships built by my space-Mormon terraforming concern have slightly more esoteric names based on Bible verses and song lyrics, such as Fiery Swift Sword and Candle in the Dark. Corporate ships are named as you would expect, and independent operators are suitably scattershot.

      • Siah Sargus says:

        Yeah, looking at the list, it seems to break down into a couple of categories rather quickly: poetry, politics, and novel quotes; song lyrics, songs, and album titles; science fiction in-jokes, puns, and sciency-jokes; names referencing significant others or romantic exploits; and finally, actual, existing ship names. I can see how each one of these categories would belong to a certain class in a collection of space ships. I have to admit, though, I’ll be having a lot more civilian ships than the usual Navy in Space sort of affair.

        • hlynkacg says:

          Even if they’re “civilians” that doesn’t change the central point. If anything it enhances it. If someone cares enough to give it a name, there’s going to be a meaning behind the name that was given.

    • rahien.din says:

      I think your names need to work on multiple levels.

      On the micro level, a ship is synonymous with her crew, so the ship’s name must somehow be an identity for her crew. The characters themselves must identify with it somehow (or consciously buck that identity). And the reader should be able to glean some meaning as to the ship’s and crew’s character from the name.

      On the macro level, a ship is a sword. The ship’s name also has to serve as an epithet for its role, a la Foe-Hammer or Red and White Flame. No one will believe your intergalactic flagship is named A Good Book and a Long Weekend.

      It’s also got to snap. People will quickly tire of reading about the maneuvers of the Insufficient Gravitas For a Meaningful Answer, and, more importantly, you will tire of writing about it. Her crew (and her enemies) will naturally develop other, snappier epithets for her, and those will make more sense.

      • Cypren says:

        No one will believe your intergalactic flagship is named A Good Book and a Long Weekend.

        I am so naming my flagship this the next time I play a game where I can name flagships. 🙂

      • beleester says:

        You raise a good point about snappiness. A long title is fine if you have a good abbreviation for it.

        Insufficient Gravitas for a Meaningful Answer would probably get shortened to “Insufficient”, and who wants to be on a ship that is insufficient? But A Good Book and a Long Weekend would probably get called “The Good Book,” and that’s perfectly fine.

        Schlock Mercenary had a nice twist on this. Most spaceships have onboard AIs, and the AI usually gets a name that’s a shortened version of the ship’s name (since you address them when you need to order the ship around). One ship, the Post-Dated Check Loan, typically went by the initials P.D., or “Petey.” Their most recent ship is the Cynthetic Certainty, piloted by “Cindy.” And the Tausennigan ships have bombastic titles like “Sword of Irreparable Damage,” but the AI’s name gets shortened to “Sword.”

        So the ship usually has, in addition to whatever clever pun or elaborate title you think would be fun, a name that you can use to shout orders without being awkward.

    • FacelessCraven says:

      @Siah Sargus – “Too Many Ngons”

      Alright, that one got me.

    • fifferfefferfef says:

      I’m just getting into this series! Last week I came up with some ship names corresponding to my friends’ personalities.

      ROU That’s What You Think
      GSV Comfort Food
      ROU Ex-Pacifist (Me)
      GSV Uninvited Guest
      GCU In A Manner of Speaking
      ROU Them’s Fightin’ Words
      GSV Tea Party
      GCU Soft Underbelly
      GSV Unintended, But Foreseen
      LSV I Woke Up Like This
      GCU How Would You Rate Your Pain?
      VFP Liminal State
      GSV Pathological Function

  19. oerpli says:

    I’m a computer science/physics student (finished everything but the Master’s thesis in both subjects currently) and I’m looking for something interesting starting from summer/autumn.

    I have found Jane Street and read about what working there is like, and it pretty much corresponds to what people here seem to enjoy, so I thought maybe a few people from here work there or at similar companies.

    Do you know companies in the Munich/Zürich/in-between-area that are similar to Jane Street?

    Related: I would start working on my computer science master’s thesis after finishing my thesis in physics in a few weeks to 1-2 months. Are there topics that could be explored in the course of an internship at one of these companies? I am rather late to the party for summer internships so off-cycle would be fine.

    • Chalid says:

      I don’t know anything about what’s available in your part of the world, but if you can’t find any firms that fit you, then you might want to try talking to a financial recruiter. Finance uses recruiting firms way more heavily than other industries, for some reason. They can get you pointed at some of the correct firms, if there are any. Options Group or Huxley are giant global recruiting firms with a European presence.

      You don’t say what attracts you to Jane Street, but there might be jobs within the Swiss banking giants that excite you in similar ways. In a big bank, culture can vary quite a lot within the firm and even from desk to desk. It’s not unlikely that there’s a good job for you somewhere in there, it’s just a matter of finding it.

  20. Siah Sargus says:

    I just took up 3d art on a whim a couple months ago, and… holy shit. Where was this my whole life? It’s quickly become my favorite pastime, it’s so… empowering? I guess?! Once you understand the underlying logic, anything you can imagine you can create, and it’s just amazing to have your personal computer displaying a complex 3d model you made. It has now consumed my waking hours. I can’t see any of my future art not incorporating CGI into the mix now that I know how to do it. I have a feeling similar sorts of epiphanies about the true capabilities of computers might very well happen when I take up a real language, more than just HTML and CSS.

    • HeelBearCub says:

      just HTML and CSS.

      This made me shudder just a little bit.

      Although, if what you add is just javascript, that will make me shudder even more.

      Sorry, my old fart is showing.

      • Siah Sargus says:

        Technically it’s a markup language and not a real programming language, yeah, I know. (This has been edited into my original comment.) Don’t worry, though, I’m not learning javascript; I did a few things with it as a kid, and my overall impression is that, while it may have been necessary for web video then, it isn’t now.

        I’ll probably branch out with some Python scripts for Blender before I go off and do anything too crazy…
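
        (For anyone curious what that kind of script looks like: a rough sketch using Blender’s bundled bpy API; it only runs inside Blender’s own Python, and the exact operator parameters vary a bit between Blender versions.)

        import bpy   # only available inside Blender's bundled Python
        import math

        # Place ten cubes in a ring around the origin -- tedious by hand,
        # one short loop in a script.
        for i in range(10):
            angle = 2 * math.pi * i / 10
            bpy.ops.mesh.primitive_cube_add(location=(5 * math.cos(angle), 5 * math.sin(angle), 0))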

        • HeelBearCub says:

          YMMV of course, but I would recommend using strongly typed and compiled languages to learn in.

          Others will disagree.

          The big thing is to give yourself a strong sense of how to structure your code in a way that is easily legible. It will improve your thinking and allows a much easier approach to things like code re-use.

          Getting a good sense of how to actually make classes (i.e. object-orientation), rather than just use objects, is a good idea as well.

          • Iain says:

            I’ll step in to disagree.

            Even beyond its use for scripting Blender, Python is my go-to recommendation for a first programming language. It’s a friendly, low-mental-overhead language in which you can learn the basic principles of how to program, with a syntax that is as close to pseudocode as anything else out there. Will you end up writing a bunch of unmaintainable spaghetti code? Yeah, probably. But I think you need those scars to understand the benefits of strongly typed compiled languages on a gut level, and until you reach that point it will minimize the unnecessary hurdles that you have to overcome.
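
            For what it’s worth, this is the sort of thing I mean by pseudocode-like; a toy sketch, nothing more:

            # Count how often each word appears in a sentence -- about as close
            # to plain English as a programming language gets.
            sentence = "the quick brown fox jumps over the lazy dog the end"
            counts = {}
            for word in sentence.split():
                counts[word] = counts.get(word, 0) + 1

            for word, count in sorted(counts.items()):
                print(word, count)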

          • HeelBearCub says:

            I know lots of people love Python. So, from a strictly revealed preference sense, I can’t disagree. Plus, I haven’t gotten around to learning the language, so I definitely can’t speak ill of it.

            It might be 6 to one, half dozen to the other. Down to personal preference and what fits one’s personal learning style.

            I have seen too many promises of “it reads like pseudocode” to take those kinds of things as something to recommend a language, though. At the end of the day, programming is still taking your thoughts and turning them into an orderly process for accomplishing them. The language won’t do it for you.

            And there is some truly bizarre and inscrutable shit that happens in scripted languages with weak typing. Far better to know early on that misspelling a variable name (or just mis-casing it) will break your program; mistyping something unintentionally can otherwise result in some truly baffling behavior…

            My early years were spent in Basic, and my college years were strictly in Pascal, and I still sometimes wish OO had become the dominant paradigm before I entered college so I would have gotten an early grounding in those principles.

            As it was I didn’t have to do any OO stuff until 8+ years into my career.

          • Siah Sargus says:

            Well, again, the primary purpose will be scripting for blender, so Python it is.

            I saw a similar divide in martial arts whenever the question of “what should a beginner take” came up. I think that everyone has biases when it comes to their own preferences, but the fundamentals remain the same.

          • Iain says:

            At the end of the day, programming is still taking your thoughts and turning them into an orderly process for accomplishing them. The language won’t do it for you.

            I agree completely. There are certain real conceptual leaps that need to be made to understand programming, and there are no shortcuts. At the same time, it is possible for languages to put up obstacles and distractions: “public static void main(String[] args)” makes perfect sense to me now, but was a magical incantation when I first started learning Java. There are corners of Python with pseudo-mystical boilerplate, but it takes longer to bump into them than it does for most languages.

            I’m totally with you on the value of static typing. I just think you need to crawl before you learn to walk, and Python is a good language for crawling.

          • HeelBearCub says:

            @Iain:

            I agree on crawling before walking, and I understand about magical incantations and how they interfere with the ability to understand what is actually being done.

            On the other hand, consider something like (without much thought being put into the example):

            myVar = 1

            mVar = myVar + one

            And now consider all of the myriad ways that can fail in various uncompiled languages without strong typing or required declarations. Also consider what that might look like with enough separation between the two lines of code, and whether it will still make sense, especially if the line in question is ambiguous enough.

            Further consider whether implicitly declared and weak types will mean that someone doesn’t understand that there is such a thing as a type at all.

            Sure, 6 to one, half dozen to the other, and I would never recommend c++ as the first language, but compiled, strongly typed languages have some distinct advantages.
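
            To make that concrete, here’s roughly how those failure modes play out in a dynamically checked language; a Python sketch, not a knock on Python specifically:

            myVar = 1

            # Typo on the right-hand side: 'one' was never defined. Python only
            # notices at run time, and only if this line actually executes; a
            # compiler with required declarations would flag it before the
            # program ever ran.
            # mVar = myVar + one   # NameError: name 'one' is not defined

            # Typo on the left-hand side: this silently creates a brand-new
            # variable instead of updating myVar, and nothing complains at all.
            myvar = myVar + 1
            print(myVar)  # still 1, which may come as a surprise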

          • Iain says:

            I think we agree on principles and only disagree on relative weighting, so to turn this discussion in a less repetitive direction: I agree that C++ is a terrible language to start with (or indeed to ever end up programming in, he said, glaring balefully at the codebase in front of him). Which compiled language with static typing do you think is actually good for beginners, though? Java? I have recently transitioned from being a Haskell weenie to being a Rust weenie, but I would hardly recommend either of those languages as a good starting point.

          • HeelBearCub says:

            C# is great. Many of the advantages of Java, very similar to Java actually, but with a phenomenal IDE that just works. Environments are far more uniform as well, so the question “what flavor of …” doesn’t really come up, or not in the same way.

            I understand there are people who are opposed to Microsoft for various reasons, but now with Mono being an official MS technology, you can even deploy to Linux with official imprimatur.

            But, that is what I am immersed in right now. So of course I am biased.

            My side project is all in AWS on a linux/node/koa/swagger stack. That is painful, especially as someone with a Windows development box.

            I also have done a little Android Studio work recently (java plus some android specific stuff that is all based on what was originally a JetBrains IDE, I believe). I found it to be horrific in a bizarre “shit just stopped working” way.

          • Nornagest says:

            C# as a language is pretty good, and Visual Studio is an exceptional product, but IME getting anything substantial done in C# stumbles on the libraries available; painfully verbose integration slogs seem pretty much inevitable.

            There is at least a right way to do it, whereas doing the same thing in C++ always resembles an episode of Junkyard Wars, but more often than not I can get the latter done faster than the former. Plus, verbosity breeds bugs.

          • Art Vandelay says:

            I’ve been thinking about learning to code and had thought about starting with Python. Does anyone have any recommendations on good, free/very cheap resources for doing so?

          • Nornagest says:

            Learn Python The Hard Way is popular, though I find its style grating.

          • Iain says:

            I think there are a number of freely available Coursera courses that teach programming using Python, if that is compatible with your learning style.

          • Mark says:

            I don’t really understand the value of static typing.
            If your program breaks in an obscure way because you’re doing something like
            one = 1

            one += '2'
            and you’re not aware of what you’re doing, then doesn’t that just mean you’re a bit of an idiot?

            Does it really make that much difference, either way?

          • Nornagest says:

            The biggest advantage is that type errors get caught at compile time. Everyone is always occasionally an idiot, and if it’s your turn to be an idiot, it’s better to find that out before your code gets checked in, let alone before it makes it into production.

          • suntzuanime says:

            [if you need static typing] doesn’t that just mean you’re a bit of an idiot?

            There aren’t enough cool sexy geniuses to go around, exacerbated by the fact that some of them just troll psychiatry blog comments all day instead of doing useful work. If you can get code written with people who aren’t entirely non-idiotic, you’ll get a lot more code written than if you try to just have RMS do everything.

            In the context of a person learning to program, someone who is not yet experienced is likely to make some silly mistakes in the learning process, and static typing makes those mistakes more obvious. As someone who has had to hunt down bugs in student python code, I can tell you that python’s willingness to just go with the flow is not purely upside.

          • Mark says:

            I think it would be quite unusual, and bad practice, to reassign a variable to something of an entirely different type in any language.

            So, it doesn’t really make that much difference, once you get beyond the absolute beginner stage?

            Instead of “yeah… don’t do that”, it’s more “don’t do that.”

          • Brad says:

            The big thing is to give yourself a strong sense of how to structure your code in a way that is easily legible. It will improve your thinking and allows a much easier approach to things like code re-use.

            Getting a good sense of how to actually make classes (i.e. object-orientation), rather than just use objects, is a good idea as well.

            I use typed OO languages all day long, but over the course of the last decade have moved further and further away from the deep hierarchy of classes method of organizing medium size code bases (never worked on large ones). These days I’m much more likely to take a functional style approach to organization.

          • 3rd says:

            Kotlin is pretty cool. It’s based on Java, but fixes a few warts. It takes a lot of influence from Python.

            fun main(args: Array<String>) {
                var myVar = "23"
                var fiveTimes: Int = myVar.toInt() * 5
                println("Value of var is $fiveTimes")
            }

            It has great IntelliJ support (like an automated Java -> Kotlin converter), having been started by JetBrains.

          • suntzuanime says:

            I think it would be quite unusual, and bad practice, to reassign a variable to something of an entirely different type in any language.

            Not unusual. It’s really easy to accidentally pass a string when you meant a list of strings, or a string representing an integer when you meant an integer, or whatever. Keeping track of levels of quotation is really hard for humans.

            Not even always bad practice. This is the philosophy of “duck typing”, that if all you need is the quack, your variable can store anything that quacks like a duck, not just a duck. This lets you be more flexible and spend less time getting bogged down distinguishing tuples from lists from dicts when you don’t really need to care.
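
            As a rough Python sketch of the duck-typing idea (the class and function names here are made up for illustration):

            class Duck:
                def quack(self):
                    return "Quack!"

            class PersonImitatingADuck:
                def quack(self):
                    return "I'm quacking, honest."

            def describe_noise(quacker):
                # No type declaration anywhere; anything with a quack() method works.
                print(quacker.quack())

            describe_noise(Duck())
            describe_noise(PersonImitatingADuck())

            # The flip side: pass in something that doesn't quack and you only
            # find out at run time, with an AttributeError.
            # describe_noise(42)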

          • Chalid says:

            I’m going to second Mark in that I’ve never really seen the value of static typing in my experience. I’ve spent a lot of time debugging code in my day and hardly any of that time was spent on issues that static typing would have caught for me. Sure, I make an occasional type error, but that sort of thing has been very obvious and easy to catch during testing. (Though maybe it isn’t so easy in all applications?)

          • HeelBearCub says:

            Re: static typing

            What suntzu said.

            But also, especially germane to this conversation, not reassigning type mid-flow is in fact a generally good practice. This good practice is not just encouraged, but essentially mandated by strongly typed languages with explicit declaration. Once you learn to code in this type of language, you tend to only exploit type-mutability in specific and considered ways.

            Whereas, if you learn to code in a scripted language that only has implicit weak typing, you don’t absorb that lesson by osmosis.

            Also, strongly typed languages make type errors really obvious, whereas weak typing makes type errors obscure.

            None of this is anything like a drop-dead reason not to use weak typing or scripted languages to learn in. It’s shades of nuance.

          • Aapje says:

            Java has become very complex and suffers from a long history of backwards compatibility. On the other hand, a huge advantage is that there are a lot of libraries and example code.

            But Python has plenty of that as well and seems like a better language to start with.

          • HeelBearCub says:

            @Nornagest:

            IME getting anything substantial done in C# stumbles on the libraries available; painfully verbose integration slogs seem pretty much inevitable.

            This has not been my experience, or at least I don’t recognize it. Perhaps I don’t know what you mean.

            Indeed, one of the things that makes C# attractive to me is the number of high-quality libraries available directly from Microsoft, meaning you don’t have to choose between various 3rd party libraries.

          • Brad says:

            The consequence of that, though, is that there’s virtually no 3rd party ecosystem at all. If Microsoft has the library you need, it’ll probably be high quality and polished, but if they don’t, you are SoL.

          • Art Vandelay says:

            @Nornagest

            Thanks for the tip, I’ll give it a try.

            @Iain

            I find lectures/presentations alright for learning history or social science type stuff. However, when I was at school I always found it far easier to understand maths by reading the descriptions in the book than by listening to the teacher explain it and I’m assuming coding is likely to be similar. Could be that I had bad maths teachers though so I might try starting a course and seeing how it goes.

          • Jaskologist says:

            Static typing is nice when you have to hop back into the code and rearrange things. Need to change a method signature? Compilation will immediately tell you all the call sites that need to change. Or maybe you really should be passing around a proper class instead of the Hash<String, List<Integer> > you lazily threw at the problem earlier; now it will be easy to rip that out and replace it, confident that you got them all.

            Plus, types are a form of documentation. Very often, they’re the only documentation.
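
            For what it’s worth, Python has grown an opt-in analogue of this (PEP 484 annotations plus an external checker such as mypy); a sketch only, and not a claim that it gives you everything a compiler does:

            from typing import List, Tuple

            # The annotated signature documents what the function expects without
            # a comment: a list of (name, price-in-cents) pairs, returning cents.
            def total_price(items: List[Tuple[str, int]]) -> int:
                return sum(price for _name, price in items)

            print(total_price([("towel", 4200), ("babel fish", 1)]))

            # If the signature later changes (say, prices become floats), running
            # a checker like mypy over the codebase flags the call sites that no
            # longer match -- the dynamic-language stand-in for "compilation will
            # tell you".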

          • Machina ex Deus says:

            This discussion has gotten corrupted by misuse of “strong” and “weak” typing.

            Python is strongly typed: if you try to add 1 to “foo” it will tell you to go fish. But it is dynamically typed: it’s not going to worry about types until it has to.

            C is weakly typed: if I want to treat any object at all as a series of bytes, that’s just a (char*) cast away. But it’s statically typed: you need to tell it that variable x is an int, or a char, or a pointer to unsigned short. It won’t compile without you giving it that information.

            Strong typing is good. Everybody agrees that code shouldn’t do nonsensical things. The disagreement is entirely about static vs. dynamic typing.
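
            A quick Python sketch of that strong-but-dynamic combination (the C half would need an actual cast, so only the Python side is shown):

            # Strong typing: mixing types nonsensically is an error...
            try:
                1 + "foo"
            except TypeError as err:
                print("Go fish:", err)

            # ...but dynamic typing: the error only shows up when the line runs,
            # and a name is free to hold an int one moment and a string the next.
            x = 1         # x holds an int here
            x = "foo"     # now a string; nothing checked that ahead of time
            print(type(x))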

            In my personal experience:

            1) Once I started doing test-driven development, the compiler looking over my shoulder wasn’t adding any value.

            2) People who’ve only used statically-typed languages tend not to see opportunities for polymorphism. They live in a duckless world. Encapsulation is great; inheritance is frequently useful even if it’s overrated; but polymorphism is the soul of OO.

          • Cypren says:

            I’m a bit surprised no one has mentioned Scala as a good first programming language. It’s an excellent balance between strictness (and compile-time error checking) and the looser, “just get it done” philosophy of dynamic scripting languages, using a lot of compile-time inference to reduce the required amount of pedantic detail on the part of the programmer.

            It also has the advantage of binary compatibility with Java (opening access to an enormous number of well-documented and well-supported open-source libraries) and advanced features borrowed from Haskell for programmers who are inclined to push their knowledge further.

            @Machina ex Deus: Test-driven development is a wonderful thing if you’re able to start a codebase from scratch. It’s nearly impossible to retrofit into an existing codebase after the fact, and unfortunately, the vast majority of us do not have the luxury of starting codebases tabula rasa. I’ll take a language-enforced requirement for strictness, which will persist through a series of management turnovers and project direction changes, over a human-enforced policy of developing comprehensive tests any day. The latter is far too likely to be slipped or ignored as soon as deadline or financial pressure mounts, and the longer a codebase exists, the closer probability converges to 100% that your tests will no longer cover significant portions of the code.

          • Whatever Happened To Anonymous says:

            and I would never recommend c++ as the first language,

            I agree that C++ is a terrible language to start with

            So…uh… when you guys say “terrible to start with” you mean like “start from scratch” or just in general “don’t approach this unless you know what you’re doing”?

            I’m asking for a friend.

          • hlynkacg says:

            I would say that it’s likely that learning C/C++ will be incredibly frustrating if you’re starting from scratch, but it’s probably one of the better languages to know/work in if you actually want to “grok” comp-sci.

          • John Schilling says:

            I would say that it’s likely that learning C/C++ will be incredibly frustrating if you’re starting from scratch, but it’s probably one of the better languages to know/work in if you actually want to “grok” comp-sci.

            Pascal was I believe designed and was definitely used as a relatively beginner-friendly instructional language for people who were expected to do their “real” programming in C/C++; I don’t know how common that practice still is, but I think it worked fairly well Back In The Day.

          • The Nybbler says:

            @John Schilling

            Definitely not; Pascal is roughly contemporary with C and was intended to be a professional language; its competitors were the old standbys of Fortran and COBOL, as well as assembler and a bunch of languages which have fallen by the wayside (various forms of ALGOL, PL/I, Simula, etc). The rise of C and Unix was well in the future.

          • HeelBearCub says:

            c/c++

            I will amplify what hlynkacg said. These are what I would call “bedrock” languages. (Assembler is probably something more like the lithosphere, to really stretch the analogy.)

            Learning c exposes you to some of the fundamentals of how syntax gets turned into logical operations on a “virtual” computer. You come to understand things like how an “object reference” is actually carried out, and why a pass by reference is more efficient than pass by value. You have to be concerned with things like memory management, which means you get an understanding of what memory management actually is. And so on.

            Mostly though, much like being able to read assembler directly in hex, this is a fairly useless skill these days. More modern languages take care of these things automatically, and Moore’s Law has mostly meant that the advantage that can be obtained by doing a better job is too small to matter.

            This wasn’t the case, say, 5 or 10 years ago, when most every game was written in c++, and gaming companies wrote their own graphics engines, so that they could get as close to the top performance of the hardware as possible. (I am not a gaming programmer, but have a good friend who is, so take this as a pointer at something true, rather than gospel.)

            So, whether you “should” learn c/c++ is mostly down to a) whether you have a specific application that needs them, or b) you feel the need to explore a much deeper understanding of underlying principles.

          • HeelBearCub says:

            @The Nybbler/@John Schilling:
            I learned in Pascal, and my impression from my professors matches John’s.

            I never ever encountered even a sniff of Pascal in the professional world.

          • The Nybbler says:

            @HBC

            Yes, Pascal ended up as a teaching language, but it was not designed as one. The UCSD P-system probably had a lot to do with that, by making it widely available; it was the only structured language available on the Apple II for some time, for instance.

          • HeelBearCub says:

            @The Nybbler:
            Fair enough.

            Regardless of how it came to be, I think Pascal filled the role of a “teaching” language admirably. I still sometimes wish it had been an OO language, but given the timeframe I learned in, that is sort of a nitpick.

          • Brad says:

            My first and only formal instruction in programming was in c++. Although the STL standard library was available, it was in the early years of its release, and my university, in its infinite wisdom, decided to ignore it in favor of its own homebrew standard library.

            It was horrible pedagogy and I can’t recommend it to anyone.

          • Iain says:

            So…uh… when you guys say “terrible to start with” you mean like “start from scratch” or just in general “don’t approach this unless you know what you’re doing”?

            C and C++ are the last remaining languages in common use that are not memory safe, in the sense that you have to allocate and free memory yourself, and there are no guard rails to protect you from buffer overflows and use-after-free bugs and all those other fun foot guns. This lets you squeeze a little bit more performance out of your code, which is why they are still used in a lot of cases where performance is important: operating systems, compilers, games, browser engines, and so on. The theory is that sufficiently good programmers can manage the complexity. In practice, pretty much any non-trivial codebase is going to have lurking memory management issues. There’s also a significant cost to programmer productivity — having to carefully think through how you are using memory slows down the pace of development.

            C is a relatively simple language. C code maps more-or-less directly to idealized hardware. It is not an accident that C is almost inevitably the first language supported on new hardware. It’s hard to write bulletproof code in C, but there is definitely value in learning and understanding C.

            C++ has pretty much never seen a new feature it didn’t like. It is intended as a “multi-paradigm” language, which means that every new idea for how code should be organized in the last thirty years has been grafted onto the side of the language, creating a sprawling monstrosity with a million and one different ways to do things, all of which have had to be slightly compromised to fit in with the previously existing aspects of the language. Every C++ codebase uses its own subset of the language. Meanwhile, the officially sanctioned “right way” to write C++ is a constant moving target.
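
            A deliberately mundane sketch of the “million and one ways” problem: four perfectly idiomatic ways to sum a vector, each from a different era of the language, and every codebase picks its own favourite.

            #include <numeric>
            #include <vector>

            int main() {
                std::vector<int> v{1, 2, 3, 4};

                // 1. C-style indexing
                int a = 0;
                for (std::size_t i = 0; i < v.size(); ++i) a += v[i];

                // 2. Explicit iterators (the C++98 idiom)
                int b = 0;
                for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it) b += *it;

                // 3. Range-based for (C++11)
                int c = 0;
                for (int x : v) c += x;

                // 4. A standard algorithm from <numeric>
                int d = std::accumulate(v.begin(), v.end(), 0);

                return (a == b && b == c && c == d) ? 0 : 1;   // all four are 10
            }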

            I would encourage a person learning how to program to get comfortable with C, but only after learning the basics of programming in another language. I would encourage that person to learn how to program in C++ only if they have to work on a previously existing C++ codebase. I would encourage anybody who wants to write low-level, high-speed, close-to-the-metal system code to take a long look at Rust, a new language being developed by Mozilla. In many ways, Rust is what you would get if you redesigned C++ from scratch with the benefit of thirty years of hindsight, carefully structured so that it can reject memory-unsafe code at compile time instead of gleefully generating buggy code. Rust is the first language I’ve seen that has a real chance of knocking C++ off of its dark throne.

            (In case it was not obvious, my day job involves a lot of C++.)

          • HeelBearCub says:

            @Iain:

            creating a sprawling monstrosity with a million and one different ways to do things

            You are obviously a lot closer to it than I am, but I think some of the problems of C++ in this regard are baked into its inheritance from C.

            I mean, there is an International Obfuscated C Code Contest for a reason.

          • Nornagest says:

            Pascal was, I believe, designed (and was definitely used) as a relatively beginner-friendly instructional language for people who were expected to do their “real” programming in C/C++; I don’t know how common that practice still is, but I think it worked fairly well Back In The Day.

            I think mine was the last generation to do that, and I caught the very tail end; the programming courses I took in high school started with Pascal before graduating to C++, but by the time I got to real-boy college everything that wasn’t systems programming or high-speed graphics (C/C++) or AI (Common Lisp) was doing its stuff in Java. These days I expect Python or something like it would have taken Java’s place there.

            I don’t think Pascal is a very good educational language, though. It lacks most of C’s really nasty pitfalls, but makes up for them with plenty of nonsensical restrictions that do nothing but build bad habits.

          • Iain says:

            You can write obfuscated C, but at the end of the day C is a pretty small language. Consider c4, the minimalist self-hosting C compiler consisting of only 4 functions. It doesn’t support the entire language, but it supports a pretty reasonable subset, especially given that it only has about 500 lines of code. There’s no way you could do anything remotely close for C++. Of all the languages in existence for which you could write a compiler front-end, I think C++ might actually be the hardest. (See: Parsing C++ is literally undecidable.)
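
            A small everyday example of why the parser needs so much context (this isn't the undecidability construction itself, just the mundane version of the problem): the same token sequence means two completely different things depending on what was declared earlier.

            struct T {};            // here T names a type

            void asDeclaration() {
                T * x;              // declares x as an (uninitialised) pointer to T
                (void)x;
            }

            void asExpression() {
                int T = 6, x = 7;   // now T and x are just local ints
                T * x;              // the identical tokens are a multiplication whose result is discarded
            }

            int main() {
                asDeclaration();
                asExpression();
                return 0;
            }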

            Templates are one big culprit. (A fun example.) The various kinds of inheritance and polymorphism don’t do it any favours either, especially when you throw in exception handling and have to figure out what happens when an exception is thrown in a constructor. Initialization in general is a minefield. Operator overloading makes it difficult to figure out exactly what is being done in a line of code. I write C++ every day and honestly have no idea what is going on with the move semantics stuff: this Stack Overflow question is a good example of how complicated things have become.
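
            On the operator overloading point specifically, a quick sketch of why “what does this line actually do?” stops having an obvious answer. (Path is a made-up type for the example, though the standard filesystem library does something very similar.)

            #include <iostream>
            #include <string>

            struct Path {
                std::string value;
                // '/' is overloaded to join path components rather than divide anything.
                Path operator/(const std::string& component) const {
                    return Path{value + "/" + component};
                }
            };

            int main() {
                Path root{"/usr"};
                // Reads like arithmetic, but each '/' builds strings and allocates memory.
                Path bin = root / "local" / "bin";
                std::cout << bin.value << "\n";   // prints /usr/local/bin
                return 0;
            }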

            I could keep going, but you hopefully get the point. These are all C++ specific issues. Unless you are actually going to write code in C++, you can get 90% of the educational benefit in 10% of the time by just sticking to C. (But don’t let anything I’ve said discourage anybody from learning C++ if they do have a good reason; I write C++ for a living, after all.)

          • Cypren says:

            As someone else who works primarily in C++ (and in games, specifically), I will second pretty much everything Iain said.

            C++ is a fantastic language when you are programming alone because it is so incredibly expressive: it gives you the tools to do whatever you want, whenever you want, and doesn’t make any attempt to “protect” you from the consequences. For a master, this is incredibly appealing, as the safeguards in many other programming languages frequently get in the way of extremely experienced engineers (especially when focused on hardware-level optimization) rather than helping them.

            But this same expressiveness is incredibly dangerous when used in a team environment, where standardization and readability are critically important. There’s a high chance that someone else (or your future self, without the day to day familiarity of having just written the code) is going to have to come along and fix bugs in or extend your code in a few years, and the more “clever” and original your solution, the harder it’s going to be for them to understand it and successfully modify it without breaking things. As much as C/C++ programmers like to mock Python’s sanctimonious “there is only one way to do it” philosophy, it definitely makes it easier for a programmer to hand off their code to someone else.

            Switching gears a bit, HBC mentioned earlier that C++ is falling out of fashion in game programming. This is an area I can speak to with some familiarity; I’m an engine programmer for a well-known game studio.

            All commercial game engines of any significance are still written in C or C++ right now. (There are non-commercial and hobbyist projects that are written in other languages, but none have really taken off.) But some of those engines are no longer extended using C++.

            The most popular and well-known example of the rise of managed programming languages in gaming is the Unity engine, which is itself written in C++, but can be extended with a few different programming languages (C# being the most common). Up until about 18 months ago, this was done through the incorporation of a heavily-modified version of the Mono framework (an open-source implementation of Microsoft’s .NET platform) into the engine, with bridges connecting the licensee-written C# with the engine’s own C++ code.

            More recently, the Unity developers have moved away from Mono and written their own compiler, which translates the IL bytecode of the .NET platform (the “portable assembly” that C# and other .NET languages compile into) into C++ code, which is then compiled for the target platform. This transition has happened mostly out of an acknowledgement that general-purpose managed runtimes simply aren’t sufficiently performance-optimized for games, and that using something with more limitations but higher performance is a worthwhile tradeoff for this specific application. So even though the engine initially represented a move away from low-level unmanaged code and towards a “Moore’s Law” future where hardware performance is high enough that programmers no longer need to hand-optimize so much, that move is now being walked back somewhat.

            Unity is a very popular game engine due to its ease of use and cheap licensing terms, and has largely swept the mobile game market at this point and made significant inroads into traditional console/PC games as well, especially in smaller studios. As a result, it’s entirely possible to make a career as a game programmer now without knowing C++ so long as you’re working on smaller, lower-budget titles. But larger titles (the so-called “AAA” games that have development and marketing budgets in the hundreds of millions) are all (to the best of my knowledge) still built using either wholly-custom or heavily-customized engines using C++. Most major development studios have an in-house engine, and those that don’t have typically taken a commercial engine (usually Unreal) and modified it so heavily that it’s no longer really compatible with its original parent.

            ETA: @Gazeboist: Shout-out to my D brother! I love the language; it’s unfortunate that it’s never gained enough traction (and hence, library support and widespread professional knowledge) to threaten C++. I’m hoping that Rust might succeed where D failed, but I’m not holding my breath.

          • Machina ex Deus says:

            @Cypren:

            Test-driven development is a wonderful thing if you’re able to start a codebase from scratch. It’s nearly impossible to retrofit into an existing codebase after the fact, and unfortunately, the vast majority of us do not have the luxury of starting codebases tabula rasa.

            Whatever code you’re adding, you can add in a test-driven way. And even if you’re only deleting code (which should be a much bigger part of maintenance than it usually is), you can stick tests around the area you’re changing to make sure you don’t change any behavior accidentally.

            Is it obvious at first how to do this? No. Can you do this in a practical way with legacy codebases, such that the payback for the time you invest is on the order of weeks, not years? Yes.

            I’ve done this on several legacy projects so far, and while each team initially took the attitude that, “Yeah, that might work for some projects, but ours is different,” it turned out the differences didn’t matter.

            I should also mention that test-driven development’s benefits are mostly in how it makes you structure the code, and how it keeps you from creating code you don’t actually need. The suite of tests that falls out is a nice little bonus, one that most teams quickly learn to value as an early indicator of problems and to keep current.
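
            To make the “stick tests around the area you’re changing” idea concrete, here's a bare-bones sketch using nothing but asserts. The function and its rules are invented for the example; the point is that the tests pin down current behaviour before you touch anything.

            #include <cassert>
            #include <string>

            // Pretend this is the legacy function you need to modify. Its exact rules are
            // undocumented, so before changing it you record what it does right now.
            std::string legacyFormatName(const std::string& first, const std::string& last) {
                if (first.empty()) return last;
                return last + ", " + first;
            }

            // Characterization tests: they assert current behaviour, not desired behaviour,
            // so any accidental change during your refactoring fails loudly.
            void characterizeLegacyFormatName() {
                assert(legacyFormatName("Ada", "Lovelace") == "Lovelace, Ada");
                assert(legacyFormatName("",    "Lovelace") == "Lovelace");
                assert(legacyFormatName("Ada", "")         == ", Ada");  // odd, but that is what it does today
            }

            int main() {
                characterizeLegacyFormatName();
                return 0;
            }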

            @Everybody talking about Pascal:
            The Macintosh Toolbox was written in Pascal. That’s a library that shipped in the ROM of tens of millions of machines, and was used by tens of thousands of programmers.

            Also, while poking through an early version of Microsoft Word for the Mac (3.1, maybe?) with ResEdit, I ran across pCode resources. I don’t know if they used pCode for cross-platform portability, or just as a way to keep the program’s size in memory small, or what. But my guess is that the source language was Pascal.

            I regret never using Pascal on Apple ][ machines. My first Pascal was Turbo Pascal, on ridiculously limited PCs (286s, I think) over 30 years ago. The class I was in that summer had each of us write a small interpreter for a small language (PURPL), in Pascal, in three weeks. It’s possible I have the printout for my code around someplace…

        • skef says:

          Computer language abstractions account for maybe 10% of any given programming problem. I understand why people in the “language business” portray them as more like 70%, but I’m generally mystified by how much people seem to care about them otherwise.

          In previous decades there were endless complaints about the number of characters needed to do this or that, as if everyone was a super-producer. How many lines of code you laying down in a day, dude? Just askin’

          The utility of static typing increases for code that is reused, which most often means this or that library. For single-role code, the cost of adding it may outweigh the benefits (even so, it’s just some extra characters). This is why a newer trend is optional static typing, so that static types can be deployed where they’re useful.

          At this point, the one thing I tend to miss in a language lacking it is a real closure semantic (e.g. not Python inner functions). But it’s not a big deal.
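
          For concreteness, the sort of thing I mean by a real closure (shown here with a C++ lambda, purely as an illustration): a callable that captures its own private state and carries it around.

          #include <functional>
          #include <iostream>

          // Each counter returned here owns its own captured copy of 'count'.
          std::function<int()> makeCounter() {
              int count = 0;
              return [count]() mutable { return ++count; };
          }

          int main() {
              auto a = makeCounter();
              auto b = makeCounter();
              std::cout << a() << "\n";  // 1
              std::cout << a() << "\n";  // 2
              std::cout << b() << "\n";  // 1 (b has its own state)
              return 0;
          }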

    • FacelessCraven says:

      *checks replies, nothing but code talk*

      …for shame.

      @Siah Sargus – Gratz on getting into 3d! I’ll concur on it being astonishingly addictive. Judging by a reference further down, I take it you’re using Blender? I’ve thought about trying it a few times, but haven’t ever gotten around to it; I’ve dabbled in Modo a bit, used XSI in school before goddamn Autodesk bought them out and discontinued the line, spent far too long using Maya for no good reason, and now spend most of my life in Silo. I’m hoping to take a real crack at Zbrush sometime soon, as it’s been a long-term goal for me. I’m currently off-and-on trying to master complex sub-d/h modelling for hard-surface props; guns and spaceships, mainly.

      Got anything to show so far? And you were in here talking about a comic a while back; made any progress on that? I was looking forward to checking it out.

      • Siah Sargus says:

        I have a few things to show for sure, and yes I will be using these as “assets” for the comic. I’m on my phone right now, but I’ll show you some of my more finished models and a few renders tomorrow!

        • FacelessCraven says:

          very much looking forward to it! toss up some wireframes too while you’re at it!

          • Siah Sargus says:

            Well, I’ll start with some of the simpler models I’ve done so far. (To be fair, most of the models I’ve made are in some state of not finished, so these are the most presentable ones I have right now.) Anyways, I figured that most people would prefer compact, movable keyboards to bulky terminals, so one of my first models was a small mechanical keyboard.

            http://imgur.com/a/JS4N0

            And what I am working on now: a pilot’s chair for that two-seater shuttle I’m still WIPing.

            http://imgur.com/XQjCnyx

          • FacelessCraven says:

            nice work! seems you’re well on your way! Depending on how much 3d you’re intending to do for a comic, I’d look into kitbashing as well; building everything from scratch is great for learning, but probably too slow for production work.

          • Siah Sargus says:

            Thanks for the tip! I’ll certainly be looking at creating my own prefabs at the very least.

    • Douglas Knight says:

      The best language is the one you actually use. Choosing a language because it is the right one for your project is a great reason to choose it (unless you stall out and stop playing around with it). The other way to pick a language is to pick teaching materials and trust those materials to pick a good language. The pedagogical quality of the materials is more important than the pedagogical quality of the language. The Learn Python the Hard Way guy has made variants for other languages because it doesn’t matter much. He does have some things to say about his experience teaching programming and I think it favors Python, but for very different reasons than given above in the thread. I trust him a lot more than I trust the posters because his advice is grounded in experience.

      • Iain says:

        Zed Shaw is a weird guy. I’ve never read his books, so I can’t speak to their quality directly, but his criticism of Python 3 is pretty off base. See here for one good rebuttal. (I think it is obvious from reading the two posts side-by-side who is wrong here, but I can elaborate further if you disagree.)

        That said, although I’ve seen some reviews of Learn C the Hard Way that would make me hesitate to recommend it, everything I’ve seen about Learn Python the Hard Way indicates that it’s a perfectly cromulent place to start Python, so long as you are prepared to take some of the author’s more forceful opinions with a grain of salt.

  21. mobile says:

    Follow up to the bizarre kidnapping story from the September 2015 links thread: the kidnapper has been sentenced.

  22. Ronan Nobblewit says:

    I was startled in the review of “Seeing Like a State” to see Scott holding up the Green Revolution as an example of an unmitigated High Modernist success, and then in the comments to see almost everyone agreeing with this assessment.

    Someone (I forget who, I’m sorry) did mention that the Green Revolution is to a large extent made possible by the use of nonrenewable resources, which means that as these resources become more scarce the relevant agricultural techniques may not remain useful. One potential outcome of that is a future impending population crash, which would then with the retrospectoscope make the whole thing seem like less of a great idea.

    There’s also the difficulty that replacing traditional polycultures with cereal crop monocultures has, while allowing more calories for some previously starving people, not provided as adequate a balance of macro and micronutrients as traditional diets might have done.

    But the other major consideration is that at no time have Green Revolution agricultural practices resulted in everyone having enough food to eat, even though the total amount of food in the world is now more than enough for everyone. Problems in distribution of resources have made it so that there have continued to be malnourished people in all parts of the world, but especially in Africa, Southeast Asia, and South America. Meanwhile the increased total quantity of available food has to have been at least a partial contributor to the massive growth of human population (with all of the problematic effects on the world that have come from that growth).

    In 1950, there were about 230 million total people in Africa. Today, there are about 230 million undernourished people in Africa.

    In 1950, there were about 1.4 billion people total in Asia. Today, there are about 500 million undernourished people in Asia.

    I’m not always much of a utilitarian, but that doesn’t look like what I would call an unqualified utilitarian success.

      • John Schilling says:

        All four of those nations are busy fighting major civil wars, which I do not believe is a coincidence. Concerns about the sustainability of the Green Revolution aside, at present mass starvation occurs only when people with guns prevent people with food and/or agricultural know-how from going about their business of feeding hungry people.

        • Ronan Nobblewit says:

          Well, exactly.

          Although “starving” is different from “undernourished”.

          The Green Revolution has not been successful at feeding the hungry (or probably even at decreasing the total number of hungry people) because “not enough food” isn’t the problem. Since something like 1990 people have been doing better at getting food to where it’s needed, but to my mind the Green Revolution itself in the mid-century is not something it’s appropriate to call a success of high modernism.

          • cassander says:

            >The Green Revolution has not been successful at feeding the hungry (or probably even at decreasing the total number of hungry people) because “not enough food” isn’t the problem. Since something like 1990 people have been doing better at getting food to where it’s needed,

            Both of these assertions are false. Even if famine deaths weren’t way down, global population has grown dramatically, and the green revolution is why those billions of new people aren’t starving. Saying “‘not enough food’ isn’t the problem” entirely misses the point. The green revolution made supplies of food more regular, more robust, and much cheaper, all of which made more food available to the people who needed it.

            >but to my mind the Green Revolution itself in the mid-century is not something it’s appropriate to call a success of high modernism.

            It’s not high modernism, but it’s definitely a success.

          • Odovacer says:

            I agree with you that food that doesn’t reach those in need is not useful, but I think you’re really selling the Green Revolution short. It greatly increased crop yields and led to reduced hunger for millions of people. If you can’t call that a success, then you must have a very high bar.*

            From: http://www.pnas.org/content/109/31/12302.full

            Widespread adoption of Green Revolution technologies led to a significant shift in the food supply function, contributing to a fall in real food prices. Between 1960 and 1990, food supply in developing countries increased 12–13%. Estimates suggest that, without the CGIAR and national program crop germplasm improvement efforts, food production in developing countries would have been almost 20% lower (requiring another 20–25 million hectares of land under cultivation worldwide). World food and feed prices would have been 35–65% higher, and average caloric availability would have declined by 11–13%. Overall, these efforts benefited virtually all consumers in the world and the poor relatively more so, because they spend a greater share of their income on food

            On a tangent, are you familiar with Hans Rosling? He died recently; he was a popularizer of poverty statistics, the fight against poverty, and the correction of common misconceptions about the world. The short of it is, there’s much to be done, but overall things are improving for those in developing countries.

            *I’m not saying that there are no drawbacks to the Green Revolution, but I would call it successful for alleviating poverty and hunger for millions of people.

    • Dr Dealgood says:

      If those people were starving, it seems to me like they obviously weren’t getting all that much benefit from traditional diets. Am I wrong here?

      As for unchecked population growth in Africa and Latin America, those are certainly problems but I’m hesitant to prescribe mass starvation as the solution. If you’re dead-set on reducing those populations there are certainly less sadistic ways of doing so.

      Someone (I forget who, I’m sorry) did mention that the Green Revolution is to a large extent made possible by the use of nonrenewable resources, which means that as these resources become more scarce the relevant agricultural techniques may not remain useful. One potential outcome of that is a future impending population crash, which would then with the retrospectoscope make the whole thing seem like less of a great idea.

      I don’t know enough about agricultural science to say for sure whether the whole thing about permanently depleting nutrients in the soil is BS, but it certainly smells like it. And, to the extent that it’s true, GMOs seem to be a help rather than a hindrance when it comes to those concerns.

      For example, the bit about petroleum-based fertilizers that the other user (sorry, forgot his name) mentioned. If we’re concerned that reliance on fertilizers makes our food supply vulnerable to energy shortages, then having genetically modified crops which produce greater yields with less fertilizer would seem like a good thing IMO.

      I’m open to the possibility of being wrong here, but would need to see some research validating those points first. It sounds like hippie nonsense more than anything else.

      • Ronan Nobblewit says:

        Oh there is definitely a lot of hippie nonsense available on this subject. Some of it contains germs of fact but a lot of it is just flat nonsense.

        Who prescribed mass starvation as a solution to anything? The point is that the Green Revolution as it occurred probably harmed at least as much as it helped because, while it increased overall food supply, it did not ensure food security for those in need. And indeed in many cases it directly resulted in decreased food security for those in need. If I had a time machine and magic powers I would go back to the fifties and try to use participatory development schemes to help peasant farmers adopt and adapt modern agrarian techniques while also improving access to basic education for children, especially girl children. I don’t know for sure that things would work out better, but it also doesn’t matter, because I don’t have a time machine or magic powers. I’m just trying to point out that the Green Revolution was not this amazing “high modernist” success that in all ways dramatically improved the world (or even a thing that solved, or even adequately addressed, the problem it was purporting to solve).

        GMOs and additional scientific agricultural techniques definitely help mitigate some of the insecurity inherent in our current food production systems, and I’m all for them. Nonetheless, if there comes a time when we run out of agricultural inputs, or of sources from which to make agricultural inputs, then having used modern agricultural improvements to temporarily increase the human carrying capacity of earth may start to look like a mistake. (Please note the “if-then”.)

        Nutrient depletion in topsoil (and indeed, depletion of topsoil itself as a resource via erosion) is absolutely a real thing; one needs either careful management or considerable ongoing nutrient input or both to farm the same land productively for very long. I am surprised by the idea that this would be in question.

        • cassander says:

          >The point is that the Green Revolution as it occurred probably harmed at least as much as it helped because, while it increased overall food supply, it did not ensure food security for those in need.

          This is demonstrably false.

          • Ronan Nobblewit says:

            That link, summarized in words and by me, shows that prior to the 1970s, the number of people who died in any decade from famines that killed at least 100,000 people varied between “less than five million” and “more than 25 million”. It also shows that since the 1970s, the number of people dying per decade from famines that killed at least 100,000 people has stayed below five million.

            This is definitely an improvement in the metric of “people literally starving to death by the tens of thousands”, which I hope we would all agree is absolutely something that we want to minimize. The recent absence of huge famines is a great accomplishment for the world, in terms of political stability and global humanitarian efforts as well as whatever contribution scientific agriculture has made (certainly some; exact balance undetermined).

            What this graph does not in any way show is that the Green Revolution has provided food security to the hungry people of the world – the conventional definition of food security being “the condition in which all people, at all times, have physical, social and economic access to sufficient safe and nutritious food that meets their dietary needs and food preferences for an active and healthy life”. Even if we take a weak (and unconventional) definition of food security and say it’s something like, “the condition in which all people at most times have physical, social, and economic access to sufficient safe and nutritious food that meets their dietary needs for a healthy life”, globally we don’t have that either. As noted, there are many, many hungry people in many parts of the world even though the total amount of human food available in the world is more than enough for everyone, right now.

          • cassander says:

            >What this graph does not in any way show is that the Green Revolution has provided food security to the hungry people of the world

            The millions today that aren’t starving to death certainly have more food security than the people who did starve to death in previous decades. That, or your definition of food security is either grossly inadequate or outright propaganda.

            > “the condition in which all people, at all times, have physical, social and economic access to sufficient safe and nutritious food that meets their dietary needs and food preferences for an active and healthy life”

            You’ve gone from “the green revolution has caused more harm than good” to “the green revolution hasn’t entirely ended not just famine, but malnutrition, everywhere and anywhere.” That is a massive shift of goalposts, and a rebuttal of an argument that no one made.

            >there are many, many hungry people in many parts of the world even though the total amount of human food available in the world is more than enough for everyone, right now.

            This is true. But it is in no way a defense of the claim that the green revolution harmed as much as it helped, nor is it an argument against the position that the green revolution prevented the starvation of tens of millions. It also completely ignores the point I made earlier, that talking about how “there’s more than enough food” is utterly myopic and verges on moral preening.

          • Ronan Nobblewit says:

            1) It’s not my definition, it’s the standard one?

            2) My goalposts haven’t changed. I would quote myself from my original post in this thread but that seems like a waste of time. My stance is that the Green Revolution is a really weird and (according to me) bad example of an unmitigated success of High Modernism. I still think that’s true. I don’t think it had no good effects. I also don’t think that the people doing it had bad intentions. I do think it had bad effects that, depending on how you count, probably outweigh the good (again, according to me).

            3) You totally lose me when you talk about “millions of people who would otherwise have starved to death.” No. That’s not how that works. Millions of people who would never have been born because their parents and grandparents starved to death in childhood – maybe. Millions of people who would never even have been conceived or born because of malnutrition-induced subfertility – MUCH more likely. My whole point here anyway is that we really should balance the millions of actual people who spent the eighties and nineties watching their children grow up stunted from malnutrition against the people who hypothetically might have starved to death if the Green Revolution never happened.

            Some (many) of you clearly don’t agree right now and may never, and that’s okay with me. Fortunately at least right now global politics are such that we are doing really well at increasingly getting resources to people who need them, so I’m hopeful that we’ll reverse some of the damage.

          • cassander says:

            @Ronan Nobblewit says:

            >1) It’s not my definition, it’s the standard one?

            By “your definition” I meant the one you are using. And definitions of that term are not standard. Or, rather, there are several standards. The USDA, for example, includes people who said on a survey that at some point in the previous year, they thought it was possible that they MIGHT not have enough money to buy food when it ran out. Their metric is in the category of pure propaganda.

            >2) My goalposts haven’t changed. I would quote myself from my original post in this thread but that seems like a waste of time.

            I’ll do it then. You literally said “it did not ensure food security for those in need.” This is false. It ensured security for hundreds of millions of people. Not every single person, true, but still hundreds of millions. You’ve since stepped back to “This is definitely an improvement in the metric of ‘people literally starving to death by the tens of thousands’.”

            >My stance is that the Green Revolution is a really weird and (according to me) bad example of an unmitigated success of High Modernism. I still think that’s true.

            It’s true, but only because the Green Revolution was very much not high modernism, not because it wasn’t a nearly unmitigated success.

            >I do think it had bad effects that, depending on how you count, probably outweigh the good (again, according to me).

            This just speaks to immense moral blindness. By your own admission, it’s prevented tens of millions from starving to death.

            >3) You totally lose me when you talk about “millions of people who would otherwise have starved to death.” No. That’s not how that works. Millions of people who would never have been born because their parents and grandparents starved to death in childhood – maybe.

            First, I’m not even talking about the overall larger population now, I’m talking about the tens of millions that routinely starved to death in the average decade prior to the green revolution that no longer do.

            But putting that aside, how on earth is people never being born because their grandparents literally starved to death by the millions not horrific? And how is its prevention not an enormous success?

            >My whole point here anyway is that we really should balance the millions of actual people who spent the eighties and nineties watching their children grow up stunted from malnutrition against the people who hypothetically might have starved to death if the Green Revolution never happened.

            There is no balance to be had. Growing up stunted by malnutrition is better than not growing up because you starved to death. And that’s before we even address that plenty of people grew up stunted prior to the green revolution, and that making food cheaper helps with both malnutrition and starvation.

            >Fortunately at least right now global politics are such that we are doing really well at increasingly getting resources to people who need them, so I’m hopeful that we’ll reverse some of the damage.

            They really aren’t. Africa is not getting better because of global politics, but because of global capitalism and the green revolution. Decades of western development aid failed to give Africans decent phones; Vodafone succeeded. The arguments you’re making directly impede the results you’re seeking.

          • suntzuanime says:

            the conventional definition of food security being “the condition in which all people, at all times, have physical, social and economic access to sufficient safe and nutritious food that meets their dietary needs and food preferences for an active and healthy life”.

            Food preferences? Hahaha, this is almost as bad as the definition of genocide. Somebody needs to smack these people upside the head.

        • Odovacer says:

          The point is that the Green Revolution as it occurred probably harmed at least as much as it helped because, while it increased overall food supply, it did not ensure food security for those in need.

          Can you back this up? Worldwide hunger has decreased since the Green Revolution.

          See this map for 1960:

          https://ourworldindata.org/slides/hunger-and-food-provision/#/Daily-Food-Supply-kcal-per-Capita-in-1961

          And this map for 2009:

          https://ourworldindata.org/slides/hunger-and-food-provision/#/Daily-Food-Supply-kcal-per-Capita-in-2009

          There are more maps on that site that show your statement is incorrect.

          Nutrient depletion in topsoil (and indeed, depletion of topsoil itself as a resource via erosion) is absolutely a real thing; one needs either careful management or considerable ongoing nutrient input or both to farm the same land productively for very long. I am surprised by the idea that this would be in question.

          I don’t think people are disputing nutrient depletion in topsoil, but rather that the GR was as hurtful as it was helpful. Regardless, I know I would appreciate it if you could expound on nutrient depletion and what it means for world hunger.

          • Ronan Nobblewit says:

            Your linked maps show nicely that the average quantity of calories available per human per day has increased between the 1960s and 2009. I have no desire to dispute this, as far as I know it is absolutely true. (It’s probably worth looking through the whole slideshow, which is interesting in several respects – I mostly liked it.)

            It is also true that there are still a lot of hungry people in the world. The easiest source I found for this is here:
            http://www.worldhunger.org/2015-world-hunger-and-poverty-facts-and-statistics/
            It is easy to compare numbers of undernourished people in the 2010s to earlier regional total population estimates – choose your own source on that; Wikipedia has some. Most of these sources have good data on hunger only since about 1990; I believe that this is because that’s when the UN formally adopted the Millennium Development Goals and started tracking these things. Prior to that there is a lack of easily available and good information. But if it’s generally accepted that (for instance) 23% of Asian people were undernourished in 1990, and if there were about 1.4 billion total people in Asia in 1950, then even if we say for the sake of argument that it was a LOT worse in 1950 and 40% of Asians were undernourished (I have no idea), that was about 560 million hungry people circa 1950, compared to about 665 million in 2005 or 512 million in 2014 – only extremely recently an improvement at all.

            I don’t want to spam myself out here with links, but there are some other sources you can find from the worldhunger site above with similar data.

            And in fact, this semi-randomly-found article from the NY Times in 1987 suggests that rather than assuming that in 1990 a lesser proportion of people were undernourished than had been the case in earlier decades, the real picture is less simple.
            http://www.nytimes.com/1987/06/28/world/world-hunger-found-still-growing.html
            Notably the article quotes an FAO estimate that in 1980 there were about 475 million hungry people in the world, total.

            Re: nutrient and other resource depletion – Nancy Lebovitz has the gist of my argument, even though she doesn’t think it’s a great one. We don’t know whether the needed resources will run out, but if they do, no substitutes are found, and the population is still high at that time, there will be huge amounts of suffering. Personally I incline to think that topsoil (lost through erosion as well as to non-agricultural use) is more likely to be a limiting factor than things like nitrogen and phosphorus. But even so, to be honest I don’t know how strong of an argument even I think this is – empirically, if I were really that worried about it I’d be working on developing perennial polycultures and I’m not. Regardless this is not the biggest part of why I don’t think the Green Revolution was an unqualified success.

          • Odovacer says:

            Let me just make sure I understand you. Forget about the topsoil for a moment: you’re saying that since the absolute number of people who are hungry has increased since, e.g., 1960, that means the Green Revolution was a failure and did more harm than good. Is that a correct summary of your argument?

            If so, then I have several arguments against it.

            1) If there are more people in the world, and the percent of those going hungry has decreased, then the Green Revolution has made life better for millions of people. It has helped an absolutely huge number of humans. The world population skyrocketed and food provision improved at the same time. Improved yields from the Green Revolution led to this.

            2) Life expectancy has increased almost everywhere in large part thanks to greater availability of food.

            3) Average human height is increasing in most countries, indicating that many more people are getting more nutrients and calories, especially during critical periods of development in childhood.

            4) I’m a little too lazy to go too far back, but according to that FAO website you link to, 1,010.6 million people were hungry in 1990-92, 794.6 million people were hungry in 2014-2016. So the absolute number of people who go hungry is also decreasing.

            I could go on, but I will quote Steve Pinker talking about his book on the history of violence when it comes to absolute vs proportional statistics. I’ve changed the references to violence to refer to hunger:

            You can think about it in a number of ways, but they all lead to the conclusion that it is the proportion, rather than the absolute number, [] that is relevant. First, if the population grows, so does the potential number of [those who go hungry]. So if the absolute number of [hungry people] stays the same or even increases, while the proportion decreases, something important must have changed to allow all those extra people to grow up free of [hunger].

            Second, if one focuses on absolute numbers, one ends up with moral absurdities such as these: (a) it’s better to reduce the size of a population by half and keep the rates of [hunger and malnutrition] the same than to reduce the rates of [hunger and malnutrition] by a third; (b) even if a society’s practices were static, so that its rates of [hunger and malnutrition] don’t change, its people would be worse and worse off as the population grows, because a greater absolute number of them would suffer; (c) every child brought into the world is a moral evil, because there is a nonzero probability that he or she will be a victim of [hunger and malnutrition].

            As I note on p. 47: “Part of the bargain of being alive is that one takes a chance at dying a premature or painful death, be it from violence, accident, or disease. So the number of people in a given time and place who enjoy full lives has to be counted as a moral good, against which we calibrate the moral bad of the number who are victims of [hunger and malnutrition]. Another way of expressing this frame of mind is to ask, `If I were one of the people who were alive in a particular era, what would be the chances that I would be a victim of [hunger and malnutrition]?’ [Either way, we are led to] the conclusion that in comparing the harmfulness of [hunger and malnutrition] across societies, we should focus on the rate, rather than the number, of [hungry people].”

    • The Nybbler says:

      There’s also the difficulty that replacing traditional polycultures with cereal crop monocultures has, while allowing more calories for some previously starving people, not provided as adequate a balance of macro and micronutrients as traditional diets might have done.

      Uh, yeah, but complaining that people who were (pulling numbers out of the air) previously getting 50% of their ideal number of calories along with 76%, 78%, and 93% of their ideal various micronutrients are now getting 100% of their ideal number of calories along with 68%, 103%, and 90% of their ideal of various micronutrients seems a bit churlish. Sure, it’d have been better if they could have just doubled that traditional diet, but that wasn’t actually possible.

      • Nornagest says:

        Sure, but as long as we’re pulling numbers out of thin air, it could also be that they were getting e.g. 50% of their calories and 100% of their micronutrients, and now they’re getting 100% of their calories and 50% of their micronutrients. The latter is still probably better for them — I’d rather have a micronutrient deficiency than starve to death — but it would be dishonest to say it’s not creating problems as well as solving them.

        I’m far from sure that this is actually happening in a significant number of places, but it’s happened before — remains from around the Neolithic Revolution show sharp declines in many proxies for health (height, evidence of infection, etc.) but equally sharp increases in population. Moving from a calorie-bound equilibrium to a micronutrient-bound equilibrium is a plausible interpretation.

    • Jaskologist says:

      My understanding of the Green Revolution places it in the context of The Population Bomb. In that light, I actually see it as a rebuke of the High Modernist types. But I’m not sure we can get much into that without sparking Culture War.

    • Nancy Lebovitz says:

      The green revolution has had mostly good effects so far. We don’t know whether the needed resources will run out, but if they do, no substitutes are found, and the population is still high at that point, there will be huge amounts of suffering.

      I don’t think this is a very strong argument– it involves some unproven premises– but I don’t think it’s obviously wrong.

  23. Urstoff says:

    How do these articles keep getting written: https://aeon.co/essays/materialism-alone-cannot-explain-the-riddle-of-consciousness

    A million articles have been written with the same basic argument:

    Quantum mechanics is weird
    Consciousness is weird
    Therefore,
    Materialism is false

    Now I tend to think “materialism” is an underspecified theory most of the time, and in its naive formulation (everything is smoothly reducible to physics) is most definitely false, but this is not the way to argue against it.

    • Siah Sargus says:

      I hate this shit specifically, because it’s so superficially apparent as just another God of the Gaps argument: “I don’t understand quantum mechanics or consciousness; therefore it must be magic, and also the two must be related for some reason.”

      But more seriously, I’m unsure if it’s even possible to have a fully empirical refutation of materialism, and as long as we don’t have a working Theory of Everything, it’s all just speculation anyway.

    • The original Mr. X says:

      A million articles have been written with the same basic argument:

      Quantum mechanics is weird
      Consciousness is weird
      Therefore,
      Materialism is false

      That seems like a weak-man, at least of the specific article you linked to. A better summary would be:

      Materialism seeks to explain things in terms of physical matter. But nobody knows what physical matter actually is. Therefore, materialism fails at explaining things.

      As for how such articles get written, I’d guess that people keep making unsupportable claims about what materialism can explain, so there’s a constant need for articles rebutting them.

      • Urstoff says:

        There is no clear specification of what “materialism” is in the article. If by materialism he means “all things can (or will) be explained or reduced to the entities and laws of physics”, then it’s not clear what his argument is against that. Generally, the materialist makes allowances for changes in physical theory. The claim is not “all things can (or will) be explained or reduced to the entities and laws of physics circa 2017”. Also, it’s not clear how many people actually hold such a hardline materialist view (it’s a minority position in the philosophy of science, as far as I can tell). If he means a weaker but more popular materialist view that “all objects are composed of, and subject to, the entities and laws of physics”, then again it’s not clear what argument he’s putting forth against that view. I think there are good arguments against both forms, but I don’t see how the weirdness and ontological confusion surrounding quantum mechanics can be used against those views.

        • The original Mr. X says:

          If we don’t know what matter actually is, how can we be confident that everything can in fact be explained in terms of matter? If somebody said to you, “The solution to this problem is X; I don’t know what X is, but trust me, it’s the solution,” would you be inclined to believe them?

          Furthermore, a lot of people, from what I can tell, support materialism out of emotional or aesthetic reasons: science is objective, it keeps advancing our knowledge, etc. Pointing out that, actually, modern scientific theories suggest that the physical world is affected by being observed, and that physics is divided between multiple competing theories with no way of telling which is true, rather takes the wind out of these reasons.

          • Urstoff says:

            That’s a fair criticism, but it’s not the one that he seems to be making (insofar as he is making one at all). Bringing consciousness into it is also extraneous to that criticism.

            Indeed, his position seems much more sweeping:
            We know that matter remains mysterious just as mind remains mysterious, and we don’t know what the connections between those mysteries should be. Classifying consciousness as a material problem is tantamount to saying that consciousness, too, remains fundamentally unexplained.

            Substitute “baseball” or “combustion” or any other phenomena for “consciousness” and the argument stays the same. The result seems to be that since the ontological difficulties of QM are unresolved, then nothing can be explained in terms of the physical.

          • The original Mr. X says:

            I think consciousness was just included as a kind of hook to get people interested. The issue of how to explain consciousness is probably the most well-known argument about materialism at the moment, so if you’re trying to get people to read your article it’s probably a good example to use.

            Substitute “baseball” or “combustion” or any other phenomena for “consciousness” and the argument stays the same. The result seems to be that since the ontological difficulties of QM are unresolved, then nothing can be explained in terms of the physical.

            The article does include the qualifier “fundamentally unexplained”. Something can be explained in a way that’s good enough for everyday purposes but still not fully complete or comprehensive.

          • Something can be explained in a way that’s good enough for everyday purposes but still not fully complete or comprehensive.

            In particular, it’s possible to explain something in terms of the production of external behaviour, but that seems inadequate in the case of consciousness, because we do not perceive our own consciousness as external behaviour.

        • random832 says:

          I just spotted this in the comments feed and haven’t read the article you’re referring to, but a complaint I often have about the “allowances for changes in physical theory” sort of materialism (my complaint is generally about its application as a universalizing claim about fictional settings) is that it’s unfalsifiable. People can’t or won’t imagine a reality whose operating principles can’t be reduced to a single self-consistent set of laws that applies to all things.

          Basically the “It doesn’t make any difference if (say) dualism is true, because dualism is materialism because whatever the other substance is can be simply defined as another kind of matter” philosophy. It’s obnoxiously reductive.

          • Urstoff says:

            I don’t think it’s unfalsifiable so much as it’s a (weak) inductive prediction, and one that’s too often invoked to support a strong form of materialism (that is, explanatory materialism rather than mereological materialism).

    • rahien.din says:

      I mean… this guy…

      When I was a young physics student I once asked a professor: ‘What’s an electron?’ His answer stunned me. ‘An electron,’ he said, ‘is that to which we attribute the properties of the electron.’ That vague, circular response was a long way from the dream that drove me into physics, a dream of theories that perfectly described reality.

      What is a horse? That to which we attribute the properties of a horse! Therefore zoology doesn’t work and horses are magical.

      Above all, what is matter?

      Fucking magnets – how do they work?

      • The original Mr. X says:

        If you can only define something circularly (“An electron is a thing we call an electron”), you can’t actually define it at all.

        What is a horse? That to which we attribute the properties of a horse! Therefore zoology doesn’t work and horses are magical.

        A horse is a quadrupedal mammal of the genus Equus. Since it is possible to give a non-circular definition of horse, your analogy doesn’t hold up.

        ETA: Also, could we taboo the term “magical”, please? It’s nothing but a boo-word and adds precisely nothing to the conversation.

        • suntzuanime says:

          The only reason the definition in question was circular is that the professor was too lazy to actually enumerate the properties of the electron.

        • rlms says:

          But electrons can be defined non-circularly. That professor was just being silly.

        • Siah Sargus says:

          An electron is a fundamental particle with negative charge. The only thing circular here now is its standing wave orbital.

        • rahien.din says:

          Mr. X,

          Quadrupedal, mammalian, and “similar to zebras and asses” are some of the properties of a horse. Saying “a horse is a thing with X properties” is identical to saying “a horse is that to which we attribute the properties of a horse.”

          It’s possible to give a “non-circular” definition of an electron, too, by similarly describing its properties. But saying “an electron is a thing which has X properties” is identical to saying “an electron is that to which we attribute the properties of an electron.”

          You can’t define a thing other than by describing it.

          • The original Mr. X says:

            Saying “a horse is a thing with X properties” is identical to saying “a horse is that to which we attribute the properties of a horse.”

            No it isn’t. Even if they both have the same reference, one contains useful information, the other doesn’t.

          • rahien.din says:

            No it isn’t. Even if they both have the same reference, one contains useful information, the other doesn’t.

            I agree that one is more informative than the other, but that isn’t the question at hand.

            Going back to the interchange in the article:

            Adam Frank : What’s an electron?

            This question could either be a relatively straightforward request for an on-the-spot description of an electron (which I find to be wildly implausible), or it could be a question along the lines of (as Frank himself later demands) “Above all, what is matter?”

            The latter is the correct interpretation. It indicates a sort of philosophical confusion. That is why the interchange continued :

            Adam Frank : What’s an electron?
            Professor : An electron is that to which we attribute the properties of an electron

            The professor isn’t being glib or evasive or deliberately withholding information. He is perceiving Frank’s confusion and attempting to refocus him on what matters.

            One could reconstrue the interchange as follows:

            Adam Frank : What’s an electron?
            Professor : That’s a weird question. An electron is a certain thing, which we describe as having a certain set of properties, such that no other thing has that set of properties. But, I don’t think you are asking me to list the properties of an electron. I think you want very badly for physics to have a rationalist rather than constructionalist epistemology, or maybe you are making some implicit ontological claim, and you want me to engage in a philosophical discussion. I’d rather we talk about physics.

          • The original Mr. X says:

            An electron is a certain thing, which we describe as having a certain set of properties, such that no other thing has that set of properties.

            That reply seems to make the electron a mental or theoretical construct with no necessary connection to external reality (“a thing which we describe as having…” rather than “a thing which has…”), in which case it doesn’t actually undermine the article’s argument that physics doesn’t give us information about the external world.

          • rlms says:

            We also define a horse as “a thing we describe as having certain properties”. I think that the “we describe” part can be removed in both situations.

          • rahien.din says:

            It is a necessary clause because it’s a restatement of the professor’s phraseology.

            [A horse] or [An electron]

            [is a thing]
            meaning, a separable extant entity

            [that we describe as having] = [to which we attribute]
            these phrases are synonymous

            [a certain set of properties]

          • The original Mr. X says:

            It is a necessary clause because it’s a restatement of the professor’s phraseology.

            The whole point was that the professor’s phraseology resulted in a definition that didn’t actually tell us anything about electrons.

          • rahien.din says:

            The original Mr. X,

            As above : in asking “What is an electron?” Frank is not requesting a description. He’s picking a philosophical fight.

          • The original Mr. X says:

            in asking “What is an electron?” Frank is not requesting a description. He’s picking a philosophical fight.

            Since we don’t really have the context of that exchange, I think you’re being overconfident in your ability to discern his motives here.

  24. notsobad_ says:

    Given that Deiseach called for more SSC lurkers to comment in the survey thread, I decided I’d create an account and make an effort to comment at least once a week (I’ll likely be asking more questions than contributing to any discussion tho haha). So here I am. Hi, everybody!

    Some things about myself: Irish, 22, soon-to-be physics grad, ~8 months reading SSC. Hoping this is a community I can integrate into and contribute discussion to at some point in the future.

    Now that I’m here I might as well ask, are there any other blogs people would recommend? Anything similar-ish to the topics covered by Scott. My current RSS feed contains the following: SSC, Thing of Things, Marginal Revolution, EconLog, LanguageLog, and Paul Krugman. I’d also be interested in knowing if Scott has any detractors who regularly comment on his writing. It would be fun to see the other side of things, and I’ve recently grown worried I am creating my own bubble. Also, self-promotions are cool in my book. If you think I’d like your writing please let me know 🙂

    Thanks in advance! Feel free to say hi!

    • Eltargrim says:

      I’d toss Popehat into your blogroll. American free-speech (and to some extent criminal) law perspectives, and a completely righteous dislike of ponies.

      EDIT: Another blog that may strike your fancy is Retraction Watch. YMMV though, they have a definite med/bio slant. Maybe there are just fewer retractions in chem/phys?

      Question for you (and Deiseach as well): how “seriously” is St. Patrick’s Day taken on Éire? Obviously parties will happen, but is it a bank holiday? Are there solemn ceremonies carried out by governments? Raucous ceremonies by governments? Or is it an excuse for everyone to get a day off and have fun?

      • notsobad_ says:

        Thanks for the suggestions! Popehat looks really cool.

        Regarding your question: Paddy’s Day is a public holiday, many places of business are closed. Grocery stores and pubs stay open. I really don’t know about any religious ceremonies, though many will attend mass, including some non-regular types that only show up on Christmas/Easter/etc (though this is less likely on Paddy’s Day than the aforementioned holidays).

        Many towns will have parades, attracting people from the locality. I usually avoid these myself, I don’t like the large crowds and I find them uninteresting. Despite that, I was at one this year as I was visiting a friend. This parade included tractors, children dressed in costume, vintage cars, vans advertising local businesses. An odd assortment, but people really enjoy it. The pub I was in emptied out onto the rainy street to watch.

        Many people drink during the day, relax, hang out with friends. Young people take to the clubs at night. I really don’t know anybody that directly treats it as a celebration of St. Patrick. As you suggested, it’s just an excuse to have fun and spend time with your friends and family.

        Also, problems with binge drinking and underage drinking on Paddy’s Day are well known. The Gardaí (police force) have their work cut out for them AFAIK.

        Long response. Hope it was helpful.

        Edit: I feel most of this could be summed up with “People really like to drink and party on Patrick’s Day” haha. I’m not sure how Paddy’s Day is celebrated elsewhere so I gave some info

        • Eltargrim says:

          Appreciated! There’s varying levels of celebration here in North America, but in my east-coast Canadian city it’s basically an excuse for undergraduates to start drinking obnoxiously early and start making fools of themselves. I expect that the closest thing we have to St Paddy’s as you celebrate it would be either Canada Day or Canadian Thanksgiving (which takes many cues from American Thanksgiving).

      • Deiseach says:

        Once upon a time there used to be a ban on drinking on St Patrick’s Day. Pubs had to remain closed. This went about as well as all attempts to ban drinking, and due to pressure from both the Vintners’ Federation and the public, as well as “it’ll boost tourism”, now you can get drunk as a skunk on Lá ‘le Pádraig.

        And the plain people of Ireland take full advantage of that.

        It is traditionally cold, wet, and windy on St Patrick’s Day and this year was no exception (the irony being that round here, the first three or four days of the week were glorious sunshine and blue skies, but by the time the feast of our patron saint rolled round, so did the clouds and winds).

        As notsobad says, mainly the parades are small affairs with local businesses, the fire brigade, ambulance service, town/county council etc. all putting floats on display (or maybe just driving the fire engines with a few balloons or some bunting). Pipe bands (local and visiting), some attempts at culture. We had fireworks the night before, though, and those were great!

        Mainly Dublin pushes the boat out for the big parade and during the hey-day of the Celtic Tiger they tried making it “Patrick’s Week” as an arts festival (but mainly to soak the tourists for as long and as much as they could). This has been scaled back to a more sensible long weekend thing.

        It is very much not like the American version, and again ironically, as with Hallowe’en, the Americanised version that re-crossed the Atlantic is more popular than the traditional native one. This sums it up pretty well 🙂

        • Eltargrim says:

          Thanks for the reply! Follow-up: is there a significant holiday for a particular independence day? I understand that there are technically a few to choose from.

          • Deiseach says:

            Ooooh, that’s a vexed question! Usually it’s the anniversary of the Easter Rising and last year we celebrated (for a particular value of “celebrate”) the 100th anniversary of 1916. At the start, the emphasis was more on the Decade of Commemoration (to start off with things like the 1913 Lockout and we’ll be heading into the 1919-1921 War of Independence and then the 1922 Civil War, not to mention the ongoing centenary of the 1914-1918 First World War) than singling out the centenary of the Rising, but in the end it was reasonably well commemorated. This is more to acknowledge the “two traditions” on the island and to extend an olive branch to the Unionists, as well as overhaul and take a critical look at the foundation myths of the state (Revisionist history has been going long enough now to be respectable; indeed, now we’re swinging into the Revision of the Revision phase).

            It was sorta easier back in the 60s when being patriotic and nationalist was still okay, but combined with the Troubles and a certain tendency not to want to be too mean to the Brits, that became frowned on as undesirable hyper-nationalism due to a combination of the IRA plus the bien-pensant capital-located media and chattering class wanting to get away as fast as possible from traditional roots and adopt “universal culture” as Scott has put it; that is, copying the English who were copying the Americans on the East and West Coasts (my peasant roots may be showing here).

            Apart from St Patrick’s Day, we don’t really have a heavy emphasis on a national holiday, even the commemoration of the Easter Rising is a low-key affair – certainly nothing in comparison with the Fourth of July or Bastille Day. The political and armed violence in the North has made the southern political establishment very wary of anything that smacks too much of enthusiasm about revolution and overthrowing British rule by force. Generally it’s marked by the president laying wreaths in the Garden of Remembrance.

          • dndnrsn says:

            Reading this post caused me to recall the weird needle thing in Dublin, and somehow I had gotten the idea that it was a monument related in some way to independence. I looked it up, and as far as I can tell, it doesn’t really mean anything? Just someone was like “hey there used to be something here, so let’s put something here” and some architect said “GIANT NEEDLE!” and everyone decided this was a good idea?

          • Eltargrim says:

            @Deiseach:

            Thanks for the detailed reply! I knew that it was unlikely that there was going to be one single date, but I didn’t think of the context of the Troubles. It makes sense to low-key some events related to contentious dates when the contention isn’t quite history yet.

          • Deiseach says:

            some architect said “GIANT NEEDLE!” and everyone decided this was a good idea?

            THE SPIKE! Or, to give it its official name(s), The Spire of Dublin or The Monument of Light/An Túr Solais! This marvellous feat of engineering was produced in my own fair county so I have some measure of proprietary pride in it 🙂

            The bones of the tale is this: when The Millennium in 2000 was coming up, our government and various bodies ranging from the arts to the money-grubbing (tourism and hospitality industries) decided we should do something to mark this. All world centres of culture were doing something, so we should also! (And the money-grubbing pointed out that we could maybe make a few bob off the tourists while we were at it).

            Our first effort was a clock in the Liffey. That lasted about as long as everyone said “You know, putting something electronic into flowing water isn’t really a good combination, is it?” would do, except that The Experts told us all that we were ignorant peasants and Modern Technology would triumph (I may be paraphrasing a bit here; I saw the clock myself and it was… well, I can’t say “impressive” but I was suitably “yeah, that’s grand” for about five minutes which was as long as you could reasonably expect).

            Defeated by Anna Livia Plurabelle, the great and good cast about for a replacement. Concurrently with this, there had long been discussion about replacing Nelson’s Pillar – that’s the “something that used to be there on the site” you mentioned 🙂

            This, again, will necessitate a diversion into history. You know Trafalgar Square in London with Nelson’s Column? Well, Dublin had its equivalent – Nelson’s Pillar, built when Dublin was regarded (or regarded itself) as The Second City of Empire, in a burst of patriotic (ahem) enthusiasm in the wake of the victory of the Battle of Trafalgar. This got blown up in 1966 by Republicans – no, not those Republicans, these Republicans. 1966 was the 50th anniversary of the Easter Rising and patriotic (ahem) fervour was high enough that the public generally looked favourably on the sudden, not to mention explosive, amateur removal of what was regarded as a monument to imperial and colonial power.

            What to do with the empty space, though? Debate raged (well, more rambled and sauntered) on through the years, until with the Millennium approaching, a competition was finally run and the winner was the English architect with the Spire.

            New York has the Statue of Liberty, Paris has the Eiffel Tower, and now Dublin would have the Spire! (We don’t really do monuments well; the visual and plastic arts are not a strong part of our national culture.)

            It was supposed to be completed and in place for 2000 but eh, this is Ireland. It was finally erected in 2003 and the “Millennium” part of “the Millennium Spire” was quietly dropped. Why is it called “the Monument of Light”? Why, because at dusk, an internal light at the very top of the tip is lit! (I told you we don’t do monuments well). You can’t climb or ascend up it (unlike Lady Liberty or La Tour) but you can stand well back and look at the tiny light way up high, and isn’t that as good? 😀

          • Deiseach says:

            somehow I had gotten the idea that it was a monument related in some way to independence

            Re: the monument that used to be there long before The Spire, The Dubliners covered a song to mark the removal of Nelson’s Pillar.

          • John Schilling says:

            New York has the Statue of Liberty, Paris has the Eiffel Tower, and now Dublin would have the Spire! (We don’t really do monuments well; the visual and plastic arts are not a strong part of our national culture.)

            OK, that is a thing that totally exists. And, having spent a long weekend in Dublin post-2003, it must have been in my field of view at some point. I have absolutely no memory of it. A check of the trip report I wrote shortly afterwards has no mention of it. So I have to agree with your assessment of Irish monumental aptitude.

            Well, OK, whoever designed the Trinity College Library has some skill at the visual arts, applied to a work of monumental stature. That, I remember. Several of Dublin’s libraries and related museums are memorable. Possibly the climate incentivizes the local population to stay in and read on days when they are not staying in and drinking, so the artists and architects all focus their energies on the libraries and pubs?

          • The original Mr. X says:

            THE SPIKE! Or, to give it its official name(s), The Spire of Dublin or The Monument of Light/An Túr Solais!

            Oh, so that’s what it’s called. When I went there on holiday with my family a few years back, we took to calling it the Dublin Phallus.

          • Murphy says:

            @Deiseach

            The spike? You mean The Stiffy on the Liffey?

            To go along with The Tart With The Cart, The Floozie in the Jacuzzi, The Queer with the Leer and The Prick with the Stick.

        • Deiseach says:

          Well, OK, whoever designed the Trinity College Library has some skill at the visual arts, applied to a work of monumental stature. That, I remember. Several of Dublin’s libraries and related museums are memorable.

          Through gritted teeth I would probably have to admit that the better architecture in Dublin comes from the period of British rule – look at Georgian Dublin, well duh the Georgians. Copying the English who were copying The Glory That Was Greece And The Grandeur That Was Rome when they came home from the Grand Tour.

          We wrecked an awful lot of what architectural remains we had via neglect or in the 60s (when times were good)/Celtic Tiger era (when times were good) pulling down the old (oftentimes to do with dodgy deals involving planning permission and land to enable developers and politicians to enrich themselves) and sticking up bad imitations of what was currently faddish, e.g. the Wood Quay controversy where Sam Stephenson got the architect’s gig for building new modern offices for Dublin Corporation and went with fashionable Brutalism (the irony being, after he’d inflicted this style on Dublin, he went to England and re-discovered Classicism and went for a quasi-Luytens style, the Brits not being impressed by knock-off European stale Modernism and he not being able to sell it to them as ‘you don’t want to be provincial, do you? this is all the rage in the best capital cities!’)

          We’re very short-sighted in the planning for infrastructure and development department and only realise after we’ve let something fall apart that hey, this was a good thing (generally when someone realises ‘we could have made money out of this, tourists like this kind of thing!’) Much too easily impressed by “all the big cities are doing this now” and desperately wanting to throw off anything that smacks of our unsophisticated, rural past.

          • Nyx says:

            “Through gritted teeth I would probably have to admit that the better architecture in Dublin comes from the period of British rule – look at Georgian Dublin, well duh the Georgians. Copying the English who were copying The Glory That Was Greece And The Grandeur That Was Rome when they came home from the Grand Tour.”

            I feel like that’s more an artifact of the British being in charge back when architects weren’t awkwardly mashing fashionably geometric shapes into each other. Our best architecture comes from that period too. The trick is not to throw it out every hundred years on a whim, something that happens in both countries.

        • JulieK says:

          While we’re talking about Irish culture, can I ask how you pronounce “Deiseach?”

    • Siah Sargus says:

      I’m surprised more people here don’t recommend Gwern. I understand it’s not a blog, per se, but there’s a lot of interesting information there.

    • xXxanonxXx says:

      I appreciated Deiseach’s encouragement, but the results of the survey confirmed what I suspected. Namely, that while I’m the bright one in my real life social circles I’m a dullard by SSC standards and should really just stick to lurking.

      • andrewflicker says:

        By simple math, the vast majority of us posting on SSC are less intelligent than the smartest person on SSC – and half of us are dumber than the average poster! So don’t worry about it – if you have something interesting to say, speak up.

        • Siah Sargus says:

          Half of us are dumber than the median poster. *flees immediately*

          • JShots says:

            Haha, as an actuary with a Statistics degree, I’m ashamed of myself for not picking up on that. I’d say let’s assume we’re all normally distributed, but I’d be hard-pressed to call some of the people on here “normal”. Amirite?? Stats joke…*crickets*

          • andrewflicker says:

            Eh, “average” is quite frequently used to mean “for some particular variety of average”. If I’d meant exactly “mean”, I’d have said “mean”.

        • xXxanonxXx says:

          I suspected a torrent of replies along the lines of “stay positive!” or “here are some statistical reasons you should stay positive!” and was not disappointed. When I said I knew the average commenter here was smarter than me before I saw the survey results I should have also mentioned they aren’t snobs about it. That’s likely due, as Scott has pointed out, to the blog attracting a bunch of nerds who idolize titans like Von Neumann. Even the best here know what it’s like to, how did Hitchens put it? “Graze on the lower slopes of your own ignorance.”

      • James Miller says:

        Even accepting your assumptions, you should comment on issues on which you know a lot. Crystallized intelligence can beat fluid intelligence.

        • q-tip says:

          Easy to say. However, there are a lot of polymaths* commenting here. Someone with expertise in one area may very understandably feel too intimidated to comment on topics in that area if the usual suspects are dominating the convo – even if the usual suspects have their heads up their asses and our hypothetical commenter has proofs and citations galore.

          *real or self-diagnosed

      • Art Vandelay says:

        You should also consider that you’re going to learn more and get smarter by discussing things with people who are more intelligent than you than you will through discussions with those who are less so. I also personally wouldn’t put too much stock in IQ.

        • nimim.k.m. says:

          It does not necessarily have much to do with IQ. While conversational aptitude, the ability to be witty, etc. might be correlated with IQ, it’s probably a relationship with very high variance. Some of the relevant skills can also be practiced.

          I think the general consensus is that for most people, writing text that is worthwhile to read is hard, except by accident, and so the most important skill is the ability to publish (or vomit commentary into the pipes of the internet) without caring too much about how substandard it probably is.

      • Deiseach says:

        Namely, that while I’m the bright one in my real life social circles I’m a dullard by SSC standards and should really just stick to lurking.

        Listen, I’ve been noodling around online trying to see if you can indeed find a correlation between SAT scores and Leaving Cert points, using my (very much antiquated by now) results from when I did the Leaving, and for amusement value turning those results into IQ equivalences, and holy Hannah, I’m coming out as thick as the ditch. If I take the “convert Irish grades into US grades to GPA to new SAT” route that puts me in the bottom 13th percentile nationally on the new 1600 point SAT, and if I try “turn SAT points into IQ”, results range from mid 90s to a generous 103 IQ.

        Believe me, if you have the intellect of a soggy turnip, you are doing well by comparison with me, and I’ll challenge all comers for the title of Stupidest Commenter On SSC 🙂

        Please don’t be afraid to comment, please don’t think you’re the dumbest stump (see above) and please we need fresh meat new blood more varied range of commenters.

      • James Miller says:

        Imagine we come up with a genetic test that’s a good predictor of height. Giving this test to a 5-year-old would tell him information about his future self, while giving this test to an adult would be useless. I think the same might almost be true of IQ tests. IQ tests are an imperfect indicator of intelligence. By the time you’re an adult, especially if you are introspective as I bet most readers of this blog are, you know so much about your own intelligence that taking an IQ test or remembering the results of an IQ test you took long ago provides you with little information about your own abilities. IQ is fantastically important given that it is just one number, but IQ can also be of little importance given everything that you likely know about yourself, so long as you are reasonably honest with yourself.

        • This is basically my experience. Based on SAT IQ mappings I’m *somewhere* in the 115-125 range *shrug*. It meant that I couldn’t get a PhD in Econ from a top 10 school (or…well… top 30 if we’re being honest).

          It doesn’t mean I’m unable to have an intellectually rich and lucrative career in applied science. It does sorta mean that self-studying stochastic calculus is basically worthless, and no matter how hard I have tried, making Ito’s lemma, measure theory, or Taylor series expansions ‘intuitive’ just isn’t happening. Those things don’t really come up much in industry, outside of very specific engineering subfields. And other than these very explicitly challenging markers of ability, I’m able to make up for most things by working a little harder, and reading a little more. My coworker might have a higher IQ, but if he doesn’t read the new research and methodology blogs, I’m going to still have my own set of advantages.

          You just do what you’re good at, do what you can, and move on with your life.

          • One thing I want to add to anyone reading this, is don’t *ever* fail yourself. Don’t ever count yourself out because *you* don’t think *you’re* good enough.

            Let other people fail you. If other people don’t fail you, then you won’t know where you truly stand, and you will know you haven’t pushed yourself as hard as you can.

            After a miserable experience with grades and undergrad courses in anything faintly STEMy (not a STEM major), I didn’t think I could pursue a career that required STEM skills. But I decided I was going to try anyway, because I would rather fail than not try because I didn’t think my IQ was high enough, and I ended up going further than I anticipated.

            Anyway, sorry for the lame motivational talk. I really hate seeing people count themselves out though because they think their IQ isn’t high enough, and I sometimes think that sorta stuff is happening a lot in this community, since it’s a high-IQ community that focuses a lot on stuff like IQ research.

          • hoghoghoghoghog says:

            (Side note: I often hear people pushing Stochastic Calculus with Infinitesimals as (a book with) an alternative, potentially more intuitive approach to stochastic calculus. But the people who actually try it are all logicians, the most bendy-minded of all humans, and they would succeed no matter how bad an approach they chose. Did you ever try that tack, or know anyone who did?)

        • Reasoner says:

          I think of IQ as being predictive at the population level but probably pretty noisy at the individual level. For example, I seem to recall that simply giving people financial incentives will cause them to score significantly higher on an IQ test. I’d also be interested to know the retest reliability for IQ tests, e.g. if I take the test and get 130, how surprised should I be if I take it again and get 140 next time?
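
          A rough way to put numbers on that retest question (an illustrative sketch only, not anything from the survey or the comment above): under classical test theory, if scores have SD 15 and the test-retest reliability is r, the difference between two administrations has standard error 15*sqrt(2*(1-r)). The Python below assumes r = 0.9, purely as an illustrative figure.

          ```python
          # Illustrative sketch only: how surprising is a 10-point swing between
          # two IQ test administrations, given an ASSUMED test-retest reliability?
          import math

          SD = 15.0          # conventional IQ standard deviation
          reliability = 0.9  # assumed test-retest reliability (not a measured value)

          sem = SD * math.sqrt(1 - reliability)            # standard error of measurement
          se_diff = SD * math.sqrt(2 * (1 - reliability))  # SE of a retest difference

          def upper_tail(z):
              """P(Z >= z) for a standard normal."""
              return 0.5 * math.erfc(z / math.sqrt(2))

          swing = 10.0
          p_swing = 2 * upper_tail(swing / se_diff)  # two-sided: |retest - test| >= 10

          print(f"SEM per administration:  {sem:.1f} points")
          print(f"SE of retest difference: {se_diff:.1f} points")
          print(f"P(swing of {swing:.0f}+ points): {p_swing:.2f}")
          ```

          With those assumed numbers a swing of 10 or more points has roughly a 14% chance of happening on retest alone, i.e. it would be only mildly surprising; with a lower assumed reliability it becomes unremarkable.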

    • Paul Brinkley says:

      Other blogs I find interesting:

      In the Pipeline – Derek Lowe on pharma
      Lawfare – group blog on national security and the law
      Heterodox Academy – blog devoted to diversity of view, featuring authors such as Jose Duarte, Jonathan Haidt, Glenn Loury, et al.

  25. BBA says:

    From the annals of highly specific legislation: my state’s governor has just signed a law “to exclude certain musicians or persons who had a work-related accident on December 17, 2011 who are an executive officer of a corporation who contracts for the musician or person’s services from the definition of employee for purposes of the workers’ compensation law.”

    The plural “musicians” is laughable here – obviously this law was written with exactly one person in mind. Her name is Wendy White, she was an opera singer who fell from a collapsing platform during a performance at the Met on December 17, 2011, and she injured her diaphragm and hasn’t been able to sing since. She’s sued the Met for negligence and this law settles one contentious point in the case. Oddly, the law upholds the most recent ruling in the case. My guess is the legislature wants to forestall the state’s high court from reversing this ruling, and to prevent this weird exception from having any precedential power.

    But what if some other musician had an on-stage injury in NYS on that particular date, claimed workers’ comp from the venue, and can’t afford a negligence suit? Are they out of luck? (Probably not, the text of the law tries very hard to keep itself from applying to anyone but Ms. White, but it’s interesting to think about.)

    • Machina ex Deus says:

      I was totally pissed off at this and ready to go on a rant about the NY legislature and governor. Then I did a web search and discovered this law (and the January appellate court decision) is in White’s favor. Now I’m totally cool with it.

      It appears I lack principles. And it won’t do me any good to try to find a principle, now: I’d know it isn’t the real reason I think what I do.

      I mean, unless they titled it “Wendy’s Law.” Then it’s a piece of garbage and should be struck down: No Laws Named After People is a principle I’ll always cling to.

      • WashedOut says:

        Now you’ve got me thinking about a world in which my appliances don’t obey Ohm’s Law.

      • Jiro says:

        It appears I lack principles

        I think it’s reasonable to consider “the power of the state is used to screw over a specific individual” to be worse than “the power of the state is used to help a specific individual” and accordingly to object more to the former for principled reasons.

        If you tried to write down your principle as “the state should not do anything to a specific individual at all”, then that wouldn’t work–but that probably means that you still had a real principle and just didn’t write it down very well.

      • BBA says:

        Yeah, it tripped me up too. Normally in a workplace injury matter, you want to be considered an employee instead of a contractor, because it’s much easier to get a workers’ comp claim than to win a negligence lawsuit. Here the incentives are flipped around.

    • Scott Alexander says:

      Is this an unconstitutional bill of attainder? Would it be one if they just named the musician? Can you really get around the restriction by describing someone without naming them?

      • John Schilling says:

        IANAL, but I’m guessing that if Ms. White decided to challenge the law on that basis she would win hands down, but since it works in her favor she won’t. And other people can’t challenge the law just because they think it is unconstitutional or otherwise objectionable; lots of people think lots of laws are objectionable for lots of reasons, and to keep their workload somewhere in the vicinity of reasonable the courts are very specific about throwing out any claim that doesn’t first establish why you, the plaintiff, were specifically hurt by this law.

        So the New York Metropolitan Opera could presumably sue, but nobody else can. Since the Met is a nonprofit that I believe gets most of its funding from A: the governments of the state and city of New York and B: the people who elected those governments to do things that make them feel good, I’m going to guess that they are going to take the hint and settle their lawsuit with Ms. Williams rather than doubling down on unpopular legal actions.

        • Douglas Knight says:

          But that leaves the mystery of why they didn’t just name her.

          I’ve heard it claimed that legislatures pass highly specific laws (eg, dependent on incorporation dates) all the time and no one knows what they do—which is the point.

      • The original Mr. X says:

        An act of attainder is a decree by the legislature finding somebody guilty of a crime without going to the bother of trying them. Since this law apparently makes things better for the individual in question, I’m not sure it would fall under the definition.

        • John Schilling says:

          Given the context, it’s pretty clearly a law saying “The New York Metropolitan Opera automatically loses the civil lawsuit it is currently arguing with Wendy Williams”. That would almost certainly be considered a Bill of Attainder against the Met.

          If the legislature were to pass a law saying “Wendy Williams henceforth wins all lawsuits against anyone“, one might pedantically argue that this doesn’t meet the definition of a BoA. But I’m guessing even the ghost of Antonin Scalia would be looking for excuses to rule otherwise.

          • BBA says:

            It’s not an automatic loss of the lawsuit, it’s an automatic loss on one point of the lawsuit. The matter is going to proceed as a negligence case where a jury will determine whether the singer or the opera was at fault, rather than as a no-fault worker’s comp claim in which the singer gets a (smallish) check from the opera’s insurance company and nobody’s allowed to appeal after that.

            Now, it’s likely Ms. White will win, but nothing in the law guarantees one outcome or another. There was a prominent case last decade in which Congress passed a law benefiting a named individual, but the beneficiary ended up losing the case anyway. Since the result wasn’t required by the law, I don’t consider either of these cases to be an attainder.

          • Deiseach says:

            I wondered that; was the point of it that she’d be entitled to greater damages for suing the Metropolitan Opera for negligence than if she was compensated for a workplace injury? Fair enough, a professional singer who can’t sing anymore has a genuine case for loss of future earnings, but I’m still wondering if this is a good idea in principle; if everyone is going to be more encouraged to say they were contractors rather than employees, what’s the point of workplace compensation? And if businesses think they might have a better chance of winning a case in court would they prefer to say “you weren’t an employee, you contracted your services to us”?

          • Machina ex Deus says:

            @John:

            Given the context, it’s pretty clearly a law saying “The New York Metropolitan Opera automatically loses the civil lawsuit it is currently arguing with Wendy Williams”.

            With that Freudian slip, you’ve now outed yourself as a Plasmatics fan.

      • Corey says:

        This form is typically used for business-specific tax breaks. For example, NC’s constitution forbids laws that only affect one person (natural or corporate). So what to do when you want to give Dell a tax break for building a datacenter? Pass a law with a tax break for datacenters built in County X between dates A and B by PC manufacturers with a number of employees between Y and Z.

        • BBA says:

          I think the circumlocutions here are meant to get around a similar restriction in NY’s state constitution (article 3, section 17). I know that the state can’t pass a law about a particular municipality without the municipal government’s permission (article 9, section 2(b)(2)), so when they intervene in the NYC government, they refer to “a city with a population over 1 million”, knowing that no other city in the state has even half a million residents.

          • Machina ex Deus says:

            On behalf of Saul Kripke:

            ≖_≖

            Under what plausible theory of naming is that not naming New York City?

          • skef says:

            @Machina ex Deus

            Uh …

            “A city with a population over 1 million” is a description, and could counter-factually refer to other cities in New York state — if history had gone differently Albany might have been that large. Kripke argues that names aren’t disguised descriptions, so he would definitely not consider that specification to be a name, or equivalent to a name.

          • yodelyak says:

            Oregon does the same thing w/r/t Portland, as a city with >500k residents.

        • Creutzer says:

          It’s odd that people read these restrictions of the form “no law about one particular entity” as restrictions on intensions rather than extensions so that you can work around them with trivial reformulations. It seems some lawyers read too little philosophy of language.

      • Brad says:

        No, a bill of attainder has to either be criminal or at least punitive. The ex post facto clause has similar restrictions.

        There are some cases that hold there are limitations on Congress intervening in particular cases in federal court, but they are pretty narrow. See for example Bank Markazi v. Peterson (2016). These cases have been decided using general separation of powers principles. I’m not aware of any federal cases that hold that state legislatures can’t intervene in pending state court cases, though there may be some I’m unaware of. There could also be some state level separation of powers doctrine that would forbid it.

      • BBA says:

        It used to be very common for Congress and state legislatures to pass laws about particular people. Grover Cleveland set the record for most vetoes in a presidential term by rejecting hundreds of bills to grant pensions to particular Civil War vets who had been denied them by the predecessor to the VA. It was also once necessary to get a private law from a state legislature in order to change your name, incorporate a business, or get a divorce. None of these was considered an attainder at the time.

        Now with the growth of the administrative state, these have grown much rarer, but they still exist. The most recent Private Law passed by Congress was a 2012 act to grant someone a green card. The NY state lege grants a few pensions by private act each year.

        • Brad says:

          Private bills in the immigration context didn’t just disappear as a result of the rise of the administrative state. ABSCAM, an FBI sting operation in which sitting congressmen were caught accepting bribes in exchange for private immigration bills, is what really killed them off.

    • quanta413 says:

      Do they not have better uses of their time than further complicating the law in order to intercede in a single current legal proceeding and no other? Like, could they have at least attempted to fix whatever is generally wrong with the law that this case exposed?

      Eh, I should probably be glad time was wasted in doing something with probably minimal damage at worst.

  26. Machina ex Deus says:

    [Edit: culture war averted!]

    • Douglas Knight says:

      Did you read the post? This is the open thread for avoiding culture war topics. Hey! Wait! You’re the one who specifically noted this on 70.5!

      Gay Marriage was specifically asked on the survey. Of the 42 people who comment many times per week, 5 rated it 1 (most opposed) and 3 rated it 2. And a lot more than 42 people comment that often. 5 more people who comment weekly rated it 1.

      • Machina ex Deus says:

        Note to confused readers: DK did not bring up culture war stuff, I did—it is totally not his fault and is completely mine. Sorry, sorry, sorry: I will actually read the Open Thread posts in the future.

        Thanks, DK: I have edited out my faux pas with seconds to spare. Obsessively checking SSC for the win!

        Didn’t have time to write something substantive in its place, so I’ll put it here:

        If your life were a Lifetime movie, what would the title be?

        For example:
        Depth at an Early Age: The Eliezer Yudkowsky Story

  27. HeelBearCub says:

    I think people here might be interested in listening to the interview Soledad O’Brien gave to the NPR podcast “The Mashup Americans”:

    Soledad O’Brien Sees All Sides
    Soledad O’Brien on why she counts the minorities in the room and how Mash-Ups have the power to change the conversation.

    I can’t seem to find a link directly to the interview, but it’s currently the most recent interview on their site.

    She made some interesting points about why representation of diverse viewpoints in the editors’ room matters, and offered a fairly stirring defense of listening to everyone’s viewpoints. It’s 30 minutes long and I don’t have a transcript for those who would rather read, unfortunately.

    • HeelBearCub says:

      Sorry. Didn’t process that this was a “no culture wars” OT. Apologize for breaking the interdiction.

  28. Dr Dealgood says:

    Ok, so sorry for the repost but this got buried in the last OT and I want it to have some chance of being seen.

    From here:

    This paper on how tardigrades survive desiccation by vitrifying their cytoplasm and turning into a sort of living glass just came out a few days ago. It seems very relevant to people’s interests here.

    Tardigrades, or water bears, are a phylum of microscopic extremophiles famous for their extraordinary durability. One example is their ability to survive in the vacuum of space for at least ten days. Back when I was still pushing the SSC Science Thread I had linked to this 2016 paper which identified a family of radioprotective proteins called Dsup.

    As in the Dsup paper this doesn’t seem to just be a curiosity but something with immediate practical application. The same way that transgenic Dsup increases the radiation tolerance of human cells in culture, expressing TDPs in yeast allowed them to vitrify to survive desiccation. Unfortunately the authors didn’t test it in human cell lines but that’s an obvious next step.

    I don’t want to overstate things, but if you care about transhumanism or cryonics that has a chance of actually working this sort of thing is a really big deal. This is exactly the sort of thing which got me into genetics in the first place and I’m ecstatic.

    I got one good response, some pushback from user Well… on my endorsement of transhumanism. It’s probably wise to stay cautious when we’re talking about never-before-seen changes to human nature. Even if it does seem like we’ve got a shot at actually becoming the Omar from Deus Ex.

    • Machina ex Deus says:

      Vitrifying their cytoplasm sounds cool (and like something Dave Barry would definitely be interested in), but I can’t help noting that tardigrades are very, very small, and humans are very, very large, and so the same way freezing small animals quickly is a Thing that Could Happen whereas freezing large animals quickly is more of a Thing that We Hope Could Happen at Some Point, well…. Why should one suppose that we could Quickly Desiccate or Vitrify a Human, such that we could become a sort of Living Glass rather than, say, Dead?

      Admittedly, it’s possible I don’t understand this simply because I am a Water Bear of Little Brain.

      • Eltargrim says:

        Interestingly, and counter to my glass sensibilities, the drying times appear to be on the scale of hours, though the authors are not specific. Given that there appears to be some time required for the vitrifying proteins to actually be produced, it raises questions as to how fast we would actually need to cool a human to achieve similar effects.

      • Dr Dealgood says:

        Well it’s not a bad point; it probably would be hard to desiccate and cool tissues evenly enough to avoid damage. If the outer layers are vitrified and the insides are still wet that is obviously going to cause some problems.

        But as Eltargrim said, this would give a lot more leeway in terms of time than any perfusion method currently in use. It’s much more plausible for a person to be slowly mummified over the course of a day or so before ‘death’ than it is to freeze a brain in a matter of minutes immediately after death.

    • Eltargrim says:

      Not my area, but a few notes:

      Their description of the glass transition temperature makes me cringe. Melting is a very poor analogy to Tg, though in fairness, I’m not sure if there’s actually a good analogy for the glass transition.

      Clever use of H/D exchange to probe the surface vs the bulk. The authors could probably make use of DNP in this system if they wanted to get more specific structural details of the vitrified proteins.

      I saw this tidbit in the experimental:

      Using a mouth pipette, excess water was removed and lids were placed in humidified chambers.

      Horror of horrors, how are people still doing this.

      • Dr Dealgood says:

        I missed that bit lol.

        I know some PIs favor mouth pipetting over mechanical pipettes, since it’s supposed to be more precise with small volumes. Although for what they’re doing that doesn’t matter at all. So I have no idea why they would do that.

  29. bean says:

    This time, we’ll do battleships after WW2. (Previous edition: naval engineering) There were reasonably serious proposals in three countries to continue capital ship construction after WW2. The US at least looked at building follow-ons to the Iowa, although never formally, and primarily as escorts for the carriers. The British had plans to complete two of the Lion-class to a revised design, and the Soviets actually laid down a trio of battlecruisers, although they were scrapped after Stalin died. There was never much rationale for them, except Stalin’s long-running desire for a fleet. The US and British plans are not as stupid as they might seem. The Achilles heel of carriers remained that they were vulnerable at night and particularly in bad weather. If they were unable to operate their planes, enemy surface forces could potentially kill them with relative ease if there were not heavy surface escorts. It took a while to realize that the only enemy was the Soviet Union, and their heavy surface combatants were the Sverdlov-class cruisers, which could be dealt with by much lighter and cheaper ships.
    Of the roles listed for the ships in WW2, only one persisted as a reason for maintaining battleships, namely shore bombardment, although many of the ships that remained in service in the decade and a half after the war did so for other reasons, mainly serving as flagships and training ships. Battleships are excellent at both roles, due to their size and impressive appearance, but most navies were too cash-strapped to afford to keep them in service for very long. Only the US actually used their ships in combat after the end of the war, so the rest of this will concentrate on the Iowa-class.
    In 1950, when the Korean War began, the only US battleship in service was the Missouri, primarily because she was Harry Truman’s favorite ship. She was sent over to provide gunfire support to the UN forces fighting there, and was followed in rotation by her three sisters, who had been reactivated for the mission. They also served as fleet flagships at various points. After the war, they remained in reduced service, primarily for training duties, although there was some work done to improve their capabilities. The most notable example was the addition of the Mk 23 “Katie” to their arsenal. This was a 16” nuclear shell, of approximately the same design as the bomb dropped on Hiroshima. In fact, it was basically an 11” nuclear shell from the Army’s atomic cannon fitted in a 1900 lb HC shell body. No live round was ever fired, and the Navy refuses to confirm or deny if any was even carried.
    In the late 60s, heavy aircraft losses over Vietnam prompted the US to reactivate the New Jersey, and send her over as a bombardment platform. The reactivation was quite austere, with some new electronic warfare systems, and all of the 40mm guns stripped off. She was a tremendous success in that role, firing almost 6,000 rounds of 16”, and occasionally creating helicopter landing zones with a single 16” HC shell. However, due to what can only be characterized as rank stupidity in Washington, she was deactivated instead of being sent back. The official rationale was that this would send a signal to the North Vietnamese that we wanted peace. Yes, this is exactly how all of the Vietnam War was.
    In the early 80s, Reagan announced that the four Iowas would be reactivated as part of his 600-ship Navy. The ships were comprehensively refitted, with 32 Tomahawk cruise missiles (3 versions: anti-ship, conventional land attack, and nuclear land attack), 16 Harpoon anti-ship missiles, 4 CIWS (anti-missile gun units) and a new electronics suite. The two common rationales given for this decision are that they were intended to counter the Soviet Kirov-class, and that they were intended for gunfire support of amphibious operations, although both explanations have serious flaws. The Iowas did not carry enough anti-ship missiles to be a serious threat to the Kirovs, and it is vanishingly unlikely that they would have survived to reach gun range. If all that was intended was shore bombardment, then the full overhaul given was unnecessary, and a much more austere reactivation, similar to that given to the New Jersey before her tour in Vietnam, would have sufficed. In fact, a friend of mine who was involved with the initial planning for the reactivation told me that there was serious talk of not reactivating the guns, and that the primary role of the ships was a new type of surface bombardment, that of deep land attack with cruise missiles. At the time, the Tomahawk was just entering service, and the Iowas were big, fast, and capable of launching in rough weather (a trait smaller ships did not share). In the early 90s, improved launchers removed this advantage, and the Iowas were deactivated.
    That isn’t to say they weren’t useful during their decade of service. New Jersey bombarded Beirut in 1983, although her guns proved very inaccurate, and the program that resulted produced the pinnacle of battleship gunnery. Missouri and Wisconsin supported Desert Storm with gunfire and Tomahawks. Iowa alone did not fire her guns in anger, and instead suffered tragedy. On April 19th, 1989, while conducting gunnery exercises off the coast of Puerto Rico, she suffered an explosion of the powder being loaded into the center gun of turret 2. 47 men in the turret died. The Navy investigated and blamed one of the dead sailors, claiming it was a murder-suicide, although an independent investigation by Sandia National Lab found that it was likely caused by the rammer being in the wrong mode and slamming the powder against the base of the shell, either due to inexperience on the part of the operator or technical defect. As a result, Iowa was the first of the four to leave service, in October of 1990. Missouri was the last, in January of 1992, having been kept in commission to be present at the 50th anniversary of Pearl Harbor. (For the record, the surrender was signed aboard her. It should have been Iowa, but Harry Truman failed to appreciate that we were the better ship.) Now, all four are museums. Iowa is in San Pedro, CA. New Jersey is in Camden. Missouri at Pearl Harbor, next to the Arizona memorial, and Wisconsin is in Norfolk, VA.
    (As an aside, I may not post Wednesday. I’m having a busy week. If I don’t put something up, don’t panic. Or I may do a very short one that doesn’t require much in the way of references.)

    • John Schilling says:

      Honorable mention for postwar battleship service. The SMS Goeben, a German battlecruiser of the First World War and previously discussed here as one of the most important warships in history, remained in operational service (under her new name “Yavuz”) as the flagship of the Turkish navy until 1950. The Goeben/Yavuz was in service during the early period of Turkey’s participation in the Korean War, but did not deploy to the Pacific because that would have been silly. Awesomely, impractically silly.

      And this wasn’t even the last survivor of the Kaiserliche Marine, one of whose warships (also renamed) remains in service today.

      • bean says:

        I didn’t mention her for narrative reasons. And for some reason I’m drawing a blank on the other survivor you mention.

        • AlphaGamma says:

          MV Liemba, ex-SMS Graf von Goetzen. Formerly a Kaiserliche Marine gunboat on Lake Tanganyika (inspiring The African Queen), now in use as a ferry on the same lake.

          • John Schilling says:

            That’s the one. Unlike her cinematic counterpart, the historic Graf von Goetzen was non-dramatically scuttled by her own crew, who figured there might be a use for a ship of such scale as nobody would ever again be daft enough to build in such a remote location and who thus took great care to make her salvageable.

          • bean says:

            I actually ran across her trying to figure this one out, but for some reason my brain read it as “still in service (as a warship/military ship)”.

      • cassander says:

        If we’re going to talk about German relics, I have to bring out my favorite in the genre: SMS Seeadler, the last (afaik) sail-powered vessel of war to sink an enemy ship, which it did in 1917.

      • AlphaGamma says:

        In related very-old-ships news, I found out that there is still a ship of the Imperial Russian Navy in service in its original role. The Russian Navy’s submarine salvage/rescue ship Kommuna was launched in 1913 as the Volkhov and is still in use with the Black Sea Fleet.

    • Fossegrimen says:

      Missouri and Wisconsin supported Desert Storm with gunfire and Tomahawks.

      Heh, turns out I have more experience with battleships than I knew. Due to having other things on my mind at the time, I only knew them as “something big out to sea flattening our enemies with sheer awesomeness”, but that they did well.

    • cassander says:

      I’ve long felt that the US would have been better served by keeping the Alaska in service rather than the Iowas. They’d have been a lot cheaper to operate and presumably just as good for shore bombardment. A hell of a lot more vulnerable to submarines, but I doubt the Iowas ever got sent anywhere that was considered a threat. Of course, they wouldn’t have been suitable for my absolute favorite conversion project, but we didn’t get that anyway, so might as well.

      • gbdub says:

        That’s definitely a cool conversion. Seems like a ship with an air wing of helos and jump jets, plus a few big guns and/or cruise missiles for shore bombardment (and hell, throw in a small wet deck for a couple landing craft) would be the ultimate “littoral combat ship”. Would probably look more like a carrier with guns and VLS cells than a battleship with a flight deck though.

        • cassander says:

          It was designed for amphibious assault, basically combining one of these and these. But you don’t want to use cruise missiles for shore bombardment. For the task, they’re too expensive and take up too much space. Guns are better.

      • ThaadCastle says:

        Isn’t that conversion project somewhat similar to the idea behind the Tarawa class LHA’s with three 5inch guns? Although I have to imagine that Iowa style guns would be more useful…although they would possibly disrupt the aviation side…not sure I would want to be taking off/landing when they fired!

      • John Schilling says:

        An inferior and overpriced substitute for a Tarawa in every way except for the big guns, which are the least important part. Meh.

        To elaborate, for most shore bombardment purposes, a pair of 5″/38 dual purpose guns beats a 16″/50. Same weight of HE per unit time, and VT airburst fuzes as standard. Less range, but you can put it on a smaller ship and push it closer to shore. Ask a group of Marines whether they’d rather have a destroyer just offshore supporting their company, or a battleship ten miles away answering to the regimental commander, see how many votes you get for the battleship.
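
        As a back-of-the-envelope check on the “same weight of HE per unit time” claim, here is a rough sketch; the rates of fire and bursting charges below are approximate figures of the commonly cited sort, used as assumptions rather than anything asserted in this thread.

        ```python
        # Back-of-the-envelope sketch of HE delivered per minute.
        # All figures are approximate assumptions, not authoritative data:
        #   16"/50 Mark 7: ~2 rounds/min per gun, HC shell burster ~150 lb
        #   5"/38:         ~18 rounds/min per gun, burster ~7.5 lb
        def he_per_minute(rounds_per_min, burster_lb, guns=1):
            """Pounds of bursting charge delivered per minute by `guns` barrels."""
            return rounds_per_min * burster_lb * guns

        one_sixteen_inch = he_per_minute(rounds_per_min=2, burster_lb=150.0)
        pair_of_five_inch = he_per_minute(rounds_per_min=18, burster_lb=7.5, guns=2)

        print(f'One 16"/50 gun:  ~{one_sixteen_inch:.0f} lb of HE per minute')
        print(f'Two 5"/38 guns:  ~{pair_of_five_inch:.0f} lb of HE per minute')
        ```

        On those assumed numbers the two come out within roughly ten percent of each other, which is at least consistent with the claim; the 16″ still wins decisively on range and on the hardened point targets discussed below.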

        The battleships used to excel at the subset of shore-bombardment missions involving hardened point targets, e.g. Atlantic Wall fortifications. But for those you needed not only heavy shells, you needed them dead on target, and battleship gunnery made that problematic. You’d eventually manage it, because there wasn’t any choice. Now there is a choice, and the choice is a precision-guided bomb or missile. Plus, as ThaadCastle notes, you really don’t want to mix weapons systems with grossly different operational requirements on the same platform.

        Now, if you were to use the flight deck and hangar for naval (ASW and sea surveillance) helicopters rather than the assault variety, replace the forward turrets with bignum VLS cells, and somehow declutterify the topworks enough to fit an Aegis system and half a dozen fire control directors, you’d have a sea control ship worthy of the name and able to send a Kirov running for home. But that would have been a very expensive conversion, and we already had a production line for Aegis cruisers.

        • cassander says:

            I didn’t say it was a good use of money, or sensible; I said I enjoyed it. But a modern version would have 16″ Excalibur shells, so it would have accuracy just as good as any cruise missile but cost 1/10th as much. They could also steam at 32 knots, which meant you could get your fixed-wing aircraft off the deck with heavier loads.

            Is it inefficient? Yep. Massive overkill? Most definitely. But it looks cool, and at the end of the day, what really matters is form, not grubby functionality.

          • John Schilling says:

            so it would have accuracy just as good as any cruise missile, but cost 1/10th as much

            Not when you amortize the fixed costs of developing a custom munition that no more than four ships would ever use. Did you notice that the Zumwalts are going to deploy with empty magazines rather than 155mm precision-guided shells, for just that reason?

      • bean says:

        I’m actually with you on the Alaskas, and have proposed the same thing elsewhere. But the problem is that there were only two, which wasn’t enough. Also, the Alaskas were never popular. And the Navy recognized the true brilliance of the Iowa and her lesser sisters. OK, I’ll stop editorializing now.

        Re the conversion:
        GET YOUR HANDS AWAY FROM MY BEAUTIFUL SHIP! THAT LOOKS LIKE YOU TOOK A SUPERMODEL AND GRAFTED ON RANDOM BITS OF MODERN ART!
        On a practical level, you’re shoving the front half of an Iowa onto the back half of a new ship. The aviation facilities and the guns aren’t going to coexist well, and it’s a weird hybrid, which is rarely a good idea. The flight deck cruiser may or may not be an exception to that rule, but this is just a bad idea.
        There were lots of plans for Iowa-class conversions, and all of them were generally pretty terrible. The big, fast hull is tempting, but usually to people who don’t know naval architecture yet do know “what the Navy needs”. (A phrase that is reputed to draw gunfire from naval architects.)

        • cassander says:

          Oh, I don’t pretend it was actually a good idea, just a fun idea.

          That said, I don’t think the gun/aviation conflict would have been that much of a problem. You aren’t going to be using them both at once. Now, working around the architecture of the rear barbette, that would be an issue.

    • dndnrsn says:

      For reasons I can’t quite remember I was reading about the Iowa turret explosion on Wikipedia. The article there talks about some pretty sketchy-sounding gunnery experiments. Do you know anything about this?

      Based on this and what you posted about Jutland, it sounds like “doing stupid things in gun turrets” is, uh, a bad idea.

      • bean says:

        Both investigations ruled out the experiments as more than a minor contributory factor. Basically, there was a batch of powder that was intended for 16″/45 guns and they’d remixed it for 16″/50 guns, but you were only supposed to use it with the HC shells, to avoid overstressing the guns. To try to improve accuracy, the Iowa’s chief gunner decided to try firing with 5 bags of this powder (instead of the normal 6), which would keep pressure within limits. The only effect was the potential for confusion inside the turret, because the crew was doing something besides what it normally did. And to the best of my knowledge, the runaway rammer might have killed them regardless of how calm or skilled they were.

        • dndnrsn says:

          Did the Navy really focus on the “gay affair goes wrong and one guy decides to kill them both and also blow up a turret full of people” theory, and then on the backup of “guy decides to kill himself and also a turret full of people”, as much as the article suggests?

          • bean says:

            Yes. I recently read a book by the leader of the Sandia investigation, and he was pretty shocked at how far the Navy went to keep blaming Hartwig. If not for a fortuitously-timed test which resulted in the propellant going off the day before a Senate hearing, the Navy might have gotten away with it, too.
            Probably the most blatant example of misconduct was that the initial report blamed an electrical igniter, but the Navy had already ruled that out and moved on to a chemical igniter before the report was issued. And the supposed igniter turned out to be chemicals normally found in the guns.
            Also, the sailor who got interrogated after watch was pressured into confessing and then recanted, and they kept using his statement anyway. Frankly, several people should have been court-martialed for gross misconduct.

          • Aapje says:

            I’m not surprised at that, because the Dutch military behaved really badly in a cover-up too. The Dutch developed their own land mines for some reason and did a really poor job of it. Initial tests in 1968 showed that the mines could explode while being dismantled. The military then began destroying the mines it had in 1970, and many mines failed to explode when triggered. There was a near accident when one mine failed: two soldiers approached it after a waiting period and it exploded as they got close. The soldiers were still far enough away to find cover quickly and were not hurt.

            In 1970 the military examined the mines, determined that a failed mine can get into an unstable state, and advised that any malfunctioning mines be destroyed from a distance, but it still allowed the mines to be used and started producing them again.

            The Dutch military didn’t instruct the people who used the mines about the issues, and in 1983, 7 soldiers died when a mine exploded in a classroom. The military didn’t do proper research, so it never found the 1970 report and probably settled on the wrong cause (handling error).

            In 1984, a soldier who was testing a mine was killed when he walked up to a malfunctioning mine and it exploded as he got close.

            An HR guy who worked for the military was tasked with telling the widow the lie that it was the victim’s fault, to prevent her from suing the state for damages. He refused and started investigating the issue. The military then falsified a psychiatric report in 1987 to get him declared mentally deficient due to paranoia and schizophrenia. As a result, he could no longer get a job and had many other problems. It took until 1997 for the courts to clear him, and even then it was a close call whether the courts would have dropped the case.

            PS. Note that some of these mines were also sold to the US Marines, without telling them about the issues.

            PS2. The crazy part is that we recently learned that the Dutch military later falsified a psychiatric report again, in a different whistleblower case, where the false psychiatric evaluation was backdated to a date when the military pilot in question was on holiday on the other side of the world. So there was hard evidence that it was falsified.

          • Cypren says:

            A lot of the cover-up seems to have been driven by the appointment of a highly motivated officer (Capt. Miceli) to run the investigations, not once but twice, even after the Navy was specifically warned (by the Sandia team, and by a senator who was pressing for investigation) that the officer was not a neutral party.

            The officer in question had supervised the redesign of the powder mixture and storage for the powder bags at issue in the accident, and so had a great deal of incentive to shift the blame onto Iowa personnel and avoid scrutiny of the physical materials involved. It was especially egregious that, even after the Sandia report showed the powder bags were the likely cause, the same officer was put in charge of the re-investigation.

            It’s very emblematic of the self-protecting, insular culture of government agencies in general, I think, and hardly limited to the military. Quis custodiet ipsos custodes indeed.

          • dndnrsn says:

            Government agencies in general? I don’t know that the private or semiprivate sectors are better for it… CYA, empire-building, fighting over funds, etc. is the norm in pretty much every organization, isn’t it?

            Militaries do offer some of the most… pungent examples though. Situations where the cost of failure is “our enemies win and we are all put to the sword” still see bureaucratic nonsense!

          • Trofim_Lysenko says:

            I would say that more specifically it’s peace-time militaries that offer those examples, and for a very simple reason: The cost of failure ISN’T “our enemies win and we are all put to the sword”, because the bureaucrats in question can happily while away their entire careers if they’re lucky without the damage they’ve done to the capabilities of the military ever actually getting a realistic training/simulation test, much less actual trial by fire in a shooting war.

            Realistic military training, testing, and simulation is A) massively expensive and B) fairly dangerous. People die in vehicle accidents, people get shot during live-fire exercises, blown up during live demo training, etc.

            Most government bureaucracies have to maintain at least SOME level of function. After about the sixth or seventh nationwide food poisoning epidemic in a 2-3 month period, people would start looking pretty hard at the FDA. But if your military (or more likely pockets of it, or specific pieces of the complex machine that is combined military operations) is dysfunctional, it’s entirely possible that the first you’ll really know about the problem is when you go to use your military for a real world problem on a large scale for the first time in years or decades and it fails to function.

          • Fossegrimen says:

            @Trofim_Lysenko:

            it’s entirely possible that the first you’ll really know about the problem is when you go to use your military for a real world problem on a large scale for the first time in years or decades and it fails to function.

            In other words, fighting all these little wars around the globe for no direct purpose is essential to keeping the armed forces functional. I never thought of Operation Urgent Fury as preparation for Desert Storm before.

          • ChetC3 says:

            I would say that more specifically it’s peace-time militaries that offer those examples…

            And you’d be wrong. Turns out there have been plenty of people willing to risk other people’s kids dying horribly as long as they got to make a fast buck. Most wars don’t present an existential threat to the sorts of people in a position to be war profiteers.

          • John Schilling says:

            Government agencies in general? I don’t know that the private or semiprivate sectors are better for it… CYA, empire-building, fighting over funds, etc is the norm in pretty much every organization isn’t it?

            Private organizations frequently get investigated by government agencies with the authority to shut them down if they don’t like what they find, e.g. every pharmaceutical company every time it wants to produce a new drug, or an existing drug at a new facility. Also sometimes by other private organizations that report to e.g. their insurance company.

            Governments don’t get investigated by private organizations that have the power to shut them down. Occasionally one government agency will be the target of an investigation by another, independent, government agency, but that is relatively rare. And how do you, even as a government agency, investigate someone who has their own battleship and doesn’t want you meddling on their turf?

            Well, OK, there’s precedent for that sort of thing, but that’s even more rare.

          • dndnrsn says:

            @Trofim_Lysenko:

            Cases of stuff like different branches of the military arguing over who gets what even during serious crises abound.

            @John Schilling:

            Most organizations, public or private, will have a lot of internal politics and such that seriously impedes the functioning of the organization.

          • Nancy Lebovitz says:

            I’ve wondered whether one of the reasons large countries fight little wars is to have an ongoing supply of experienced soldiers. I’m adding a general check on the effectiveness of their military to my hypothesis.

          • Jaskologist says:

            @Nancy

            I think that hypothesis ascribes far too much competence to our leaders.

          • Nancy Lebovitz says:

            Jaskologist, you could well be right.

            Anyone have a sense of whether and how much governments learn about military practice from their small wars?

            Here’s an insight I got from a news report– let me know if I’m on to something.

            It was about finding out that IED explosions temporarily disable radios in vehicles, and it occurred to me that a huge proportion of military knowledge is acquired at a high cost.

          • The original Mr. X says:

            Anyone have history of whether and how much governments learn about military practice from their small wars?

            In the early months of WW1 the British army was probably the best unit-for-unit, mostly because it had had recent experience fighting the Boers.

          • John Schilling says:

            The Spanish Civil War was definitely used as a testing ground for new weapons and tactics by the various participants, with mixed results. The Luftwaffe, for example, learned a great deal about how to defeat opponents inferior in quality to the RAF, which helped them exploit their initial advantages against the Poles, the French, and later the Russians, but left them overconfident and ill-prepared for the Battle of Britain.

            In the Pacific, the Japanese had a substantial early edge from their pilots mostly being combat veterans of China and Manchuria. Again mostly flying against inferior opponents, but in that case the Americans obliged them by mostly using old-style dogfighting tactics. On the other side of the equation, the American Volunteer Group deployed to China shortly before the war took the lead in developing tactics that could defeat the Japanese. This actually happened after Pearl Harbor, but early enough that regular US forces were still gearing up; it was the willingness to send “volunteers” to fight in someone else’s war that had US pilots in theatre from day one.

          • Trofim_Lysenko says:

            @Fossegrimen

            Not necessarily. As John Schilling pointed out, sometimes you get people honing the techniques that will be used in the next war and the force comes out blooded and experienced… and sometimes you have a force that is perfectly prepared to win the last war but is not terribly prepared for the one it is actually faced with.

            The US military is actually a case in point. There was a lot of institutional “Oh shit, how do we do this COIN thing again?” scrambling after the collapse of the Iraqi resistance and the rise of insurgent groups in 2003.

            @dndnrsn

            True, but I would argue that those cases:

            A) generally arise out of pre-existing peacetime rivalries, both personal and service/branch.

            and

            B) are usually less frequent the longer into a conflict you get, and the more serious that conflict is. Careers that survived (and even thrived) on peacetime bureaucratic/civil service infighting and empire building often come tumbling down in the opening months or years of a major war.

            When B) FAILS to happen, it’s usually because there are bigger structural problems in play (e.g. we can’t relieve the incompetent General Ned Potism because his uncle Senator Potism has the leverage to bring down anyone who tries to hold him accountable), and it bodes very ill for that side.

          • AnonYEmous says:

            I think that hypothesis ascribes far too much competence to our leaders.

            But what if generals actually want to have small wars for effectiveness, and use their expert knowledge and weight to sort of push the leaders into doing it? I can definitely imagine that happening in most eras (not this one, tbh).

          • dndnrsn says:

            @Trofim_Lysenko:

            Yeah, those are all probably true.

          • bean says:

            Re Miceli, he came across as being nice and knowledgeable, but also really deep into confirmation bias. He’d come to the conclusion that it had been an igniter, and wouldn’t budge. Period. He didn’t seem to be scheming to deflect responsibility so much as totally incapable of believing that the technical side could be responsible.
            As for his assignment to the second investigation, that came before Sandia demonstrated it was the powder bags.

  30. psmith says:

    How legitimate are concerns about screen-induced central blindness (see e.g. 1, 2)? Is this a serious problem and should I do something (special glasses, the filters that they’re shilling, turn down brightness and adjust color balance) or am I getting memed on? What about myopia/astigmatism?

    • Fossegrimen says:

      Not a medical researcher and all that, but:

      The cell death thing seems a lot like the ‘cellphones cause cancer’ fad in that the radiation in question is not strong enough to actually cause direct damage on a molecular level. I tend to distrust research that does not have both an observed effect and a plausible explanation for how it occurs. It may be that the cells are dying because of simple fatigue, but then being out in the sun should have the same effect (which might indeed be the case, and the observed effect is also known under the term ‘age’).
      I am not going to worry until more evidence is in and there is a direct causal explanation. Turning down the brightness is a good idea anyway for other reasons like sleep cycles and whatnot.

      Myopia/astigmatism is definitely a thing, but I’m not sure what you can do about it short of not sitting in front of a computer/smartphone/tablet/book/anything-else-that-puts-focus-at-arms-length. The best solution is possibly not to worry and then get laser surgery after every 4 decades or so in front of the screen. Seems to work for me so far.

      • psmith says:

        the radiation in question is not strong enough to actually cause direct damage on a molecular level.

        And we know this because we don’t get sunburns from computer screens? Or what?

        Myopia/astigmatism is definitely a thing but I’m not sure what you can do about it

        Yeah. I couldn’t read the blackboard when I was in third grade, so I’m pretty used to it by now, but it’d be nice to have a way to keep it from getting worse.

        • Siah Sargus says:

          We know this because we know where sunburns come from:

          Ultraviolet radiation; more specifically, usually UV-B. You see, there’s this concept called ionizing radiation: radiation powerful enough to knock an electron off of an atom and make that atom an ion. Ionizing radiation only starts happening around the UV frequencies, and picks up more as you get further from visible light.

          Sunscreen works because it protects the skin from UV-B (and some UV-A) rays. You can even test it under a blacklight! Normal LCD screens don’t really emit light in the UV spectrum.

          LCD screens are bad because they are bright, and right in front of you, not because they can cause cancer.
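
          To put a rough number on why screen light can’t ionize anything, here is a back-of-envelope photon-energy check (a minimal sketch in Python; the wavelengths are standard textbook bands, and the ~10 eV “ionizing” cutoff is the usual convention rather than a sharp physical line):

          # Photon energy E = h*c / wavelength; the handy shortcut is
          # E[eV] ≈ 1239.8 / wavelength[nm].

          def photon_energy_ev(wavelength_nm):
              """Energy of a single photon at the given wavelength, in eV."""
              return 1239.8 / wavelength_nm

          bands = {
              "blue LCD backlight peak (~450 nm)": 450,
              "violet edge of visible (~400 nm)": 400,
              "UV-B sunburn band (~300 nm)": 300,
              "conventional ionizing threshold (~124 nm)": 124,
          }

          for label, nm in bands.items():
              print(f"{label}: {photon_energy_ev(nm):.1f} eV")

          # blue LCD backlight peak (~450 nm): 2.8 eV
          # violet edge of visible (~400 nm): 3.1 eV
          # UV-B sunburn band (~300 nm): 4.1 eV
          # conventional ionizing threshold (~124 nm): 10.0 eV

          So a blue screen photon carries roughly a third of the energy usually taken as the ionizing cutoff, which is why any damage mechanism for visible light (as in the paper linked above) has to go through photochemical or thermal routes rather than direct ionization.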

          • Eltargrim says:

            In fairness to psmith, the paper they linked showed evidence of cell death from exposure to 400-500 nm light due to the generation of damaging chemical species. This is distinct from the usual damage mechanism from sufficiently high-energy photons, which is to directly cause chemical changes in the DNA.

            There is potentially cause for concern from sources outside the conventional wisdom. However, I’m personally not concerned, for a few reasons:

            1) The literature is still mixed, with some studies showing different effects;

            2) The actual amount of flux is likely important: the linked study used fairly high levels for fairly long times; and

            3) Something kills cells in a petri dish? So does a handgun. In vitro studies are a fantastic starting point, but we need to be sure not to generalize too broadly too quickly.

          • suntzuanime says:

            I wouldn’t fire a handgun at my eyes either, FWIW.

          • Cypren says:

            LCD screens are bad because they are bright, and right in front of you…

            This is one reason why many (maybe even most) professional programmers use very dark color schemes on their tools and get very annoyed at websites/apps that are only available in bright-white color schemes. Dark backgrounds with high-contrast text are far easier on the eyes when you’re staring at a monitor for 12+ hours a day.

          • Siah Sargus says:

            >Dark backgrounds with high-contrast text are far easier on the eyes when you’re staring at a monitor for 12+ hours a day.

            Yeah, I wish web devs properly understood that, instead of giving me a bunch of white pages all of the time! But even darker backgrounds won’t save me all the time, especially when it comes to color grading my work. Then I absolutely need high brightness and saturation!

    • Deiseach says:

      Things to do: the recommendations like “don’t have the screen too bright or in direct glare, don’t strain your eyes looking at it, if your eyes are dry/watering, that’s a sign it’s too bright and you’re looking too closely for too long; every half an hour move your eyes away from the screen and look into the middle/far distance to compensate”, etc.

      Don’t buy special filters/glasses etc unless you’ve been recommended to do so at your most recent eye test from a reputable optician, not some “you will GO BLIND unless you BUY OUR GOODS!!!!” website.

    • One Name May Hide Another says:

      myopia

      Not really answering your question, but there’s a very interesting article in Nature I read some time ago called Myopia Boom (sorry, can’t link because of the spam filter, but it’s easily googlable). The article claims that myopia is caused by lack of daily exposure to sunlight and presents some evidence for the assertion. For example, it once seemed that exercise could help prevent myopia, but then it turned out that only exercise done outside is beneficial. Similarly, children spending hours in front of screens and/or reading didn’t seem to develop myopia as long as they spent enough time outside. And some schools in China started experimenting with glass roofs in order to see if this sort of sunlight exposure would help prevent myopia in their students.

      • psmith says:

        Now you mention it, I think this came up in a links thread not too long ago.

        Too late for me, of course, but interesting stuff.

        • One Name May Hide Another says:

          I think this came up in a links thread not too long ago.

          That’s probably why I read it, then. Just goes to show all of the interesting stuff I read comes from Scott. =)

          Too late for me

          I’m wondering if daily exposure to sunlight in adults might prevent eyes from getting worse, though.

    • Reasoner says:

      Some thoughts:

      * The Daily Mail is a British tabloid newspaper.

      * Searching for “central blindness” (what’s supposedly being caused here) seems to indicate this is a term used mostly in a veterinary context. (Perhaps this nonstandard term use is due to the research being translated from Spanish?)

      * They’re quoting a sentence fragment “it is now clearer than ever that we are facing a global epidemic” that’s supposedly from a recent report. Searching for that sentence fragment brings up the Daily Mail as the first result. Does this report exist? I’m not able to find it by searching for “Celia Sanchez-Ramos” on Google Scholar and filtering for publications since 2016 either.

      * It looks like the report is based on 2 papers, one of which you linked. The other has to do with rats and tablets. I haven’t been able to find this second paper and I’d appreciate a link to it.

      * This looks like it might be the press release for their study: http://www.reticare.com/press/wp-content/uploads/2017/02/Press-Release-Study-LED-Light-and-Damage-to-Eyes.docx Note that it’s being hosted on the website of a company that sells a solution to this ‘problem’.

      * Dr Sanchez-Ramos seems to have some kind of Harvard affiliation: http://rcc.harvard.edu/eyecare-research Here’s her Wikipedia article: https://en.wikipedia.org/wiki/Celia_S%C3%A1nchez-Ramos

      * None of the citations of the paper you mention seem very doom and gloomy: https://scholar.google.com/scholar?start=0&hl=en&as_sdt=2005&sciodt=0,5&cites=6657082982669988686&scipsc=

      * Here’s a link to the full text of the paper: http://www79.zippyshare.com/v/AATdnIhu/file.html (Courtesy of gen.lib.rus.ec/scimag)

      * From a glance at the paper, it appears that their research was done *in vitro* (essentially, with light-sensitive cells in a petri dish) over a period of a couple of days. It’s unclear what relevance this research has to cells *in vivo* (the light-sensitive cells in your eye) over many years of use.

      * Quote from the full text of the paper

      Epidemiological studies suggest an association between visible light exposure and increased risk of advanced age-related macular degeneration (AMD). Visible light can affect the retina and RPE by photochemical, thermal and mechanical mechanism (8).

      * Here’s some info on the prevalence of age-related macular degeneration: https://nei.nih.gov/eyedata/amd

      * I searched online for info on age-related macular degeneration and LED lights and found this article from the Financial Times, apparently on the same research, which I consider to be a much more reliable source: https://www.ft.com/content/a78d6b68-85ef-11e4-a105-00144feabdc0 Quote:

      Dr Sánchez-Ramos acknowledges that it may take another 10-15 years for research to demonstrate conclusively that LED light causes macular degeneration in the same way that sunlight does.

      * The Financial Times article also has a good advice section at the end. (I’m not gonna quote it because I appreciate the fact that FT is so much more levelheaded than the Daily Mail and I want them to get every bit of ad revenue they can. BTW have you thought about subscribing to the Financial Times? It’s a great paper. I’ve noticed that papers like FT which are targeted at investors seem to have higher quality information than other sources. [Cue investor cat meme])

      * “LED lamps are another great option. They provide quality light without doing further damage to the eyes.” http://www.amdblog.org/macular-degeneration/optimizing-indoor-lighting-for-macular-degeneration/ This looks like it’s a blog written by some people who sell a supplement for age-related macular degeneration. So no real reason to believe it’s a reliable source.

      * Instead of buying Dr Sanchez-Ramos’ screen filter, consider simply using https://justgetflux.com/ to decrease the amount of blue coming from your screen. This solution has the advantage that it can modulate the level of blue reduction through the day in order to help you with your sleep cycle. (After reading all this info, I tweaked my F.lux settings to make my screen oranger and also turned down the brightness on my screen in general.)
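
      To get a feel for how much those two tweaks compound, here’s a toy calculation (a minimal sketch in Python; the specific percentages are made-up illustrative numbers, not measurements of F.lux or of any particular monitor):

      # Rough model: blue-band output scales with overall backlight brightness
      # times whatever fraction of the blue channel the warm color shift leaves.
      # Both factors below are assumptions, chosen only to illustrate the idea.

      brightness_factor = 0.6    # e.g. turning screen brightness down ~40%
      blue_channel_factor = 0.5  # e.g. a warm color shift halving the blue channel

      remaining_blue = brightness_factor * blue_channel_factor
      print(f"Blue-band exposure relative to before: {remaining_blue:.0%}")
      # -> Blue-band exposure relative to before: 30%

      The point is just that the two adjustments multiply, so even modest versions of each cut the blue-band dose substantially; whether that dose matters for the retina at all is exactly what the mixed literature above is arguing about.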

  31. Well... says:

    Are there any services that allow you to purchase a URL (with forwarding) for just a day or two, and that charge proportionally for it?

    • HeelBearCub says:

      Are you looking for a one time thing, or something that lets you cycle domain names every few days?

      Because if it’s a one time thing, you can get promo rates that are 1 year for a dollar.

    • skef says:

      It seems that using a URL shortener would constitute obtaining “a URL with forwarding”, and many of those are free. But I suspect that you mean something more specific.
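
      For what it’s worth, the “forwarding” part is just an HTTP redirect; here’s a minimal sketch in Python of what a shortener or a registrar’s forwarding service does for you (the target URL is a made-up placeholder):

      from http.server import BaseHTTPRequestHandler, HTTPServer

      TARGET = "https://example.com/real-destination"  # hypothetical destination

      class Redirect(BaseHTTPRequestHandler):
          def do_GET(self):
              # Answer every request with a redirect to the real destination.
              self.send_response(302)
              self.send_header("Location", TARGET)
              self.end_headers()

      if __name__ == "__main__":
          HTTPServer(("", 8000), Redirect).serve_forever()

      Whoever controls the short URL just keeps a tiny service like this (or a database of slugs) pointed at the destination, which is why shorteners can hand them out for free.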

    • This may make Web creator Tim Berners Lee cry, since “Cool URIs Don’t Change”.
      https://www.w3.org/Provider/Style/URI.html

      • Jiro says:

        The reply to most of those is “Doing those things is not free. Are you going to pay us for them? No? Well, why should we listen to you?”

        • skef says:

          Pride in creativity and craftsmanship could be one reason. If you’ve devoted your career to making piles of bits that only matter for the next two months, maybe you could do better for yourself and others.

          • Jiro says:

            The reply to that is basically the same, even if the details are different: you’re telling someone else to bear the costs of something that you won’t bear the costs of yourself. Phrasing it as “of course you can do better, it’s easy” doesn’t change this, and since you’re not bearing the costs, you’re not in a position to judge whether it’s really all that easy.

        • roystgnr says:

          Putting up a website is not free. Therefore, people putting up a website generally already have significant incentives for putting up a website. These incentives probably correlate very highly with incentives for putting up a website that doesn’t work poorly, in which case information about how to make your website work less poorly is helpful.

          • Jiro says:

            Doing extra work to make the website work in the way that an unpaid bystander wants it to work is not free, and its costs are over and above the costs of just creating the website at all.

            These incentives probably correlate very highly with incentives for putting up a website that doesn’t work poorly

            That’s equivocating on “doesn’t work poorly”. It works poorly by the standards of the unpaid bystander; it doesn’t work poorly by the standards of the people paying for it.