Open Thread 128.25

This is the twice-weekly hidden open thread. You can also talk at the SSC subreddit or the SSC Discord server.


589 Responses to Open Thread 128.25

  1. johan_larson says:

    The penultimate episode of the miniseries “Chernobyl” ran on HBO yesterday. This one featured the famous “bio-robots”, workers assigned to clear highly radioactive debris in very short shifts, 30-90 seconds in some cases.

    Anyone here know enough about nuclear engineering to explain why the plant operators are so adamant that an RBMK reactor cannot explode? Nuclear power plants are in the business of turning water to steam to drive turbines to generate electricity. For steam to do useful work it needs to be under pressure. Pressurized systems can fail explosively. What am I missing?

    • HeelBearCub says:

      Not really answering your question, but “explosion of pressurized steam container” isn’t what I think of when someone talks about a nuclear plant exploding. To me that suggests a nuclear explosion.

      • bean says:

        This is correct. Nuclear reactors, even poorly-designed Soviet ones, can’t explode like nuclear bombs. They can fail in other ways, including producing steam explosions, but an engineer is going to insist, correctly, that these are not the same thing.

        Or CatCube’s explanation could be right. This was the USSR, after all.

        (Also not a nuclear engineer, but I very briefly worked on a reactor project with some.)

    • johan_larson says:

      Looks like the draftees assigned to shoot stray dogs were using Mosin-Nagant rifles:

      https://en.wikipedia.org/wiki/Mosin–Nagant

      Man, that is an old, old design for 1986. Developed in the last days of the 19th century. Not even semi-automatic.

      • DarkTigger says:

        The gun is still in civilian use. You can buy them for around $75 in the US, last time I checked. And why not? As long as it is properly stored, the lock of a gun stays usable pretty much forever, and both the Russian Empire and the USSR built untold millions of Mosin-Nagants.

      • CatCube says:

        Bean linked to a great website in a post on battleship aviation: WWIIafterWWII. There’s a tag page on the Mosin-Nagant, and one of the articles shows uses of this rifle in Syria, in 2018. Not in large numbers, and some are post-war copies, but some of that stuff has lasted a long time.

      • bean says:

        The Russians never throw anything away. There are apparently still warehouses full of WWII-era artillery and maybe even tanks, just in case they need them. And a bolt-action rifle is perfectly adequate for shooting stray dogs. Plus, you can easily throw it away if it gets contaminated.

      • Aapje says:

        @johan_larson

        Interestingly, firearm design shows fairly little autonomous innovation. Most innovation happens relatively soon after new technology becomes available.

        The end of the 19th century is notable for the development of smokeless powder, which was an immense improvement over black powder, being both far more powerful and much cleaner. Black powder quickly fouls up any complex firearm to the point where it stops working. So smokeless powder allowed for new designs that wouldn’t have worked with black powder.

        Pretty much all of the common bolt-action designs are from the end of the 19th century. The Germans, Russians, and British all had bolt-action rifles in WWII based on designs from the end of the 19th century. The US was the only large nation that had a semi-automatic rifle as standard issue during WWII, although the French had just started producing one in 1940, but didn’t make very many before the occupation (although they amazingly kept it out of the hands of the Germans).

        The first (real) machine guns were also developed at the end of the 19th century, as they wouldn’t have worked with black powder.

        Semi-automatic rifles and pistols took a little bit longer, but were designed early in the 20th century, most notably the M1911 pistol, which is still quite common today (and still used by part of the US military).

        A major reason for the slow adoption of semi-automatic rifles is that militaries worried about running out of ammunition. Bolt-action rifles with magazines were for quite some time made with cut-offs that prevented feeding from the magazine. The idea was that soldiers would normally single-load cartridges and fire volleys under the direction of an officer. Quicker loading from the magazine was only to be used for ‘oh shit’ moments.

        A second reason is that militaries favored long distance shooting for a very long time. It basically took both WW I and WW II for them to truly recognize that effective shooting tends to happen at fairly short distances and that smaller cartridges are generally better.

        Anyway, the Russians had great difficulty fine-tuning and producing the AK-47, only getting it right in 1959 or so. They also wouldn’t give these nice weapons to draftees, who typically get the old shit until everyone and their mother has the newer weapons.

        Furthermore, countries commonly stock huge quantities of ammunition for the next big war. If they switch to a rifle with different ammunition, they have a strong incentive to make draftees practice with the old weapon, so they use up the obsolescent ammunition rather than the new stuff.

      • John Schilling says:

        I think the record for “WTF why would anyone still use that thing” is the Madsen light machine gun, where someone tried to make a lever-action rifle into an automatic weapon by using a long-recoil mechanism coupled to a steampunk monstrosity of levers and cams and springs, and it somehow worked. Worked well enough to be adopted by over thirty countries, used for almost a hundred years in wars on five continents, and still keeps showing up in Brazilian police hands even though they claim to have officially retired it a decade ago.

    • CatCube says:

      I think there’s a maximum amount of steam that can be generated per unit time, based on the reactor power, and the reactor operators thought that maximum was less than the amount required to blow the reactor apart. It turns out they were wrong, but that’s because the limitations of the system were concealed from them.

    • John Schilling says:

      Some of the key points have already been made, but to elaborate:

      No nuclear power plant can explode like a nuclear bomb. Any nuclear power plant can violently disperse critical parts of itself. Ideally, you can design the containment structure to withstand the worst-case violent dispersal. No competent engineer can claim that this was done with the RBMK-1000.

      The reason for the first three claims is that it takes deliberate, extreme effort to achieve a highly supercritical configuration, and nobody has any reason to do that with a power plant. The common conception of a “critical mass” is an oversimplification, and usually contains assumptions like “assuming a uniform sphere of fissile material at normal density in an infinite vacuum”. Really, it is a combination of mass and density of fissile material, presence of moderators and neutron reflectors, and geometry of all of this, that defines criticality. And while hypothetical configurations can be subcritical, critical, or supercritical, there’s a “you can’t get there from here” problem in manifesting the supercritical situations in the real world. Nuclear reactions are self-initiating on a timescale of microseconds to maybe (with highly enriched uranium, if you are careful) milliseconds. In the course of assembling what you imagine to be a highly supercritical configuration, you will pass through a barely-supercritical configuration.

      Whereupon the nuclear reaction starts and, unless you deliberately back off and/or have incorporated negative feedback, grows exponentially(*) until it produces enough energy to violently destroy some element of the supercritical configuration. Maybe this just means boiling some water that you were using as a moderator, maybe it means actually vaporizing your plutonium sphere, but even in the latter case if you just barely vaporize part of a barely-supercritical plutonium sphere, it drops back to a subcritical state and the reaction dies down. Whatever it takes to wreck the critical parts of the reactor, that’s about as bad as the explosion can get.
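      As a toy sketch of that self-quenching behavior (this is not a real reactor-kinetics model; every constant below is an arbitrary placeholder, chosen only to show the qualitative shape):

```python
# Toy sketch of a self-quenching power excursion: a barely-supercritical
# assembly grows exponentially while reactivity is positive, and the energy it
# releases eats away the supercriticality until the excursion dies out.
# NOT real reactor kinetics; all constants are arbitrary placeholders.

dt = 1e-4                    # timestep, seconds
rho0 = 0.001                 # initial reactivity: barely supercritical
gen_time = 1e-3              # assumed effective neutron generation time, seconds
disassembly_energy = 1e6     # energy (arbitrary units) that wrecks the geometry
feedback = rho0 / disassembly_energy   # reactivity lost per unit of energy released

power, energy, rho, peak, t = 1.0, 0.0, rho0, 1.0, 0.0
while power > 1e-3 * peak:                # run until the excursion has died back down
    power *= 1.0 + rho * dt / gen_time    # exponential growth (or decay) of power
    energy += power * dt                  # accumulate released energy
    rho = rho0 - feedback * energy        # released energy removes the supercriticality
    peak = max(peak, power)
    t += dt

print(f"peak relative power {peak:.3g}, energy released {energy:.3g}, duration {t:.1f} s")
```

      With a barely-supercritical (slow) time constant, the whole excursion plays out over seconds: enough to wreck the configuration, nothing like a bomb.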

      For a Hiroshima-style nuclear explosion, you need some clever way to go from a subcritical state to a very supercritical state, in milliseconds or less. To assemble a supercritical configuration, faster than it can disassemble itself under the influence of a nascent nuclear explosion. Firing uranium slugs at uranium targets from a high-velocity cannon or collapsing plutonium spheres to twice their normal density with carefully-arranged high explosive lens assemblies, that sort of thing. There’s no mechanism for that in a power plant.

      Worst case for a power plant, you build the nuclear core of it out of something flammable, so that once you reach the “have wrecked the critical assembly” part, there’s a good chance that it will finish burning itself to the ground the old-fashioned way. And then wimp out on the steel-and-concrete containment structure designed to withstand a reactor burning itself to the ground in a pool of spilled, boiling water.

      That’s the RBMK reactor, with its graphite moderator and easy-access roof. If you use water as a moderator, boiling off enough water means ending the reaction. With a graphite moderator, basically high-grade coal, boiling off enough water means the reaction continues but now you’ve got exposed graphite to catch fire. And if you insisted on an easy-access roof because this is a military plutonium breeder as well as a civilian power plant and you’re going to be swapping fuel elements regularly, then you’ll probably blow it off with the first bit of steam pressure.

      Anyone claiming an RBMK cannot explode is either quietly caveating that with “…like a nuclear bomb” and ignoring the problems with an uncontained conventional explosion coupled with a working reactor’s worth of nuclear waste, or they are arrogantly assuming that it is “impossible” for their safety mechanisms to fail and allow the reactor to proceed to the catching-fire stage that is an inherent failure mode of the design.

      * With a relatively slow time constant because only barely supercritical

  2. Theodoric says:

    I recently heard about something called Fundrise. From what I can tell from their website, they are non-publicly traded REITs and direct investment in real estate properties. You do not have to be an accredited investor to invest with them.
    1) Is this some kind of scam?
    2) Is there a compelling reason to invest with them instead of just buying a REIT (I am not talking about a retirement fund or anything like that)?

    • brad says:

      I don’t know anything more than briefly glancing over their website, but it reminded me of this: https://www.youtube.com/watch?v=bZ8VSZni_YY and I agree entirely with what Mark Cuban says in that clip.

      • Douglas Knight says:

        [Roll eyes]
        [Clutch head]
        This is horrible in so many ways.

      • Theodoric says:

        And now I wish I had the money to pay the sharks to critique anything I’d want to invest in.

    • Erusian says:

      1.) You normally need to be an accredited investor to invest in private REITs. However, they might be registering this as crowdfunding and using multiple shell companies to skirt under the rules. It’s not illegal but it’s a little wild west. Then again, all people using that exemption are a little wild west because it’s so new.

      Looking up its history, it appears it’s not a scam-scam (in the sense that they’re going to take your money and run). But it does look like they’re double dipping in a couple of ways, doing things that lead to conflicts of interest, and that their marketing claims have received complaints.

      2.) Not that I can see, other than the time/effort aspect of doing it yourself. Their selling point appears to be ease of use and not much else. Even their returns look a little anemic to me.

  3. Deiseach says:

    Serendipity, or something at work here.

    While tuned in at lunch time to our national classics station, I heard this – the “In Paradisum” from Benjamin Britten’s War Requiem, and I thought I’d link to it here because it’s a great piece of music – making beauty out of horror.

    Then logging on I find that today in the United States, it is Memorial Day: “Memorial Day is a federal holiday in the United States for remembering and honoring people who have died while serving in the United States Armed Forces.”

    Requiem æternam

  4. Laukhi says:

    My admittedly shallow understanding of economics has led me to believe that the main reason for the prevalence of firms over independent contractors is transaction costs. However, it seems that nowadays there are many people who do not do their work in firms, such as those working on Youtube or Patreon, or some alternative, due to much cheaper capital and easy communication.

    I am not sure if I am falling prey to the fallacy of thinking this age exceptional when it is just that I’m uninformed, but could it be that increasingly advanced technology will eventually lead to firms being unnecessary, or at the least less needed?

    • BBA says:

      The distinction between “employee” and “contractor” is fairly arbitrary, and more about the legal consequences of one status or the other than anything inherent in the relationship itself.

      • bullseye says:

        That’s true for “independent contractors” working for a firm, but I think Laukhi is talking about contractors who work directly for the customer without a firm (or with the firm doing nothing but handling the transactions and taking their cut, as with Youtube and Patreon).

        • CatCube says:

          The issue there is that a lot of business is already conducted that way, and technological improvement doesn’t seem to have produced the kind of sea change in those firms’ problems that would make them more attractive.

          The website Clients From Hell can be summarized as “mostly small one- or two- person independent art firms talking about how hard it is to get paid.” There’s also a helping of “we wasted a bunch of time because the client couldn’t figure out what they wanted.” How does technology fix the management, legal, and accounting realities? One way the firm solves it is that these administrivia–very critical to the success of the business–becomes somebody’s entire job, enabling them to get better at it.

          Technology may help pull together data for people who are inclined to look at it, and may do some of the arithmetic bitchwork involved in tracking the books. It may also, however, have a hand in making those things more complicated. Because the computer can take more of the burden of that kind of “simple*” work, it allows it to become more complex.

          An example from my own field of structural engineering: as computer usage has grown, the equations and methods used to solve for both the loads and capacities of structural members have gotten more complicated, consuming much of the simplification that computers would have provided had we kept using the older methods. This complexity has made things more accurate, allowing the use of less material for the same relative safety, but much of the now-required analysis would be prohibitively time consuming, tedious, and error-prone with a slide rule. There are of course also things that are possible with computing technology that would have simply not been possible to do safely with old hand-analysis methods (e.g., supertall buildings).

          If anything, the increase in complexity and the greater required interfacing with the government could make smaller firms less likely due to this increased complexity. If you’re outsourcing your judgement on many of these matters to “the magic box said so,” you might end up in a real bind when the IRS disagrees with your tax return.

          * This is definitely scare quotes–ADBG’s posts show that this accounting can be very complex, even for something simple to describe.

        • BBA says:

          It’s a matter of perspective what the “firm” is. I once heard the description that VC funds are “firms” and VC-backed companies are “divisions” of those firms, which doesn’t correspond to appearances or legal structures, but may be a better way to get at the underlying social relationships. Likewise you can look at YouTubers as “employees” of YouTube if that’s where they get all their money and they have to abide by corporate policies or get “fired” (i.e., demonetized).

          It’s all a matter of where you draw the lines and boxes.

    • toastengineer says:

      I don’t think the firm is at any risk of fading away entirely any time soon – things that require big buildings full of equipment and people operating them are always going to be managed by firms, you’re never going to have an “indie” oil company for example. Nor do I think we’re going to see Maersk replaced by a bunch of independent freighter captains, because the up front capital investment is just too high. But we are definitely seeing smaller, less firm-y firms pop up. A lot of software companies don’t even have offices, for example, it’s just a bunch of people who pay each other. Compare that against IBM or Microsoft.

      My dream is to start a company that’s basically just a hackerspace where, whenever someone comes up with something really cool, we start selling it and split the profits.

      But, then again, IBM and Microsoft are doing pretty well for themselves – it seems like communications technology hasn’t completely eliminated the problems the idea of a company was created to solve. Companies can use those technologies too, after all.

      • Clutzy says:

        I agree. I don’t think anything has significantly changed. Taxi drivers were often independent contractors, and remain so with Lyft/Uber.

        Youtube is a firm, the people who make content are ICs. Lots of firms hire outside work to do their content. NBC doesn’t create and produce all its shows. It hosts the NFL which is an IC, various studios have shows on NBC, etc. So there is no new business model.

        And Patreon is just a charity aggregator masquerading as something else.

    • Viliam says:

      Seems to me that the essence of the change is not technology per se, but rather popularity of the business model: “build a marketplace, let other people bring their products there and compete with each other, and tax everyone”.

      Before this model became popular, companies would try to build everything themselves, or to buy it. If they had received a trustworthy message from the future that one day people would watch online videos more than they watch TV… the recipient of the message would probably have tried to make their own TV channel, which would either make or buy movies and stream them online. (Instead of making YouTube, where millions of people can upload their own videos.) Because they would not have known there was a different way.

      As an example, before the smartphone era, Microsoft Windows was installed on almost every computer. A perfect opportunity to create a global software marketplace! Instead, Microsoft was busy creating their own products and trying to destroy competitors. So another company had to make Steam; then Google and Apple made app marketplaces for their operating systems… and only afterwards did Microsoft finally get the memo and create the Microsoft Store. They could have done that at least a decade earlier; and instead of fighting their competitors they could have taken a 30% cut of their sales, and everyone would have been happier! But the idea simply wasn’t out there.

      Okay, it is about technology: you need internet, and online payments. Actually, probably only the former, because you could transfer the money to your account offline, and use it to buy the apps afterwards. (Just like now you can buy Steam credits offline.) Also, Microsoft could have provided the first few “free credits” as a part of buying Windows.

      Things are easier now for independent contractors, because in the past no one thought that “making things easier for independent contractors, and then taking a cut of their sales” is a good strategy for many things; especially for digital goods or services that can individually be quite cheap.

  5. HeelBearCub says:

    Via Kevin Drum, Andrew Van Dam at WaPo has a piece up that points out something that is perhaps “obvious in retrospect”.

    Rural towns that are successful grow in population. Once they grow to a population of 50k, we reclassify them as “urban”. Thus, the definition of rural America automatically includes a bias against showing rural populations as successful.

    So, any story that talks about how “poorly” rural communities are doing may just be an effect of survivorship bias.

    Now, another way to look at this is perhaps to say that in the U.S., for whatever reason, it’s impossible to maintain the rural character of successful rural areas. Nobody goes there anymore, it’s too crowded.

    • Nabil ad Dajjal says:

      This is a very silly argument.

      The rust belt consisted of a number of what were, in the last century, thriving cities such as Detroit, Cleveland, Pittsburgh, Buffalo and a dozen others with less recognizable names. Their decline precisely mirrors the decline of rural factory and mining towns.

      The hollowing out of middle America isn’t a semantic problem of definitions, it’s a very tangible and visible process. Large swaths of the country look like bombed-out ruins: that’s not because of how those areas are classified, it’s because the industries they were built around have been offshored.

      • baconbits9 says:

        I don’t know what you mean by precisely mirrors, but the declines in rust belt cities don’t precisely mirror each other.

      • HeelBearCub says:

        This has nothing to do with arguments about rust belt city decline. Those cities have been and continue to be classified as urban.

        Even if we dip into the idea that the exurbs of some of the smaller areas got reclassified as rural again, due to overall population decline, it’s still a process of survivorship bias.

    • Erusian says:

      This argument is probably comforting to those who want to ignore rural problems and the problems of the minor cities. But it doesn’t work out statistically.

      Rural, in the US, is defined as ‘not urban or suburban’. An urban area has to have an ‘urban nucleus’ of about fifty thousand people, and then includes the surrounding areas in addition to the nucleus. There’s some ambiguity in the 50-100,000 range, but by the time there are 100,000 people in the nucleus and surrounding area it’s a city, if a small one. That’s roughly 2% of American communities and 10% of the land, but 80% of the population.

      And it’s actually very rare for a non-city to become a city or a city to become a non-city. Only about four change status every year on average. This is out of 22,498 places, not including unincorporated ones, counted in the census. So in a good year, .02% arguably leave rural status. That process is almost always long term. The Bakken Oil Boom, for example, produced no new urban areas.

      • baconbits9 says:

        Even if only 0.02% left rural status each year they would be by definition the largest population areas and could easily be several orders of magnitude larger than the average rural area. If you are trimming 1-2% a year of the most prosperous population areas then this could have a significant statistical effect over 20-50 year periods.
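        A toy simulation shows the direction of this selection effect (all the numbers below are invented, so it says nothing about the real size of the effect): give every town the same growth-rate lottery, reclassify anything that crosses an arbitrary 50,000 threshold as urban, and compare growth measured over everything that started rural against growth measured over what is still classified rural at the end.

```python
import random

random.seed(0)

# Toy sketch of the reclassification/survivorship effect. Every number here is
# invented, so this shows only the direction of the bias, not its real size.
N_TOWNS = 10_000
THRESHOLD = 50_000     # arbitrary "becomes urban" cutoff

pop_1950 = [random.randint(500, 20_000) for _ in range(N_TOWNS)]
pop_now = list(pop_1950)

for decade in range(7):            # 1950 -> 2020, identical growth-rate lottery for every town
    pop_now = [int(p * random.uniform(0.85, 1.45)) for p in pop_now]

# Towns that end up below the cutoff are the ones still classified "rural" today
still_rural = [i for i in range(N_TOWNS) if pop_now[i] < THRESHOLD]

growth_1950_definition = sum(pop_now) / sum(pop_1950)
growth_current_definition = (sum(pop_now[i] for i in still_rural)
                             / sum(pop_1950[i] for i in still_rural))

print(f"growth of everything that was rural in 1950:  {growth_1950_definition:.2f}x")
print(f"growth of what is still classified rural now: {growth_current_definition:.2f}x")
```

        How much of the real-world gap this mechanism explains is exactly what is in dispute here; the sketch only shows which way the bias points.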

        • Erusian says:

          They would be several orders of magnitude larger than the average. However, they would on average be about half a percent, maybe up to a full one percent, of the total. If losing the most concentrated .5% of their population was enough to collapse society, even as the total population grew, you’d expect to see other places with net positive migration but a small consistent emigration to have similar issues. Like, say, New York and California. Which don’t.

          I’m not arguing this has no effect. Clearly, the exit of large communities has some effect. But the argument is (quoting here), “If rural Americans complain of being left behind, it might be because they literally are [by census statistics].” This is an attempt to dismiss other factors and it’s far from proven that that’s the case. I find it doubtful such a small percent can have such a large effect. Especially considering the process is more or less constant and it gets better or worse depending on time period.

          (Also, keep in mind that populous does not always equal prosperous. For example, several areas are swelling due to relatively poor migrants. These communities are much more populous but can be poorer than, say, smaller mining towns or more capital rich farming communities.)

          • baconbits9 says:

            You would be losing the top 0.5% EVERY YEAR, not once.

          • Erusian says:

            As would the effect in New York. Indeed, it makes my 7.6 million example even more poignant because it would require the houses to appreciate from roughly $150k to $7.6 million every year.

            I am comparing like to like here. At least I think I am. If you think I’m not, please share how I’m not.

          • baconbits9 says:

            First you aren’t addressing the actual argument. The argument is that rural society isn’t actually collapsing, but the statistical methods are creating an artifact that makes it appear as if it is collapsing. By taking the top performing rural areas and reclassifying them as urban you are basically defining success as urban. Exactly how large the effect is would take a lot of investigation, but the potential there is big.

            1. Obviously the largest town would have an out sized effect relative to the number of small communities on rural growth.
            2. The effect doubles as you trim the best performing rural areas and add them to urban areas, now their growth will increase the rural/urban divide.

            There’s some ambiguity in the 50-100,000 range, but by the time there are 100,000 people in the nucleus and surrounding area it’s a city, if a small one. That’s roughly 2% of American communities and 10% of the land, but 80% of the population.

            There is a lot more ambiguity than this; if you follow the WaPo link, there is a claim that communities of 8,000 are classified as urban if enough of that population commutes.

            The other major factor is a persistent shift in the definition used for rural/urban. Using the 1950s definition has rural areas growing faster by population than urban, and (eyeballing the graphs) it appears that the first 6 definition changes all favored urban growth. Now there might be some good reasoning in there, but it at least pushes back against the common narrative and requires far more rigorous investigation.

            As would the effect in New York

            I am not sure what this is in reference to.

            Indeed, it makes my 7.6 million example even more poignant because it would require the houses to appreciate from roughly $150k to $7.6 million every year.

            I don’t believe those are the numbers required to explain the rural/urban gap if you strip out the top 0.5% of rural homes in value and reclassify them as urban every year, but I am not going to be able to do that math right now.

          • Erusian says:

            First you aren’t addressing the actual argument. The argument is that rural society isn’t actually collapsing, but the statistical methods are creating an artifact that makes it appear as if it is collapsing. By taking the top performing rural areas and reclassifying them as urban you are basically defining success as urban. Exactly how large the effect is would take a lot of investigation, but the potential there is big.

            Yes, I am. I am making the point that in order for the exit to account for the whole variance in housing (for example) the houses would have to appreciate from $150k to $7.6 million before dropping down to $200k.

            Since they obviously do not, that’s obviously not a significant part of the variance. It is making a wider point about just how small the transitional effect is.

            I am making a mathematical argument using statistics. You have not addressed it. If you want to make the case that it’s an iterated effect that does not appear in single instances, fine. But make that case.

            1. Obviously the largest town would have an out sized effect relative to the number of small communities on rural growth.

            2. The effect doubles as you trim the best performing rural areas and add them to urban areas, now their growth will increase the rural/urban divide.

            1.) Outsized, yes. But since it’s a widely dispersed population, it’s still small. As I said, we are talking about significantly less than half a million people out of a population in the tens of millions. As I said before, on average, less than 350,000 people leave rural status a year this way. There are currently sixty MILLION rural Americans. This means it is a significantly smaller population flow than, for example, illegal immigration to rural areas.

            2.) This again presumes, without evidence here or in the article, that population growth inherently leads to or comes from economic prosperity. This simply isn’t true. A great many towns have been formed by large masses of poor migrants that decreased most of the relevant statistics. But they had the raw population numbers to be urban.

            There is a lot more ambiguity than this; if you follow the WaPo link, there is a claim that communities of 8,000 are classified as urban if enough of that population commutes.

            That community would have to be located around an urban core. So, yes, if you live in a commuter town outside Boston with 8,000 people, 4,000 of whom work in Boston, you’re considered part of Boston. But you need an urban nucleus of at least fifty thousand people to qualify. They sometimes make exceptions, but rarely.

            I don’t believe those are the numbers required to explain the rural/urban gap if you strip out the top 0.5% of rural homes in value and reclassify them as urban every year, but I am not going to be able to do that math right now.

            Who says it’s the top .5% of rural homes by value? It’s not. Farms, large mansions, resorts and the like would all be valued more highly.

            Anyway, I’m happy to go over the math if you’re interested in investigating the phenomenon. You’re right it’s a back of the napkin calculation and we could make a more sophisticated model.

            If you just want to keep your beliefs and not investigate further, then that’s that though. But this was what I was afraid of.

          • HeelBearCub says:

            Your claim is the math doesn’t “work out”.

            But we already know that the math works for the actual population numbers. We know that enough changes in the definition of urban have occurred such that the non-urban areas in 1950 grew far faster in population than those already classified as urban at that time. That math works.

            What exactly is your claim here?

      • HeelBearCub says:

        it doesn’t work out statistically.

        I don’t think you bothered to click through to the article.

        If we look at all the area categorized as rural in 1950, the growth rate of that “rural” land area well exceeds the growth rate of the “urban” areas at that time.

        • Erusian says:

          I’m afraid I did bother to read the article.

          The article has statistics showing that all rural land has grown faster than urban areas but that what remains urban land has grown more slowly. It then draws the conclusion that rural areas that are successful exit being rural areas, leaving behind poorer communities as ‘rural’ which is effectively a synonym for ‘failing’ because of statistical definitions.

          I have a bunch of objections to it. I could just go down the list but here’s the one I made in my post: When you look at significant per capita discrepancies, something that removes a tiny part of the total population annually doesn’t work for large discrepancies. Not unless those people are ridiculously different from the people around them. For example, the average home value in rural America is about 25% lower than in urban America. In order for that effect to be the result of the reclassification, the average reclassified home has to be worth at least $7.6 million.

          The math simply does not work on the community level. The change in classification is too small to explain the whole effect when combined with observed statistics. We are talking about significant discrepancies and the article is trying to explain them with about two hundred status changes over seventy years.
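          To make the shape of that back-of-the-envelope calculation explicit (the inputs below are round placeholders rather than the exact figures behind the $7.6 million number, but with these assumptions the answer lands in the same millions-of-dollars ballpark):

```python
# Back-of-the-envelope check on the reclassification argument. The inputs are
# assumptions, not the exact figures used in the comment above: a measured rural
# average home value, an urban average (rural ~25% lower), and a small fraction
# f of rural homes reclassified as urban over the period.
rural_avg = 150_000    # assumed measured average rural home value
urban_avg = 200_000    # assumed urban average (rural is ~25% lower)
f = 0.009              # assumed share of rural homes reclassified over the period

# If adding the reclassified homes back in were enough, by itself, to lift the
# rural average up to the urban average:
#   (1 - f) * rural_avg + f * exiter_avg = urban_avg
exiter_avg = (urban_avg - (1 - f) * rural_avg) / f

print(f"each reclassified home would need to average ${exiter_avg:,.0f}")
# -> several million dollars with these placeholder inputs, which is the point:
#    a sliver of reclassified homes can only explain the whole gap if those
#    homes are implausibly valuable.
```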

          • HeelBearCub says:

            You are talking about “losing”.

            The article is mostly talking about “gaining”. Rural areas that make the most gains cease to be rural.

            So, when looking at current rural areas we should ask “what is difference between the rural areas that succeeded and the rural areas that did not?”

            It’s not an argument that we should ignore rural issues. It’s an argument that we need to understand what is actually happening.

          • Erusian says:

            It doesn’t even prove that areas that make the most gains cease to be rural, actually. It just sort of assumes that. There’s no statistic about income per capita leading to population growth, for example.

            It also puts forth a thesis that rural problems are overstated because statistics are skewed by areas leaving rural classification. Which it doesn’t prove either.

            I agree there is something to be learned from areas that prospered, whether that prosperity led to exiting rural classification or not. If that’s your only argument we have no disagreement. But that’d require giving up on the article’s main argument: that rural areas are actually doing reasonably well but the statistical conversion of .02% a year (or about .9% over the entire period) accounts for a significant amount of rural underperformance.

          • HeelBearCub says:

            Huh?

            If you look at the graph, which was in the original piece and the re-blog by Drum, you can see that, in terms of population growth, the rural areas from 1950 outperform the urban areas from 1950, by a large margin.

            Then it looks at subsequent classifications and at what the population growth looks like from 1950 on. You can see that each reclassified map shows slower and slower population growth. The slowness of the growth of the currently rural areas is essentially the reason they are still classified as rural. The rural areas with high enough population growth to become urban … cease to be rural. That’s axiomatic. You don’t need to posit causation, because it’s by definition.

            If your point is “We haven’t proven that low incomes in these areas caused their low population growth”, I concede that.

            But that is also sort of the point. You also can’t say “rural areas are in decline because they are rural”.

          • Erusian says:

            But that is also sort of the point. You also can’t say “rural areas are in decline because they are rural”.

            I agree with that.

            What exactly are we disagreeing over? My understanding was that the claim (as it is in the article) is that a significant part of rural underperformance is because the most successful communities cease being rural. That is a ridiculous idea for a number of reasons. If you’re not arguing that, I’m not sure what your argument is so I was probably doing a poor job of arguing against it.

          • HeelBearCub says:

            @Erusian:
            Well, let’s ask this question … how rural is Ames, IA?

            Generally speaking a town in the middle of an ocean of corn fields codes as pretty rural, but somewhere around 1990 (at a guess) they stopped being “rural”. The metro area is about 100k over two counties.

            And currently unemployment in Ames is 1.5%. They are well past full employment. But that success is now coded as urban. Whenever Ames “became” a metro area, rural performance not only statistically worsened prospectively, but also retroactively.

            Which doesn’t mean that lots of rural towns aren’t shrinking. Not that this necessarily must cause immiseration. But of course just shrinking in and of itself axiomatically makes for a loss of measured economic activity, even if residents maintain or even grow their income.

            My point simply is that when we look at overall economic statistics for “rural” areas we need to keep the fact that overall economic activity has a couple of different biases built into it based on population changes.

            And then we start digging down into the per household data, as Drum is starting to do … you don’t really see much in the way of a trend that even says the rural areas are diverging from urban areas.

          • Erusian says:

            Well, let’s ask this question … how rural is Ames, IA?

            Generally speaking a town in the middle of an ocean of corn fields codes as pretty rural,

            You grew up in a city, didn’t you?

            No, Ames, Iowa doesn’t code rural for me. It’s a small city. Nor does the presence of corn fields, or else you’d classify most small Midwestern cities as rural. To me, rural means (eg.) Bremen, Indiana or Hooke Farm, Idaho.

            I lived in a rural area that had recently become a city and I lived in several rural areas and small (as well as large) cities. When I see things like this, I’m reminded of why federalism is such a great idea. People who have lived in urban areas their entire lives do not understand rural areas.

            This is a point I’ve made before: I criticized someone for having a definition of ‘rural’ that amounted to ‘a city but smaller’.

            Perhaps that’s the fundamental disconnect.

            rural performance not only statistically worsened prospectively, but also retroactively.

            Are you under the impression that the government back-adds areas that have become urban as not being rural in the past? They don’t.

            I don’t think you’ve addressed my point about how small this effect is. It’s a small part of the population, both in human terms and number of communities. So even if it had an effect, it would be small unless that population was extreme outliers.

            My point simply is that when we look at overall economic statistics for “rural” areas we need to keep the fact that overall economic activity has a couple of different biases built into it based on population changes.

            And then we start digging down into the per household data, as Drum is starting to do … you don’t really see much in the way of a trend that even says the rural areas are diverging from urban areas.

            Sure. We also need to keep in mind rural communities are significantly more diverse than urban ones. Not in ethnic terms but in terms of their organization, economically, culturally, and politically. For example, a bunch of large landholders grouped together who hire migrant labor to do the actual work on their farms are very different from a community of farm laborers working for a large corporation. But they are both more similar than a coal mining community. And so on. It’s part of being a classification that literally means ‘not urban’.

            Also, looking at your own graph it appears that rural areas do have a significant statistical disadvantage. Larger than the difference between whites and blacks, which most people agree is a problem.

          • brad says:

            I lived in a rural area that had recently become a city and I lived in several rural areas and small (as well as large) cities. When I see things like this, I’m reminded of why federalism is such a great idea. People who have lived in urban areas their entire lives do not understand rural areas.

            Alas, federalism in the United States has long since devolved into mi casa es mi casa y tu casa es mi casa. I’ve rarely seen federalism deployed as an argument constraining the proponent’s own willingness to meddle in the affairs of others; rather, it is consistently used as an attempt to constrain those others from meddling in some area over which the speaker wishes minority control. McDonald v. Chicago being perhaps the quintessential example of fair weather federalism.

            In any event the argument itself has been overtaken by facts on the ground. As you point out with Ames, IA, all but a tiny handful of states are dominated by urban areas at least the size of Ames. Rural polities depend for any political power at all on suburban voters that are rural dwellers only in their own imaginations.

          • The original Mr. X says:

            I’ve rarely seen federalism deployed as an argument constraining the proponent’s own willingness to meddle in the affairs of others

            That seems like a very strange objection to me. If you’re a principled federalist, presumably you wouldn’t propose meddling in other states’ affairs in the first place, so the issue of constraining you from doing so isn’t going to come up.

          • HeelBearCub says:

            I grew up in a town about the size of Ames, another college town, Chapel Hill, NC. It was just part of a larger metro area.

            Whereas my wife grew up in Rocky Mount, which is a town of about the same size, an hour east of the large metro area of Raleigh-Durham. The general culture of Rocky Mount and Chapel Hill are completely different. So too Greenville, NC. Her dad grew up planting tobacco on a farm in a community that was one stop sign big, playing 6-man football, in a county where his ancestors have lived since the 1600s.

            But, regardless, Ames appears not to have been classified as a metro area until 2003. I think by your view they weren’t rural even before then, and I’m not sure what your exact cutoff is for actually considering someplace rural.

    • CatCube says:

      It’s interesting, but I don’t think it grapples with one of the two big challenges to rural areas: formation of stable rural areas is probably no longer possible. (The other big challenge is the decline in the industries that formed the basis of the towns, which is discussed some.)

      This is more a hypothesis on my own part, driven by my observations on why I couldn’t raise kids in my hometown, nor could I even if the things causing the town’s decline were to reverse. There is the problem of wrapping too much in the example of one area, but I think there’s at least some food for thought that can apply elsewhere, and I’m familiar with the history and why things were the way they were (and have linkable stuff to hand). So, to use my own hometown as an example: Republic, Michigan. It started as a mining town in the 1870s, with several shafts. That Google Maps link is centered on what looks like a lake, but is actually an open mine pit from the 1950s now filled with water after it closed in 1981. A JPEG map from prior to the closure of the old shafts in 1928 shows what it looked like during the town’s heyday. To orient you to the area, the rail bridge, Kloman Ave bridge, and Mirror Lake (now locally called School Lake) are still there, and the south of the JPEG map ends just immediately prior to the dam you can see in the Google photo near the bottom of the water-filled pit.

      At the time of the town’s founding, if you worked in a mine, you lived right next to the mine. You did this because you got to work by walking your happy ass from your house to the shaft. An autobiography of a miner from a town nearby is called “52 Steps Underground”, and discusses how close he lived to the mine his father worked in and where he later worked. Cars were a thing by the early 1910s, but both they and their fuel were expensive enough that most of the workforce didn’t generally commute well into the ’40s. The numbered houses in the JPEG map were probably those owned by the company, but you can see that there were houses only several hundred feet away from the headframes owned by individuals. Working on a farm was similar. Logging camps were a bit different, but it was typical for loggers to work for a month and then come into town.

      So what does this have to do with rural towns in general? Because everybody needed to live near the industry, there would be people there to enable the formation of schools, churches, and provide customers for stores, bars, entertainment, etc. Because there were guaranteed to be people there, you could move your family there and be confident that there would be going concerns, which meant that people would invest in those community things, and it feeds on itself. Maybe not to the point of growing, but at least remaining stable. Now, towns based on a single extractive industry (like Republic) were very sensitive to the fortunes of that extractive industry–I don’t want to lose sight of that. But they were also creatures of the fact that people needed to live (permanently) near particular physical locations.

      When I was growing up, the school did a pretty good job of educating me. There were stores, bars, other kids to do things and explore with. We had community organizations like churches, the Lions Club, a VFW post, and a sense of community. That started to break down after the closure of the mine, but was still more or less true when I graduated high school later.

      Now, however, most of that has disappeared. I couldn’t in good conscience live in the house where I grew up (and my parents still own, owing no money on) and send my kids to the school where I graduated from. The school is struggling, and will probably close entirely within the next few years. The community organizations are small and declining further. There isn’t a convenience store, and even the nice Memorial Day parade we used to hold is a sad shell of itself.

      The decline is due to the mine closing. But here’s the point that I’m making: if CCI were to reopen that mine tomorrow, it wouldn’t bring the town back. Because the workers at that mine probably wouldn’t move to Republic. They’d be more likely to live in Ishpeming, Negaunee, and maybe Marquette, with commutes in the half-hour to 45 minute range. There, people moving in would have guaranteed access to communities and schools that are currently healthy(ish). Moving to Republic itself would be taking a chance on other people moving there so functional institutions could be recreated. Which you know others won’t, so you won’t either. It’s a self-fulfilling spiral.

      As a matter of fact, there’s not even a guarantee that the workers for this new notional mine would live even in the bigger towns. They might very well live elsewhere, and commute in by plane for a weeks or months long shift, then go back home, leaving their families elsewhere. @Erusian notes that there’s been no urban formation due to the Bakken oil. This is why: because current easy transport means that people can be permanently domiciled tens, hundreds, or thousands of miles away, earn a bunch of money in an extractive industry, then go back there.

      I don’t have a solution to this. This is why I’m so frustrated by people that think that Trump will fix what ails their communities, because while offshoring jobs is a part of the problem, there are even deeper structural issues that halting the offshoring of manufacturing and extractive industries won’t touch (environmental regulations and increased worker pay are others, but even those pale next to this). I just want to point out that if you want to live in a stable rural society, our current economy’s answer is “go fuck yourself in the ear with a drill.” Current easy transport very heavily favors suburban and urban living, and squeezes other modes out.

      • brad says:

        I don’t want to make light of the pain of seeing the place where you grew up disappear, but I’m not sure we should have a solution to this. And it isn’t only rural areas either.

        The complaints about gentrification come from the exact same emotional place. And here’s another, personal, example that has little to do with class.

        The suburb of New York City I grew up in is still there–the houses, most of the stores, the schools, the libraries, and so on. The area is richer now, more upper than upper middle, not a whole lot of teacher/pharmacist couples like one of my friend’s parents were. But by and large the same type of successful 30 something professionals that want backyards and good schools for their kids. The problem, inasmuch as it is a problem, is that the families are Indian, South Korean, and Chinese rather than Jewish and Catholic. When I was growing up the town was about two thirds Jewish, another 20-25% Italian, and almost all the rest Irish.

        That kind of Jewish community doesn’t exist in my hometown anymore or anywhere else in the world. Almost certainly it will never exist anywhere ever again. We were all indisputably Jewish–Ashkenazi ancestors on both sides as far back as anyone could trace, bar/bat mitzvahs, grandmothers that made matzo ball soup, names like Goldberg, Cohen, and Rosenstein–but no one kept kosher or wore a yarmulke. Our parents might have been able to have a halting conversation with their parents or grandparents in Yiddish, but English was their first language, they had gone to college (and often grad school)–they were Americans as well as Jews.

        Those people I grew up with? They are married to Chinese, Irish, WASPs, Latinos–any kind of background you could meet at a high end American university or the workplaces people go to from there. A few became fundamentalist and joined entirely different kinds of Jewish communities. A small handful ended up with other Jews, mostly by luck it seems. I’ve attended very few brises or bas mitzvahs–though I did go to one very awkward joint christening/baby naming.

        There are still Jewish by default communities in Israel, but for various reasons not worth getting into here, they are not at all the same thing.

        It’s painful to me to recognize that the culture that played a large role in making me who I am is dead and buried. But would I fix this “problem”? Would I tell my brother he shouldn’t have married my sister-in-law? Would I say that Indians should not have been allowed to move into my hometown? No and no.

        No man ever steps in the same river twice, that’s our human existence for better or for worse.

      • S_J says:

        This is both interesting and troubling.

        I saw the after-effects of a similar end of the mining business in one area, somewhat near that town.

        The instance I’m thinking of is the pair of towns known as Houghton and Hancock, and the University there, which started its life as a College of Mines.

        That University was the reason that I spent a few years of my life in that part of Michigan. And it is most of the reason that either Houghton or Hancock are still towns of any size. (Though there is a good amount of seasonal tourism.)

        Many small towns in that area are faded relics of formerly-bustling mining towns. There are a few that are ghost towns.

        Towns like Republic also remind me of the small town in Ohio that my Dad grew up in. That town had a limestone mine, and lots of surrounding farmland. The mine closed sometime after my dad left high school (and went to Engineering school at U-Toledo). It’s less busy than it used to be, but also has opportunity for people to live there and commute to jobs elsewhere.

        Notably, I could spend a day traveling through cemeteries in that part of Ohio and see graves of many of my ancestors. (The oldest of those graves are from the early/mid-1800s; the generation that moved from the East Coast shortly after Ohio Territory opened up to settlers.) Almost all of those ancestors lived in rural areas. Many were farmers, a few were shopowners in rural towns.

        The life that they lived was available to my Dad, but he didn’t take it. It would not have been available to me, even if Dad had stayed in that part of Ohio. Many of the distant family from that part of Ohio live in larger cities now. Those who don’t live in larger cities still drive a good distance to their jobs.

        I saw most of this at second hand: I used to visit that area of Ohio every year for holiday celebrations and family reunions. It’s a little sad, but I recognize that I could not have had the small-town life that my Dad had, even if I grew up in the town that he grew up in.

      • HeelBearCub says:

        Whether it’s living immediately adjacent to a factory or mine, I don’t believe this is generally considered to be the quintessence of rural living, and to the extent that it is, it’s not generally considered a positive example of rural living.

        To say nothing of the fact that extractive mining is inherently locally unstable, especially as it becomes more efficient.

        One of the more recorded bluegrass songs is John Prine’s “Paradise”.

        When I was a child my family would travel
        Down to Western Kentucky where my parents were born
        And there’s a backwards old town that’s often remembered
        So many times that my memories are worn.

        And daddy won’t you take me back to Muhlenberg County
        Down by the Green River where Paradise lay
        Well I’m sorry my son but you’re too late in asking
        Mister Peabody’s coal train has hauled it away

        That was written in 1971.

        Now, your main thesis is that easy transportation eliminates the forced closeness that enables small-sized communities. But I would actually point to efficiency as a problem that has to be considered. One farmer can plant, tend, and harvest more acres than in the past. Same thing with mining or ranching. When you needed hundreds of miners for generations you could form a local community. Not so much when you need tens of them and the extraction will take less than a generation.

        • brad says:

          Whether it’s living immediately adjacent to a factory or mine, I don’t believe this is generally considered to be the quintessence of rural living, and to the extent that it is, it’s not generally considered a positive example of rural living.

          Maybe not in romantic imagination but I think it probably is in a statistical sense. And come to think of it, I’m not sure you’re right about the generally considered part either.

          It’s true that people have realized the vulnerability of relying on a single major employer forever, but I’d argue that small town living has been more glorified in Americana than truly isolated living. Farming, at least as to cash crops, hasn’t been labor intensive enough to support small towns for almost a hundred years.

          Finally, farming in many places is an extractive industry too. It’s often the extraction and export of fossil water.

          • HeelBearCub says:

            I’d argue that small town living has been more glorified in Americana

            Sure. But the small town living glorified has almost uniformly included vehicles. I mean, I suppose we can go to stuff like Our Town or Oklahoma, Disney’s Main St, USA, etc. I think it’s interesting that those were mostly written at the peak of what we are now looking back on as “a better time”.

            Now, factory towns have also been similarly glorified, but not as small rural towns. Those communities tended to be larger. American Graffiti, Allentown, things like that.

      • Dack says:

        I can think of some rural mining towns that have death spiraled/ghost-towned while the mine is still open. The employees just don’t live there anymore.

      • AG says:

        This isn’t just the situation in “area in decline” places, either.

        Huntsville, Alabama, and its surrounding area have seen active growth, with various industries putting their plants there. By all accounts, there are a lot of good white-collar and blue-collar jobs there, so there should be town growth, right?

        Nope. There’s so much “there’s nothing to do here” that people are putting their families in Nashville and eating the 3-hour commute (sometimes by staying in a long-term hotel during the workweek and going back to Nashville on the weekends). Culture and/or community isn’t developing in Huntsville.

        Even China can’t make thriving cities happen by fiat.

        (The shitpost answer to this is that cheap shipping is to blame. Clearly, we need not just US vs. the world isolationism, we need state-to-state isolationism! The less shitpost answer is that we need gleeful trust-busting to an extreme extent.)

        • Matt says:

          Your post doesn’t match my experience as regards Huntsville, Alabama. I’ve lived here for over a decade and moved here for a job. I think none of my work friends live in Nashville and commute to HSV. (Of course none of my acquaintances would be in that situation – all of their non-work activities would be in Nashville) I do know one guy who used to live here and now commutes irregularly and mostly teleworks. I also, of course, know people who have commutes of >1 hour, but that’s so they can live in the country or a smaller town, not a superior city. Tons of people I work with live next door in Madison and commute ‘here’.

          How much larger would you argue HSV’s growth rate would be but for these folks you say are commuting from Nashville?

  6. BBA says:

    Hello from Chicago. I’ve got another deep dive on an obscure topic for you – well, it’s a deep dive some other people did for a law review article, but I don’t entirely agree with their take on it.

    The article is very long, so here’s a summary: The Midwest was laid out in evenly spaced rectangular grids, like any proper modernist region should be. When the commissioners of the Illinois and Michigan Canal subdivided their property in section 15 of township 39 north, range 14 east, third principal meridian, there was only room for two blocks of buildings before Lake Michigan interfered with the evenly spaced grid, so the subdivision map designated a slim parcel of land east of Michigan Avenue as “public ground, to forever remain free of buildings and other obstructions.” Fast forward a few decades, and the city of Chicago has landfilled much of the lake, largely with ashes and debris from the great fire of 1871. The much larger parcel is now called Lake Park (renamed Grant Park during the ensuing proceedings) and is home to railroad tracks, an art museum, a baseball stadium, and other obstructions. And then mail-order magnate Aaron Montgomery Ward, whose headquarters were on Michigan across from the park, started a series of lawsuits to block further construction, tear down existing buildings, and enforce the restrictions on the original map.

    Now in general I think parks are a good thing, and clearing out commercial activity from what was supposed to be open space was certainly a worthwhile endeavor for Ward. But he went further, challenging a plan by the city to build the Field Museum in Grant Park, in opposition to almost everyone else in the political establishment, and he won then too. Today the Field and other museums sit just south of the original “public ground” and the Ward rulings don’t apply there. The Art Institute was for many years the only building in Grant Park, because Ward consented to its construction there.

    The law review paper goes into the legal doctrines involved, and generally approves while noting that their particular application to Grant Park was somewhat incoherent. A dedication of land to public use creates a sort of “negative property right” for owners of neighboring land to veto uses contrary to the original dedication. In Grant Park, Ward’s objections were always upheld, but a landowner who objected to the Art Institute was overruled and given money compensation. But later when the city tried to seize Ward’s right to control Grant Park, the court decided that the dedication was permanent and absolute, and could not be extinguished by eminent domain or overruled by statute. In the 20th century it got to the point where the city would not construct a bandshell – sponsored by the Montgomery Ward Foundation! – for fear of attracting a lawsuit from an aggrieved neighbor, never mind that in the Ward cases the courts had explicitly named bandshells as an appropriate use for a public park. Meanwhile the almost-freeway Lake Shore Drive cuts off pedestrians from the lakefront, but since it’s not a building it was totally fine to put there.

    And now in the 21st century, the city has built Millennium Park on the northwest corner of Grant Park – land that was not part of the original dedication, but had the Ward restrictions extended to it. With consent from its neighbors, Millennium Park is home to a massive bandshell and large public art installations, as well as an underground theater, train station, and parking garage. These are all things that Ward would’ve objected to if he were around today, but Millennium Park has been wildly successful and popular (despite the massive expense of construction worsening the city’s already precarious finances – but that’s another deep dive). The question is, who is the “public” that a public space like Grant Park is supposed to benefit? Just the immediate neighbors, or the city as a whole? And, legally, should a neighbor with deep pockets and a grudge get a veto over everyone else’s preferred uses for public land?

    The paper’s authors come out vaguely in favor of the public dedication doctrine. I’m more skeptical, just because I don’t think a few words written on a map should mean that we are permanently to be ruled by a dead hand. If you look at another subdivision map, the one made for the Manhattan grid in 1811, there’s a large open tract designated as a “parade ground” in Midtown that mostly isn’t there anymore (only Madison Square survives of it). But it would be utterly ridiculous to tear down half of Midtown because somebody stumbled on this old map and realized there was supposed to be open space there. On the other hand, for public parks it is a good thing to have some kind of external check on the whims of an imperious, egotistical mayor, something that New York and Chicago have had no shortage of.

    • Tenacious D says:

      Interesting history, thanks! There’s a lot of path-dependence in how places develop, but Grant Park turned out pretty well, in my opinion.

    • Dack says:

      I agree that we should be careful in making rules/rulings that cannot be reversed.

      On the other hand, Grant Park and Millennium Park are a bright spot in that city, and I feel like that land would have been gobbled up for development long ago without extreme measures taken to protect it.

  7. AG says:

    I started watching the anime “The Reflection” on Crunchyroll. It gives me a lot of Worm vibes.
    (Not in the “superpower tactics” sense, which doesn’t work well with animation, but in the thoughtful character study and deep-dive world-building sense. This is a show that cares about the practical implications on a sociological level.)

    It’s a collaboration between Stan Lee (who is also a character in the show, heh) and Nagahama Hiroshi, who is known for The Flowers of Evil, which is like a cross between Fight Club and Catcher in the Rye.

    • AG says:

      The Reflection bungled the landing pretty badly, and the dub has some really distracting miscasts. If you can get over that, it’s still worth watching, and (other than the ending), probably more interesting to y’all than the MCU.

  8. johan_larson says:

    So, now that Game of Thrones is done, I need a new drug, er, another series to follow. Suggestions, anyone?

    • Hoopyfreud says:

      Old or new?

      For olds: Babylon 5, M*A*S*H, and Twin Peaks are the greatest shows ever to hit American television screens (and why is literally everyone screaming at me?). No idea how many of these you’ve seen.

      New I can’t recommend.

      If you want interesting Chinese cartoons, I’ll plug Planetes, Gankutsuou: The Count of Monte Cristo, Serial Experiments Lain, and Texhnolyze. All avoid anime’s typical failure modes and are interesting.

      I’ll also say that watching through Roger Ebert’s Great Movies list has been extraordinarily rewarding. But I’m not much of a TV guy, so YMMV.

      • johan_larson says:

        I’ll also say that watching through Roger Ebert’s Great Movies list has been extraordinarily rewarding.

        Interesting. There’s a lot of stuff on that list that I haven’t seen, but those that I have seen have been very good. That’s a promising sign. Thanks for the tip.

        • Hoopyfreud says:

          Just to drive this home – Ebert’s list is unequivocally the best “watch this” list I’ve ever come across. IMO, Ebert had the greatest ability of anyone to judge a movie on its affect. It’s a quality I sorely miss, and one I wish critics were judged by more. He certainly had his misses, but he had far, far more hits.

      • Nabil ad Dajjal says:

        Twin Peaks has, to be polite, extremely uneven quality.

        The first season and a half, up to the Big Reveal, is some of the best television I’ve ever seen in my life. David Lynch is a weird dude and the show definitely has an otherworldly, dream-like feel to it, but the characters and the way they interact are very genuine and form the heart of the show. I would recommend people watch it if and only if they promise to stop and never watch anything past season 2, episode 9.

        The rest of the second season and the prequel film Fire Walk With Me aren’t bad so much as just very bizarre. The heart that I mentioned before is gone and without it there’s nothing to ground yourself with amidst the strange and inexplicable things that are happening on-screen.

        The third season is absolutely without a doubt the worst television I’ve ever seen in my life. It would have been better if it had never been filmed and it killed any good will I had towards David Lynch and the returning actors. This is speaking as someone who has seen several of his movies and didn’t hate them: Twin Peaks: the Return was an abomination.

        • Hoopyfreud says:

          I should mention here that I haven’t watched Season 3 and don’t plan to (thanks mostly to things like this comment), but that I didn’t think Fire Walk With Me was the worst thing of all time.

        • Nick says:

          Would you mind expanding on why you hated season three? When it was coming out I heard things that were generally negative, but not nearly as negative as this.

          Cards on the table: have seen the first season but not the second season.

          • Nabil ad Dajjal says:

            Very old spoilers!

            Season 3 picks up 25 years after the end of the second season. At the end of that season Cooper found himself trapped in the “Black Lodge,” the dimension where BOB and the one-armed man MIKE originated, by an evil doppelganger who took his place.

            The first two episodes follow this in a way which is a bit unsatisfying but sort of follows. Cooper is trying to get out of the Black Lodge and back to reality; Cooper’s doppelganger knows that his time is almost up and is desperately trying to prevent that; Cooper’s old friends in Twin Peaks and the FBI are trying to solve the mystery of why he disappeared suddenly and someone who looks just like him is running around committing random crimes. Cooper makes his way back but Cooper’s doppelganger manages to avoid being destroyed in the process and enough breadcrumbs are laid for the others to follow.

            Then in the third episode everything goes off the fucking rails and the entire rest of the season up to the penultimate episode becomes the ‘Dougie Jones Show.’ Cooper has lost his memory and mindlessly repeats bits of whatever people say to him, all of whom manage not to notice. He wanders around like this doing fuck-all for the majority of the season while the plot grinds to a halt.

            All the while, the supernatural elements of the show just keep getting more and more confusing. There are magic hobos called the Woodsmen who go around killing people for seemingly no reason; Laura Palmer is retconned into an angelic being created by the giant while her mom is retconned into a demonic entity named Judy who kills people for seemingly no reason; everyone is a goddamn Tulpa; Cooper travels back in time to save Laura Palmer but ends up stuck in an alternate reality where he has sleazy sex with Diane in a motel room; there’s something about electricity that doesn’t make any sense even by David Lynch’s standards. It’s all unpleasant to watch and adds nothing to the story.

            It’s a story that initially showed signs of potential, enough that you keep hoping that it will eventually pay off at some point, but instead just gives you episode after episode of “Dougie Jones” wandering around interspersed with bits of cryptic nonsense that aren’t even self-consistent. Then the non-ending comes and completely shits the bed.

          • Nick says:

            Thanks. Absolutely baffling that Lynch would go that route with the story. Are you serious about the tulpas thing? And were the Woodsmen just introduced? I don’t remember characters like that in season one.

          • Nabil ad Dajjal says:

            Are you serious about the tulpas thing? And were the Woodsmen just introduced? I don’t remember characters like that in season one.

            There were three or four Tulpas, depending on whether you define a second Tulpa of the same person to be a different Tulpa. So not literally everyone but they play a large role in the story despite having never been established previously.

            The Woodsmen were also never mentioned or even alluded to before this season. It’s also totally unclear to me what their role in the story was.

            ETA: I just checked the Twin Peaks wiki; apparently two characters credited as woodsmen appeared in Fire Walk With Me. However, they were nothing alike in appearance or behavior.

      • Deiseach says:

        I won’t scream at you about Babylon 5; the last season had problems that were not really the showrunner/writer/onlie begetter’s fault, given that he had to work around (a) completely changing his lead character after the first season and (b) okay we’ve got five seasons to do this no we have to wrap it up in four say the Powers That Be okay lemme shoehorn the kitchen sink in to tie up all the plot threads oh hey now we’ve got that fifth season need to fill it up somehow problems. (This is all quite distinct from the fact that I personally will love Sinclair forever and tend to throw things at the telly when Sheridan turns up.)

        The Telepath War episodes are unintentionally hilarious, though. I’m sorry, JMS, I could take the plight of the poor oppressed telepaths more seriously if they didn’t all look like models flicking their luxurious manes around in a shampoo ad (and that’s just the guys) 🙂

      • LesHapablap says:

        If you watch M*A*S*H, make sure to get a version without the laugh tracks.

      • littskad says:

        If you are interested in M*A*S*H, there’s a website with a number of essays which analyze the writing of the series here. Time sink warning!

      • J.R. says:

        My wife and I are looking at 90s Sci Fi. We’ve watched and loved The Expanse and Ron D. Moore’s BSG. Any takes on Deep Space Nine vs. Stargate vs. Babylon 5?

        • Protagoras says:

          Babylon 5 is the best. Deep Space 9 is probably the high point of the Star Trek franchise. Stargate is kind of campy and cheesy, but mostly fun; Stargate Atlantis has less interesting villains but more interesting protagonists than Stargate SG1, and Stargate Universe is uneven, but is the least campy and cheesy and its best episodes are quite good.

          • Nick says:

            Stargate SG1 was my introduction to scifi, before Star Trek and even Star Wars, and I have a special fondness for it. Atlantis is pretty good too. I couldn’t get into Universe.

          • Hoopyfreud says:

            Agree with Protagoras. Would advise searching online to find out which Season 1 episodes can be safely skipped – there are a few that are bad and add nothing. Including the pilot movie.

        • John Schilling says:

          Assuming that what you are after for your GoT replacement is interesting people playing with the fate of kingdoms, nations, planets, species in an interesting secondary world:

          Babylon 5 seasons 1-3 are at the top of this and most any other list concerning televised science fiction. You can probably skip S4, and should skip S5, the pilot movie, and the spinoffs.

          Stargate SG4 seasons 1-4 are in second place, with quality then declining through S7. You should probably quit when General Hammond does, if you’re still watching, and skip the sequel series.

          Deep Space 9 is to some extent redundant with Babylon 5, but it’s better than any subsequent incarnation of Star Trek, and at its best is close to TOS and TNG at their best. Exactly when to start and stop watching is going to depend on exactly what you are looking for here; in particular, the early seasons are the ones most redundant with B5 if you’re watching that.

          As you note, The Expanse is always good, and the Battlestar Galactica reboot had two solid seasons. Going back a bit, I’d also nominate only the first season of Earth: Final Conflict, and the first 1.5 seasons of Andromeda. Both derived from unproduced Gene Roddenberry projects.

          Leaving the realm of outer-space science fiction, I second the nomination of Orphan Black for your near-future Earthbound SF fix, and throw in The Last Kingdom for (non-fantasy) medieval power politics.

          • Doctor Mist says:

            Stargate SG4

            or rather, Stargate SG-1. And don’t write it off after season 4 completely; there are some fine moments. The two-part Moebius episode is just lovely. But it’s serial enough that you kind of need to watch everything to really appreciate the gems, and there’s a lot of woo-woo stuff that stops being SF in my opinion.

            Also, if you start at the beginning, don’t give up until you’ve seen a half dozen. There are a few really cringeworthy moments in the first handful; my wife and I figured we had the measure of the show: we visit a new planet where people are not woke; we wake them (though we didn’t use the word “woke” in those days). We gave up, only to start over when Ben Browder and Claudia Black joined the series in season 9. (Alas, though we love Browder and Black, the show really should have ended after season 8.)

            We met Browder and Black in Farscape, which we liked a lot, but it’s been so long I can’t remember why.

          • Edward Scizorhands says:

            First season of SG1 was trying to do Real Drama. There are some nice character pieces in there you will not see in any of the other seasons (until SGU comes along) but also some really atrocious episodes.

    • Aapje says:

      Chernobyl is very good.

      • Vermillion says:

        I second this recommendation, first 3 episodes have been up there with some of the best horror movies I’ve seen in years. They’re also releasing a podcast with every episode where they talk about what they adapted, what they left out and all the parts that are completely, shockingly, real.

        • The Nybbler says:

          I second this recommendation, first 3 episodes [of Chernobyl] have been up there with some of the best horror movies I’ve seen in years.

          Doesn’t hold a candle to the 80’s original, though. Pacing is way too fast, for one thing, and the viewer gets too much information right away.

          (And I saw the American version; the European version apparently had far more emotional impact)

      • dndnrsn says:

        Worth watching. I like that they kept people’s British accents instead of having everyone do fake and corny Rrrroooooshian accents or whatever.

    • Thomas Jorgensen says:

      Person of Interest is very, very good. And it stuck the landing, as regards the ending, which I suspect you might find important at this precise moment in time.

      Warrior is about tong intrigue and violence in Chinatown just before the exclusion laws, based on a concept Bruce Lee failed to get made, and it is hitting it out of the park so far. GoT levels of nudity and gore, better politics, and very, very good fight scenes.

      Agents of S.H.I.E.L.D., starting with the episode after The Winter Soldier premiered. (The show completely sat there and spun its wheels until they could pull the trigger on the Hydra reveal, but everything after that has been good.)

      Halt and Catch Fire. Well, I really liked the first season and a half so far, anyway.

      Good shows on HBO (since you just finished Game of Thrones, I presume you are subscribed):

      Westworld is quite good, and probably stronger as a binge than it was in bits.

      Gentleman Jack is fantastic: a deeply religious (Anglican) lesbian agricultural landlord and coal-mining entrepreneur from the Victorian age, and it is based on the diaries of an actual historical figure. Very strong script, very strong performances, and it is going to run until they run out of historical material.

      • AG says:

        Be advised that Person of Interest is a slow burn, and also obligated to continue with case-of-the-week crime procedural things throughout its run. And while it sticks the landing as far as the final episode goes, the final season, and the half season before that, are fairly weak. (But I prefer procedurals to prestige TV, so I really like PoI, even if it’s, like, not even 101 levels of AI Risk.)

    • johan_larson says:

      Anyone have a take on Orphan Black?

      • Thomas Jorgensen says:

        Tatiana Maslany deserves all the praise she got for it, it has a lot of twists and turns, none of which made the fandom cry out “Oh come on”, and like Person of Interest, the ending was good.

        If you do binge it, do report back, because that seems like it might be a bit brain melting – see earlier comment about twists and turns.

      • The Nybbler says:

        Starts strong, then falters somewhere in Season 2.

        • Nick says:

          Yeah, I really enjoyed season one, but everything I was hearing was that later seasons were not nearly as good.

        • Tenacious D says:

          It picks back up in my opinion, with Season 3 having some of the most impactful moments in the whole show. After that, however, Season 4 introduced some new characters that I didn’t find as compelling and I didn’t follow Season 5 that closely.

      • AG says:

        OB falls prey to the now-common case of shows having a near-perfect debut season but the showrunners not having much of a long-term vision, so its plotting after that quickly goes whack, hurting the characterizations along with it. If you treat it as pulp, and don’t care about continuity over the moment-to-moment aesthetic execution, it’s enjoyable.

    • Yair says:

      People already mentioned Person of Interest, but I’m going to add my vote to it.

      The Americans, if you like spies, the 80s, the Cold War etc.

      One has 5 seasons, one has 6, both have very good endings.

      If you want something that is still going, season 2 of Killing Eve is worth watching.

      Last but not least, two Israeli series on Netflix that are worth watching: When Heroes Fly and Fauda (make sure you use subtitles; the dubbing is atrocious).

      • LesHapablap says:

        The Americans is one of the best shows ever.

        I guess I’ll have to watch Person of Interest. I watched the first few episodes when it came out and found it quite preachy.

    • Nick P. says:

      If you’re willing to pivot from fantasy to Sci-Fi I’ve been enjoying The Expanse so far.

      Fun bonus: They’ve made three seasons so far, which roughly cover the first three or four books, and the book series has already released eight out of a planned nine, so there’s little chance of the show writers running out of source material and winging it. Also, Jeff Bezos personally stepped in and bought the rights to the show when it was going to get cancelled, so one can say with some confidence that it’s going to last all the way to the end.

      • LesHapablap says:

        The Expanse is fantastic but I’d warn that the first season does drag for at least a few episodes. On rewatching the pacing is fine, but without context it seems slow.

    • sty_silver says:

      Black Mirror, if you’re not watching that already. Also check out the first season of Westworld, but stop there.

    • Radu Floricica says:

      Will be using this for reference, good idea 😀

      Just got up to date with the Overlord novels, and there’s an anime series as well. It’s pretty close to Rational Writing, in that the characters never hold the idiot ball (unless they’re idiots) and they actually react naturally to what happens to them.

      The plot is the classic “trapped in a VR game”: a veteran gamer and real-life salaryman gets teleported, with all his guild’s possessions, into what appears to be an upgraded version of the game with apparently sentient NPCs, and more or less by accident ends up on a path towards world domination. Some of it is played for laughs (more in the anime), but there are quite a few very subtle gems – like how he casually worries if he still has a brain. I particularly like the description of an emerging competent organisation – since we have his PoV most of the time, we see all his fears and internal bumbling, but objectively he’s actually pretty good.

      Apparently there’s also a subplot on how he got there, and it makes for a rather scary view of the future.

    • AG says:

      If you like the tactical back-and-forth of people changing factions quickly to serve their own ends, I recommend the 2010 Nikita. (Which is still on Netflix, I believe) 4 seasons.

      I’ve been enjoying Continuum on Netflix, which is 4 seasons of a time travel story following an antihero who, interestingly, prioritizes preserving the Capitalism Bad End future she comes from because she values the existence of her family more than making things better.

      Halt and Catch Fire is one of the best series I’ve ever watched: 4 seasons following an ensemble that stays just behind the curve of the next big technology thing from the 80s through the 90s, from the development of portable computing through the Usenet days to the introduction of the internet.

      The Magicians is still going (4 seasons aired so far), and is probably the best modern fantasy show on TV. The characters keep fucking up, but in ways logical to their own characters, and never in ways that insult their or our intelligence.

      Killjoys is like if Firefly had a more industrial instead of western aesthetic. 4 seasons.

      Series I have not watched, but have been acclaimed: Breaking Bad, 12 Monkeys, Barry

      • Unsaintly says:

        I’m going to have to anti-recommend The Magicians. Everything about its setting and magic system feels made up on the spot, and it’s impossible to enjoy actual dramatic stakes when previously-unhinted magic can solve or cause any problems.

        I only watched the early show before giving up in disgust, so it possibly could have gotten better.

        • J Mann says:

          I like the first few seasons a lot, but now that The Magicians has outrun its book material, it has a much different, more standard genre flavor. Of course, that might feel comfortable since the OP is looking for something to replace Game of Thrones.

          Syfy’s gonzo adrenaline rush Happy! is probably worth checking out, although I’d be amazed if they greenlight a third season.

    • dndnrsn says:

      Fargo is extremely good.

  9. Le Maistre Chat says:

    Even More Evidence That We’re Eating All Wrong

    I’d love to eat in an evidence-based, life-extending way, but it’s sooo hard. Will saturated fat cause cardiovascular events, or is it carbs? Is there no maximum amount of fruits and vegetables to eat, or are fruits the evil sugar (the carbiest carb)? Am I doing enough by being skinny, or can skinny people drop dead from cardiovascular events by eating processed foods while not all obese people are at risk? Augh!

    • sfoil says:

      Ultra-caloric, low-satiety “processed” foods are definitely bad. This is probably the lion’s share of the problem, I think. Probably related: I’m pretty convinced that seed oils are unhealthy and I’ve cut them out of my diet. I have always eaten a lot of (mostly white) rice, and I later noticed that people and cultures who do the same don’t get fat off of it. Dietary fat is definitely not innately bad; I’m mostly convinced of this because if I eat two eggs and two slices of bacon for breakfast I can easily skip lunch without getting too hungry, but if I guzzle cereal I will be craving food by 11am no matter what volume I ate.

      The quasi-vegetarian diet recommended by the “mainstream” won’t kill you even if it’s less optimal than a higher-fat diet. The most important thing is to avoid the engineered hyperstimulating junk.

      nb this is all just based on non-rigorous self-experimentation and I’m not anywhere near old enough to seriously worry about heart problems. And no diet will make you immortal.

    • HarmlessFrog says:

      For general nutrition/longevity advice, I’ll recommend Eat Rich, Live Long. Citations to back up its claims are included and usually googlable. Dumping Iron is also pretty good.

      I’d love to eat in an evidence-based, life-extending way, but it’s sooo hard.

      It’s not hard, per se; it’s just that the official recommendations are about 25% backed up by the literature and 75% complete baloney backed by ideological bias and bribes. You might as well read the literature yourself.

      Will saturated fat cause cardiovascular events, or is it carbs?

      Mu.

      While cardiovascular diseases are a function of civilized, agricultural eating (we have evidence for heart disease in ancient Egypt), they were rare before the food-pyramid nutrition era. The current epidemic is caused by a number of factors:
      – A diet rich in both fat and carbohydrates, which we don’t have good tolerance for. Either carbs and next to zero fat, or fat and next to zero carbs. The human metabolism handles both at the same time poorly.
      – The use of industrial lubricants (seed oils) as food. The petrochemicals used to extract them from the seeds can’t help, either. And they’re rich in bio-available pro-inflammatory polyunsaturated fat, while being poor in bio-available anti-inflammatory polyunsaturated fat. And they are prone to form toxic transfats when thermally processed.
      – Malnutrition, especially of the fat soluble vitamins, and doubly-especially vitamin K.
      – Toxic food additives, such as the iron fortification mandated by law.
      – Too frequent eating, to the point where many people eat the equivalent of one, 15-hour long meal every day.

      Is there no maximum amount of fruits and vegetables to eat, or are fruits the evil sugar (the carbiest carb)?

      If you are on an ultra-low-fat diet, you can eat all the fruit you want. But combining significant fat with significant carbohydrates does not end well.

      Am I doing enough by being skinny, or can skinny people drop dead from cardiovascular events by eating processed foods while not all obese people are at risk? Augh!

      You can check if you are a TOFI (thin outside, fat inside), there are CT scans for that; even just getting liver function tests should give you a general overview of your internal health. Also, if you are 40 or older, get a Calcium Score CT scan. That’ll tell you how far your heart disease has advanced, and that’s the actual progression, not some wishy-washy risk factor.

  10. ManyCookies says:

    Theresa May to resign in two weeks, replacement Conservative PM race begins.

    Well then, how does Brexit proceed from here? I hear a hard Brexiteer (Boris Johnson) is the favorite replacement for the moment, but I don’t know how much power prime ministers actually have. Do they have any unilateral action May wasn’t using (force through no-deal by deadlocking discussion, call for a re-referendum) or are we stuck in the same parliamentary deadlock from April?

    • greenwoodjw says:

      Following the debate from outside the UK, it seemed like the primary reason May failed was that she wanted a Not-Brexit the voters would buy as Brexit, so I expect her leaving will have a significant impact, even if it just forces British MPs out in the open.

      • ana53294 says:

        British MPs are already out in the open.

        The Brexit that commanded the closest majority in Parliament was the Turkey option (customs union).

        Although it does seem like Parliament has no way to stop a no-deal Brexit if the government chooses to do no-deal.

        • spkaca says:

          “Although it does seem like Parliament has no way to stop a no-deal Brexit if the government chooses to do no-deal.”
          Parliament’s options in the face of a no-deal PM (presumably Mr. Johnson, though it can’t be taken for granted) are, to a first approximation:
          1. A repeal of the Act providing for EU exit. This is probably a bridge too far for them. A no-deal PM would presumably make it a confidence issue. It would get both horrible and horribly complicated, but the short version is that there would probably be a General Election in which Conservative MPs who voted for repeal would lose their seats.
          2. A Vote of No Confidence (VONC). This brings down the Government immediately. The Leader of the Opposition (Mr. Corbyn) would then be asked if he can form a government; he might be able to do so with the support of the Scottish Nationalists (SNP), but they might ask more than he wishes to give, meaning, again, a General Election. For Conservative MPs, this is about as attractive as option (1), though they might threaten it, and if Conservative Remainers are willing to sacrifice their political careers for the sake of the EU, they might do it.
          3. An ‘indicative vote’ requesting the PM to ask for another extension or even revoke the Article 50 notification. The PM would be within his rights to ignore this, though again that would be risky. This might come as the first stage of an intrigue in which a VONC would be the threatened follow-up.
          4. Perhaps the most likely option is continued flailing and buck-passing. MPs in general know, I think, that EU Exit is risky; not exiting is risky in different & probably more serious ways; and their main desire is to avoid any responsibility for whatever disaster does happen. They want plausible deniability. So expect lots of posturing, votes on non-binding resolutions, last-minute cunning plans, high-profile parliamentary deals that get talked about endlessly for about a week and then disappear without trace.
          I’ve mostly talked about Conservative Remainers as their actions are key, but Labour is also split. Most of their MPs and voters are Remainers, but far from all, and Mr. Corbyn himself is I think playing a double game. He supported Remain in 2016 (in the least convincing way possible) and has to appease his own party. But I think he secretly wants a WTO Brexit, thinking it will cause chaos & therefore improve his own chances of becoming PM, while also maximising his freedom of action if he does become PM.
          In short, Parliament has mixed motives and incentives, and while they could stop a WTO Brexit, they might not. Expect lots of bluff and bluster from all quarters, and don’t take too much of it literally or seriously.

          • The Nybbler says:

            Since the Fixed-term Parliaments Act, I do not believe a vote of no confidence brings down the government immediately.

            I’m also not sure if repealing the Withdrawal Act could do it, now that Article 50 has been triggered. Would get very messy.

            Continued flailing seems most likely, but that results in hard Brexit if the PM doesn’t blink at the last minute.

        • greenwoodjw says:

          I meant they would actually have to vote on the issue of Brexit. When May was presenting the Not-Brexit deal, everyone could pretend they had specific objections to the deal. But if Johnson or whoever just throws down something simple or stumbles his way into crashing out, MPs would no longer be able to hide behind “Well everyone agrees the deal is terrible”

          Not just “the vote is public”. There’s a lot of show-voting in the US, and I expect some in the UK, especially on issues like Brexit.

      • Thomas Jorgensen says:

        Ehrr. No. May just straight up went to Brussels with an ask the EU could not agree to under any circumstance. Her red lines were logically and legally incoherent. If your opening bid in a negotiation is that you would like the other party to write you into their will and then commit suicide, and you make budging from that stance incompatible with your domestic political survival, you are not going to get anywhere.

    • John Schilling says:

      I hear a hard Brexiteer (Boris Johnson) is the favorite replacement for the moment, but I don’t know how much power prime ministers actually have.

      I believe that if the PM does Literally Nothing, then Hard Brexit occurs automatically on October 31st. The only reason Hard Brexit didn’t occur on April 12th was that Theresa May asked the EU to extend that deadline and the EU agreed; it is not clear that anyone but the PM has the legal authority to make that request. Pretty sure the EU can’t do it unilaterally from their end, and there is the potential for extreme ugliness if they tried.

      Almost certainly Boris Johnson is going to learn, if he doesn’t already understand, that Hard Brexit is going to do serious harm to the UK, and he will try to use the threat of Hard Brexit (which also hurts the EU) to negotiate a “Hard Brexit but with lube” deal with the EU. That’s where it will start to matter what power he really has beyond simple obstructionism.

      • Deiseach says:

        This would be Boris’ second bite at the cherry, given that he got the Ides of March treatment so impressively from Gove in the immediate aftermath of Cameron’s resignation and the leadership struggle. If he does indeed become PM, it’s going to be very interesting, given that Gove was first exiled in disgrace, then brought back into the fold, and is currently Secretary of State for Environment, Food and Rural Affairs. Will they kiss and make up, or will it all be a cover over seething resentment and mistrust?

        The trouble is that the British government and parliament have run out of options (given May’s increasingly desperate attempts to cobble together some kind of deal that could pass, and yet she failed) and this whole affair has dragged on too long. All the extensions have done is kick the problem further down the road. The British want to leave the EU but keep all the nice parts, and you can’t do that – you go or you stay, you can’t have “I’ll go but I still get to keep this, this and this and do that, that and that”. I think right now the EU has run out of patience and is willing to take the hit that a hard Brexit will land on them, even though it will have an effect on other EU members (including ourselves, poised to be worst affected), just to get an end to the whole affair.

        And that’s not including the fact that the Irish Border question is not settled, and we’ve already seen rumbles of resumed trouble from idiot factions.

        • HeelBearCub says:

          Of course a hard Brexit really puts the NI border on the boil. So it’s Troubles all around.

    • The Nybbler says:

      I believe that the PM could effectively force a hard Brexit on October 31. As far as I can tell the only way to stop this would be if the PM’s party (not Parliament) passed a no-confidence motion, or if Parliament passed one and managed to have general elections in time.

      IMO, the new PM should announce that a hard Brexit on October 31 is likely to happen and that everyone should prepare for it. This is regardless of whether he wants that or not.

    • brad says:

      From an outsider’s perspective, there are a few fundamental problems with the British political system that are causing this to be such a nightmare:

      1) The internal party governance system that gives a small group of activists the power to select the prime minister.

      2) The breakdown of the two party system and the corresponding rise of the minority government without the emergence of any norms to allow for government by ad hoc compromise.

    • broblawsky says:

      I think (or hope) Johnson understands that a hard Brexit would be disastrous for the UK. I don’t think there is any legislative solution other than a second referendum or a hard Brexit, though; of all the Brexit options offered, a hard Brexit came closest to a parliamentary majority in the last round of voting. A second referendum will almost certainly come back Remain, based on current polling, but if Johnson supports one, he’ll be publicly humiliated, even as it saves his premiership. There are no good options for anyone but Corbyn, who is in something of a win-win situation: if he ends up as PM, he can negotiate a soft Brexit that makes him look like a genius; if a hard Brexit happens outside of his watch, he can take over as PM in the next election and rebuild the UK economy as he sees fit; and if a referendum occurs, he wins no matter what happens.

      • Doctor Mist says:

        I think (or hope) Johnson understands that a hard Brexit would be disastrous for the UK.

        What do you think would be the nature of the disaster? I’m not trying to bait you; I understand that there are likely negatives as well as the positives. But I have a hard time seeing the negatives I can think of as adding up to “disastrous”, so I’d be interested in a different perspective.

  11. Matt says:

    I guess I disagree with everything you predict. MCU/Avengers is not over, Star Wars is not over, and Game of Thrones seems likely to get spinoffs.

  12. Deiseach says:

    Is this Culture War? Well, if it is, then this comment will get deleted. Anyway, it’s to do with this story I read recently.

    Who controls language, controls thought. That’s why it’s important to be the person or entity deciding and declaring which definitions are official and to be used, and which are not and should not/cannot be used. This is the point I want to make, not to fight over or rehash morality or legality or “when does personhood begin, and what is personhood anyhow?” or the rest of it.

    Which brings me to NPR’s supervising senior editor for Standards & Practices. Let me quote you something before we start off:

    As the NPR Ethics Handbook states, the Standards & Practices editor is “charged with cultivating an ethical culture throughout our news operation.” This means he or she coordinates discussion on how we apply our principles and monitors our decision-making practices to ensure we’re living up to our standards.

    And why am I drawing this gentleman to your attention? Because he’s just issued a memo as a style guide to NPR journalists entitled “Guidance Reminder: On Abortion Procedures, Terminology & Rights”.

    This means that whenever they are doing stories on abortion (say, a story about the law in Alabama), these and no others are the officially endorsed terms to be used. These are the terms those hearing and reading the stories will be exposed to, and which will become part of their mental furniture and shape their attitudes to the sides, persons and other things involved. Who controls language, controls thought, and shapes opinions when people then go on to have discussions and debates and arguments with others in person and online, and moulds the way in which people think of which is the good or better or just or fair principle and which is not.

    Mainly, this memo reminds the journalists that they are to use strict medical terminology for everything, not whatever the opponents of such terminology and procedure may use. As a side note, it’s interesting in the context of “is the media biased towards the liberal side?” since all the talk of “opponents” is – to my eyes, at least – contrasted with “we”, the NPR journalists who will have the normal correct opinions on this topic not like those zealots and nutjobs.

    Now, what I find interesting here is that there’s no difficulty calling one side “opponents” and putting those “opponents” very much in the “opposed to medical practices” camp, but there’s not so much about the – well, what is the opposite to an opponent? a proponent? – no, if I go by this memo, there’s two sides: opponents, and normal people who use Medicine. Though there is a reference to “advocates” that does slip in, because he couldn’t well use anything else.

    Also, babies are not babies until they’re born, they’re only fetuses. Never mind that “fetus” is simply medical Latin for “the young within the womb, the not yet born” and “baby” is a word meaning “the young outside the womb, the born”. Those horrid anti-abortion types are trying to set the terms of the debate by tricking you into thinking of babies!

    Here’s some additional guidance from Joe Neel, regarding the Unborn Victims of Violence Act:

    The term “unborn” implies that there is a baby inside a pregnant woman, not a fetus. Babies are not babies until they are born. They’re fetuses. Incorrectly calling a fetus a “baby” or “the unborn” is part of the strategy used by antiabortion groups to shift language/legality/public opinion. Use “unborn” only when referring to the title of the bill (and after President Bush signs it, the Unborn Victims of Violence Law). Or qualify the use of “unborn” by saying “what anti-abortion groups call the ‘unborn’ victims of violence.” The most neutral language to refer to the death of a fetus during a crime is “fetal homicide.”

    So are we all straight on how we should use language? Abortion clinics are not abortion clinics, “We say instead, “medical or health clinics that perform abortions.” Late-term abortion is a wrong bad term, because “Nor is it accurate to use the phrase LATE TERM ABORTION. Though we initially believed this term carried less ideological baggage when compared with partial-birth, it still conveys the sense that the fetus is viable when the abortion is performed. It gives the impression that the abortion takes place in the 8th or 9th month. In fact, the procedure called intact dilation and extraction is performed most often in the 5th or 6th month — the second trimester — and the second trimester is not considered “late” pregnancy. Thus “late term” is not appropriate. As an alternative, call it a certain procedure performed after the first trimester of pregnancy and, subsequently, the procedure“.

    So when speaking of an abortion carried out up to the 24th week of pregnancy, remember: “the procedure was carried out on a fetus in a health clinic”, not “an unborn child was aborted in an abortion clinic”, okay?

    [this bit removed for unnecessary snark]

    More soberly, the memo says “the procedure…is performed most often in the 5th or 6th month – the second trimester” and while that may be so, it does appear that such procedures are also carried out well into the third trimester, if I believe this doctor’s range of gestation and that “34 weeks” (third trimester) is not a misprint or typo for “24 weeks” (second trimester):

    The 2001 article was a report of the experience of 1677 patients from 18 through 34 weeks gestation in which fetal death was induced by intrafetal digoxin injection.

    …The 2005 article reported the results of a prospective, nonblinded, randomized controlled cohort comparison of 1040 abortion patients whose pregnancies were terminated at 18 through 38 weeks’ gestation.

    …These clinical experiences with nearly 4000 patients in second- and third-trimester abortion were omitted from Dr Hammond’s review.

    • acymetric says:

      Is this Culture War? Well, if it is, then this comment will get deleted.

      CW is allowed on this thread 😉

    • Faza (TCM) says:

      Somehow I find it hard to get myself worked up over this, much like the Grauniad’s “climate catastrophe” line.

      I thought that it was common knowledge that various media outlets have editorial guidelines for what language should be used. Same goes for anyone who has a political angle they want to work.

      Abortion is a hot-button political topic in the US right now, so everyone’s circling the wagons. Nothing to see here.

      • acymetric says:

        I mean, isn’t this just “messaging”? Wouldn’t it be pretty easy to write a mirror-image snarky take from the opposite perspective about pro-life outlets just by reversing the terminology?

        The only other option would be to somehow get everyone to use “perfectly neutral” language on both sides…but, uh, good luck finding agreement on neutral language, let alone getting everyone to agree on using it.

        • Faza (TCM) says:

          good luck finding agreement on neutral language

          This.

          One of the more fun volumes I was forced to abandon when moving houses over the years was a political economics textbook written in the Soviet Union in the 50s (might have still been Stalin’s time).

          Among the gems therein was a discussion of what was essentially the Kuhnian issue: economic theory is intertwined with the social and political realities of its time and reflects the interest of the dominant class of wherever it was formulated.

          That’s actually an important caveat to keep in mind, I think.

          “Is there an objective examination of economic subjects?” the august tome then asks.

          “Yes!” it goes on to reply. “It is the economics that represents the interests of the progressive class: the proletariat. In other words: Marxist-Leninist economics!”

          The proof is complete.

          • SuiJuris says:

            Yes, neutral language is hard (perhaps impossible) to find. That is the point, I think: the memo talks about finding language that “carried less ideological baggage” and views the language used by medical organisations who perform abortions as carrying less ideological baggage. Yet that is the very point which is disputed.

    • Matt says:

      Are there still people who doubt that the media is controlled by advocates for one side on most issues?

      Or are you arguing that NPR, as a quasi-governmental entity, has a special duty to resist being captured by one side or the other, ideologically?

      • Nick says:

        Are there still people who doubt that the media is controlled by advocates for one side on most issues?

        Well yes, but—

        Or are you arguing that NPR, as a quasi-governmental entity, has a special duty to resist being captured by one side or the other, ideologically?

        —this is more to Deiseach’s point, although still not quite right. It’s less that they have a “special duty” (imposed by whom?) and more that, as long as they are so branded, criticizing them is fair game. And really, I think anything that brands itself as news and not opinion is fair game; even media like the New York Times sometimes admit it’s a problem they are trying to mitigate, so it’s not like this sort of criticism isn’t constructive. I think Get Religion is a good example of what to do here (and sure enough, they have a piece on the NPR thing, though I haven’t read it yet).

      • Deiseach says:

        As I said, I want to stay far away from the arguments about morality and is it murder or just like having your appendix out and the rest of it.

        But right now everywhere (including in my own country, which has had its very first “oops, that didn’t work out like we were told it would work out” moment with legalised abortion that only came in last year) is, as you say, fraught over abortion. Some of the comments I’ve seen about Alabama and its citizens have been what, in other contexts, would be classed as pure racism. Leave all that out.

        The terms of the debate are being set out with a thumb on the scale. As most people in the comments here seem to accept, big surprise. But that is the whole point – most of American media still likes to present itself as “impartial, just the facts, not European-model advocacy journalism”. I agree, that’s probably gone by the wayside with many reporters, columnists, and think-piece influencers admitting that they don’t care about balance, there is very much a Right Side and Wrong Side in culture war debates and they are gonna be on the right side of history.

        But I do think a substantial proportion of the population consuming media and getting their information from it do think that the stories they read are neutral, just-the-facts reporting. We’re not aware of our biases, or of having our convictions stroked the right way so that we nod and go “how sensible” while reading or listening along.

        Not to be picking on the New York Times or NPR, but haven’t we had this very dispute before on here? Fox News is partisan, NYT is ‘just the facts’ reasonable outlet. Conservatives are biased, liberals are just telling it like it is.

        And people have their opinions and feelings moulded by what they take in. So the abortion debate (or what CW topic you will) is settled as far as they are concerned, because on one hand you have the Clearly Wrong and Probably Lying people, the low-class, gauche, overtly bigoted types standing outside with their rosaries and their signs and their yelling and harassing, and on the other side there’s us, the nice reasonable educated compassionate sorts. And we the reader know they’re Clearly Wrong and Probably Lying because listen to what they’re saying with their gruesome photos and scary propaganda. We know it’s not true because when we read our neutral just-the-facts media or listened to our informative educational public service broadcaster, it told us that what goes on is simply “procedures carried out in health clinics”.

        We’ve all had procedures, I’ve had a surgical procedure myself. So who are you going to believe – the zealots and bigots with their scaremongering, or the trusted reasonable voices that have always given people just-the-facts? And so we rest assured that nothing more serious than the equivalent of having your tonsils out is going on, and we turn the page or tune in to another station, and we know we’re the nice educated compassionate sorts that operate on facts and reason not over-emotional poorly educated belief.

        Because we’ve all been conditioned by the language used to present just-the-facts to us and we don’t think about it for one second.

    • dick says:

      I kind of don’t get what I’m supposed to be upset about here. You complained that “babies are not babies until they’re born, they’re only fetuses” and then offered definitions for baby and fetus that seem to exactly match NPR’s terminology. If “late term” usually means 3rd trimester, then not using it to describe something that usually happens in the 2nd trimester seems pretty reasonable. As does referring to clinics that perform abortions as “clinics that perform abortions”.

      ETA: the idea that this is just the left sticking it to the right would also seem to be undercut by the bit about not describing partial-birth abortions as “rare”.

      • Clutzy says:

        I would say, looking at their style guide, that it seems to encourage writers to convey less information using more words. That would be my primary complaint.

        Secondary would be that it appears to be moderately pro-Democrat.

      • Eugene Dawn says:

        Yeah, I’m with Dick: the complaints offered in the initial comment are

        – NPR suggests “babies are not babies until they’re born, they’re only fetuses”, and counters this by offering definitions of the words “fetus” and “baby” that differ only in the distinction between “within the womb, […] not yet born” and “outside the womb, the born”…which seems to me exactly how NPR is using it

        – the use of the term “opponents” for those who don’t like abortion; but a quick Google turns up “opponents” of stop-and-frisk and “opponents” of the Alabama bill so I’m not sure there’s some deeper meaning in the use of this term.
        What’s more, the memo very clearly specifies the usage of “abortion rights supporter(s)/advocate(s)” as the corresponding term for those who favour abortion; it is quite explicit, so I’m not sure how anyone could form the opinion that NPR thinks that the other side of the debate should be characterized as “normal people who use medicine”. It’s clear that the term “advocate” didn’t “slip in”, it’s very obviously the term that reporters are supposed to use.

        – The term “abortion clinic” is deprecated because many such clinics provide other services too. I think this is reasonable, though of course others are free to disagree (it might depend on what the modal abortion-providing clinic actually looks like, which I don’t know). However, in mocking this usage, Deiseach says the NPR usage would be “the procedure was carried out on a fetus in a health clinic”–ignoring the fact that NPR says you have to follow “health clinic” with the phrase “that performs abortions”, which doesn’t sound quite so evasive.

        – Similarly with “late-term abortion”–I can’t find statistics on how often intact dilation and extraction is performed at different stages of the pregnancy, but if NPR is correct that the “majority” occur before a pregnancy reaches late-term then…it seems obvious to me that you shouldn’t use “late-term abortion” to describe the procedure. Again, there could be some sleight-of-hand here: 50.1% are performed before late-term, and 49.9% are performed in the last stages of pregnancy, but unless someone wants to argue that’s what’s happening, this seems unambiguously correct usage.

        • Clutzy says:

          I will illustrate to you why this style guide sucks.

          “The procedure, an intact dilation and extraction, was carried out on a fetus in a health clinic that performs abortion.”

          This is one of the worst sentences ever. It conveys almost no information unless you know the style guide. It also uses the passive voice, which is hard to avoid with the style guide.

          Contra:

          “The abortion clinic performed a partial birth abortion in the 5th month of development.”

          Double the information (to a layman) in fewer, better-ordered words. You could eliminate “in the 5th month of development” and it still has more information, using less than half the letters of the first.

          “Health clinic that performs abortions” is the dumbest part. If it was a hospital, we would call it that. If it was a doctor's office, you call it that. If it's an urgent care, you call it that. Abortion clinics are called that because that is what differentiates them. Just like a farmers market is still a farmers market even if like 50% of the shops are weird candle and wooden toy shops. Hospitals usually can perform abortions, a primary care physician can prescribe the abortion pills, birth control, etc. The differentiating factor of places like PP is the number of abortions performed, and the centrality of that procedure to their mission.

          • Eugene Dawn says:

            Two things: the first sentence is Deiseach’s parody of the style guide, not an example of it in actual use. Deiseach deliberately wrote it to be one of the worst sentences ever, so we shouldn’t be surprised if you find it so bad. A fair comparison would pick an actual sentence from an actual NPR article.

            And even so, I disagree–I think the only improvement in your version is the addition of “in the 5th month of development” which obviously conveys more information. Without it though, your sentence gives no indication of when the abortion actually happened and conveys basically the exact same information.

            The use of the active voice is a little better, but a) I think overuse of the passive voice is a common feature of news stories, and b) I can rewrite the original in the active voice too: “the clinic performed an intact dilation and extraction procedure”.

            Otherwise, we are down to whether
            a) “clinic that performs abortions” conveys more or less information than “abortion clinic”–they sound equally informative to me, though of course it depends on the actual setting where the abortion was performed. You’re right that “abortion clinic” is a bit snappier, but that’s about the only advantage I see.
            and b) partial birth abortion vs. dilation and extraction–your version still has the advantage of brevity, but only by two characters. The trade-off is against using the actual medical name for the procedure which is both more informative and more accurate.

          • Paul Zrimsek says:

            Unless you consider it important to dispose of the possibility that the abortion was performed by the pizza delivery boy or the treasurer of the gardening club, the passive voice is completely unobjectionable here.

          • Clutzy says:

            Eugene:

            #1. It's “intact dilation and extraction”, so, again, a lot more words.

            #2. Most people have no idea that “intact dilation and extraction” is an abortion. So you have to add that information elsewhere. “Intact dilation and extraction” might as well be some sort of boil lancing procedure, or something to do with the retina.

            That is why I said the first sentence was not ONLY longer, it conveyed less information.

          • Eugene Dawn says:

            I presume they describe the procedure as an abortion procedure once at the beginning, then use the word “procedure” from then on. This plus context ought to let readers know it’s abortion that’s being performed, and will let them be more economical elsewhere. I agree the first time they use the phrase it’ll be ungainly, but it has the advantage of being more precise terminology, so I think it’s defensible.

            Anyway, I’ll concede longer, but I think the style guide lets a writer be plenty informative so long as they follow the guide sensibly.

        • The original Mr. X says:

          What’s more, the memo very clearly specifies the usage of “abortion rights supporter(s)/advocate(s)” as the corresponding term for those who favour abortion; it is quite explicit, so I’m not sure how anyone could form the opinion that NPR thinks that the other side of the debate should be characterized as “normal people who use medicine”. It’s clear that the term “advocate” didn’t “slip in”, it’s very obviously the term that reporters are supposed to use.

          “Abortion rights supporters” is putting the thumb on the scales. After all, part of the definition of a right is that it's OK for you to do it, so the term “abortion rights” isn't a neutral one.

          • Eugene Dawn says:

            They use the same terminology when talking about other contested rights, like guns. So you may not think it's neutral, but it's consistent with their usage elsewhere.

      • Deiseach says:

        What’s inside a pregnant woman is not a baby, it’s a fetus, so it’s scaremongering propaganda by the bigots to try and call it “the unborn”.

        So how many people do you know who are “trying for a fetus”, or “my spouse is in the sixth month of carrying the fetus” or “we’re going to have a fetus shower for Susan”?

        By the same token, once the fetus is born, it’s not a baby either, it’s a neonate. Possibly if it survives it will become a pais. Because it’s not called the baby-doctor, it’s a paediatrician, right? Stupid zealots with their fake emotion-rousing terms!

        Look, if you’re going to base your argument on “but the Proper Medical Term is…” then you have to go all the freakin’ way. Not a baby, a fetus. Not a baby, a neonate.

        • dick says:

          You’re making up imaginary shit to get upset about that bears no relation to the document you linked to.

          • Deiseach says:

            Hello, dick. Thanks for telling me what I am thinking.

            I am so relieved that I am only hysterical, and not trying to make a point about language and the subtle ways in which it is controlled of which we are barely aware.

      • Eugene Dawn says:

        So how many people do you know who are “trying for a fetus”, or “my spouse is in the sixth month of carrying the fetus” or “we're going to have a fetus shower for Susan”?

        None, both because for most people the thing they are trying for is the born baby not the internal fetus, and also because in normal speech people use more informal expressions than is appropriate for use in a newspaper article. My friends also swear, make up stuff, and rely on sources that are probably not up to journalistic scratch, but I am not outraged at the thought that journalists hold themselves to a higher standard.

        • dodrian says:

          When NPR reports on miscarriage it uses the term baby plenty, and fetus never.

          That's what I find galling about NPR's guidelines. Not that they're picking language carefully to influence perception of a story – because of course they're going to do that. But that somehow they think it's the pro-life movement using language to influence people, not them.

          In every other instance NPR uses the term baby. Think back to all the articles about the royal baby. Stories about miscarriage. Interviewing expectant parents. They're all babies, not fetuses. Except if the baby isn't wanted, then they issue guidelines to use medical terms like fetus to be correct. But no, it's those pro-abortion (the language NPR chooses to refer to them) people that are twisting language to shape public opinion.

          • Eugene Dawn says:

            Your article is not original to NPR, it seems to be syndicated from Kaiser Health Network; perhaps they have different standards than NPR. I don’t know why NPR wouldn’t ask for it to be changed, but when NPR publishes on yawning in the womb they say “fetus”, and so too when they talk about smoking marijuana in the womb.

            Also, I didn’t respond to Deiseach’s point about using the term “neonate” so I’ll do that here too: NPR seems to prefer the more colloquial “newborn” to distinguish very young babies, which I think is reasonable.

        • Clutzy says:

          I illustrated this above, but the fact is that NPR is not serving the medical community or some sort of formal society. It should adhere to the “more informal” standards, if those are easier to understand.

          • Eugene Dawn says:

            From what I can tell, NPR adheres to the standard of using roughly accurate but understandable medical terminology in all of their health articles: here are some articles that have nothing to do with abortion that maintain the distinction between fetuses inside the womb and babies or infants outside. They do this not because they are “serving the medical community” but, presumably, because they are trying to accurately convey information about health, and so defer to experts from that community on matters of terminology as much as is possible. I think this is a reasonable thing to do, and frankly, I think it only becomes more reasonable when the medical issue being reported on is one that is highly contentious.

          • Clutzy says:

            I don’t think fetus vs. baby is actually all that objectionable. They are well known words. It is many of the other ones that make no sense. “dilation and extraction” means 0 to the layman.

    • Machine Interface says:

      Who controls language, controls thought. That’s why it’s important to be the person or entity deciding and declaring which definitions are official and to be used, and which are not and should not/cannot be used.

      That's mostly false. Nobody controls language, “official” definitions cannot be enforced because what words “really” mean is the constantly changing product of collective negotiation (just compare the “official” definitions of many key concepts of Social Justice vs how most SJ activists actually use them vs how most other people actually use them), and when words are tabooed, old ideas just find new words to slip into — that's the whole point of the euphemism treadmill. If language and thought could be so easily controlled, Chinese censorship wouldn't find itself perpetually playing whack-a-mole with Chinese internet users who constantly come up with new euphemisms to talk about what they want without getting sniped by the censorship algorithms.

      In as much as language shapes thought, it’s not by such simplistic mechanisms as vocabulary choice or “official definitions”.

      • greenwoodjw says:

        As soon as Xi Jinping decides he's had it and begins executing people, you'll see the language change quickly. There are subjects here in the US where, if it becomes publicly known you're a thought criminal, you're unemployable.

        In this case, it’s literally a news organization prohibiting the use of certain language in relation to a topic.

        • Machine Interface says:

          How does that contradict anything I’ve said? My whole point is that you can’t control language because language adapts extremely quickly, and people come up with new euphemisms faster than censors come up with new taboo words. Language changes in response to censorship only as much as is needed to keep the same ideas going. This is the opposite of control by the censors.

          Social consequences are irrelevant to the argument; they enforce the taboo without doing anything to actually prevent the circulation of the underlying ideas. [If you want to argue otherwise, you have to explain why populist parties have never been so popular in Europe in spite of decades of harsh anti-hate-speech laws.]

      • Viliam says:

        Unfortunately, those who control language control the thoughts of those who love to “argue by definition”. And that is a sufficiently large and annoying group on the internet.

      • Deiseach says:

        If I’m writing a journalistic style bible over what is and is not the correct and permitted terms to use when writing a story, I think I am pretty much controlling language.

        And are you really going to tell me it doesn’t make a difference to how people instinctively react if the story is “Representative Smith, the moderate who wants to reform the tax laws” as against “Representative Smith, the extremist on tax”?

        • Machine Interface says:

          “Controlling language” implies actually controlling the semantics – not just what words people use, but how they understand these words. Calling the tabooing of words used by journalists in one specific media outlet “controlling language” is akin to thinking you control the Christmas toy market because you decide which color of wrapping paper can be used at Target.

          People already know what they think before reading that kind of article. What makes people change their mind is if you repeat for months and months on end that abortion is criminal, but what you actually call it is irrelevant – more insulting terms will be neutered and reappropriated, and more euphemistic ones will get the same pejorative semantics reinvested into them.

          • Deiseach says:

            but how they understand these words

            Which is exactly the point I was trying to make. When you hear or read the term “medical procedure” is your first thought “oh they mean an abortion” or “oh like that time I had to have my tonsils out”?

            Constant dripping wears away the stone. Grow up on “trusted source to give you the impartial real facts” using terms like that in such a fashion, of course your mindset is going to be influenced towards “this is no big deal” and then yes, people will “already know what they think”.

    • S_J says:

      I have a proposal for modification of this style guide.

      1. Use the phrases “human embryo”, “human fetus”, and “human newborn” for the relevant stages of pre-birth development and shortly-after-birth.

      2. Append information about stages of development when discussing abortion practice. “Abortions performed at week 8 will result in the ending of a fetal heartbeat…. Abortions after month 3 will typically result in destruction of a fetal skull and the developing arms/legs… A fetus after month 5 has about a 50% chance of surviving a premature birth.”

      If the commentariat thinks this pattern would be slanting opinion against abortion… Then I reply that the current style guide under discussion slants opinion in favor of abortion. In a way that I think is not neutral on the subject.

      • dick says:

        Sure, as long as we can call miscarriages “babies that God murdered.”

        Seriously. Who here expected pro-lifers to be satisfied with NPR’s style guide for abortion? This is like David Friedman letting us know that he has read the DNC’s platform and found it to be insufficiently Libertarian.

        • Nornagest says:

          NPR is nominally nonpartisan, being public radio and all, although I’m not sure how seriously anyone takes that these days.

          • Doctor Mist says:

            Not in the slightest.

          • brad says:

            Nonpartisan is not the same thing as neutral on anything and everything.

          • Nornagest says:

            I don’t expect NPR to be giving equal airtime to the Flat Earth Society anytime soon, but abortion is about as close to a pure values question as you’re likely to find. And pure values questions do seem like the sort of thing that a nonpartisan outfit ought to be neutral on, provided both sides have significant representation in its audience (which they do here).

        • Deiseach says:

          You're consistently misunderstanding my point, which has nothing to do with “pro-lifers not satisfied with NPR style guide”.

          What I’m talking about is (a) the style guide is written from one side of the debate (b) it’s presenting itself as the neutral, value-free, just-the-facts way of reporting on the debate (c) that kind of reporting in its turn helps to set the terms of the debate.

          If NPR had a style guide reading “we will from henceforth refer to abortion advocates as ‘baby-murderers'”, I think you might get my point faster.

          • Eugene Dawn says:

            What term does the style guide use that you think is equivalent to “baby murderer”? Opponent? Fetus?

          • beleester says:

            Is there any set of terminology that would qualify as neutral? If they write “fetus” instead of baby, you go off on them because it’s a dastardly plot to ensure nobody thinks of the unborn baby when they think of abortion. If they write “unborn child” then your left-wing counterpart will be here saying it’s a dastardly plot to make sure nobody can think of abortions without thinking of the potential living child they might become.

          • The original Mr. X says:

            I am informed that the preferred term in bioethics is “the unborn”, as the closest to a neutral term it’s possible to get.

      • Eugene Dawn says:

        I think the style guide does essentially follow your point 1. As to 2, I can imagine cases where some of that is reasonable, but as a general rule it seems strange: should every article insist on the medical consequences of being unable to obtain a timely abortion in gruesome detail?

        • Laukhi says:

          It seems to me that there is bipartisan consensus that media often presents a slanted view of reality by leaving out certain details in favor of others; without lying, they can present a biased perspective of reality. If it is accepted that this is a bad state of affairs, then should news organizations not strive to present a more holistic view? If the line is unclear, it is better to err in favor of what is more clearly good. Time is not unlimited, certainly, but those who have already read it would be able to easily skip over it.

          So, why not?

        • S_J says:

          About my point 1, I disagree.

          I don’t think I’ve seen any news source use the compound phrase “human fetus” when discussing abortion.

          It is my opinion that this absence is part of a tactic of “othering” of an entity that is human, and is deserving of human rights. (Even if a human fetus has less development than a newborn human, or an 18-month old human, or a 12-year-old human.)

          • HeelBearCub says:

            I don’t think I’ve seen the phrase “human child” or “human baby” either. Nor is the phrase “animal fetus” anything like common.

            Fetus is understood to be human unless modified in some other way.

            The real issue you are trying to point to is that people generally understand that a fetus is both qualitatively and quantitatively different from a newborn. You would like for us to eliminate the distinction, but the distinction nonetheless exists. The changes that occur from conception to birth exist on a continuum; thus categorization is always inherently arbitrary, however correct.

        • Eugene Dawn says:

          “Human” should be obvious from context; while it might be your opinion that by virtue of being human, a fetus deserves full human rights, there’s no reason a news organization has to agree, nor why they should add redundant information that is trivially inferred from context to make your views sound more reasonable.

    • sorrento says:

      The big argument between the pro-choice and pro-life crowds is about whether the fetus is truly a baby with the human rights that entails, right? A style guide that mandates not using the word “baby” is clearly pro-choice, as far as I can see. We can argue about whether that is good or bad, but the fact that it's pro-choice terminology seems clear.

      Also, speaking as someone who had a baby recently, doctors and nurses refer to fetuses as babies all the time.

      • Faza (TCM) says:

        Nego consequentiam.

        “Being a baby” does not entail having human rights. “Being human” does.

        The object-level problem with a fetus is that it is indisputably human for some values of “human” (biology), but is rather obviously not human for other values of “human” (if we put object X – which happens to be a human fetus – next to the man on the street, whom we consider indisputably human, and compare the two, we might easily not recognize both as belonging to the same class, unless we knew in advance).

        The fetus/baby distinction is therefore no more central to the issue than the subsequent age categories that we might slot people into (toddler, pre-schooler, adolescent, etc.). In short: “if it's still in the womb, it's a fetus; if it's outside the womb, it's a baby” is – by normal classification standards – a rock-solid distinction.

        The problem is that “baby” carries an emotional appeal that “fetus” doesn’t, so I would expect that it’s rather using the word “baby” that is clearly pro-life.

        Again, you could put an (early-stage) fetus next to a baby and have people not guess that they’re the same thing. Similarly, learning everything you can about (already born) babies won’t tell you a good deal about fetuses. The transition we undergo during birth is probably a more radical change than anything else in our life, barring death.

        If there wasn’t an abortion debate, I expect that pro-life people would have exactly zero problems with usage of the word fetus.

        • Clutzy says:

          The object-level problem with a fetus is that it is indisputably human for some values of “human” (biology), but is rather obviously not human for other values of “human” (if we put object X – which happens to be a human fetus – next to the man on the street, whom we consider indisputably human, and compare the two, we might easily not recognize both as belonging to the same class, unless we knew in advance).

          Yes, but that definition is even more problematic, as there is often just as large a difference between two fully grown adults as there is between a 6-month fetus and a 1-year-old child.

          • Faza (TCM) says:

            Now that’s an assertion you’re gonna want to justify.

            Notice I spoke of the ability to identify two objects as belonging to the same class. At this level of classification, two fully grown adults would be virtually indistinguishable (just on a purely visual level, the amount of deformity required to make us question whether we were looking at a human or not doesn’t bear thinking about).

            It's probably also worth noting that I did not make a claim that any fetus compared to a newborn baby would be unrecognizable as belonging to the same class. Trivially, a fetus just prior to birth is very similar to a newborn baby.

            The claim I did make is that there’s a difference between what a fetus might be at any stage of development and what we think of as a baby – and the further back we go, the bigger the difference is. Therefore, the claim that a fetus is “just like” a baby is objectively disprovable (by listing predicates that are true of the fetus and not true of indisputable babies, and vice versa).

            The dividing line of fetus-in-womb/baby-out-of-womb, on the other hand, suffers from no such issues.

        • The original Mr. X says:

          The object-level problem with a fetus is that it is indisputably human for some values of “human” (biology), but is rather obviously not human for other values of “human” (if we put object X – which happens to be a human fetus – next to the man on the street, whom we consider indisputably human, and compare the two, we might easily not recognize both as belonging to the same class, unless we knew in advance).

          Since you say that a foetus is biologically human, I don't see what the point of your man-on-the-street thought experiment is. So biological humans at one stage of development might be difficult to recognise as such to an uninformed observer — so what? They're still biological humans.

  13. Walter says:

    Random media rec:

    I really like the show ‘Occupied’, on Netflix.

    In a future where the USA withdraws from NATO and the Middle East is engulfed in turmoil, Norway is supplying most of the EU's oil and gas. A new PM is elected in Norway on a green platform, and he cuts oil and gas production off in favor of a new green energy initiative. The EU complains and protests, but the PM is resolute, until Russian forces start showing up in the country, Crimea-style. The EU is tacitly backing the Russians, who are there to restart oil/gas production.

    This is the premise, and our POV flits around the Norwegians, and we follow government officials / collaborators, terrorists/freedom fighters and journalists as the situation develops over the months that follow.

    • 10240 says:

      Why would Russia want to restart oil and gas production in Norway? Russia is an oil and gas exporter, Norway is a competitor.

      • Walter says:

        The story isn’t set in Russia, so we don’t really get their viewpoint. My guess would be ‘the EU will let us put troops in Norway? We’re in!’

        • DeWitt says:

          Does it address Norway not being part of the EU and the EU having no military to speak of?

    • John Schilling says:

      The math doesn’t work out, of course. The EU consumes 15 million barrels of oil per day; Norway produces a little over one-tenth of that and even at its peak barely exceeded 3E6 bpd. Furthermore, Norway’s probable oil reserves (including undiscovered) are estimated at less than 30E9 barrels, which the EU would completely exhaust in five years.
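      (A rough check of those figures: 30E9 barrels divided by 15E6 barrels per day comes to about 2,000 days, call it five and a half years; and even Norway's 3E6 bpd peak would have covered only a fifth of that 15E6 bpd demand.)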

      We can handwave EU drastically reducing its oil consumption by large-scale adoption of electric cars, etc, but that’s not plausibly going to make the numbers work and in any event the farther you go down that path the harder it is for a “Norwegian green platform” to credibly change things. Presumably they’ve already grounded all their opera singers. Also, I believe the Greek tanker fleet can single-handedly supply the EU’s full oil needs from sources anywhere on Earth.

      All that said, a well-told story about a major Western democracy facing an invasion by little green men sounds interesting. I note that Australia has its own entry in this genre, with the “Tomorrow” novels/movies/TV series; have only seen the movie and IIRC it simply skips over the “why is anyone invading Australia” question.

      • Walter says:

        Those are good objections, but I still like the show. Maybe imagine it taking place in a world where Norway has more oil?

        • Eric Rall says:

          Kinda like how House, M.D. and Grey’s Anatomy take place in an alternate universe without HIPAA.

        • brmic says:

          Even then, I found the premise and execution beyond stupid:
          – The EU is silent on/tacitly encourages the Russians of all people to take over Norway. Because Russia turning on them or doing its own thing couldn't possibly ever happen?
          – The EU maintains a united front on this, for the first time on anything ever? (For context, politicians blaming 'the EU' for the things their own party pushes on the European level is the default, not the exception.)
          – The EU on the show does in effect agree that national sovereignty isn't a thing if someone else wants something bad enough. As a precedent, this removes the basis of international relations since the League of Nations and risks another World War.
          – Occupying a country by force/clandestinely couldn't ever go wrong, lead to open war with mass casualties, and, crucially, loss of oil production?
          – The series really suffers from happening post-Iraq War, because it essentially says that a more ruthless US occupying a peaceful Iraq whose leaders have the support of their population would conceivably have gone so well that politicians all over the EU think this is a good idea.

          Further on, the series requires that everyone at the top levels of government just goes along with this. Which is stupid, because if they did, they could just have removed the PM and restarted oil and gas exports on their own.
          This plot hole is expanded when Russian and Norwegian security forces clash, which again apparently nobody involved foresaw, because they’re all idiots.

          FWIW, I stopped watching halfway through season one, because the stupid kept piling up.

  14. Edward Scizorhands says:

    If you do it on a slightly older thread, like for Game Of Thrones discussions, it reduces the chances of accidental spoilers.

    Also, Marvel put some of the really big showcase reveals into their broadcast commercials now. They are trying to get the people who 1) hadn’t seen it, but 2) might if they see those highlights into the theatre.

    • Doctor Mist says:

      If you do it on a slightly older thread

      And if you’re worried about people not seeing it, include a link to it in a current thread.

  15. acymetric says:

    Since it’s been almost a month since the movie’s release, I have to imagine that, if you really care about spoilers, you would have seen it already.

    I don’t think that’s necessarily true for a few different reasons, but I do think we’re approaching and maybe have reached the point where it falls on you to avoid spoilers if you haven’t seen it yet. There is a limit to how long you can expect everyone else to be spoiler free.

  16. johan_larson says:

    iTunes has a weekly feature, the Movie of the Week, which rents for $0.99, whereas most movies rent for $4.99 or $5.99. (Prices in Canadian dollars.)

    Anyone know enough about IP pricing to judge who is taking a big cut from their usual fee for a promotion like this? Is Apple getting pretty much nothing, or is it the studio/distribution company?

    • Steven J says:

      TLDR: Most likely Apple is absorbing most of the loss.

      Epistemic status: In my consulting work, I’ve been involved in a number of projects involving licensing copyrighted materials (books, music, and video) for distribution over the internet, both for internet platforms and for content owners. The client I can publicly disclose is Netflix (they hired us to oppose the Comcast/TWC merger), the others are confidential. I have no specific knowledge of Apple’s movie contracts, so I’m basing this on other companies.

      Details: When it comes to marketing promotions, the platform has stronger incentives to be aggressive than does the content owner. The platform wants to get people to a) try its service, and b) make a regular habit of checking out what’s available. Some portion of the customers the platform gains are people who would have otherwise bought something from the same content owner, but on a different platform. Thus, content owners are more worried about cannibalization than platforms. As a result, contracts typically force platforms to bear the majority of these marketing costs. The details can vary a lot, depending on whether the payments are lump sum or a percentage of revenue, but the general pattern is pretty robust. I’ve seen examples where platforms charge consumers less than the incremental royalty payments to the content owner, selling at a negative gross margin in their promotional deals. That said, Apple can be an aggressive negotiator, so it’s certainly possible that the content owners are putting up some non-trivial marketing dollars.

      • johan_larson says:

        Thanks, Steven. That’s very interesting. It’s good to hear from a real pro.

  17. b_jonas says:

    Game of Thrones sure, but I don't think Star Wars is ending this year in any sense. We don't know anything about the interquel film of 2020 or 2021 yet, but that's par for the course, because we didn't learn anything about the Solo film until a few months before its release either. The next big film is expected to be released in 2022, and we don't know anything about it either, but that too aligns with my expectations, because we didn't know anything much about the film of 2019-12 until 2019-04. That the “sequel trilogy” is ending is just a convenient assumption because the first six films were clearly grouped into trilogies and Lucas has variously claimed that he had originally planned to make two or three or four[1] trilogies, so Disney plays along with the expectations.

    Really, even if Disney were saying that this year's film was the last Star Wars film, I'd be skeptical. I take such statements as if they were talking about Michael Jordan's retirement or a certain pop musician's “last concert” or anything a politician says during an election campaign. But in this case, Disney is openly admitting that they're planning to make more films, so there's no need even for my cynicism.

    [1]: https://scifi.stackexchange.com/a/17964/4918 “Weren’t there originally going to be nine Star Wars films?” answer by Hugo

  18. dick says:

    I haven’t seen it. The wife and I don’t consider a popcorn movie to be enough to justify the cost of a sitter, so I’d appreciate it if you’d either rot13 your spoilers or come over and watch the kids for an evening.

  19. The Pachyderminator says:

    I’m still hoping to see it relatively unspoiled (I don’t usually run out to see new releases very promptly), but I’ve already gotten used to scrolling past discussions of the movie. I wouldn’t object to this as long as it was clearly marked at the top of the post.

  20. Plumber says:

    @Atlas,
    I'll miss Game of Thrones. Star Wars was a fun movie, The Empire Strikes Back was a good movie, but The Return of the Jedi was very profoundly disappointing.

    The Avengers with Diana Rigg as “Emma Peel” (and what's-his-name as “John Steed”) was a fun television show, but I assume you must mean the new comic book superhero movies. I saw bits of one and, despite featuring Scarlett Johansson in an outfit similar to the one worn by Marianne Faithfull in Girl on a Motorcycle, it was really boring.

    It would probably be disappointing (as most sequels are) but what I’d like to see is Helen Mirren reprise the role she played in Excalibur.

    • HeelBearCub says:

      If you go into Kingsman expecting Avengers you may get really turned off by the really, really, really over the top stylized violence. Like Tarantino painting a Pollock in pastel blood.

      • Plumber says:

        @HeelBearCub,
        Thanks for the warning!

        I’m one of the few who preferred season seven of Game of Thrones to the previous ones (more dragons and less graphic brutality).

      • acymetric says:

        There were a couple scenes that fit that description, I guess, but I don’t think it was that bad. Doesn’t hold a candle to Django Unchained for instance.

      • Incurian says:

        I felt Kingsman was more of the “holy shit awesome” type of violence than the “I think I’m going to be sick, wtf is wrong with this director?” type of violence.

      • Nick says:

        the “I think I’m going to be sick, wtf is wrong with this director?” type of violence.

        Did you have Tarantino in mind there? I could pretty well stomach Pulp Fiction, Inglourious Basterds, and even Django Unchained, but damn did I have to look away from some scenes in Hateful Eight.

      • HeelBearCub says:

        I would agree that for most of the movie, Kingsman was “violence with panache”. But my recollection was that I was fairly put off towards the end of the movie as it went past panache into “damn that is really gratuitous”. And on the whole it's much more Bond, and it wants its R rating.

        ETA: and it’s not like I can’t stand Tarantino or anything. You know what you are getting. I think I felt surprised by Kingsman.

        Whereas, Avengers was a TV show.

    • Plumber says:

      @Atlas

      “….If you like the aesthetic of those fun old British spy TV shows/movies….”

      I do and thanks for the tip!

    • Ventrue Capital says:

      @Atlas said:

      A common joke in the Atlas Household is Atlas, Sr. pretending to confuse the blockbuster superhero movies with the 60s TV show.

      I’ll see you, and raise you The Defenders, The Black Widow, and Cloak & Dagger.

    • Deiseach says:

      I’m not a Guy Ritchie fan (I didn’t very much like what he did with his Sherlock Holmes movies) but I have to say, I liked the recent re-make of The Man From U.N.C.L.E. and was a bit disappointed the mooted sequel hasn’t happened (yet! growth mindset!)

      The plot doesn’t hold up if you look at it too closely but really it only serves as an excuse for the spy shenanigans, so that’s okay. Turn your brain off for any inconvenient “but that doesn’t make sense” moments 🙂

      It also completely, and I mean completely, changes the characters of Illya and Napoleon from the TV show but it manages to pull it off well enough that I was surprised I wasn’t howling for Ritchie’s blood in ‘how dare he do that to my show???’ mood.

      I also really liked the romance-that-wasn’t (they don’t even kiss!) because (a) I’m wired that way and (b) I liked seeing a chaste romance that wasn’t resolved in the usual will-they-won’t-they-of course they do manner.

      It also made me like a Hugh Grant performance, and that’s saying something. So if you get the chance to however you rent or stream it, it’s not a bad way to pass two hours.

      • Plumber says:

        @Deiseach,

        You’ve previously indicated liking enough other stuff that I liked that I consider this a recommendation that I’m confident in.

        Thanks!

        • Deiseach says:

          Well, if you hate it, tell me 🙂

          I was (and am) a big fan of the original show, and the film makes a lot of changes (like every big-name director rebooting an old franchise, naturally). As I said, I was able to tolerate those, which surprised me, but someone else might find them going too far.

          The plot is nonsense, so ignore that as anything more than “our valiant heroes must learn to play nicely together and get the McGuffin away from the baddies”. I could have done with seeing a bit more of the baddies and having them developed with a tinge more depth, but this is really meant to be a romp, not a gritty '60s Cold War Le Carré adaptation.

    • Ventrue Capital says:

      @Atlas wrote:

      If you like the aesthetic of those fun old British spy TV shows/movies, consider checking out the Kingsman films.

      I enjoyed the Kingsman films.

      What do you include in “those fun old British spy TV shows/movies,” besides The Avengers, of course?

      I’m a fan of The Champions, Adam Adamant Lives!, Department S and (to a lesser extent) Jason King, the “Anno Dracula” series, the Diogenes Club series, Bullshot, Sapphire and Steel, Strange Report, and The League of Gentlemen, but (ironically) The Avengers is fairly low on the list of shows I enjoy.

      • Plumber says:

        @Atlas,
        I'm not familiar with any of the works you cited along with The Avengers, but FWLIW I thought that Secret Agent (Danger Man in the U.K.) was an excellent television series (its sort-of sequel The Prisoner was very '60s in a different way and is also interesting). For movies, The Spy Who Came in From the Cold, The Ipcress File, and Funeral in Berlin were good. There was also a movie called The Liquidator which I saw once on television decades ago that seemed good, but my memory of it is too dim to be sure.

  21. I don’t think this year is necessarily meaningful but Marvel is going to go full SJW, just as Star Wars did.

    • Le Maistre Chat says:

      I don’t know where you’re getting this. The MCU has been full SJW on race since the original Thor, and while it didn’t reach full feminism until Captain Marvel, that’s just one series.
      There’s no sign of Spider-Man, Doctor Strange or Black Panther becoming even more SJW in sequels. Black Widow might be insufferably feminist, but it might not (ScarJo isn’t as advanced a white person as Brie Larson) and there’s no sign that The Eternals will be any worse than Thor. And Shang-Chi is Asian male-centric (the protag and his evil dad), which is as un-woke (asleep?) as you can get without being white guys.

      • There’s a difference between adding a black guy and unnecessarily denigrating a white guy to prove your wokeness. I’m betting the Marvel movies will get there soon enough.

      • Nick says:

        I think this is right. MCU isn’t even close to full woke, and there’s barely even a trend in that direction. Just one thing, though: you’re forgetting that early MCU had a libertarian flick!

        • AG says:

          Yep, and also Captain America going full libertarian “fuck institutions I only trust my friends” in Civil War, which people LOVED.

      • gbdub says:

        Honestly Captain Marvel wasn’t nearly as “woke” as people complaining about it seem to want me to believe. Brie Larson’s way of promoting it is a different discussion. But the movie itself was fairly boilerplate “yay girls can be pilots too” and given the time period it was portraying, that was fine.

        Even the purported “on the nose” feminism about Captain Marvel being told to “suppress her emotions” – I mean, it turns out the guy telling her that had motivations for doing so that had zero to do with sexism.

        • Le Maistre Chat says:

          Oh yeah, if you were lucky enough to go into Captain Marvel cold, it was fine. Boilerplate superhero stuff with a fresh perspective and some SHIELD fanservice. “Yay girls can be pilots too” is perfectly appropriate for a period piece.
          It’s only the buzz around the character that’s exhaustingly woke.

        • dick says:

          It’s only the buzz around the character that’s exhaustingly woke.

          Speaking as someone who missed that buzz, it’s not that hard to miss. What’s the difference between “the buzz around this movie is terrible” and “I read a movie review I didn’t like”?

        • toastengineer says:

          Yeah, a lot of times the marketing campaign is a lot more in to that stuff than the actual movie is. I suspect the movie producers know that weird shit in the movie itself is going to drive away the common folk in a way that annoying “IF YOU DON’T LIKE THIS MOVIE YOU’RE A MONSTER” ad campaigns won’t, and that this stuff is on its way out anyway so putting it in the movie itself is going to be an embarrassment for them for way longer than it will be an asset. The advertising is enough to get the woke media on their side without having to change the actual content at all.

        • AG says:

          I mean, if the promotion around Captain Marvel was exhaustingly woke, it’s because the promotion was itself a backlash to a backlash.

          Increasingly narrower slices of noisy nasty fans, reacting to each other, and, well, I can’t blame Brie for being annoyed at all of the shit I’m sure is in her inbox. All debates are bravery debates.

          It’s a sad mutual defect position, and the majority of people who aren’t involved are getting the splash damage.

        • Edward Scizorhands says:

          I missed all of the Brie Larson comments. I only saw some of them when I watched the Red Letter Media review of Captain Marvel. They were cringe.

        • Le Maistre Chat says:

          @dick:

          Speaking as someone who missed that buzz, it’s not that hard to miss. What’s the difference between “the buzz around this movie is terrible” and “I read a movie review I didn’t like”?

          I didn’t read any reviews. This was before that, a matter of “LMC needs to leave the parts of the internet where people link to something woke Brie Larson or Kevin Feige said.”
          I’m of the opinion that film reviews are superfluous to the whole “Should I pay money to see a Marvel movie at the cinema?” decision process. You go because you’re excited about what it’s doing to the universe or because they’re super popular and friends invited you, or you don’t, waiting for home viewing like Ant-Man.

        • Edward Scizorhands says:

          And now (with the clip released over the weekend) we have to have fights about action hero dynamics. :eyeroll:

    • AG says:

      You are giving way too much credit to a subset of online fan reaction, rather than what was the actual content of the films. MCU has milquetoast politics at best, and will continue to have milquetoast politics at best. They’re pulp fiction, and any politics in them is in service of that pulp.

      Anyone online can crow about their own preferred politics being championed in these films, and films with more pointed politics don’t do as well because anything other than milquetoast politics harshes the pulp vibe, especially in China.

      Disney won’t leave that money on the table.

  22. Hoopyfreud says:

    I hope we get some neat original IPs.

    I hope the retrospectives point out the creative sterility, like what happened to Avatar.

    I hope the public sours on the next overexertions of the wheezing carcasses of these franchises.

    I hope that if J. J. Abrams doesn’t find a directorial voice he sticks to directing Audi commercials.

    I hope Disney loses gobs of money.

    I hope rock and roll comes back (but that’s completely unconnected to anything).

    (Narrator: they didn’t)

    • Nick says:

      I hope Disney loses gobs of money.

      Endorsed!

    • Nick says:

      What happened to Avatar?

      Everyone lost their collective minds over Avatar because of the cool animation, but there was eventually a backlash, pointing out (among other things) that it was practically a verbatim retelling of Pocahontas. And now it’s practically totally forgotten, which is a pretty big deal for the highest grossing movie of all time. Part of that’s because Cameron is taking approximately five million years on the sequels, but part of it’s also that it really is super forgettable.

      • acymetric says:

        Is James Cameron friends with Axl Rose?

      • Lillian says:

        Avatar is a retelling of Dances With Wolves. The plot is a war hero with a wounded leg is posted to the frontier, meets the Sioux, begins to sympathize with them, falls in love with one of their women, and eventually becomes a respected member of their tribe.

        In terms of story beats, the big difference is that the climactic moment of Avatar has our foreign hero successfully lead his new tribe in battle against the invaders. In Dances With Wolves, the climactic moment is our hero being rescued by the Sioux after he is captured by the army and charged with desertion. Also, whereas the defeated humans in Avatar are forced to leave, the post-script in Dances With Wolves informs us the Sioux were defeated 16 years later.

        And yes, it is amazing how little cultural impact the highest grossing movie of all time had. There’s no references or allusions, no quotes, no memes, no anything. It’s bar none the most forgettable movie everyone watched. (Well, nearly everyone, i couldn’t be bothered because the plot sounded stupid and i didn’t care about the CGI.)

      • broblawsky says:

        I can inform you that Avatar is not forgotten among the furry community, I’m afraid.

      • HeelBearCub says:

        The Avatar ride at Disney is fucking amazing, though. And the park area it is in is really impressively done.

        We waited 3+ hours to get in that ride and thought it was worth it.

      • LHN says:

        I’m confident from all I’ve heard that the main Avatar ride is, though I didn’t have the patience to wait 3+ hours for it. The other Avatar ride (Na’vi River Journey) is… fine, reasonably pretty, and slight.

        I was okay with it as a short, relaxing, air-conditioned boat ride, having used a Fastpass. (It's pretty much every other dark boat ride – which I like well enough – with more glow-in-the-dark vegetation and an animatronic shaman.) But if I'd waited the hour or more the standby line had required, I'd have been leading a Na'vi revolt myself.

      • Deiseach says:

        Also part of it is that the blue catpeople are so soppy, I was cheering on Evil Warmongering Invading Army Commanding Officer to wipe them all out.

        I have seen people wondering why Avatar was such a flash in the pan; made a fortune when it came out, Cameron was claiming he was going to do the divil an’ all with sequels and creating the universe, seemed to be poised to become a really popular (and profitable) franchise, then nothing – no effect on pop culture at all and very quickly forgotten.

  23. Well... says:

    In his interview with MY (https://www.youtube.com/watch?v=FHpvI8oGsuQ), the Lobsterman mentions the APA being made to retract a paper on how people tended to recover from childhood sexual abuse. Where can I find information about this paper?

    • Conrad Honcho says:

      I watched that interview the other day and I was glad to see Milo addressed the cognitive dissonance of the whole “of course sex between adults and minors is wrong but it was okay when it happened to me because I could handle it and the guy was hot” thing. He’s now step dad to a 16 year old and sees the issue through an entirely different light. Now if George Takei would do the same that would be great. I mean the cogdis thing, not necessarily the step dad thing.

      • Well... says:

        I guess we can talk about that, but to subsequent commenters: I’m really mainly interested in finding out about the APA’s retraction of that paper.

      • albatross11 says:

        It’s also entirely consistent to say:

        a. Sex between adults and 15 year olds is very often exploitative and should probably be illegal.

        b. There are instances where sex between an adult and a 15 year old isn’t exploitative or bad.

        Age-of-consent laws are inherently dealing with a fuzzy, messy boundary. Some people below the age of consent weren't being exploited, some above the age of consent were, but the law needs a bright line so we draw one as best we can. And also, society often needs a bright line or two, even when nobody's going to jail. If a 40-year-old teacher marries his just-graduated 18-year-old former student, I'm probably going to think a lot worse of him, even though no law was broken and it's possible the whole thing was positive for everyone involved.

        • It's not the hill I want to die on, but the way people talk about this subject is just bizarre. A man being attracted to an adolescent girl is a biologically normal thing, and yet people act like it's equivalent to pedophilia. That doesn't mean you should act on your urges, but they are completely natural.

          • Plumber says:

            @Wrong Species,

            And I have completely normal and natural desires to repeatedly hit people who tailgate me or honk their horns on the highway with a baseball bat.

            What’s your point?

          • dick says:

            I’m not albatross11 but I suspect you’ve misunderstood his point. I would also think less of a 40-year-old teacher who marries his just-graduated student, for reasons that have nothing to do with pedophilia.

          • @Plumber

            I literally said right in my comment that people shouldn’t act on their urges so whatever you think I’m saying, you’re wrong.

          • bullseye says:

            What does it matter whether it’s “natural”? And anything going on in your head is a mash-up of natural biology and culture anyway.

          • LesHapablap says:

            bullseye,

            It very much matters if it is natural if you’re going to condemn people for having certain thoughts and feelings. This is what those arguments about what sorts of thoughts and feelings go on the DSM as disorders are all about.

          • Deiseach says:

            A man being attracted to an adolescent girl is a biologically normal thing

            Adolescence covers 11 to 17 or 18, so there’s a wide range from “jailbait” to “just about legal” (which seems to be a porn sub-genre so there’s that backing up your premise) to “that’s so wrong you ought to be run out of town on a rail”. There’s also the age of the man to take into account – a forty year old man finding a sixteen year old girl sexually appealing may indeed be Nature’s Way but it also pushes all the “for the love of Pete, you’re old enough to be her father” buttons, so it may be hitting the incest taboo instead of, or as well as, paedophilia as such.

          • albatross11 says:

            Age of consent also varies across states and countries. I think it's 16 in my state, and that this is common, but I think there are still US states where the legal age of consent is 14. There have been cases where people were legally engaged in sexual relations with someone above the age of consent in their state, but got into legal trouble for sexting with them (there's a US law against sexually explicit images of anyone under 18).

          • mwengler says:

            Plumber:

            Why would anyone honk their horn with a baseball bat?

            TIA,
            mwengler

        • Lillian says:

          Age-of-consent laws are inherently dealing with a fuzzy, messy boundary. Some people below the age of consent weren’t being exploited, some above the age of consent were, but the law needs a bright line so we draw one as best we can.

          Does the law really need a bright line? A while ago i was reading a book about women in the Middle Ages. It brought up a case that happened in England, i think the 13th century. At the time the age of consent was 12 for girls and 14 for boys; however, their approach was that what really mattered was the person's maturity, and age was merely a proxy for it. Consequently, in some circumstances eleven and a half was considered close enough. So it was that one Sir John, of about 15 years of age, married a young lady of 11 and a half.

          The young lady was an heiress; she had no brothers and both her parents had died. Her uncle, wishing to seize control of her inheritance, forcibly took custody of her, claiming that in fact she was only 10 and a half and thus her marriage was not valid. So Sir John did what any red-blooded English knight would do when another man kidnaps his wife, and took his uncle-in-law to court. The interesting part here is that the proceedings don't seem to be particularly concerned with her actual chronological age as opposed to her maturity. Evidence entered in favour of Sir John was as follows:

          1) Just look at the girl, she’s physically mature and could easily pass for being 14.

          2) The young lady’s bedmate testified that on multiple occasions Sir John had come to visit his wife at her bed, whereupon the newlyweds had vigorously and enthusiastically consummated their union. (Beds were so expensive back then that even nobles had to share.)

          3) Sir John’s manservant testified that on the eve of the two’s marriage, he had joked to the young lady that when they got married, his master would be able to do whatever he pleased with her. To which she had haughtily replied, “I am to be his wife, not his whore.” A response which the court judged to be very adult.

          In light of this evidence, the young woman was ruled to be mature enough to consent to marriage, and thus released from her uncle’s custody. Reading this, it struck me that it seemed a far more reasonable approach than the one taken in modern times, where we obsess over the trivia rather than getting at the real question of whether or not someone has in fact been victimized.

          • Le Maistre Chat says:

            So it was that one Sir John, of about 15 years of age, married a young lady of 11 and a half.

            The young lady was an heiress: she had no brothers and both her parents had died. Her uncle, wishing to seize control of her inheritance, forcibly took custody of her, claiming that in fact she was only 10 and a half and thus her marriage was not valid. So Sir John did what any red-blooded English knight would do when another man kidnaps his wife, and took his uncle-in-law to court.

            What a great story.

          • Hoopyfreud says:

            But age is legible.

          • albatross11 says:

            I think without a bright line, you end up with a social worker/prosecutor/judge/jury trying to decide who’s sufficiently mature to have sex in individual cases, and I think they are very unlikely to do that well. Also, then there’s no point at which people can know that a sexual relationship with their girlfriend/boyfriend is legally allowed–maybe the judge will decide that your relationship with your 25-year-old girlfriend is exploitative.

        • DarkTigger says:

          @albatross11
          In Germany the age of consent is 14, but if one person is between that and 17, and the other person is over 21, it is considered a form of sexual abuse, which is only prosecuted if the underage person or their legal guardians press charges.

          I think that is a pretty good method to give people the leeway to decide on a case-by-case basis whether this is an a) or b) situation.

    • Well... says:

      OK so the reason I brought this up is not to discuss whether a man being attracted to teenage girls is natural. It’s because the paper in question found that most people who were sexually abused as kids eventually recover without lasting emotional trauma. (Summary provided by the Lobsterman. Again, I can’t find anything about the actual paper, so I can’t verify this; I have to go on his summary. Help finding the paper or really anything about the paper would be appreciated.)

      I’m trying to find out if this means either

      a) Even worse things than being sexually abused as a kid have to happen to you for you to be traumatized in the long term

      or

      b) The thing that traumatizes you in the long term could be a horrible thing that happened to you, or it could be a random trivial thing like a microaggression — basically the seriousness of the thing you experience is unrelated to the amount you’re traumatized

      or something else. Basically, what’s the relationship between unpleasant events and trauma?

      • Protagoras says:

        The proportion of soldiers who participated in combat that experience PTSD is, as I understand it, considerably less than half. Whether it’s having a particular vulnerability to the trauma, or having it happen in a particular way in particular circumstances, or perhaps being sensitized by repeated instances, or some combination of those and possibly other factors, it seems clear that something additional besides just experiencing horrific things is required for people to experience lasting emotional trauma. And while I also can’t dig up the citations off hand, I do recall previous research indicating that childhood sexual abuse fit this pattern; the people who say they had childhood sexual experiences that were no big deal, who are accused of being in denial and of being traitors to the cause by trivializing sexual abuse, are probably mostly sincerely describing their own experience (which doesn’t, I suppose, prevent them from being traitors to the cause, if you think the cause is important enough that people should lie for it).

        • BlindKungFuMaster says:

          I read an article recently that argued that a big part of PTSD in soldiers is actually brain damage caused by explosions. Which might mean that most people just don’t get traumatised easily.

          • Randy M says:

            Maybe we should go back to the old term–shell shock. More physical, less psychological.

          • acymetric says:

            Isn’t shell-shock used for the more immediate effects? I didn’t think that term was really used for people who were still suffering effects say 1 or 2 years later.

          • Randy M says:

            I’ll have to look that up now. I was under the impression that it was the earlier, informal term for the same set of symptoms, getting jumpy around explosions after trench warfare, which then got the technical term of PTSD once the psychologists got involved after soldiers went home.

          • acymetric says:

            A quick, non-exhaustive search indicates that it was both, with people suffering from “acute” and “chronic” shell shock. I think I was more exposed to the acute usage and you were more familiar with the chronic usage.

          • HeelBearCub says:

            Sounds like someone has been listening to George Carlin…

            ETA: A link.

  24. Le Maistre Chat says:

    If you got a group of biological females who identify as men or demand the pronoun “they” together, would that be called the not-she party?

    • toastengineer says:

      How do you delete someone else’s comment?

      • Nick says:

        Hey bakkot, toastengineer is asking for a not-see option!

      • acymetric says:

        I rate the original post about a 1/10, but for some reason this comment had me cracking up enough that I had to get up and leave the office for a minute.

        • Lillian says:

          You’re not the only one; “How do you delete someone else’s comment?” has become something of a stock joke in recent times, because it’s both easy to do and yet genuinely gut-bustingly funny. Presumably, given a few years and more cultural penetration, it will slowly be driven into the ground until it stops being funny, but for now it has never failed to amuse whenever i’ve seen it.

        • toastengineer says:

          Yeah, it’s a stock joke, don’t give me full credit. Really, I’m shocked you did nazi it before.

          • acymetric says:

            Maybe I’ve seen it before and just glossed over it and this was the first time it really hit me? I definitely didn’t realize it was that common, despite spending far too much time on far too many parts of the internet over the last couple of decades.

            I’ll probably start noticing it everywhere now.

    • Jaskologist says:

      If a ballot initiative had 3 proposed solutions, and a group found themselves neutral between options A and B, but highly opposed to the third option, would they be a Not-C Coalition?

      • Le Maistre Chat says:

        Yes, but I do not see what initiative would cause such neutrality.
        Say there was an initiative to A) require nuclear waste to be stored in Yucca Mountain, B) shut down nuclear power plants, or C) allow nuclear waste to be dumped in the ocean; I’d prefer A win even though it’s more important to vote not-sea.

        • AG says:

          Dumping in the ocean is viable so long as you secure all of the containers with Knot C, though.

          • Conrad Honcho says:

            When beating a dead horse, help prevent spread of transmissible spongiform encephalopathy by making sure it died of not TSE.

          • dick says:

            If you get too many puns in a thread, a lot of people’s response is “God, when’s this going to end?”

            This effect is abbreviated as “God, when’s…” law.

          • beleester says:

            However, this site has a limit on how deep a comment chain can go, which limits the practical length of a pun thread. Of course, not everyone is in favor of this policy – some think that if a thread is too long for you, you should just hide it and move on. In short, there’s a split between the Nest-Cease party and the Not-See party.

          • Two McMillion says:

            If you brought a bunch of people together and had them all look at some really tangled up ropes, that would be a knot see party.

            If you examined the ropes used and found that none of them were arranged in ways typically used on sailing ships, it would be a not sea knot see party.

            And if all the people you gathered were born in the first ten years of the 21st century, it would be a naught see not sea knot see party.

            And if everyone at the party was drinking artificial orange juice with reduced vitamin content, you would have a not-C naught see not sea knot see party.

    • Aapje says:

      @Le Maistre Chat

      Quite a few of your comments are CW in a very uncharitable way. My preference would be for you to tone it down a bit.

  25. theredsheep says:

    On a related note to Aapje’s post: who’s up for discussing the implications of long-term US fertility rates? Our TFR has recently dropped below replacement. Very religious women are much more likely to have numerous kids than those to whom religion is “somewhat” or “not” important. I know these things from reading studies. More broadly, based on personal experience, there are three extremes an individual’s reproductive strategy can drift towards:

    1. Marry late if at all, have few if any kids. The favored tendency of secular-progressives. Environmentalism, support for reproductive rights, and a more careerist/self-fulfillment mentality all push them in this direction. Probably also other things I’ve overlooked. In general, I find that secular progressives don’t think much of homemaking.

    2. Marry, have numerous kids. Favored by the more conservative and religious.

    3. Don’t marry (permanently), have kids. The strategy of what I think of as the “disorganized poor,” people who may or may not be religious, may or may not have fixed beliefs where politics are concerned, and generally don’t form stable families. Unstable families generally don’t produce healthy and productive individuals–this behavior perpetuates poverty over generations, though some escape it.

    Now, these are only extremes, it’s a simplified picture, and there’s room in the middle, but this has been the situation for some time, and I get the feeling we’re drifting out to the extremes; it’s more acceptable to call oneself “child-free,” more acceptable to have kids out of wedlock, than it was when I was born in the eighties. At first glance, it would seem that sec-progs are doomed to die out and be displaced, but they’re very good at conversion because they dominate education, entertainment and news media. A lot of kids are born into religious families but drift away and become apathetic or sec-prog under the influence of the dominant culture. This is much of the reason why Republicans hate public schools.

    Obviously, the above is unsustainable; since sec-progs don’t reproduce at replacement, every convert is a dead end, and eventually there would be nothing left but the disorganized poor. The disorganized poor are not ideal converts because they’re poor and have a lot of bad habits that don’t acclimate well to a more cosmopolitan lifestyle. Also as their numbers grow they will have increasingly dysfunctional effects on society.

    Okay, I’ve used up my limited effortpost time just typing this much. Will try to post more later. These are just thoughts that occurred to me and I don’t have the resources to go through sociology papers, so do let me know if (as is probable) my assumptions are incorrect.

    • DragonMilk says:

      Many Christian parents half-jokingly say that to “be fruitful and multiply” is a great strategy to increase the % of Christians.

      That being said, they tend to have 2 or 3 kids rather than 1 or 2…

      The number of kids coming from the “very religious” is unlikely to exceed the number coming from the “very poor” or the “very immigrant”.

      • RalMirrorAd says:

        One can verify that. I’d be curious to see what % of the population these groups represent, and how their TFRs compare.

    • albatross11 says:

      My not very informed guess here is that what’s driving lower fertility may be less optimism about your own economic prospects. Responsible people won’t have kids they don’t think they can properly care for, so if your best hope economically seems to be working as a Starbucks barista forever, and your prospective husband has similar prospects, maybe you just don’t have any kids. Or if you do maybe you only have one. Something similar happens if student loan debt plus slow economic progress means that you’re not financially stable enough to start a family until you’re 40 instead of 30.

      As the cost of “properly raising” kids goes up, or the economic prospects of young people go down, you should expect fewer kids.

      • Plumber says:

        @albatross11,
        If I’m reading you right I fully endorse (+1) your post.

        While there’s some cultural “Adam Smith’s Linen Shirt effect” going on, a lot is just plain the price paid for housing.

        Women start having children earlier in Zapata County, Tex., than in San Francisco, CA. (and besides later births affecting how many children one may have in a lifetime, just plain more time between generations means slower replacement levels), so yeah sure “cultural norms”, but those norms sure hew close to housing costs.

    • JPNunez says:

      Very religious parents also find themselves with some sons/daughters straying from the path, so I doubt this will lead to a rise of very prolific religious cults, unless somehow the state subsidizes them (I think this happens in Israel).

      This will necessitate a bunch of socio-technical fixes, but in the meantime, migration can fix it somewhat. After that, lowering credentialism, so more people can find good jobs out of high school, and heavy social assistance to lower the cost of having children, will help slow the problem. I suspect technology will emerge to make pregnancy easier too, along with ways to delay menopause.

      After that, we may find ourselves in a new equilibrium, where population barely replaces itself. Better healthcare may lower the replacement rate so just a few couples having 3 children and most just 2 is enough.

      • John Schilling says:

          Very religious parents also find themselves with some sons/daughters straying from the path, so I doubt this will lead to a rise of very prolific religious cults, unless somehow the state subsidizes them (I think this happens in Israel).

        Nobody is subsidizing the Mormons or the Amish that I know of, at least not at the net. And I’m not sure I want to be characterizing them as “cults”. Yes, some children stray from the path, but not enough to offset their net positive fertility. And I don’t think you need to go Full Amish to see that effect.

        • Nabil ad Dajjal says:

          I don’t think you need to go Full Amish to see that effect.

          Depends on how high of a growth rate you want.

          Mennonites love to schism over small differences in practice, so there are a lot of opportunities to compare outcomes between closely related sects. For example, the Wenger Mennonites who drive horses and buggies and the Horning Mennonites who drive black cars split off from one another in 1927. According to Wikipedia, the Wenger Mennonites are comparable to the Amish in terms of family size and growth rate, while the Horning Mennonites have comparatively smaller families and lower growth. The more integrated you are into the “English” world, the harder it is to avoid falling into the same traps.

          • JPNunez says:

            Apparently Amish fertility rates have fallen a little too, probably due to economic reasons

            https://medium.com/migration-issues/how-long-until-were-all-amish-268e3d0de87

            it gives America until 2218 for the Amishpocalypse.

            It’s fine.

            re: subsidies. The Amish don’t pay Social Security/Medicare taxes, but do pay all the rest. Doesn’t seem like a subsidy. On the other hand, Israel is paying people to study the Torah full time:

            https://www.reuters.com/article/us-israel-ultraorthodox-economy/jobless-ultra-orthodox-weigh-on-israels-economy-idUSTRE73D25W20110414

            In general I am not too worried about this until the part where religious blocs vote themselves benefits. But mankind has lived for most of history as heavily religious with high fertility rates, and we survived dropping childbirth mortality rates while at the same time seeing religion fall in popularity, so I allow myself to be optimistic. It will work out somehow. If the Amish suddenly outgrow more progressive demos, I assume they will reach a point where their growth forces them to lower their fertility rate or open up to more liberal values anyway, or they will be forced to integrate more with secular society.

    • AG says:

      My stance is “let it reach equilibrium.” If fertility rates are dropping, then we’re overpopulated, simple as that. I’m not convinced that we’re in a Molochian trap where we’re DOOMED! if population drops. It’ll grow again if it really needs to.

      If you’re that worried about the disorganized poor, prioritize making it easier for them to escape that state, disrupt the transference of instability and poverty over generations.

      Like, does anyone imagine that utopias have greater-than-equilibrium fertility rates, much less in FALGSC?

      • albatross11 says:

        I’m also not a huge fan of having the state try to decide who should have kids or how many they should have. But in practice, we are doing that now–our social and housing and tax and trade and educational and criminal justice policies all have big consequences for who has kids and how many they have. NIMBY-associated sky-high housing costs + public schools sabotaged for political/ideological reasons + public spaces yielded to homeless people and petty criminals for ideological reasons + gimmicky student loans used to keep inflating the higher education bubble amount to a strongly antinatalist set of policies.

        It’s a bit like having social programs that encourage poor and dysfunctional people to have kids, and then crying “eugenics” anytime someone notices this fact and proposes changing it.

        • AG says:

          Sure, I agree that current policy is playing a big role in fertility rates, but if the rate reaches a genuinely dire number, then popular support to change those policies will win out.

          Trying to force those policy changes prematurely would end up eroding other norms we don’t want eroded.

          Like, since apparently the biggest drop is for the wealthy demographic promoting NIMBY policies, then won’t they eventually die out?

          So I’m not opposed to changing these policies for the sake of changing the fertility rate, I’m opposed to implementing them by non-democratic means. (With the exception of criminal justice reform, since that concretely impacts who gets to vote in the first place, and so inherently cannot rescue itself from corruption.)

        • DinoNerd says:

          In general, as I’m sure you know, any statement like “we don’t want those people having kids” will get labelled eugenics. I don’t think that’s a bad thing.

          Maybe try “we have welfare policies that encourage people to have kids not because they want kids, or would be good parents, but because we’re effectively paying them to do so”.

          And if that’s not your problem – it’s just that welfare pays such a pittance that only those with the lowest expectations (=? “poor and dysfunctional”) can manage to raise kids (that they want) on it – then you have a different problem.

          • albatross11 says:

            DinoNerd:

            True, and there are good reasons for this. OTOH, we end up with this situation where encouraging the worst possible prospective parents to have kids is fine but changing our policies to stop encouraging them to have kids is unacceptable because it smells like eugenics.

          • Maybe try “we have welfare policies that encourage people to have kids not because they want kids, or would be good parents, but because we’re effectively paying them to do so”.

            That’s assuming a unicausal model, which is common but wrong.

            If you put the argument that way the natural rebuttal is that the payment isn’t enough to cover the costs, in money and time, of having kids.

            There are both costs and benefits to having kids. Anything that lowers the costs or increases the benefits will move some potential parents from “not quite worth doing” to “just barely worth doing.” It isn’t “poor people are having kids to get money,” which is both insulting and, in most cases, false. It’s “getting money makes having kids a little easier for people who want kids, so some do who otherwise wouldn’t.”

        • Plumber says:

          @albatross11,
          “NIMBY” is pro family in this area as it preserves neighborhoods and housing fit to raise children in, whereas “the market set free” has those houses and neighborhoods replaced with block after block of studio apartment tower warrens filled with childless adults.

          The great and glorious “Levittown” suburbs of the mid-20th century, as well as the broad-based middle class, were deliberately made by government policies that encouraged them; now both “neo-liberalism” and “new urbanism environmentalism” are tag-teaming the destruction of family-friendly places.

          • Hoopyfreud says:

            Somehow people in poorer places make the shitty housing work for raising families, as long as it’s cheap. I think price is a bigger deal than size, honestly. If nothing else, more 10-story shitty apartment complexes means you can build more nice homes a few miles away instead of propagating an ocean of even shittier subleased houses with half a driveway and no yard.

          • Plumber says:

            @Hoopyfreud

            “…you can build more nice homes a few miles away instead …”

            That sounds good in theory, but it doesn’t fit observed reality.

            I see no “nice homes a few miles away” being built, only destroyed to make way for tower blocks.

    • sclmlw says:

      Given these are not fixed categories, and individuals can sort themselves as they choose, I think it’s not as simple as just calculating birthrates – even if you do give some room for caveats. For example:

      Strategy 1 (S1) is only an obviously losing strategy from a genetic point of view. From an ideological viewpoint it might be a good strategy for improving an ideology’s representation in the population (even if it does contribute to overall decline in the population). If S1 birthrate + S1 conversion rate > replacement, its ideas will grow. If S1 growth rate is greater than S2/S3 growth rates, it will increase its representation in the population. As it gets large enough, it will shrink overall population, assuming conversion rate stays high enough.

      One interesting effect of this oversimplified interpretation is that if you could take a snapshot genetic survey of S1 and make comparisons over time, the genetic makeup would drift. New genetic markers would come in, while old ones faded out, like a genetic conveyor belt.

      Not all conservative/religious favor S2. Only subgroups do. In some places, like Utah, it’s a treadmill from S2 to S1 where religious reproduction feeds into sec-prog numbers slowly over time – as you describe. However, in other areas, such as the South and Midwest, S2 reproduction is continuous with S1 non-sec prog conservative communities. In these areas, I would hypothesize we see a larger flow from S2 to S1, since there are no ideological barriers to movement. We still see strong S2 religious communities, but their influence isn’t as pronounced as in places where there is a much stronger delineation between S2 and S1.

    • Anthony says:

      With differential fertility, and no change in the conversion rate, eventually the group with lower fertility will end up at about the same percentage of the population as the conversion rate. But it will take many generations to get there, depending on the difference in fertility and the conversion rate. I’ve read earlier that the conversion rates from Secular to Religious and vice-versa are about 20%, so the end state, ignoring the Disorganized Poor, is 20% secular.
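
      If it helps to make that concrete, here is a minimal sketch (all the numbers below are assumptions invented for illustration, not data) of two groups with different per-generation growth and a fixed two-way conversion rate. The lower-fertility group's share drifts toward an equilibrium driven largely by the conversion rate, and it approaches the conversion rate itself as the fertility gap widens:

      # Toy model: two groups, different per-generation growth, symmetric conversion.
      # All numbers are illustrative assumptions, not data.
      secular, religious = 0.5, 0.5      # starting population shares
      grow_s, grow_r = 0.8, 1.3          # assumed per-generation growth factors
      convert = 0.20                     # assumed two-way conversion rate
      for generation in range(60):
          s_kids = secular * grow_s
          r_kids = religious * grow_r
          secular = s_kids * (1 - convert) + r_kids * convert
          religious = r_kids * (1 - convert) + s_kids * convert
          total = secular + religious
          secular, religious = secular / total, religious / total
      print(f"secular share after 60 generations: {secular:.2f}")
      # With these particular numbers it settles near 0.35; widen the fertility
      # gap and the equilibrium creeps down toward the conversion rate.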

    • I’m not that worried about the “disorganized poor”. We’re still transitioning from a state where few people use birth control to one where nearly everybody does. The disorganized poor are just on a lag. Even now, birth rates for teenagers have been steadily dropping.

      • Aapje says:

        The poor often have children young by choice, not merely due to a lack of birth control.

        • theredsheep says:

          Very true, in my experience. They have an interesting mixture of traditional and modern values, and many of them still want kids. Also, other roads to fulfillment, like career success, are frequently closed to them, and they know it.

          • Plumber says:

            @theredsheep,
            “Lower class” values are good, true, and beautiful; “Careerism” is an abomination; the company and “your portfolio” won’t visit you in the nursing home. It’s “UMC” values that are anti-family and anti-humanity and should be opposed, as those values are only appropriate for a small subset of scholars, and forcing most people into that mold is unnatural and wrong.

            It’s obvious where the “new economy” leads – endless towering warrens (but with granite countertops!) inhabited by single childless “knowledge workers”, while outside in tents those not deemed “cognitively fit” subsist on trash picking and begging. “By their works shall we know them”, and plain as day that sick future is what our rulers have planned.

            Over that I prefer the Amish.

          • brad says:

            I like vanilla ice cream.

          • theredsheep says:

            If you mean vanilla ice cream by way of personal fulfillment, they do that too, along with chips, burgers, alcohol, tobacco, marijuana, and lotto tickets. It’s not really long-lasting. Kids represent something like an accomplishment, or at least seem like they might.

          • brad says:

            Sorry if it was obscure; my point is that Plumber’s first and third paragraphs didn’t actually say anything. They are the equivalent of my post.

    • secondcityscientist says:

      I grew up in a pretty conservative community overall, and it was still pretty rare to see parents with >3 kids when I was young. We knew a couple of big families, but the vast majority of people with kids had 1-3 kids. I doubt that it’s different now.

      One of the biggest changes, I think, has been in what’s considered adequate space for your kids. My parents both came from big families (six kids on one side, ten on the other) and I’ve seen the houses they grew up in. They were not large. I imagine it was “all of the girls in this room, all of the boys in this other room”. My wife and I went to an open house recently and the realtor said that the older couple who had lived there raised eight kids in the three-bedroom one-bathroom house. I expect that she got some surprised reactions from that.

      It is hard to imagine middle-class families living like that today. Every kid now needs their own bedroom. Even the larger families that I know usually have separate bedrooms for each kid, they just have larger houses.

      • SamChevre says:

        The “more space” is definitely something I observe. I’m the oldest of 9, and lived at home until I was 22. I never had my own room that was primarily my bedroom. (I slept in the living room for several years, because the boys room was too crowded.) In my corner of the world, at least, people shared beds too, not just rooms (only same-sex, though). (And not only with siblings: if you were visiting somewhere and stayed overnight, you were likely to get to share a bed with whoever was closest in age.)

        My observation of current state, though, is different–most of the large (5+ children) families I know still don’t have a bedroom for each child.

        • Le Maistre Chat says:

            As i recall, there’s a funny account that survived of a man complaining that his wife had grown frigid and didn’t want to have sex with him any more. So instead of sleeping next to him in the bed, she started sleeping across from their three children. The implication here being that in the normal course of affairs, husband and wife would have sex on one side of the bed, while their children tried to ignore them on the other side of the same bed.

          I’ve long wondered how couples observed the taboos around having sex in front of others (to my knowledge, no tribe has condoned boinking in the middle of the village while the neighbors stepped around) when they lived in houses with no privacy. I guess your own children didn’t count.

      • paulharvey165 says:

        The “more space” is an issue my wife and I ran into when house shopping. I was happy to sacrifice space to live in a nice area; I am white and American but spent the vast majority of my formative years living in large cities in Asia. I shared a room from shortly after I was born to when my brother graduated from high school and moved on to college. To me, a 4 bedroom house was perfect. One for the girls, one for the boys, and one guest room. Smaller spaces means more time in family areas and a sense of coziness and togetherness I don’t get from large homes.

        For my wife, every house felt small and she thought that if we could make it so each child could have their own room, we should. In general, my conversations with Americans make me feel like the fundamental value is size.

        We plan on having 4 children, if everything works out; that probably makes us an outlier to the general public.

      • Lillian says:

        For an even longer perspective, in the middle ages housing space was at even more of a premium, and the beds were so expensive any given household was likely to have only a single one. Thus at the time not only did the whole family share the same room, they were likely to share the same bed. Generally this meant the parents and younger children would get the bed, while the older ones would sleep wherever they could get comfortable.

        As i recall, there’s a funny account that survived of a man complaining that his wife had grown frigid and didn’t want to have sex with him any more. So instead of sleeping next to him in the bed, she started sleeping across from their three children. The implication here being that in the normal course of affairs, husband and wife would have sex on one side of the bed, while their children tried to ignore them on the other side of the same bed.

      • Plumber says:

        @secondcityscientist,
        Life must be far different where you live.
        My grandparents’ house was bigger than my parents’ house, though they had the same number of kids; my parents’ house is bigger than my house, and I have the same number of kids; and the only people I know who have more space than what they grew up in have multi-hour commutes (or travel before dawn and sleep at work), and those who live far from work are usually divorced – if you’re never home, soon you can’t go home; your ex-wife and the courts won’t let you.

        • cassander says:

          @Plumber

          Housing per person is going up. House size is up 20% just since 2000, and housing square footage per person has almost doubled since 1970, since houses have gotten bigger and families have gotten smaller. Now, I don’t have statistics for the bay area, and given that it has the most dysfunctional housing market in the country, I’m sure it differs from the national average, but even if housing per person hasn’t gone up, it’s important to realize that you live in a huge exception and that your personal experience in this matter doesn’t generalize.

    • theredsheep says:

      Okay, I typed a massive followup to this which got eaten when I hit Post. Don’t have the heart to retype it all, but the short version is “the existence of the internet, and the proliferation of subcultures it encourages, is going to make converting S2s increasingly difficult.”

      Bah.

      • theredsheep says:

        I probably hit a taboo tripwire with the post. Don’t know which, though.

      • Nick says:

        Sorry to hear that. =/ I have a habit of ctrl+A ctrl+C before posting anything, which may stop that for you in the future.

        • Aapje says:

          I use the Typio Quick Access plugin for Chrome, which usually preserves what you typed.

      • theredsheep says:

        Okay, to elaborate: the internet, obviously, makes it easier for weird subcultures to flourish. Like-minded people can meet up, exchange ideas, and promote their views with minimal gatekeeping, where before they had few options if they couldn’t talk over a newspaper editor or TV producer. Hence, for example, people think white supremacy is growing, when really all that’s happened AFAICT is that existing racists are now much better-equipped to share their POV than they were in the days of xeroxed newsletters. This has clear value for resisting conversion, and could eventually lead to robust parallel communities forming.

        I view the recent spasms of woke censorship as, essentially, an obsolete system making a last-ditch effort to reimpose itself on a world that’s outgrown it. Trump, little though I like him, has shown that it’s possible to bulldoze over and past respectable opinion. Unless (and I don’t think this is likely) America adopts a China-like model of online repression, I don’t see our boisterous cultural mess going away. I don’t know what life looks like in an internet age where cultural conformity can’t be gently enforced, but it will have to be tolerant.

        At the same time, public education seems to be losing some credibility, with the rise of charter schools, voucher programs, et al. If educational choice becomes a real thing in this country, conversion rates will be badly reduced. Where do we go from there?

        • brad says:

          I view the recent spasms of woke censorship as, essentially, an obsolete system making a last-ditch effort to reimpose itself on a world that’s outgrown it.

          What period of time was the heyday of this “obsolete system”, when its power was hegemonic?

          • theredsheep says:

            The time when the big three TV networks were powerful, and newspapers told us everything, and some could tilt left and some could tilt right but there was a limit to how weird you could be because advertisers didn’t want to be associated with anything extreme. Before TV it was radio. It’s been a long, slow decline, starting with cable news in the nineties, but the flow of information is decentralizing, and as American culture fragments the left and right are both growing more extreme. Don’t know where it’s going, but I think it’s not “back.” Though there have been alarms about [misleading information dissemination] and the like, I can’t see anyone putting the genie back in the bottle for good.

            I’m vaguely aware that there was an age of yellow journalism, but I gather it ended well before the sexual revolution and the resulting differential decline in fertility. So we’re in uncharted waters. Or so I, with my mediocre understanding of twentieth-century history, think.

          • brad says:

            I guess I still don’t see it. It seems like the 1920s-1950s would be the heart of what you are describing, but there was no kind of nationwide elite consensus during any of those decades, with the singular exception of the war years.

            The pill only dates to 1960, and I don’t see how anyone would call the 1960s, 1970s, or 1980s as some kind of halcyon days of moderation enforced from the top down.

            It seems like “the internet”, broadly speaking, places undue emphasis on a partly or largely imagined 90s. I guess that this is a similar phenomenon to how many people of roughly the same age in the late 70s and early 80s thought about the 50s.

          • theredsheep says:

            While I wasn’t alive during the sixties, etc., I’m not talking about perfect concord. There’s always been controversy and always will be. That’s part of healthy democracy. What I’m talking about is splintering identity at a more basic level, now that we’re all receiving information from different sources and have largely stopped good-faith discussion across the aisle. Modern political discourse increasingly consists of middle-school gossip about how stupid the other people are.

            There’s another factor feeding in here, in that former unifying elements have died away; the country hasn’t been functionally Christian for a very long time, and is rapidly ceasing to pretend, and patriotism ain’t what it used to be after all the wars and disillusionment. And then there’s the whole “Bowling Alone” aspect …

            I may be getting the timeline off since, like I said, I’m not much for US history.

          • brad says:

            I think Scott has a post about the 70s. We aren’t talking about some healthy controversy. There were actual riots and domestic terrorist groups that robbed banks to fund themselves. And that period had nothing on the labor strife and anarchist violence earlier in the century.

            It’s fine not to like history, but if so you don’t get to make these kind of pronouncements.

            There was no golden era, not even the 90s.

          • Eugene Dawn says:

            Yeah, gonna have to back up Brad here: the era of the Kennedy, Kennedy, and MLK assassinations, “impeach Earl Warren”, the resurgent KKK, the Weather Underground, the urban riots, the anti war movement, etc was orders of magnitude more chaotic and violent than what we have now; the idea that there was more good faith political debate in the era of Lyndon Johnson’s Daisy ad, “in your guts you know he’s nuts”, and “JFK: wanted for treason” than today strikes me as… unsupported.

          • theredsheep says:

            I think we’re arguing about different things, and I’ve confused matters a great deal by including ambiguous statements and emotive value judgments. And some flagrantly wrong things, because I’m tired and got carried away. I’m sorry. Let’s back this up.

            Original, intended point: the shaping of public opinion used to be much more centralized than now, as reflected in the LBJ quip (paraphrased from memory), “shit, if I’ve lost Cronkite, I’ve lost America.” It used to be that there were trusted sources of information who commanded the broad respect of a lot of people. Everyone basically watched the same news, read the same papers. There was always a contrary narrative, but it remained a relatively small and radical counterculture; people didn’t disagree on basic facts so much as what ought to be done about them. Such is my understanding, anyhow.

            That started to change with cable news and the like in the nineties. Now we have reached the point where lots of people can express something fantastically false–vaccines cause autism, for example–and there’s no gatekeeper to stop them, nobody to say they’re not respectable. It’s not just radicals believing that, but ordinary domestic bourgeois suburbanites. And the dominant culture is forced to accommodate them.

            The same is true elsewhere; I know a lot of college-educated professionals who say things that used to be exclusively uttered by extremist cranks. Some are liberal, some conservative. But they aren’t radicals in the sense of standing outside the mainstream. There really isn’t a mainstream. There are multiple, and multiplying, mainstreams.

            In some cases, the things they say can be readily disproven, and are, but since nobody reads the other side’s arguments it doesn’t happen. I don’t know if there was that same basic level of separate-factual-worlds going on in the sixties or thereabouts, but I can’t recall hearing about it. I’m talking about things like hordes of progressives believing that an “assault weapon” is a coherent category, or conservatives believing that a progressive pol has explicitly spoken badly of veterans or Christians or what-have-you (I’ve seen this happen multiple times). Has that always been a thing?

            This phenomenon makes it really, really hard to change people’s minds, as plenty of people have noted. If it continues to be the case, converting S2 to S1 is going to become a losing option, and S1 will cease to be viable.

            Again, sorry. I’m running on short sleep today. I hope that was more coherent.

        • Thomas Jorgensen says:

          Civil war. If there are no gentle prods towards some kind of consensus, and everyone can silo themselves into info-bubbles, sooner or later one of those bubbles is going to convince itself that all the rest of society is a bunch of sissies that will fold at the first whiff of gunpowder, and ignore all voices that tell them that the numbers and optics do not favor them in any way, shape, or form.

          For historical parallel, see: Confederacy fire-eaters, who not only professed that any southern man was worth five northerners in a fight, and that the world would kneel before king cotton and aid them on the field of battle, but appear to have based their war plan on those two things being true, and then marched off to war against an opponent they had zero chance of beating. That sprang from a society with very heavy censorship, but voluntarily filtering all opposing views from your sight could easily do the same thing.

          Counter-moves. Well, this will probably not be popular here, but just rolling right over the private school and home schooling movements like a metaphorical tank battalion is probably the best option – insisting very, very hard that everyone who can attend absolutely will, and will be taught some common ideas about consensus reality as kids.

          • theredsheep says:

            Well, it’s a democracy, so rolling over doesn’t work, especially since public schools are mostly a train wreck and most well-to-do liberals don’t care to have their kids rubbing shoulders with the plebs. To the extent that upper-crust kids go to public schools at all, they go into gifted programs that exist as virtual schools-within-a-school, and even that gives them incidental exposure to poverty. You could probably fix some of it, sort of, but it’d take a lot of money and most folks would rather pay for their own kids to go to fancypants academy than throw extra tax money at a really dysfunctional system.

            Violence could be the potential outcome, but I suspect that, in today’s media environment, it won’t. Don’t want to effortpost why on the digression.

    • brad says:

      What makes you think that linear extrapolation is a good predictor of the future when it would back-test terribly? Channeling my inner Robin Hanson I don’t think discussions about demographics are really about predicting the future.

    • Plumber says:

      @theredsheep

      “…Obviously, the above is unsustainable….”

      For thousands of years, maybe not, but for decades?

      Why not?

      Have you seen San Francisco?

      Despite lower birthrates and out-migration of people born here, the roads are more crowded, housing is increasingly unaffordable (yet is being built at an ever-increasing rate), and if birthrates dropped till they were less than a quarter of what they are now, I imagine our rulers would pop champagne bottles open to celebrate the savings on funding schools and just recruit more of “the best and the brightest” from out of town, state, and nation, and select smarter, healthier, and more energetic people from elsewhere and congratulate themselves on the disruption “making the world a better place”.

      It’s designed for wealthy extended adolescents; families with children are a burden to be outsourced; and still the young adults from elsewhere keep coming here…

      • theredsheep says:

        I don’t think you can scale that up to a whole country; we’re speaking of the US as a whole here. In a hypothetical USSF, you’d require immensely high immigration to keep numbers up, any immigrant workers who failed to assimilate to S1 would produce a hostile subculture, and any halt in the flow of immigrant workers (improved conditions back home, or whatever) would cause that minority to grow quite rapidly as a percentage of population.

    • Paul Brinkley says:

      I do sometimes wonder if fertility rates are hitting some sort of temporary wall that goes away once the population has “shifted gears” and is ready to expand again. As in, theoretically, the earth could likely support twice as many humans as today, at US living standards, if John McCarthy was to be believed, but if you suddenly dumped 7 billion new humans all over the planet, I think you’d wind up with a lot of casualties.

      If this is true, what’s the temporary wall, economically speaking? Is it as simple as a surplus of labor in certain sectors (while others are scarce)?

  26. S_J says:

    In light of the series of states in the United States passing laws intended to restrict abortion…

    I feel like revisiting something else posted here at SSC.

    In several posts about the moral value of animals and cortical neuron counts, Scott attempted to determine whether there was an association between the typical number of cortical neurons in an animal and the moral value that humans place on instances of such animals.

    This leads me to a series of questions:

    1. Is there a point at which a developing human fetus has a known number of cortical neurons equivalent to a small rodent?
    1.a. Is there a point at which a developing human fetus has a known number of cortical neurons equivalent to a lamb?
    1.b. Is there a point at which a developing human fetus has a known number of cortical neurons equivalent to a cat?
    1.c. Is there a point at which a developing human fetus has a known number of cortical neurons equivalent to a bird?

    1.d. At which of these steps does the moral valence of the fetus rise to be equal to that of a new-born baby?

    2. Is some other point of fetal development important in deciding moral valence?
    2.a. Does the possession of a distinct genetic code, different from that of the mother, count as a point of moral valence?
    2.b. What about the possession of a distinct cardiac muscle, with a detectable rhythmic ‘beat’?
    2.c. What about the possession of a distinct musculo-skeletal structure?
    2.d. What about the ability to exhibit some form of apparently self-directed movement (within the confines of the mother’s uterus?)
    2.e. Tied to the question of cortical neurons above…what about the presence of measurable activity inside the developing brain of the fetus?

    2.f. At which point in the development does the fetus have the moral valence equal to that of a new-born baby?

    I’m curious what the commentariat thinks.

    • Faza (TCM) says:

      At which point in the development does the fetus have the moral valence equal to that of a new-born baby?

      An obvious first guess would be: “when it is born”.

      • J Mann says:

        If you’re determining moral valence by number of cortical neurons, then the question probably breaks down to “when does a human stop adding cortical neurons.”

        Since the answer seems to be that we keep adding neurons until about eighteen months after birth, I guess that by this standard, humans continue to increase in moral valence from conception through about a year and a half after birth.

        • S_J says:

          @J Mann: per that link,

          The development of the human brain during gestation is a highly complex project on a tight schedule. In this 12- to 14-week-old embryo, nerve cells are proliferating at the rate of about 15 million per hour. The physical bases for perception are beginning to emerge: one can make out an eye (the black dot) and the future site of the ear (the white area just above). Source: National Institute of Child Health and Human Development.

          This appears to give some kind of baseline.

          There is also this:

          Brain cells proliferate according to a scheme that combines order with enormous productivity. In the ventricular zone, a small number of precursor cells divide in two; then, in another cycle, each precursor cell divides again, perhaps several more times. The effect of each cycle at this stage is to double the number of cells; therefore, adding even a single cycle, for example by extending the duration of this early proliferative stage, could make a great difference in the overall size of the brain. As it happens, the difference in size between a monkey’s cerebral cortex and that of a human can be accounted for by just a little more than three such cycles. And indeed, the entire neuron-generating stage, including these early cycles that immediately double the number of cells as well as later cycles in which multiplication proceeds more slowly, does seem to follow this rationale in its timing. The neuron-generating process in both monkeys and humans begins on about the 40th day after conception; the process ceases in monkeys on about the 100th day but continues in humans for about another 25 days.

          Neuron development numbers can be clouded by the fact that neurons then migrate to regions inside the brain, so full brain function may not be present even if most neurons are present.

          Eyeballing that base rate of 15 million per hour, and using the table here, a human fetus goes from moral valence of approximately a Lobster to approximately a Chicken in about 3 hours. Further steps are: Cow (additional 16 hours after Chicken), Pig (additional 8 hours after Cow), Elephant (additional 345 hours after Cow; ~14 days after Cow), Chimpanzee (additional 40 hours after Elephant), adult human (653 hours after Chimpanzee; ~27 days after Chimpanzee).

          This doesn’t agree with full-neuron-count at 18 months after birth. However, the numbers in the table I’m referring to are cortical neurons, not total neuron count.
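
            If anyone wants to check the arithmetic, it is just the difference in cortical-neuron counts divided by that 15-million-per-hour rate. A minimal sketch, using ballpark published cortex figures (roughly 5.6 billion for elephants, 6.2 billion for chimpanzees, 16 billion for adult humans; treat these as approximations rather than the table's exact values):

            # Back-of-the-envelope: hours of growth between two thresholds at a
            # constant rate. Counts are ballpark cortical-neuron figures.
            RATE_PER_HOUR = 15e6

            def hours_between(start_neurons, end_neurons, rate=RATE_PER_HOUR):
                """Hours needed to grow from one cortical-neuron count to another."""
                return (end_neurons - start_neurons) / rate

            print(hours_between(5.6e9, 6.2e9))   # elephant -> chimpanzee: ~40 hours
            print(hours_between(6.2e9, 16e9))    # chimpanzee -> adult human: ~653 hours, ~27 days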

          • Another Throw says:

            This is complicated by pruning. An adult has less than half the neurons of an infant.

            The pruning process, starting sometime after birth and continuing into the early 20s, is associated with the great bounds in learning achieved throughout that time frame. The relative difficulty of learning thereafter is associated with the cessation of that process.

        • fion says:

          This matches an opinion I’ve had for a long time but rarely admit (at least IRL) that babies gain moral value as they age.

          Though it’s probably coincidence, because my thought was more along the lines of increasing memories, experience, relationships, character etc. and probably continues to apply long after 18 months.

          • Plumber says:

            @fion,
            Good point.

            That’s why it’s less immoral to draft 18 year-olds than 26 year-olds.

    • Eternaltraveler says:

      This is not straightforward, as a newborn has around ~100 billion neurons (close to or higher than adults) but very little synaptic connectivity in the cerebral cortex. I don’t think it’s easy to translate across species normally, but during early development it’s a fool’s errand.

      • albatross11 says:

        Yeah, this is finding a number we can measure and pretending it answers our question because we want an answer but don’t have any better ones.

      • J Mann says:

        I do have the strong moral intuition that we should protect human lives more as they cognitively start to move closer to my model of a baseline mental human. (For example, that the literally irretrievably comatose shouldn’t have rights that we promote as aggressively as we do the people walking around.)

        I agree that it’s difficult to make definitive measures of cognitive development, but it may still be helpful to try for some people’s moral analysis.

    • S_J says:

      From the article linked by @J Mann, there is an interesting line

      The fetus itself, in kicking, turning, and (by the fifth month) even sucking its thumb, stimulates the growth of synapses.

      Which puts an upper bound on question 2.d. above.

  27. nose26 says:

    So I’m looking to do a simple survey among AP students (Advanced Placement, for the non-Americans) analyzing birth order effects. Specifically, I am investigating whether firstborns are overrepresented. To see that, of course, I need to know what proportion of the population consists of firstborns. That’s been the sticking point, as I can’t seem to find useful data anywhere. I’m probably just an idiot, but can anyone point me in the right direction?

    • acymetric says:

      Are you including only children (as firstborns) or throwing them out and looking only at people with siblings?

      • nose26 says:

        I am including only children, in hopes of finding psychological rather than biological effects (Since presumably the biological conditions of the birth of a firstborn and an only child will be the same.)

    • Anthony says:

      Proportion of first-borns equals 1/(average number of children).

      It’s a little more complicated if you want to exclude only children.

      • Aron Wall says:

        Your formula only works if the reported average excludes families with no children (as opposed to counting them as 0 in the average).
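
        A quick toy example (the family sizes here are made up) showing both the formula and why the childless families have to be left out of the average:

        # Every family with children contributes exactly one firstborn, so
        # firstborn share = families with children / total children
        #                 = 1 / (average children per family *with* children).
        families = [1, 2, 2, 3, 4, 0]                # made-up example data
        with_kids = [n for n in families if n > 0]
        children = sum(with_kids)
        print(len(with_kids) / children)             # firstborn share: 5/12 ~ 0.42
        print(1 / (children / len(with_kids)))       # same number via the formula
        # Averaging over all six families, childless one included, would give
        # 1 / (12 / 6) = 0.5 and overstate the firstborn share.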

    • Aron Wall says:

      nose26,
      Your proposed survey design is likely to be confounded by the possibility that AP correlates, not with firstborn effects, but with family size. (E.g. if richer parents both have fewer children and push them to take AP classes, then you will see more firstborns even if there are no birth order effects because the rich also push their younger children to take AP.)

      A better choice might be to compare the firstborns among your population to the family sizes in the same population. So forget national data, and just ask your AP students both: 1) Are you a firstborn, and 2) How many siblings do you have? (Make sure it is clear to them whether they should count themselves, and what to do about half-siblings, step-siblings, and twins. If you want more data, ask them about both elder and younger siblings.)

      Now each student will have told you that they belong to a family of N children (N = 1 + number of siblings). Plot the number of students in each N bracket and then divide each bracket by N to get the distribution of family sizes in the population your students are taken from. The last step is critical, since a family of N children has N times as much chance of being in your sample, simply because they have more children, and this must be corrected for. For example, if you live on an island with 100 only children plus one family with 100 children, you will have an equal number of both kinds of children in your sample, yet the mean family size is approximately 2, not approximately 50.

      Now 1 divided by this weighted averaged N will be the proportion of firstborns in the sample of *people whose family background is similar to those in your sample, but who may or may not have taken AP courses*.

      It occurs to me after writing this, that this protocol is bad for measuring any effects due to being an only child, since the number of only children in the sample population I’ve defined is tautologically the same number as that in the deduced population of families whose kids take AP. But it should be able to identify birth order effects among families with multiple children. So maybe just exclude the only children?
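
      In case it is useful, here is a minimal sketch of the bookkeeping for the weighting step above (the survey responses in the list are invented placeholders):

      # Sketch of the weighting described above; responses are invented placeholders.
      from collections import Counter

      # Each entry: (is_firstborn, N) where N = 1 + number of siblings.
      responses = [(True, 1), (True, 2), (False, 2), (True, 3), (False, 3), (False, 3)]

      students_per_n = Counter(n for _, n in responses)
      # Divide each bracket by N to correct for families with more children
      # being N times as likely to show up in the sample.
      families_per_n = {n: count / n for n, count in students_per_n.items()}
      total_families = sum(families_per_n.values())
      avg_family_size = sum(n * f for n, f in families_per_n.items()) / total_families

      expected_firstborn_share = 1 / avg_family_size           # baseline with no birth-order effect
      observed_firstborn_share = sum(fb for fb, _ in responses) / len(responses)
      print(expected_firstborn_share, observed_firstborn_share)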

      • nose26 says:

        I did indeed ask my participants how many siblings they have, but need to include only children to have a reasonable sample size. That being said, I will use your suggestion to eliminate the possible confounding variable of family size.

  28. johan_larson says:

    The teaser trailer for “Terminator: Dark Fate” dropped a few minutes ago.

    https://www.youtube.com/watch?v=jCyEX6u-Yhs

    They got the band back together for this one. Schwarzenegger and Hamilton are on screen, and Cameron is the producer. A lot of entries in this franchise have been disappointing, but I’m willing to give the originals one more chance. I’ll probably go see this one opening night.

    • Conrad Honcho says:

      I’m down with this. Hamilton and Arnold look surprisingly good.

    • baconbits9 says:

      One part of T1 that was important for making it a gripping movie was their ability to show Arnold taking damage throughout: he is not invincible, just extremely tough to stop for two people on the run with nothing but civilian tools to fight him with. From the 2nd movie on they have abandoned this, and it has made all of the movies weaker, but T2 gets by because it is visually stunning for its time and the audience can get wrapped up in how crazy the liquid metal terminator is.

      These later movies seem to be focused on how to make a newer, scarier robot, and that makes them uninteresting to me.

      • JPNunez says:

        The problem for me is that the story basically ended with T2, which was also the greatest action movie for a long while (only dethroned, for me, by Fury Road). I didn’t like T3, but at least its ending is all of the plot extension I am interested in.

        After that…yeah, not really into John Connor in the future, not really into Doctor Who making Apple the new Skynet, and not interested in a tv show about Terminator.

        I am done. Sarah Connor destroyed the Terminator and all its tech from the present, and then Judgement Day proved inevitable anyway and they showed us John Connor starting the resistance, and thus the story for me is more than done. Think of something new.

        • baconbits9 says:

          The ‘save the world’ genre movies are very limited for serializing. T1 established that if you save Sarah Connor you save the world; Reese says something like ‘we had smashed the defense grid, we had won’, and Arnold gets sent back as a last-ditch effort. Each additional movie adds to the feeling that winning is only a temporary reprieve.

          This is something that 80s movies were good at setting up. If Ripley gets off the planet then she is safe, if Arnold kills the predator then he is safe, if Kevin can hold off the thieves then he is safe. Action movies rely on that tension to be good, and serializing them diminishes it; characters can be serialized much more easily.

    • proyas says:

      I think it looks awful, and that there should have been no more Terminator films after T2.

      Don’t let James Cameron’s return to the franchise convince you that “T6” will be any good: Cameron had the same level of involvement (writer and producer) with “Alita: Battle Angel,” and it was mediocre.

  29. Machine Interface says:

    I have noticed a “pattern” where areas with a high density of buried landmines tend to become de facto nature preserves with high biodiversity, because no human activity takes place there, animals are too light to trigger antitank mines, and/or the animal loss rate caused by anti-personnel mines is negligible compared to what would be incurred by human activity.

    If I noticed this, it means many other people already have.

    From this, how plausible do you think it is that, in the near future, we will see environmentalist or native activists start using landmines as a way to deny access to particular areas to industrial or agricultural activity?

    • bean says:

      Not very. The problem is getting the mines. Even leaving aside the land-mine ban, nobody is going to sell them to an environmental group. And building their own is dangerous and will probably get them arrested before they can actually put any of them in the ground.

      • sandoratthezoo says:

        In addition to these good points, this idea fundamentally misunderstands environmentalism, which is not driven by “people who make cold-blooded cost/benefit tradeoffs to achieve their expressed goal.” Environmentalism is, for the majority of its constituents, a sentimental position. Saying, “Well, we’re gonna blow up a few fuzzy animals in a horrifying explosion of nails and fire, but it’s nothing compared to how many get hit by cars” would not fly with mainstream environmentalism, and a group that proposed that would find itself quickly disavowed by the movement/without funds or constituents.

    • John Schilling says:

      Low levels of toxic and/or radioactive contamination can have the same effect, as “twenty years of exposure and there’s a five percent chance you’ll get cancer and die” is a dealbreaker for most humans but paradise for the sort of animal whose biggest problem is an excess of humans. Bean is probably right that access to useful land mines puts this out of reach of environmental groups, and then there’s the optics of using literal weapons of war in your campaign for Fluffy Goodness. But arranging a spill of someone else’s toxic waste might be within their reach, and might be seen as a double win on “highlights the dangers of toxic waste” and “provides impromptu nature preserve” grounds.

      Even that is unlikely, but it’s more likely than land mines.

    • Aapje says:

      @Machine Interface

      Greenpeace have already used a similar tactic to fight fishing in places in the North Sea where they want a sea sanctuary: dumping granite blocks of one cubic meter. These are sized so they can get stuck in the nets and can capsize the fishing vessel, although the blocks get covered up by sand fairly soon. So it seems to have mostly been a publicity stunt.

    • bean says:

      The DMZ is the spot with the highest concentration of land mines on the planet, IIRC. The difference is that these are properly emplaced and marked, so the risk to humans is pretty small.

      • albatross11 says:

        There is a fun book called _The World Without Us_, where the writer traveled to various places that have been abandoned by humans (Chernobyl, the Korean DMZ, etc.) to try to work out what the world would look like if humans all disappeared.

      • toastengineer says:

        What are they for, then? Are they just anti-tank mines and not for blowing up people, or are there just so many that you can’t walk around them even though you can see them, or what?

        • Aapje says:

          To prevent a rush and/or redirect an attack to the most fortified places.

        • cassander says:

          mines are largely an area denial weapon. Even if the mines are individually marked, clearing a field still takes a lot of time and isn’t risk free, meaning the enemy has to either go where you want him to (presumably towards prepared positions) to avoid them or they have to spend a lot of time and effort to go somewhere else. And while the enemy column is waiting around for the sappers to clear the field, they make an excellent target for artillery.

        • C_B says:

          I imagine they’re set up such that a small group of people moving carefully across the DMZ (like, say, an invited diplomatic party) is unlikely to get blown up, but a large group of people moving hurriedly across the DMZ (like, say, an army trying to break the truce) is very likely to get blown up. I don’t know this for sure, but it seems like it’s probably what the DMZ is designed to achieve.

        • bean says:

          Sorry about not being clear enough there. The field is marked. The individual mines aren’t. The legal way to do a minefield is to put up flags and other markers around an area saying “don’t go in here, there are mines” and make sure all the mines are inside it. In this case, the risk to people is limited to enemy troops, anyone dumb enough to ignore the markers, and maybe the clearance team. There are probably marked lanes through the minefield, and I’m sure the North Koreans know where ours are, and vice versa. But this isn’t a huge problem, because the minefield is part of a wider defensive scheme. Anyone who uses the cleared lane, which isn’t very wide, is in full view of a couple of machine guns, and the whole thing is dialed in very carefully by the artillery. Or they can try to clear their own lane, but there are few methods which are both quick and really reliable. Ultimately, mines are more useful for terrain denial than for actually inflicting casualties.

          This isn’t how a third-world insurgency uses mines, though, and that’s what gave Lady Di and co the firepower to get the ban pushed through in the first place. Those are likely to lay them without marking them (which also makes them much harder to clear) and in places where civilians are likely to run across them. I’m not quite sure what stopping the US Army (which is going to think about the impact on civilians) from using AP mines is supposed to accomplish when insurgencies (which will also think about the impact on civilians, but in a very different way) can still buy them from Russia and China, but that’s modern feel-good politics for you.

        • John Schilling says:

          The individual mines aren’t.

          Also, each side is supposed to keep a detailed (albeit secret) map of where it buried each of its own mines so that when the conflict is over everyone can compare notes and dig up all the mines safely.

          This isn’t how a third-world insurgency uses mines, though, and that’s what gave Lady Di and co the firepower to get the ban pushed through in the first place.

          There’s also the issue of scattered mines, delivered by air or artillery. This is or at least was a legitimate thing for professional armies to do for very short-term area denial; the mines are theoretically just lying in the open with only a camouflage paint job to confound the enemy and/or delay clearance. If on top of that you add a 24-hour self-destruct fuze, the danger to civilians should be minimal. If the self-destruct fuzes have a 5% failure rate, then you can wind up with live unmapped mines in unmarked fields that have been buried by wind-swept sand or overgrown by brush by the time civilians reoccupy the area.

          The latter is also where you get stories about mines disguised as toys to trick photogenic children into blowing their arms off for Evil Lulz and Terror, which wasn’t actually the case. Some children did get their arms blown off, but no professional army designed or deployed mines for that purpose. Mostly, children will play with anything, so don’t litter the world with live ordnance unless you’re really really really sure the self-destruct fuze will work before the children come back.

          And the job description of “Princess” does not include detailed fact-checking before using CHA 18 + the power of a good story to mobilize the Righteous against the Wicked, so make sure you have a very good PR team before you do anything remotely like any of this.

        • bean says:

          I forgot about scatterable mines, and granted those are much much more dangerous to civilians than normally-laid mines are. My problem with the Ottawa Treaty as written is that normal land mines are about the only weapon I’m aware of that is almost impossible to use offensively. We should be encouraging people who are nervous about their neighbors to buy land mines instead of tanks. But we can’t, because they’re banned, at least in the countries which are likely to use them properly.

        • Mark Atwood says:

          We should be encouraging people who are nervous about their neighbors to buy land mines instead of tanks. But we can’t, because they’re banned,

          There are times when I suspect that this outcome was not unintended.

        • Nick says:

          We should be encouraging people who are nervous about their neighbors to buy land mines instead of tanks. But we can’t, because they’re banned,

          Man, drop the caps and misspell a few things and this could be a dril tweet.

    • DragonMilk says:

      Was it a joke or reality that people sent mad cows through possible minefields to clear the area for development?

      • Eric Rall says:

        Almost certainly a joke.

        Driving cattle through a minefield doesn’t seem like it would be reliable enough to clear the area for civilian use. It might get 90-99% of the mines, but you need to get very close to 100% for the area to be usable. And each cow who gets blown up trades one problem, an undetonated mine, for two problems: shrapnel, which is going to make conventional mine-clearance more expensive by scattering a bunch of false positives for your metal detectors to find; and bits of prion-contaminated meat, which is a fairly nasty biohazard.

        Also, clearing known minefields in peacetime isn’t all that difficult a problem. It’s slow, and it’s skilled-labor intensive, but we know how to do it reasonably effectively at a risk level of about one injury per 1000 mines cleared. Conventional mine-clearing techniques date back at least to WW2, and there are a bunch of newer (hopefully faster and safer) techniques in the pipeline.

        Where mine clearance gets really nasty is 1) when you’re trying to do it under battlefield conditions, when you’re in a big hurry and the bad guys are shooting at your sappers, or 2) when you’re not sure exactly where the minefield is or even if it’s there at all, so you wind up finding it the hard way.

  30. Jeremiah says:

    I just finished The City & The City by China Miéville and I would be interested in discussing it in a very spoilery fashion. Particularly its Teen Wolf problem… (Not sure what the policy is on spoilers in this context, but I’m willing to use rot13.)

    • Plumber says:

      @Jeremiah,
      I read it a couple of years ago and still remember a few details, but “rot 13” seems like it’d be a hassle to master, and I’m a slow learner.

      Anyway, I remember the basic setting, and the “solutions” of the two mysteries, but I don’t remember the names of the characters but I could easily look that up again.

    • Nick says:

      I’m curious what you mean by its Teen Wolf problem. Is that a trope? I googled, but the series is all that’s appearing.

      Anyway, I’m going to solve your rot13 problem by just not using them. My opinion of the book is that it has a lot of neat elements, but the story is so small, and not in a good way. By the end I’m a lot more curious about the weird artifacts and stories, but I have a feeling the answer is something mundane and disappointing, just like the answer to the murder mystery was, and just like the answer to question of secret societies was.

      • Jeremiah says:

        Okay, if we have some people who might understand what I’m talking about and appreciate it here’s my rant:

        First the Teen Wolf problem: In the movie Teen Wolf with Michael J. Fox, he turns into a werewolf in the middle of a basketball game, and once it’s clear that he’s really good at basketball, everything continues kind of as normal. Which is to say the national media doesn’t show up. He’s not subject to extensive medical tests. It doesn’t make everyone question everything they once knew, etc. The movie doesn’t shy away from the consequences of him being a werewolf within his friend group, and to an extent his high school, but it completely ignores any consequences outside of that. But if you ignore all that Teen Wolf is an okay movie.

        The City and the City is the same, it examines the consequences of this separation in the lives of the citizens of the two cities, but it largely ignores what the consequences would be in a world that’s otherwise basically our world, and initially, if you ignore that problem it’s a decent book. But also it ends up being a lot more complicated than that.

        At the beginning it seems like Breach is some insanely powerful supernatural force, and that’s why things are the way they are. Which does really create a Teen Wolf problem: why isn’t every scientist in the world trying to figure out this supernatural phenomenon? But then, if I’m reading it correctly, it turns out it’s just some people who mostly have access to a lot of info from both cities, and maybe some slightly higher tech, who enforce it just because… I’m actually not sure why they do it except that it’s traditional. In which case the Teen Wolf problem isn’t quite as bad, but then suddenly the book has a whole different problem you have to switch to forgiving, which is how on Earth does something so cumbersome continue?

        Further, as you say, you’re led to imagine that there’s all these mysteries, like the artifacts, and the civilization before the split, but this switch also seems to mean that none of that amounts to anything, either. Now in a certain respect, that fits in with the noir tradition he’s writing in, that it just ends up being some crooked official, but as you say it makes the story small, and all stuff you’re really curious about is left unanswered.

        • Hoopyfreud says:

          It’s mentioned repeatedly from early on that outsiders don’t actually believe in the bifurcation of the cities, and that small, perceptual acts don’t trigger Breach. So we know that Breach doesn’t have access to people’s heads. There are parallels between Breach and secret police like the KGB, except that the citizens of the cities are buying what Breach is selling. From the beginning I think the shape of this argument is there, and as more details about crossing over are revealed, it appears more and more constructed/arbitrary, not so much a matter of enforcement as one of belief.

          So the mystery isn’t one of “what happened to the original city” so much as it is, “where did this tradition come from, and how do the people who live it tick?” I agree that this isn’t particularly well-explained, but I think it’s incredibly well-explored. The mix of indifference and fascination and taboo is really cool to see in the milieux of the two cities, especially through the lens of a detective. It’s like all the characters are half-blind in ways that only become obvious as the book goes on, and the gradual reveal of that is my favorite thing about the book. I’m of two minds about desiring a more explicit accounting of the “rise” of the two cities; I kind of want to know where it comes from, but I don’t think the process of bifurcation is something that can be compellingly captured. It’d undoubtedly look silly from the outside, as cultural change often does, and I think it works better as a scenario than it does as a history.

          • Jeremiah says:

            To be clear, here’s what I liked:
            -I thought it was a great noir story
            -I loved the characters, and the interactions
            -I listened to it on audible and the narration was great
            -I liked the conceit of the two cities, and I agree it was incredibly well explored.

            I think the big problem is that there are books where it’s all about a big reveal at the end which ties everything together, and as I was reading it, I put it in that bucket, and then it ended up really not being in that bucket after all.

          • Hoopyfreud says:

            I think the big problem is that there are books where it’s all about a big reveal at the end which ties everything together, and as I was reading it, I put it in that bucket, and then it ended up really not being in that bucket after all.

            More than fair. For myself, I got “oh, the ending is going to suck, isn’t it?” vibes pretty early on, and given that I read Stephen King books, a reasonable ending is a nice-to-have for me at this point. ;_;

          • Nick says:

            To be clear, here’s what I liked:
            -I thought it was a great noir story
            -I loved the characters, and the interactions
            -I listened to it on audible and the narration was great
            -I liked the conceit of the two cities, and I agree it was incredibly well explored.

            I think the big problem is that there are books where it’s all about a big reveal at the end which ties everything together, and as I was reading it, I put it in that bucket, and then it ended up really not being in that bucket after all.

            Yeah, I endorse pretty much all of this (though I own the hardcover, not the audiobook).

          • Eugene Dawn says:

            It’s been a while since I’ve read the book, so I can’t offer the strongest defense, but I liked the book a lot (it’s my favourite Mieville by far) so I’ll try:

            In my view, there was a big reveal. The big reveal is that you don’t need to have supernatural powers to explain the existence of two similar, partially overlapping cities whose interactions are managed by a strict code of behaviour, and where moving between the two is punished by violence. We already live in that world and we already have cities like that. The point isn’t to reveal something about the city in the book, it’s to reveal something about the world we already live in. It makes the book world more normal by showing you how surreal the real world could look if presented differently.

            In the end, the world isn’t small, or at least, it’s no smaller than our world since that’s more or less where it’s set.

            As to why anyone would enforce breach: it’s enforced for the same reason Catholics couldn’t go into Protestant neighbourhoods in Belfast during the troubles, Palestinians have to cross checkpoints in Hebron, blacks couldn’t enter white areas in Atlanta, and why you shouldn’t cross the tracks or go to the wrong part of town.
            It continues for the reason all urban segregation continues, whether that’s segregation by wealth or by race or by culture: because people who share the same city often don’t actually want to live with each other or come into contact with each other.

            Mieville (successfully in my view) tries to show you how weird that can look by tweaking the details to look like a science fiction setting so that you drop your guard and don’t at first recognize this as a completely familiar feature of the world.

          • quaelegit says:

            I liked this book and wanted to join the discussion but don’t have much of anything to add.

            I also liked the characters, and I found the cultures of the cities interesting and the implication of the conceit fascinating. I agree with Eugene Dawn that it mirrors real cultural weirdnesses in strange and compelling ways.

            On one level, the lack of big reveal is kind of disappointing, but on the other hand, it’s very in keeping with the noir elements. The detective does solve the initial crime/mystery, but fails to penetrate the deeper mysteries and can’t do anything about the forces at work behind everything.

            (I saw this discussion of The Big Lebowski in the context of mystery frameworks on tumblr recently, and while I disagree with the contention that Noir is distinct from mystery as a genre, the point about the protagonist getting swept up in the machinations of more powerful people is spot on. In the case of The City & The City, there’s also a big component of being a cog in the machine of cultural and political forces, and I actually really enjoyed how much of the plot was about navigating the bureaucracies and cultural complexities of the situation.)

        • Walter says:

          Breach isn’t a supernatural force, though? Like, I thought that was the whole point of the thing. The 2 cities are just a tradition that they indoctrinate everyone in.

          As for the outside world, this quote sums it up for me:

          “I’m neither interested in nor scared of you. I’m leaving. ‘Breach.'” He shook his head. “Freak show. You think anyone beyond these odd little cities cares about you? They may bankroll you and do what you say, ask no questions, they may need to be scared of you, but no one else does.” He sat next to the pilot and strapped himself in. “Not that I think you could, but I strongly suggest you and your colleagues don’t try to stop this vehicle. ‘Grounded.’ What do you think would happen if you provoked my government? It’s funny enough the idea of either Besźel or Ul Qoma going to war against a real country. Let alone you, Breach.”

          Nobody else cares about Breach, they are just the weird KGB of some tiny backwards city.

          • Jeremiah says:

            Yes, I said that Breach wasn’t supernatural. But I don’t think that’s apparent from the start. In fact, in the beginning, when it talks about Breach being invoked and suddenly appearing out of nowhere and the way that people are disappeared by Breach, he makes a strong case that they are supernatural. Which if nothing else creates something of a whiplash by the time you reach the scene you excerpted.

    • Plumber says:

      @Jeremiah,
      It’s been a couple of years (maybe more) since I read The City & The City, and I recall finding it pretty engrossing (I got into it right away, which I never did with Miéville’s Perdido Street Station). The setting was more interesting than the mystery the Inspector was trying to solve, and I never did suss out for sure whether the separation between the cities was fantastic or psychological (the second option is the more interesting one to me).

      I never felt “The Teen Wolf problem”, and in some ways the description of the setting reminded me a little of Borges in flavor somehow, and also a bit of another book called The Shadow of the Wind (which is set in Franco’s Spain).

  31. fion says:

    The Econ and Tech ones are fantastic. 😀

  32. Nick says:

    15. Ginsberg still alive: 50%

    Hmm, someone should probably tell real!Scott that Ginsberg died in 1997. On the bright side, he can dedicate Meditations on Moloch to him.

    29. No more U.S.-Mexico border: 70%

    gpt2!Scott very bullish about annexation!

    36. British press again shows the European Council in the worst light, and only the BBC shows the other way: 12%

    What is the European Council?! And I’m curious whether the low likelihood is because gpt2!Scott believes the British press will favor the Council or the BBC won’t or both.

    87. My neighbor, who lives 10 to 15 minutes from me, turns out to be very nice and helpful: 20%

    gpt2!Scott planning to move to the countryside, huh.

    • bean says:

      gpt2!Scott very bullish about annexation!

      I thought the plan was to kick out all the states that border Mexico.

    • Heterosteus says:

      What is the European Council?!

      The European Council is the assembly of heads of state/government of EU member states that decides on the overall EU policy agenda and has various other powers.

      It is not to be confused with the Council of Europe or the Council of the European Union, which are completely different things that have deliberately been given very similar names to confuse you.

  33. bean says:

    Those are amazingly coherent, and a fair number of the predictions even look to have reasonable numbers attached to them. Were these the first thing it spit out, or did you have to try multiple times?

  34. RalMirrorAd says:

    A previous OT discussed whether millennials were worse off than Boomers/Gen Xers. Marginal Revolution shared this Fed paper, which covers the topic somewhat, but I hesitated to share it on the CW-free open thread since some might argue this discussion is tangentially CW.

    Note I did *not* have time to read the entire article and so can/will not offer it as a slam dunk argument. I figure some people here may be interested in it. Feel free to tear it apart if you notice any methodological errors.

    https://www.federalreserve.gov/econres/feds/files/2018080pap.pdf

    Abstract:

    The economic wellbeing of the millennial generation, which entered its working-age years around the time of the 2007-09 recession, has received considerable attention from economists and the popular press. This chapter compares the socioeconomic and demographic characteristics of millennials with those of earlier generations and compares their income, saving, and consumption expenditures. Relative to members of earlier generations, millennials are more racially diverse, more educated, and more likely to have deferred marriage; these comparisons are continuations of longer-run trends in the population. Millennials are less well off than members of earlier generations when they were young, with lower earnings, fewer assets, and less wealth. For debt, millennials hold levels similar to those of Generation X and more than those of the baby boomers. Conditional on their age and other factors, millennials do not appear to have preferences for consumption that differ significantly from those of earlier generations.

    • Erusian says:

      Whether millennials are better or worse off than previous generations depends entirely on whether you value relative or absolute numbers. (Both perspectives, by the way, have their upsides and downsides and their absurd extremes.)

      Millennials are better off than previous generations in almost every absolute way and worse off in almost every relative way. Millennials do have lower earnings, fewer assets, and less wealth as denominated in dollars. They also have more as denominated in actual goods and services they consume. Basically, the cost of televisions has declined precipitously so they have many more and better televisions but those televisions are worth less than they would have in the 1980s. This is true of almost everything except healthcare, housing, and education. And housing is driven partly by the extreme millennial preference for urban dwelling.

      Expanding on that a little, if you want to experience the ‘old’ economy you can. There are twelve million factory jobs in the United States and twenty-two million government jobs; those are, respectively, the sixth biggest and the biggest sources of employment for Americans. There are also seven million construction jobs, many of them unionized. But you’re not going to get rich, you’re going to be firmly blue collar, and you have to go where the work is. And that is not a trendy coastal city.

      I’m not saying that’s a terrible choice: city life is better in a lot of ways. But the US was less urbanized and concentrated back then. People don’t want the prosperity the Boomers had. They want the high paying jobs and job security and regular work while still living in a trendy, expensive city. And they don’t just want the job to be high paying: they want it to be high paying enough to support them in a much more luxurious lifestyle than the Boomers got.

      You want a 1,750 square foot house in a mid-sized town near a factory? You’re looking at $500k at most. As in, the most expensive house in town. And you can find decent houses for under $100k. I’m not talking about Nowheresville, USA. Some of those places still give away land for free. I’m talking about a small town with multiple factories, a small airport, etc. You can even get a second-tier city like Indianapolis. Despite being a byword for backwardness on the coasts, Indianapolis is the sixteenth biggest city in the United States and the 34th largest Metropolitan Statistical Area. That’s out of three or four hundred, by the way. So Indianapolis is in the top 10% of cities in the US by population. And there are plenty of jobs in both those places. That is how the Boomers lived for the most part.

      There’s also a long term trend that earnings tend to concentrate in later years, which is the effect you’d expect to see as labor becomes less important and skills become more important. Smaller versions of this effect affected earlier generations but it’s becoming more extreme. We have yet to see if the same will happen to millennials.

      Conditional on their age and other factors, millennials do not appear to have preferences for consumption that differ significantly from those of earlier generations.

      This is true macroeconomically and false microeconomically in my experience. Millennials drive fast casual chains and kill a lot of sit downs, for example, but both spend a roughly proportionate amount on eating out for their wealth level.

      • cassander says:

        Millennials do have lower earnings, fewer assets, and less wealth as denominated in dollars. They also have more as denominated in actual goods and services they consume. Basically, the cost of televisions has declined precipitously so they have many more and better televisions but those televisions are worth less than they would have in the 1980s.

        This is contradictory. “Earnings” is only meaningful in the sense of how much stuff it buys you. If group A is getting more stuff than group B (assuming that savings/debt levels are constant), then by definition it’s earning more. If you think their incomes are lower, you’re overstating inflation.

        This is true of almost everything except healthcare, housing, and education. And housing is driven partly by the extreme millennial preference for urban dwelling.

        For all three of those goods, what people are buying today is dramatically better than what was bought 30 years ago. The average house has something like twice the square footage per person, for example. Healthcare is immeasurably better. And education is a lot more luxurious, even if it’s not actually teaching you more.

        • Erusian says:

          This is contradictory. “Earnings” is only a meaningful in the sense of how much stuff it buys you. If group A is getting more stuff than group B (assuming that savings/debt levels are constant), then by definition, it’s earning more. If you think their incomes are lower, you’re overstating inflation.

          It would be contradictory if we really had consistent definitions for comparison across time and space. We don’t. It’s why we have a variety of measures.

          Millennials are significantly more likely to own a car that is nicer than anything their parents could have had. Yet the car is worth less in adjusted dollar terms than a significantly inferior car would have been as a new car in 1980. So the millennial has less in assets. Both are valid measurements: do you want to measure the amount and quality of goods the person is consuming or do you want to measure the degree of economic value it holds?

          Both can certainly be relevant. To give an example, a car that can be sold to cover a year’s rent is more valuable for that use than a car that covers half a month’s rent. But how far, fast, and safe it goes is also important.

          For all three of those goods, what people are buying today is dramatically better than what was bought 30 years ago. the average house has something like twice the square footage per person, for example. Healthcare is immeasurably better. And education is a lot more luxurious, even if it’s not actually teaching you more.

          True. They’ve gotten better but the price has risen, whereas for most other things they’ve gotten better and the price has dropped.

          • baconbits9 says:

            Yes. If we stripped out our current technology expenses (home internet, cell phone plans, and streaming services) and swapped them for a land line + broadcast TV + 1 car, we would have a fair amount more money to save.

          • cassander says:

            Millennials are significantly more likely to own a car that is nicer than anything their parents could have had. Yet the car is worth less in adjusted dollar terms than a significantly inferior car would have been as a new car in 1980. So the millennial has less assets. Both are valid measurements: do you want to measure the amount and quality of goods the person is consuming or do you want to measure the degree of economic value it holds?

            My point is that if your dollar adjusted price is lower for a car that is demonstrably better, then your dollar adjustment is off. The millennial doesn’t have less valuable assets, he has more valuable assets that look less valuable because the standard measures overstate inflation. I grant you that the situation gets more complicated when the prices of goods change at different rates, but the effect shouldn’t be too bad as long as they don’t change radically (i.e. cars suddenly cost less than TVs).

            >True. They’ve gotten better but the price has risen, whereas for most other things they’ve gotten better and the price has dropped.

            If everyone in the previous generation bought Fords and everyone today is buying Cadillacs, I’m not sure it’s fair to say that the price of cars has gone up. Spending on cars has gone up, but not prices.

          • March says:

            Depends on whether you can still get those Fords at those prices.

            I was thinking about what baconbits9 said, and my fancy optical fiber internet with VOIP is just as expensive as land line + cable TV and no internet. (Broadcast TV has apparently gone the way of the dinosaurs where I am.)

            Getting rid of private cell phones saves about 40 bucks a month, no more streaming subscriptions another 30. We already have 1 car.

            Sure, 70 bucks a month isn’t nothing (adds up to 42k over 50 years). But it’s kinda negligible if you look at opportunity costs, especially of the cell phones.

          • Hoopyfreud says:

            The millennial doesn’t have less valuable assets, he has more valuable assets that look less valuable because the standard measures overstate inflation. I grant you that the situation gets more complicated when the prices of goods change at different rates, but the effect shouldn’t be too bad as long as they don’t change radically (i.e. cars suddenly cost less than TVs).

            Standard measures overstate inflation (when they do) because the prices of goods change at different rates.
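
            A toy illustration of that point, with completely invented prices and quantities: a fixed-basket (Laspeyres) index prices the old basket at new prices, while a current-basket (Paasche) index prices the basket people actually buy now. When relative prices diverge and people shift toward whatever got cheaper, the fixed-basket measure reads higher.

                # Invented two-good economy: TVs get much cheaper, rent gets dearer,
                # and the household shifts spending toward TVs as a result.
                old_prices = {"tv": 500.0, "rent": 300.0}
                new_prices = {"tv": 200.0, "rent": 450.0}
                old_qty = {"tv": 1, "rent": 12}   # base-period basket
                new_qty = {"tv": 3, "rent": 12}   # current-period basket

                def basket_cost(prices, qty):
                    return sum(prices[g] * qty[g] for g in qty)

                laspeyres = basket_cost(new_prices, old_qty) / basket_cost(old_prices, old_qty)
                paasche = basket_cost(new_prices, new_qty) / basket_cost(old_prices, new_qty)

                print(f"fixed-basket inflation:   {laspeyres - 1:+.1%}")  # about +37%
                print(f"current-basket inflation: {paasche - 1:+.1%}")    # about +18%

            This only captures substitution bias; quality change (the point above about cars simply being better now) is a separate issue, which hedonic adjustment in the official indices tries to handle for some categories, imperfectly.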

          • acymetric says:

            So, I have some anecdata that is very specific to this conversation. It may not be representative, but it is enough to make me stop and question how we are valuing things.

            In the late 70s, my mom worked a summer at a factory out of high school, and at the end of the summer bought a brand new Camaro cash.

            In the mid 2000s, I also worked for a summer at a factory after school. Had I bought a car (I didn’t) with that money, I would have been looking at something like a used early 90s Civic with 150+k miles.

            It would be hard to convince me that my situation constitutes anything resembling “better off”.

          • baconbits9 says:

            When I do the calculation for us it’s between 1,000 and 2,000 a year in savings; compounding $1,500 a year at 5% interest is >$100,000 in 30 years.
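
            For anyone who wants to check the arithmetic, this is the standard future-value-of-an-annuity calculation (a sketch; the constant 5% return and level deposits are of course assumptions):

                def future_value(annual_deposit, rate, years, deposit_at_start=False):
                    """Future value of a level annual deposit at a constant compound rate."""
                    factor = ((1 + rate) ** years - 1) / rate
                    if deposit_at_start:
                        factor *= 1 + rate
                    return annual_deposit * factor

                print(round(future_value(1500, 0.05, 30)))                         # ~99,658
                print(round(future_value(1500, 0.05, 30, deposit_at_start=True)))  # ~104,641

            So roughly $100k either way: a little under with end-of-year deposits, a little over with start-of-year deposits.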

          • The Nybbler says:

            Price of a base Camaro in 1978 was $4600. So your mom had to be netting $9.60/hr to get enough for a new one in three months.

            This link has wages and salaries per full time equivalent in 1978. She would have been making considerably more than average for manufacturing.

          • baconbits9 says:

            In the late 70s, my mom worked a summer at a factory out of high school, and at the end of the summer bought a brand new Camaro cash.

            In the mid 2000s, I also worked for a summer at a factory after school. Had I bought a car (I didn’t) with that money, I would have been looking at something like a used early 90s Civic with 150+k miles.

            It would be hard to convince me that my situation constitutes anything resembling “better off”.

            This sounds implausible. Looking up 1975 car prices, it appears that a new 1975 Camaro cost $3,800-$4,000, and average hourly wages (non-supervisory) were under $5 an hour. At 500 hours (full time for 3 months) and zero taxes your mom would be way short after a single summer job. After taxes she would have had to earn more than 2X the average wage in 1975 (and checking the 1978 numbers she is further off, so 1975 doesn’t seem like a particularly bad year).
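
            Making that arithmetic explicit (the sticker prices are the ones quoted in this subthread; the hours and tax rate are invented assumptions):

                # How much per hour, gross, does one summer of full-time work need to
                # pay to cover a new Camaro? Prices as quoted above; hours/taxes assumed.
                hours = 13 * 40          # about 13 weeks of full-time work
                take_home = 0.85         # assume roughly 15% lost to taxes

                for price in (3800, 4600):
                    gross_wage = price / (hours * take_home)
                    print(f"${price} car needs about ${gross_wage:.2f}/hr gross")
                # vs. average non-supervisory wages of roughly $5/hr in 1975.

            Either way it comes out well above the average wages cited above, which is the point.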

          • albatross11 says:

            I think cars are in general more expensive now (in real terms) because of mandatory safety features–airbags, pretensioners, etc. Perhaps for other reasons, too–I’m not sure. (Longer expected lifetime of the product? Environmental regulations?)

          • greenwoodjw says:

            I have to push back against the wage counter-claim. The chart includes all non-supervisory positions in all industries. Factory work has always been more remunerative than average. Definitely more than a janitor or stockboy.

          • acymetric says:

            @baconbits9

            The industry was a higher-paying sector, and while I guess if we take “summer job” literally it has to be exactly 3 months, a post-graduation summer job might reasonably last 4 months or so.

            I’ll also note that the definition they use for “Production and non-supervisory” positions is a weird one (it includes, in addition to floor workers, pretty much all office staff and, oxymoronically, the actual floor supervisors/team leaders), and for that reason I am somewhat skeptical that the past or present numbers are relevant for looking at actual “on the floor” factory work.

            I trust the information from Nybbler’s link slightly more, which suggests overall manufacturing averages in the $6-8 range, with the average for top end industries being more in line with $8-10.

          • acymetric says:

            Production and related employees include working supervisors and all nonsupervisory employees (including group leaders and trainees) engaged in fabricating, processing, assembling, inspecting, receiving, storing, handling, packing, warehousing, shipping, trucking, hauling, maintenance, repair, janitorial, guard services, product development, auxiliary production for plant’s own use (for example, power plant), recordkeeping, and other services closely associated with the above production operations.
            Nonsupervisory employees include those individuals in private, service-providing industries who are not above the working-supervisor level. This group includes individuals such as office and clerical workers, repairers, salespersons, operators, drivers, physicians, lawyers, accountants, nurses, social workers, research aides, teachers, drafters, photographers, beauticians, musicians, restaurant workers, custodial workers, attendants, line installers and repairers, laborers, janitors, guards, and other employees at similar occupational levels whose services are closely associated with those of the employees listed.

            This is the set that was included in those wages. Much too broad a data set to suggest that a specific claim about a specific type of job in the manufacturing industry is false.

          • baconbits9 says:

            This is the set that was included in those wages. Much too broad a data set to suggest that a specific claim about a specific type of job in the manufacturing industry is false.

            The implication of the claim is that an inexperienced worker taking a short-term position landed a job that paid 2-4x the average wage, and that taking a similar such position now nets you far less.

            Now it is possible that a worker in the 1970s could land a well above average paying job right out of high school: they could perhaps have worked every summer there for years plus weekends, or happened to live near one of the highest paying industries in the area, or been related to someone with influence in hiring, or some combination. All of these boil down to ‘my mom was luckier/harder working than I was’, and that makes the ‘my mom worked in a factory for 3-4 months and bought a new car, I worked in a factory for 3-4 months and all I got was this lousy T-shirt’ comparison false, as you are no longer comparing similar things. Likewise you can probably find some 18 year old this summer who will make far more than the average because he lucked into caddying at a country club for people who routinely tip him hundreds for a round of golf.

      • RalMirrorAd says:

        Bear in mind that if *real* incomes are being compared then it means inflation was adjusted for. And inflation adjusted means, at least from the perspective of BLS or whoever calculates the inflation index the fed uses, that hedonic adjustments have already been made.

        • Erusian says:

          Bear in mind that if *real* incomes are being compared then it means inflation was adjusted for. And inflation adjusted means, at least from the perspective of BLS or whoever calculates the inflation index the fed uses, that hedonic adjustments have already been made.

          Inflation isn’t meant to have anything to do with hedons. It just looks at money, not how pleasurable money is. A hedon measurement would be very concerned with, for example, average hours worked.

          On top of that, “adjusting for inflation” is much more complex than can practically be borne. Adjusting for inflation is useful for a variety of economic purposes, but it’s not a great way to get direct comparisons, especially over long time horizons. This can be seen in how, when we adjust incomes or prices, we often still get absurd results very quickly as we go into the past.

  35. g says:

    There was some discussion in a recent open thread about whether it’s correct to say that the UK government has been practising “austerity”. This link may be of interest; the UN commissioned a report on more or less exactly that question, and it concluded that “much of the glue that has held British society together since the Second World War has been deliberately removed and replaced with a harsh and uncaring ethos” and that “UK standards of well-being have descended precipitately in a remarkably short period of time, as a result of deliberate policy choices made when many other options were available”.

    I have no inside information on relevant political biases of either the author (one Philip Alston) or the United Nations.

    • brad says:

      Seems pretty bizarre to me to have that report come from a UN organ. First, what part of the UN’s mission is this report even remotely supposed to support, exactly? Second, how does a global organization use a definition of poverty that’s so parochial?

      • g says:

        what part of the UN’s mission is this report even remotely supposed to support exactly?

        This part:

        To achieve international co-operation in solving international problems of an economic, social, cultural, or humanitarian character, and in promoting and encouraging respect for human rights and for fundamental freedoms for all without distinction as to race, sex, language, or religion

        That’s from Article 1 of the UN’s Charter. More specifically, the introduction to Alston’s report says:

        The purpose of the visit was to report to the Human Rights Council on the extent to which the Government’s policies and programmes relating to extreme poverty are consistent with its human rights obligations and to offer constructive recommendations to the Government and other stakeholders.

        Articles 22-25 of the UN’s Universal Declaration of Human Rights indicate the UN’s view on the relationship between poverty and human rights.

        Everyone, as a member of society, has the right to social security and is entitled to realization, through national effort and international co-operation and in accordance with the organization and resources of each State, of the economic, social and cultural rights indispensable for his dignity and the free development of his personality.

        (The bit about “in accordance with the … resources of each State” is the justification for calling things “extreme poverty” in the UK that might count as luxury in much poorer countries.)

        Everyone who works has the right to just and favourable remuneration ensuring for himself and his family an existence worthy of human dignity, and supplemented, if necessary, by other means of social protection.

        (The UN considers that there need to be “means of social protection” that ensure that everyone gets “an existence worthy of human dignity”. Again, that’s inevitably relative to the living conditions of other people nearby.)

        Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control.

        To summarize: the UN’s mission includes protecting human rights, and for a long time the UN has considered human rights to include some degree of economic security. The degree of economic security it’s reasonable to expect depends, among other things, on the wealth of the country that’s supposed to be providing it and of the other people around you. The UK is quite a rich country and many of its people are very well off, and that’s why the report counts things as “extreme poverty” that wouldn’t be considered unacceptable in a very poor country.

        • Aapje says:

          The UN considers that there need to be “means of social protection” that ensure that everyone gets “an existence worthy of human dignity”. Again, that’s inevitably relative to the living conditions of other people nearby.

          I don’t see how that is “inevitably relative”, at all. If a major disaster happens that disrupts the food supply and causes most of society to end up starving in the streets, will it then suddenly become dignified to starve in the streets? After all, everyone does it…

          Ultimately, what is or is not extreme poverty vs just poverty vs not well off is extremely subjective. What right & justification does the UN or a person mandated by the UN have to lecture countries on what level of inequality they should consider to be reasonable?

          Doesn’t the very fact that they feel obliged to lecture countries show that their values are not universal? After all, if they were, these countries would already have made the policy that the UN wants…

        • brad says:

          It’s things like this that make it very difficult to defend the US government covering such a large portion of the UN budget. What a colossal waste of time and money.

          Did anyone consider the question of whether the UK would still be “quite a rich country” if it took the advice of parasites like Alston?

    • Robert Jones says:

      I find it very hard to credit Alston’s report, because if you’re a UN rapporteur on extreme poverty, the UK is obviously nowhere near the top of the list of problems. I have the impression that UN organs are unfortunately prone to capture by groups pushing particular agendas.

      UK public sector spending has fallen from a high of 44.9% of GDP in 2010 to 38.5% of GDP in 2018. Most of this is due to GDP growth. In real terms, total expenditure fell from £712.5bn to £707.8bn, i.e. by less than 1%.

      To me “austerity” suggests something more drastic than a fall of 1% over several years. I think that language is better suited to the situation in Greece, for example.

      • baconbits9 says:

        Additionally that 38.5% is only low compared to 2010, it would be basically the high point from 1990 to 2008.

      • Perico says:

        I’m not sure total expenditure is a particularly useful metric here; at the very least, you should adjust for population. UK population increased by 6% during that period, so expenditure per capita fell by slightly more than 6%: not a catastrophic number, but more significant than the initial 1%. And then you have a lot more old people around (population over 65 increased by 18%). This should also increase public spending.
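
        For what it’s worth, the per-capita arithmetic (using the figures quoted in this subthread) works out like this:

            # Figures quoted above: real total spending roughly -1%, population +6%.
            spending_change = -0.01
            population_change = 0.06

            per_capita_change = (1 + spending_change) / (1 + population_change) - 1
            print(f"real spending per capita: {per_capita_change:+.1%}")  # about -6.6%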

        • cassander says:

            the relevant comparison isn’t to the high point of spending during the crisis, but to the pre-crisis level.

        • Robert Jones says:

          I’m not sure about this: if government expenditure remained constant in real terms, but the population increased, that might be bad, but would it be austerity?

          • HeelBearCub says:

            If we reduced SS checks by a real 5.67% would it be an austerity policy? Because that is what it would take to hold SS constant in payouts while increasing the senior population by 6%. I pretty much guarantee the seniors not getting COLA increases would feel pinched by this. Seniors are already particularly prone to notice inflation.

            The baseline budget before the austerity would have included raising expenditures for all sorts of things based on population growth. When these expenditures don’t increase, it is austerity.

            ETA: Put another way, if you stop buying 85% fat-free ground beef and now only buy 70% because the 70% now costs as much as the 85% did last year … that is a form of personal austerity (assuming that beef is only rising at the same general inflation rate).

          • Robert Jones says:

            This is a tangent, but is ground beef sold as “85% fat free” where you are? Clearly it’s equivalent to 15% fat, but it seems an odd way to look at it.

          • HeelBearCub says:

            @Robert Jones:
            Yes, it is sold that way, at least in my neck of the US.

            I’d like to say it’s because they like to make claims about a “fat free” product, but I think it’s simply because they like bigger numbers to indicate higher quality (and higher expense).

          • greenwoodjw says:

            I understand that to mean “85% of the weight is meat and not fat”

          • HeelBearCub says:

            Hmmm, either I am just backforming a memory, or they may have changed things up. Seems like it’s not 85% fat free but rather 85% lean ground beef.

      • g says:

        I don’t think total spending is a useful metric here. Thought experiment: the US or UK or France or whoever completely abolishes unemployment benefits, pensions, government-provided healthcare, etc., etc., etc., and transfers all that money to the military budget. Spending doesn’t change, but a lot of people are much worse off.

        To be clear, that’s a thought experiment and not a claim that the UK government has done anything much like that. But, as per my comments in the earlier open thread, it does seem as if spending whose purpose is to help needy and vulnerable people has been reduced more than you would think from looking only at the overall figures.

        • cassander says:

          that would be a reasonable point if the UK had massively increased spending on the military in recent years. It hasn’t. The overwhelming share of the money the UK, or any developed country, spends is ostensibly going to the needy and vulnerable, so overall spending is a very reasonable proxy, particularly when there hasn’t been a serious change in the minority of the budget that doesn’t go to those things.

        • baconbits9 says:

          First you need to realize that your positions are not universal. The claims of austerity/stimulus are often linked to economic theories (correct or incorrect) that total spending does matter, regardless of the sector that it is spent in.

          You also either have to accept that total spending does matter OR choose to believe that governments have near infinite ability to add on debt. The UK’s debt to GDP ratio shot way up between 2007 and 2014, and has not declined with the current ‘lower’ level of spending. Lower spending rates were likely an inevitability, and given that most spending in the UK is on social issues there was going to be ‘austerity’ relative to 2010 spending levels at some point very shortly after 2010.

        • g says:

          me (emphasis added):

          Thought experiment: […] transfers all that money to the military budget. […] To be clear, that’s a thought experiment and not a claim that the UK government has done anything much like that.

          cassander:

          that would be a reasonable point if the UK had massively increased spending on the military in recent years.

          Oh, come on.

          As I said last time around: welfare spending per capita has gone down a lot in real terms. Pensions have gone up. Education has gone down. Most other things have remained about the same.

          Welfare spending is about 1/7 of the UK budget. (See e.g. here where you can find detailed numbers too.) It can go down a great deal without looking like a huge proportional decrease in the whole budget. Which, as it happens, it has. According to the site I just linked to, central government spending on welfare (I’m focusing on central government here because that’s what the government actually controls) has gone from a little over £1000 per capita in 2012 to a little under £870 per capita in 2018; inflation from 2012 to 2018 is about 10%, so central government welfare spending is down ~25% in real terms.
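
            (As a minimal sketch of that real-terms calculation, in Python, using only the round per-capita figures and the ~10% inflation quoted above; with these round inputs the decline comes out nearer 20% than 25%, and the exact figure depends on the unrounded numbers on the linked site.)

```python
# Minimal sketch of the inflation adjustment, using the rough figures quoted
# above (approx. GBP per capita, central government welfare spending).
nominal_2012 = 1000.0            # "a little over £1000" in 2012
nominal_2018 = 870.0             # "a little under £870" in 2018
inflation_2012_to_2018 = 0.10    # "about 10%" cumulative inflation

# Express the 2018 figure in 2012 pounds, then compare it with 2012 spending.
real_2018_in_2012_pounds = nominal_2018 / (1 + inflation_2012_to_2018)
change = (real_2018_in_2012_pounds - nominal_2012) / nominal_2012
print(f"real-terms change: {change:.1%}")   # about -21% with these round numbers
```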

          @baconbits9, I’m not claiming that total spending doesn’t matter, only that it’s not what people are generally talking about when they say “austerity”. Nor, for that matter, am I saying that what the UK government is doing is a bad idea. Perhaps it’s necessary. Perhaps all the alternatives are worse. But, necessary or not, wise or not, what the government has been doing over the last several years is pretty much what people are saying it’s been doing when they talk about “austerity”.

          • Paul Zrimsek says:

            Usually when I see someone talk about “austerity” they’re not focused on welfare clients in particular, but are instead invoking some sort of vulgar-Keynesian depressing effect on the economy as a whole, based on a decrease (real or imagined) in government spending as a whole.

          • cassander says:

            Oh, come on.

            Not oh, come on. We know that didn’t happen, so saying that it’s not mathematically impossible for something like it to happen isn’t a serious objection.

            (I’m focusing on central government here because that’s what the government actually controls)

            I believe that the UK central government controls much of the money that goes to localities in the UK. Even London gets 1/4 of its budget as grants and subsidies.

            Welfare spending is about 1/7 of the UK budget.

            that figure includes income support and unemployment insurance payments. Its going down as unemployment falls isn’t austerity. And, again, you’re comparing figures after a massive increase to their current levels, not to the levels before the crisis started. That effectively counts the unwinding of a temporary increase in benefits (whether automatic or the result of deliberate policy) as “austerity”, which it clearly isn’t.

          • g says:

            Paul: Apparently we have different experiences of people talking about “austerity”, then. I certainly have heard instances with the emphasis you describe; e.g., this Paul Krugman piece. But most of the time when I hear it it’s complaining about welfare cuts. (Often by people directly affected because they, or people they know, are recipients of some sort of welfare that’s been cut.)

            cassander: We know that the specific thing I mentioned as a thought experiment didn’t happen, which is why I emphasized that it was only a thought experiment. I’ve provided links to two different sources that break down UK government spending and claim that welfare spending has gone down much more, proportionally, than overall government spending has: in other words, that something analogous to the thing I described has in fact happened.

            You’re correct that local spending isn’t entirely uninfluenced by central government.

            I take your point about spending in 2012 possibly being elevated because of the aftermath of the 2008 crisis, etc. Let’s compare against shortly before that crisis; say 2006. (No particular reason for choosing 2006 rather than 2005 or 2007; I haven’t compared the numbers or anything and am not cherry-picking.) Unemployment in 2006 was 5.2%-5.4%. 2016 was about the same. So let’s compare 2006 and 2016. According to ukpublicspending.co.uk, central government welfare spending in 2006 was £717 per capita, made up of £353 for “family and children”, £238 for “social exclusion n.e.c.”, £66 for unemployment, £50 for “social protection n.e.c.”, and £11 for housing. Again, choosing central rather than local or total because — though, as you point out, central government does influence local — I think central government spending gives the clearest indication of what the Westminster government is trying to do. Actually, the only one of those figures that isn’t zero for local government is the “social protection” one, which is £624.

            OK, how does that compare to 2016? First of all, £1 in 2006 ~= £1.33 in 2016. (I’m using the Bank of England’s inflation calculator.) So 2016 equivalents of the figures above would be £954 total, £469 F&C, £317 soc exc, £88 unemp, £67 soc prot, £15 housing. (And £830 for local soc prot.) Actual 2016 figures: £829 total (down 8%), £239 F&C, £481 soc exc (I think some things must have been transferred from one of these to the other, since these changes are so huge; housing expenditure is up 4x, presumably also the result of some sort of reclassification; sum of these three is down ~6%), £41 unemployment (halved since 2006 despite almost equal unemployment rates), £58 soc prot (down 13%). Local government spending again zero except for “social protection”, up a couple of percent at £849. I interpret the small increase as local government trying to fill holes left by central government cuts; of course I could be wrong about that and will welcome corrections.
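
            (A rough sketch of that category-by-category comparison, using only the rounded per-capita figures and the 1.33 inflation factor quoted above; illustrative only, not a rerun of the ukpublicspending.co.uk data. Housing is left out of the 2016 side because of the apparent reclassification noted above.)

```python
# Rounded per-capita figures quoted in the comment above (GBP, central government).
INFLATION_2006_TO_2016 = 1.33

spend_2006 = {
    "family and children": 353,
    "social exclusion n.e.c.": 238,
    "unemployment": 66,
    "social protection n.e.c.": 50,
}
spend_2016 = {
    "family and children": 239,
    "social exclusion n.e.c.": 481,
    "unemployment": 41,
    "social protection n.e.c.": 58,
}

for category, old in spend_2006.items():
    adjusted = old * INFLATION_2006_TO_2016   # 2006 figure expressed in 2016 pounds
    new = spend_2016[category]
    change = (new - adjusted) / adjusted
    print(f"{category}: £{adjusted:.0f} (2006, in 2016 pounds) vs £{new} actual ({change:+.0%})")
```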

            If I’m reading the ONS data correctly, after adjusting for inflation GDP per capita in 2016 was just marginally (~1%) higher than in 2006.

            These changes are smaller than the ones since 2012, suggesting that indeed the very large 2012-2018 changes are partially explained by falling unemployment and/or 2012 figures still being large after the response to the 2008 crisis. But an 8% reduction is not at all negligible.

            Incidentally, if we look at 2018 instead of 2016, central government welfare spending is down 15% on 2006 and local government welfare spending is down 7%. And while unemployment was indeed lower in 2018 than in 2016, which presumably means the government is paying less in unemployment benefits (note: as you can see from the figures above, those aren’t a huge fraction of welfare spending), it is well known that the UK has a very high rate of underemployment and I don’t think it’s safe to assume that reductions in unemployment rate translate directly to reductions in need for government assistance other than unemployment benefits themselves.

            Again, to be clear, I’m not making any judgement on whether these reductions are a good thing or a bad thing on net. Just pointing out that they are real and not negligibly small.

      • g says:

        if you’re a UN rapporteur on extreme poverty, the UK is obviously nowhere near the top of the list of problems

        The UN doesn’t work only on things near the top of the list of problems. Maybe it should, but it doesn’t. (Nor do most nations, organizations, or individuals. Here I am, writing a comment on SSC, which is certainly nowhere near the top of any reasonable list of most efficient ways to improve my life.)

        • Aapje says:

          Indeed, they prioritize the interests and biases of the people who grabbed power, with minimal democratic legitimacy.

    • cassander says:

      Any report that uses language like “much of the glue that has held British society together since the Second World War has been deliberately removed and replaced with a harsh and uncaring ethos” is basically self refuting. There isn’t even a pretense of objectivity at that point.

      • rlms says:

        It’s certainly not objective (as in neutral) but that doesn’t mean it can’t be true.

        • cassander says:

          It’s demonstrably not true. As has been pointed out, the British government is basically where it was pre-crisis in terms of social spending. I’d bet the author has been saying the same thing since Thatcher was elected.

          • rlms says:

            If you assume the author is making false claims about spending, then indeed they are making demonstrably false claims. But you can make more charitable assumptions: it is possible to make a system more “harsh and uncaring” without decreasing spending, for instance by shifting spending from actually giving poor people money to doing more stringent checks on deservingness. Unfortunately the UN website seems to be down so we can’t read the actual report.

          • cassander says:

            @rlms

            I’m little inclined to be charitable with people who describe those they disagree with politically as motivated by evil, which is more or less what the author here is doing. The “massive disinvestment” he claims is apparently not massive enough to show up in government spending figures, and if he’s going to be that hyperbolic in his prose, I have little faith he’ll accurately report on other issues.

  36. johan_larson says:

    Is Toronto some sort of major center for Hasidic Jews, the fellows with the black suits, broad-brimmed hats, beards and sidelocks? I always see some of them in the airport security lines. I haven’t noticed anything similar when I’ve flown out of SFO, San Jose, or Seattle.

    • brad says:

      As far as I know Montreal has the largest orthodox population in Canada, but Toronto is number two. BTW the word you want is haredi. Hasids are a subset of haredi and it’s somewhat difficult for outsiders to distinguish among different haredi groups (it’s all about the hat).

      In the United States, the greater NYC area has far and away the most, with Baltimore and LA having non-trivial communities. Aside from the occasional Lubavitcher on a mission, I haven’t seen or heard about any haredi in the PNW.

      • dndnrsn says:

        Toronto now has double the Jews Montreal does – where once Montreal was the Canadian city with the largest Jewish population. I don’t know what the balance of different denominations is, though. Wikipedia claims Orthodox Judaism was relatively stronger in Montreal than Toronto, but doesn’t give any figures to show how it is today.

  37. johan_larson says:

    The final two seasons of Game of Thrones were widely considered sub-par. The criticism starts at “abbreviated” and gets steadily worse from there. Time for a do-over.

    Let’s assume you have permission and budget to completely remake seasons seven and eight of the show. All the principals are willing to return. That means two full seasons of ten episodes if you’re careful with the budget, or maybe half that if you want an all-singing all-dancing CGI parade of wonders.

    At the end of season six, in the episode “The Winds of Winter”, Cersei consolidated her position as queen of the Seven Kingdoms by blowing up the Great Sept, including the Faith Militant and key figures of the Tyrell family, namely Queen Margaery and Mace Tyrell. In response, Highgarden and Dorne united with Daenerys against the Lannisters. And Daenerys set out for Westeros with a massive fleet. (A more detailed account of the episode is available here.)

    Then what happens?

    • Nick says:

      It’s unfixable—the rot had already set in. Re-adapt starting back another season or two with proper CGI of everyone once the technology catches up; we can even let the budget collect interest until then. Probably best to also wait until we have at least the sixth book, or more likely, until a competent medium can quiz Martin on the details.

      If we did start earlier:
      Xrrc Nrtba nf n punenpgre. Unir uvz gnxr Xvat’f Ynaqvat nsgre Fgbez’f Raq. Guhf Qnrarelf’f shel naq fhqqra oheavat bs Xvat’f Ynaqvat vf n erfhyg bs frrvat n qvssrerag Gnetnelra ba gur guebar. Erzbir gur cerfrag qnl Avtug Xvat, naq unir Rheba hfr gur erny Ubea bs Jvagre, juvpu oevatf qbja gur Jnyy; ur cebonoyl zrrgf ure naq gevrf gb gnxr ure qentbaf njnl sebz ure, orsber be (zber yvxryl?) whfg nsgre gur oheavat. Zrnajuvyr va gur Abegu, Zryvfnaqer oevatf onpx Wba, naq gur Onggyr bs gur Onfgneqf vf jba jvgu gur fnzr gvzryl vagreiragvba ol Fnafn naq Yvggyrsvatre. Gurl ghea gur gnoyrf fbzrubj ba Yvggyrsvatre. Qnrarelf, sbyybjvat gur oheavat bs Xvat’f Ynaqvat, urnqf abegu gb svtug gur qrnq nf n fbeg bs cranapr. Fur nyyvrf jvgu gur fpnggrerq erzanagf bs Fgnaavf’f nezl, gur jvyqyvatf’, naq gur Avtug’f Jngpu, naq gurl punyyratr gur Bguref ng Jvagresryy.

      Prefrv qbrf trg xvyyrq ol Wnvzr. V qba’g xabj jura; znlor fur tbrf penml orsber Nrtba neevirf. V guvax ur naq Oevraar, vs rvgure bs gurz fheivir jung tbrf qbja va Xvat’f Ynaqvat, jbhyq svtug jvgu gur yvivat nf jryy.

      Oena tbrf vagb gur cnfg naq orpbzrf ng yrnfg bar, vs abg obgu, bs gur Avtug Xvat naq Oena gur Ohvyqre. V guvax Oena zvtug unir gevrq gb orpbzr gur Avtug Xvat gb cerirag guvatf, ohg fperjf hc naq pnhfrf gurz. Oena gur Ohvyqre vf uvf frpbaq nggrzcg, n jnl bs zvgvtngvat uvf erfcbafvovyvgl sbe gur Avtug Xvat. V guvax ur fubhyq cebonoyl or cerfrag sbe gur onggyr jvgu gur yvivat, ohg V qba’g xabj jung ur qbrf.

      Neln znxrf vg sbe gur onggyr jvgu gur yvivat. V qba’g xabj jung ure ebyr fubhyq or; V’z abg fher gurer fubhyq rira or n Avtug Xvat va gur cerfrag, ohg vs gurer vf, vg frrzf nccebcevngr gung fur jbhyq trg gur xvyy. Creuncf jrnevat gur snpr bs bar bs gur qrnq, rira, vs gur Avtug Xvat unf gur bccbeghavgl gb envfr gurz yvxr va gur fubj.

      Gurer vf ab xvat nsgrejneq; pregnvayl abg nalbar jvgu n “pynvz” gb gur guebar, yvxr n Gnetnelra be Onengurba. Gur frira xvatqbzf fcyvg naq gur Veba Guebar vf zrygrq, vs qentbasver unqa’g nyernql qbar gur wbo.

      • brad says:

        I didn’t read the rot13, but agree that it was too late. As soon as they went away from the books the quality went way downhill. It was the partnership between GRRM and D+D that worked. Take out one leg and we were left with crap.

        I’m not just saying that because I always side with the books. I thought the extended edition of The Fellowship of the Ring was quite good notwithstanding leaving out poor Tom Bombadil.

        • cassander says:

          Definitely this. D&D were fantastic adapters, and poor authors. And to be fair, they weren’t supposed to end up being authors.

      • jaimeastorga2000 says:

        Re-adapt starting back another season or two with proper CGI of everyone once the technology catches up

        Looking at this ad, I think the CGI might already be there. I honestly thought the models were the actors at first.

      • Gobbobobble says:

        Oena tbrf vagb gur cnfg naq orpbzrf ng yrnfg bar, vs abg obgu, bs gur Avtug Xvat naq Oena gur Ohvyqre. V guvax Oena zvtug unir gevrq gb orpbzr gur Avtug Xvat gb cerirag guvatf, ohg fperjf hc naq pnhfrf gurz. Oena gur Ohvyqre vf uvf frpbaq nggrzcg, n jnl bs zvgvtngvat uvf erfcbafvovyvgl sbe gur Avtug Xvat. V guvax ur fubhyq cebonoyl or cerfrag sbe gur onggyr jvgu gur yvivat, ohg V qba’g xabj jung ur qbrf.

        Oh sweet Seven, please no. Gvzr geniry cheapens everything it touches.

        • Nick says:

          Maybe, but I was trying to execute Martin’s vision here, and I’m not sure how else we reconcile gur pbasvezngvba sebz Znegva gung gur Ubqbe guvat jnf va uvf bevtvany cynaf gjragl lrnef ntb. Gung pbasvezf Oena pna qb guvf, naq V qba’g xabj ubj ryfr gb erpbapvyr gur ercrngrq rzcunfrf ba “zlguvpny” svtherf yvxr Oena gur Ohvyqre (naq gur znal bgure napvrag Oenaqba Fgnexf) rvgure. Naq gou V yvxr gur gurbel gung Oena vf fbzrubj cnegyl erfcbafvoyr sbe gur Avtug Xvat qhr gb zrqqyvat, gubhtu V’z abg fher vg’f pburerag rabhtu gb or pnaba.

        • greenwoodjw says:

          If it’s used as an undo button, it breaks the story. Some uses (like in the original Red vs. Blue) actually work really well.

    • Tatterdemalion says:

      Exactly the same things, but differently paced in season eight.

      I really liked the narrative beats that the last two seasons hit, but towards the end some of them were too close together and others too far apart. So I’d still tell the same two stories: one about the living coming together to defeat the dead, and the other about Daenerys taking on Cersei, going bad in the process and then being killed by Jon, but rather than devoting three episodes to one and then three episodes to the other, I’d have preparations to fight the dead hampered by Cersei’s treachery and Dany’s slow darkening. King’s Landing burns in episode three or four, the battle of Winterfell in four or five, and then the last episode or two of Dany going mad, Jon killing her and the aftermath.

    • dndnrsn says:

      The show just wasn’t as good after it left the books behind, but there were ways it was bad that it didn’t need to be. The pacing didn’t need to be as bad as it was. They didn’t need to have the “rules” of the world change constantly. They didn’t need to do slow-mo will-important-character-die (answer: usually no) stuff. The show didn’t need to introduce over-the-top one-note villains who don’t really fit with everything else (Ramsay, and to a considerably greater extent Euron). I’m sure better writing could have saved things. A similar plot but with fewer contrivances and spread out more – the seeds of Daenerys going burny-burny were set, but never developed; they suddenly burst into full-grown plants and it causes whiplash – would have worked just fine.

      • Nornagest says:

        Ramsay and Euron are both in the books. Ramsay’s just as bad there, and Euron might be worse. What we don’t get in the books, though, is a season’s worth of Theon getting tortured on-screen in loving detail; he’s captured, we don’t see him for a book or so, and the next time we do he’s a thoroughly broken man calling himself Reek.

        Also, Ramsay and Sansa never meet; Ramsay does marry a “Stark” to cement his claim, but it’s allegedly Arya, not Sansa, and in fact it’s neither girl, but a dressed-up steward’s daughter. (This isn’t much of a spoiler; it’s never really in doubt to the reader, if not to all the characters.)

        • dndnrsn says:

          @Nornagest

          I’ve read the books. Both Ramsay and Euron are in the books, sure. Ramsay is, IIRC, never “onscreen” in the books. Euron shows up and is a weird spooky guy who turns up and causes trouble.

          The problem in the show is they’re one-note. They can’t think of making Ramsay evil in any way except dialling everything up to 11. The writing and acting for the character gets over-the-top after they leave the books and he goes from creepy torturer guy to Local Threat #1. He’s portrayed as unstoppable – a commando so great at burning supply wagons or whatever that you never see how he does it, able to fight off multiple enemies shirtless, a superlative archer, always one step ahead. They have him kill Roose – a far more interesting character, better written, better acted – trading a more interesting for a less interesting character.

          Then, having painted themselves into a corner, they have him almost win a battle until the cavalry arrive (in the process keeping Jon Snow uninformed about the coming cavalry so the viewer will be surprised by their arrival, when there’s no reason for him not to know; that sort of stuff is all too common in the later seasons). Not to worry, though: Bolton, at minimum a capable general, has just completely forgotten to set sentries or anything as he toys with Snow in a way optimized to present his rear to any enemy reserves or reinforcements.

          The further something is from the book, the more narrativium it runs on. The show also starts “baiting” a lot more often – the “will beloved character die? Of course not” shots. Internal consistency is lost.

          Euron is similar. His acting just doesn’t fit – he struts around the stage like an (admittedly good) frontman in a rock band. His accent is a Danish actor trying to do English and landing somewhere in Dutch, maybe. He’s incapable of restraint. He’s unbeatable – until he isn’t. His fleet goes from a legitimate threat that can take out a dragon by surprise and force another to retreat, to being unable to even hit the dragon. He’s unstoppable until he isn’t.

          With both of them, they feel more like “forces of nature” than leaders in a civil war.

    • eyeballfrog says:

      Honestly I’d have to go back to at least rewrite Dorne in season 5. The show was excellent when D&D were just adapting Martin’s work, and they deserve credit for that, but as soon as they tried any of their own writing it was terrible. Rather than try to cram the remaining 4 books into 3 seasons, they should have listened to HBO and GRRM and done 10 seasons so they could actually do AFFC and ADWD justice.

    • The Nybbler says:

      I agree with the others that this is too late. You have to start re-doing it soon after the Red Wedding — time then to start moving the characters towards a conclusion. I’ve heard it said that what Martin did is try to “write what the characters would do”. That gets you a soap opera, not a narrative with a conclusion. Alas I am not a writer and could not do it right.

      You probably have to make some changes earlier than that; the conclusion to the High Sparrow’s story was great, but in the beginning he kind of appeared out of nowhere. Maybe that was not true in the books.

    • John Schilling says:

      As others have noted, season 6 is probably too late to do what (presumably) Martin had planned. In particular:

      The “Mad Queen Daenerys” arc is probably unsalvageable after having been largely dropped in S5/6 and particularly after Cersei’s spectacular ascension in the S6 finale. Yes, Dany will take what is hers with fire and blood, but pretty much every credible rival for What Is Hers is pure unrepentant evil and Good Queen Dany has been pretty reliable on keeping the fire and blood narrowly focused. Takes a bit of nudging from her advisers at times, yes, but we’ve seen what happens when they nerf Tyrion and Varys in the name of the plot.

      Reintroducing Brandon Stark as a credible political actor is almost certainly right out, as discussed a couple of OTs ago; at this point that would be like having Merlin take the British throne after Camlann.

      The Night King has been basically MIA since Hardhome, so either the Dead can’t be plot-critical or we need to spend most of a season getting that arc back on track.

      The Game of Thrones proper is basically done; Cersei is on the Iron Throne, but Dany is coming for her with an unstoppable armada carrying a Mongol Horde and a Spartan army, with three dragons flying cover, two of the cleverest schemers in all the world by her side, and Dorne and Highgarden already allied. One spectacular but inevitable, and so dramatically flat, curbstomp battle and it’s done. You can delay that a couple of episodes with the preliminaries, but again we’ve seen what happens when they nerf Tyrion, Varys, and Rhaegal to stretch it out for two seasons. There are no other credible contenders, because see above, and the finest assassin in Westeros is standing ready to tie up any loose ends.

      So, while I can appreciate what I think Martin was aiming for and what Benioff and Weiss tried to pull off, I’d go the opposite tack: Ask for one full season to tie up the story as it is.

      That means the trite “Good Queen Dany trounces Wicked Queen Cersei” ending, but done with proper craftsmanship and on the appropriate scale. That takes three episodes of material (setup, curbstomp, and cleanup), probably spread over the first half of the season. Arya delivers the coup de grâce, Dany takes the throne.

      Along the way, we have to reintroduce the Night King as a serious menace, and we don’t have the idiotic “Get Cersei Lannister Another Pet Zombie” plan as a way to do that. Somehow, we have to find a way to make him a real threat south of the wall. Has it been established, by S6 in the show and treating the books as a separate continuity, that the intact wall is metaphysically impassable to the Dead?

      However we do it, this sets up the real political conflict of the last season – the Dead were the real threat all along, but now Dany is insisting that she rules the seven kingdoms while Sansa and probably Jon are insisting that the North must remain free. All of them have legitimate reasons for this, all of them are genuinely good people, and their legitimate disagreements over how to secure their various good ends are going to result in all of Westeros being consumed by the Dead while they bicker. Also, there’s still fallout from Cersei’s defeat getting in the way, and Littlefinger is going to amplify the chaos on general ladder-climbing principle.

      The Long Night involves much better tactics, but with the Living not being really united at the outset – quite possibly Dany’s forces don’t arrive until the last act – there are also much heavier casualties. As I said at the time, that was a superb episode at the detail level, but fell apart at the big-picture level and in being timed to end the Night King Menace too early in the season. Fix those, and it’s a fine climax.

      Then the Plot Fairy can dispense happy endings to the survivors, hopefully in a less lame fashion than we saw a few days ago. If Good Queen Dany hasn’t died in battle, she nominally rules Seven Kingdoms that have been broken by years of war among the living and with the dead and she’s fine with that because Wheel. The happy endings pretty much write themselves, starting with Jon and Dany incestuously marrying each other to start a lineage of Good Targaryens, except the casualty toll will have been high enough that some of these turn bittersweet.

      Bran gets to be the Merlin, not the King.

      That’s the best I can do.

      • cassander says:

        this is in line with my thinking: the trouble is dragging out the ending too much, not ending too quickly. Though I’d have Dany and the dragons definitely die in the battle for the living.

      • gbdub says:

        I believe the books established, and the show strongly hinted, that the Dead cannot cross an intact Wall. However there’s a magical artifact (the horn of something or other) that can supposedly break the Wall, plus the mark of the Night King on Bran does something to break one curse or another.

    • gbdub says:

      How about a simpler challenge? Redo the last episode, but assume you have to keep the major story beats intact (Martin’s “bullet points” presumably).

      Or a medium challenge – same thing, but redo season 8.

      SPOILERS AHEAD

      For the first challenge, I was okay with the main beats but I think they could have been a lot more effective. There was plenty of time and plenty of fat to trim to make room for some extra development.

      First, Grey Worm and the Unsullied are a problem after the death of Dany and “put em on a boat” is unsatisfactory and unbelievable. So, make Grey Worm complicit in Dany’s death. Have Jon convince him that a lifetime of unquestioningly following Dany’s orders into atrocity after atrocity is no freedom at all. This works because Grey Worm was made the Master of War by Dany – the remaining Unsullied can be offered freedom and any still in “unquestioning obedience” mode will follow Grey Worm.

      Second, Jon getting thrown in jail and his true name never being mentioned is dumb. With Grey Worm’s support, he can declare openly that he is King, then voluntarily abdicate to go North. This seems in keeping with his character, and his exile is more powerful if he takes it voluntarily a la Aemon. This also neatly sidesteps the awkwardness of the council scenes – Jon chooses Bran as his heir, declares the North independent or First Among Equals or whatever so Sansa can style herself Queen In The North. Hell, he can even push semi-democratic succession, pointing to the tradition of the Night’s Watch.

      • John Schilling says:

        Keep the finale the same until Drogon is mourning Dany; that part was superb. Then have the Last Dragon come to terms with the Last Targaryen and exit Stage Up. Jon’s story ends with him sitting on a sunny hillside in the shade of the Wall, the only place he’s ever been really happy, with Ghost on one side and Drogon on the other (well, as much of him as will fit in frame). Tormund and a mixed crew of Free Folk and ex-Watchmen nervously trying to figure out how this is going to work out.

        Jon doesn’t bother to say goodbye to anyone except maybe Arya, who was waiting in the wings to deal with Dany if Jon wouldn’t. Jon is done with Sansa, whose selfish oathbreaking helped set up this tragedy. Arya is done with killing (but still keeping her Needle).

        The Seven Kingdoms are fully broken. Dorne and the Iron Islands have declared independence, we don’t need to see a lame “new prince of Dorne” and Yara’s story finished back in S8E1. Edmure Tully and Random New King #1 don’t bother to show.

        Team Stark (including Sam and Gendry as allies) do show up at King’s Landing to get their Tyrion back, and tie up other loose ends. Grey Worm has absolutely no reason to give them Tyrion, and so doesn’t. The man who betrayed their beloved Queen Dany (well, one of the men and the only one they can lay their hands on) gets the Missandei treatment from the tallest remaining rampart in King’s Landing. Sansa is pissed, but the Northern armies aren’t going to fight the Unsullied for the ruins of King’s Landing. Grey Worm of No House gets to be King of the Ashes, long may he choke on them, and will probably subsist for a time by raiding and slaving in the nearby (and non-Northern, so Sansa doesn’t care) communities. Likewise the Dothraki, but in lands offscreen because they don’t have a viewpoint character.

        Everybody else gets approximately their happy endings as described. Sansa is aided by Brienne and Davos in her role as Queen of the North. Bran gets to be the Merlin. Arya is not Christopher Columbus, and might make a go as Lady Baratheon after all. Bronn gets neither Highgarden nor Harrenhall, having bet everything on people who can no longer pay their debts, and doesn’t get to end the series by joking about brothels.

        • gbdub says:

          I’d be okay with that, although I kind of think it would be more “Thronesy” for Jon to tearfully give Drogon the Old Yeller treatment somewhere north of the Wall, ensuring the true death of the failed Targaryen line. Longclaw through the back of the head, comes out as flaming Lightbringer?

          • cassander says:

            I like that, but for it to work, you’d have to have made prophecy a lot more important in the show than it has been.

        • The Nybbler says:

          Tormund and a mixed crew of Free Folk and ex-Watchmen nervously trying to figure out how this is going to work out.

          Probably depends on just how much livestock is in the Gift and the southernmost Far North.

          It’s too bad they’re more into beheading than hanging, because the idea of a scene where Tyrion does the Belkar Bitterleaf act really amuses me. Even if it wouldn’t actually work.

    • Chlopodo says:

      V jbhyq unir znqr gur “Tnzr bs Guebarf” fubj or nobhg n tnzr bs guebarf, naq unir Xvat’f Ynaqvat punatr unaqf ng fbzr cbvag ORSBER gur frpbaq-gb-ynfg rcvfbqr bs gur jubyr frevrf.

      Zber frevbhfyl: V jbhyq unir znqr Wba orpbzr gur arj ehyre bs Qnrarelf’ Arb-Inylevna Rzcver va Rffbf. V qba’g xabj rknpgyl ubj V’q qb guvf, ohg vg jbhyq ng yrnfg ~fhoireg gur rkcrpgngvbaf~ bs uvf yvarntr erirny va n jnl gung xrrcf vg npghnyyl eryrinag. Vg jbhyq nyfb znxr ure frnfbaf bs angvbaohvyqvat bire gurer znggre sbe fbzrguvat zber guna whfg n jnl sbe ure gb fcnz Oneenpxf naq Fgnoyrf sbe bar ovt unvy znel nggnpx. Bgure guna gung: ab vqrn.

    • The original Mr. X says:

      As others have said, having the big battle against the army of the dead occur halfway through the last series is anti-climactic. So reverse the order: have Daenerys go and take King’s Landing first, have a bit of political intrigue as she struggles to establish herself as queen in a country she’s never been to before. Then the Night King breaches the Wall somehow, and Daenerys et al. go up north to help Jon beat off the threat. The battle against the dead could take place either in the last episode, or in the penultimate one, leaving the last episode for tying up any loose ends.

      Or, if you really want to subvert expectations, have Jon become King of the Seven Kingdoms, and Daenerys go north to take the black. How, you may ask? Simple. Daenerys still goes all Mad Queen and burns down King’s Landing, but instead of becoming Westerosi Hitler afterwards she realises what a terrible thing she’s done. Realising that she’s more like her father than she thought and that she can’t trust herself to take political power, she abdicates and goes north, defending the Wall with Drogon as a penance for what she’s done. Jon Snow is reluctant to become King, but if somebody (Daenerys? Tyrion? Sansa?) convinces him that it’s his duty, he’d probably agree.

      Alternatively, I quite liked this suggestion.

      (TL;DR for those who don’t want to watch the video: Rirelguvat tbrf zhpu gur fnzr nf va gur erny fubj hc gb gur Onggyr bs Jvagresryy. Va guvf irefvba, ubjrire, gur yvivat ner qrsrngrq naq unir gb syrr fbhgu nsgre gnxvat urnil pnfhnygvrf. Jvgu n mbzovr nezl znepuvat fbhgu Prefrv svanyyl erpbtavfrf gur arrq gb havgr ntnvafg gurz, naq nterrf gb wbva sbeprf jvgu Qnrarelf — ba gur pbaqvgvba gung Qnal choyvpyl oraq gur xarr gb ure. Qnrarelf vf haqrefgnaqnoyl eryhpgnag gb qb fb, ohg hygvzngryl qrpvqrf gb chg gur tbbq bs Jrfgrebf orsber ure bja nzovgvba naq npprcgf Prefrv nf ure dhrra. Zrnajuvyr Oena unf orra jnetvat bhg gelvat gb qvfpbire ubj gurl pna qrsrng gur Avtug Xvat. Ur qvfpbiref gung, vs gurl qrfgebl gur gerr jurer gur Avtug Xvat jnf perngrq, guvf jvyy qrfgebl uvz. Vg jvyy nyfb qrfgebl nal zntvp va gur jbeyq, yrnqvat gb gur qrngu bs Wba Fabj, fvapr ur jnf oebhtug onpx sebz gur qrnq ol zntvpny zrnaf. Wba, orvat n frysyrff naq aboyr punenpgre, gryyf Qnal gb tb naq qrfgebl gur gerr jvgu Qebtba juvyfg ur naq Eunrtny uryc gb ubyq bss gur Juvgr Jnyxref. Juvyfg gur ovt pyvznpgvp onggyr vf tbvat ba, Qnal syvrf bss gb qrfgebl gur gerr, ohg gur Avtug Xvat thrffrf jung fur’f hc gb naq syvrf bss ba Ivfrevba gb fgbc ure. Gurl unir n ovt svtug, ohg va gur raq Qebtba vf noyr gb ohea qbja gur gerr, qrfgeblvat gur Avtug Xvat naq uvf nezl. Qnrarelf ergheaf gb Xvat’f Ynaqvat naq jr frr ure jrrcvat bire Wba’f yvsryrff obql.

      Zrnajuvyr Wnzvr yrneaf gung Prefrv vf cynaavat gb zheqre Qnal naq znffnper nyy ure sbyybjref va beqre gb przrag ure cbfvgvba ba gur Veba Guebar. Ernyvfvat jung n zbafgre fur vf, naq hanoyr gb gnyx ure bhg bs vg, Wnzvr xvyyf ure, va n fvghngvba erzvavfprag bs uvf zheqre bs gur Znq Xvat nyy gubfr lrnef ntb. Fvapr rirelobql ryfr vf abj qrnq, Qnal svanyyl trgf gb fvg ba gur Veba Guebar. Gur raq.)

      Also, is it just me, or does ROT-13 look kinda like the Black Speech from The Lord of the Rings?

  38. DinoNerd says:

    Your mission, if you choose to accept it, is to increase the number of people who have as many children as they want. You aren’t allowed to reduce the number they want (that’s too easy) or massively penalize not having children. Spending tax money on facilitating having children is OK; paying people more than is needed to offset the costs (including opportunity cost) is not. And your books need to more or less balance – you don’t get to provide free everything, without the money coming from somewhere.

    Pick your own geographical and cultural boundaries – country, city, state/province, or even the whole world.

    • Radu Floricica says:

      Uber for childcare?

      But seriously, promote cultural changes that dramatically reduce the cost of childcare. There are many things that are economically possible but unlikely socially – like the way children are now taken to school by car even when they’re perfectly capable of walking, in an environment probably twice as safe as 20 years ago.

      For example, offer childcare certifications to parents (free and reasonably easy to obtain) that would allow neighbors to trust them with babysitting-pools.

      Offer free/cheap standardized child-care kids, inspired by the Finnish hospital boxes – but covering a longer period.

      Obviously, massive education reform. If you can pull this off, the reduced waste would more than pay for everything else.

      • Deiseach says:

        Offer free/cheap standardized child-care kids, inspired by the Finnish hospital boxes – but covering a longer period.

        I’m not sure if you meant “kits” there, but if we could create cheap standardised child-care kids that might solve the problem for many potential parents 🙂

        • Le Maistre Chat says:

          I’m not sure if you meant “kits” there, but if we could create cheap standardised child-care kids that might solve the problem for many potential parents

          Hiring teenage girls you know is cheap, just not standardized.

          • Nick says:

            Living near your parents is also a way to get free childcare (and help at home).

          • John Schilling says:

            Living near your parents involves substantial opportunity costs in that e.g. the best-paying (or just best) job you can find near your parents is probably inferior to the best-paying (ditto) job you can find anywhere.

            We could “fix” this, at least for the working class, by doing away with meritocracy and making it so that you can only get good jobs via personal networking. Even that doesn’t work for white-collar professionals, who usually form their networks at college and rarely have their parents or their parents’ friends find good jobs for them. But, OK, maybe “fix” that by reducing the middle class to ~20% of the population and have them keep their numbers up by plucking the best of the (now localized and fertile) working class via academic merit scholarships.

            “Fix” in scare quotes for the obvious reasons.

    • johan_larson says:

      How big is the gap between how many children people say they want and how many they end up having, right now?

      • Aapje says:

        In the US, the median desired number of children is two and the mean is about 2.6, since people who diverge from the median more often want larger families than smaller ones.

    • The Nybbler says:

      I mutter about “revealed preference”, spend a little more tax money on infertility research, and declare victory.

    • Nancy Lebovitz says:

      Make reproductive tech, both plus and minus, easily available.

      Find the biochemical basis of babies sleeping through the night, and make that easily available.

      • Hoopyfreud says:

        Find the biochemical basis of babies sleeping through the night, and make that easily available.

        What’s the bet that this will increase the risk of SIDS?

    • Edward Scizorhands says:

      Tackle the cost disease of raising kids. Normalize, somehow, that it’s okay to not spend a lot of money on your kids.

      Ban credentialism.

      Normalize/subsidize the outsourcing of internal housework.

      Free health care for kids.

    • Plumber says:

      DinoNerd

      “Your mission, if you choose to accept it, is to increase the number of people who have as many children as they want….”

      With pleasure! 

      While there are probably examples of other developed areas that still have high birthrates which could be learned from and imitated, where’s the sport in that?

      First, I invite you to read The Age That Women Have Babies: How a Gap Divides America essay from The New York Times. 

      Notice anything? 

      Where housing is cheap the age when women first give birth is younger on average, thus they’re physically able to have more children in their lifetimes, and where housing is more expensive fewer people have children at all.

      Extrapolating from me and my wife, if we had our house earlier we would have had our kids earlier, and if we had a bigger house in a quieter and safer neighborhood at younger ages we would’ve had more than two kids, some of whom would’ve had kids of their own by now if they were prosperous enough. 

      There’s an obviously false story about the problem being supply and if more housing was built then prices will go down. 

      Nope.

      I can see for myself that there’s more new housing built than ever before and yet prices keep climbing, but San Francisco (and much of the adjacent area) has a smaller percentage of the population that are children than ever since records began, because our rulers don’t want people, they only want “the best and brightest” workers, in this land full of many Teslas and few toddlers, but l’empereur has solutions that don’t involve guillotines (but if needed…).

      All this will require a large arsenal of nuclear weapons as a deterrent to prevent stockholders from sending in what remains of the U.S. Army to seize back property, but SPECTRE should have some available for sale.

      Use eminent domain. 

      Where housing prices are highest seize the property, convert it to public housing, publicize the Hell out of doing it, and watch property prices plummet as the owners fire sale and flee from the fear of having their properties be seized or be neighbors to public housing before they can sell.

      Use taxes.

      Tax the Hell out of those wealthy enough to bid up housing. 

      Tax the Hell out of and crush “Silicon Valley” and hear the lamentations of the “best and the brightest”, eager, ambitious, and childless young adults from elsewhere, and drive them before you to go back to their hometowns to raise families instead of “Making the World a better place” and bidding up housing here; find Apple and Google headquarters, force them empty with bayonets, burn them down, exult in their destruction, open up the barrels of grog for the troops, and then set them loose to plunder Cupertino and Mountain View before bringing the loot home to distribute amongst their wives and children. 

      If the survivors try to rebuild, permit them to build detached houses, schools, churches, parks, and playgrounds where the ruins stand, but at any hint of office parks and semiconductor chip factories send in the troops to destroy those before that blight spreads!

      Rebuilding the shipyards and non “high tech” factories, so that there are jobs that pay a “family wage” to those without lengthy educations, is important, as is seizing farmland so a growing population may eat.

      Giving tax credits to those couples with enough children who have, between them, enough free hours to actually raise them is important, and if grandparents and other relatives donate time spent babysitting, allow the tax credits to increase.

      Prevent foreigners from staying long unless they’re married to and making children with citizens, in which case permit them citizenship. 

      Pay well (perhaps with citizenship) those who “drop a dime” on those who employ or otherwise shelter childless adults from elsewhere.

      Tax away cramped buildings filled with “studios”, especially those where sounds travel easily, and use zoning laws and taxation to encourage the building of family housing. 

      Eliminate or shorten High School.

      Create many more civil service jobs that don’t require lengthy educations, and give tax breaks to those private employers who pay family wages and don’t require lengthy educations or extended overtime; tax the rest even higher till they’re brought to heel!

      Tax breaks for self-employed bookstore, tavern, and delicatessen owners (probably doesn’t actually help people have bigger families, I just want more places to get books, ale, and corned beef sandwiches).

      Encourage people to convert to Catholicism, Mormonism, and Orthodox Judaism, have schools affiliate with those faiths, and also with labor guilds where students may learn trades that pay a family wage.

      Tax away incomes that look to be enough to pay for building studio apartment blocks, subsidize wages to be enough to build and buy family homes.

      Send troops into Marin and San Mateo Counties to seize large lots, in order to build even more family homes, schools, churches, and playgrounds.

      Encourage family-friendly “Public Houses” on the British and Irish model.

      I’ve mixed feelings on my plans to have video game consoles destroyed without mercy, as some commenters have indicated that they play those games with their kids; some way to discourage their use among childless adults must be devised that still permits use among families.

      Maybe permit some existing Universities and research centers to continue in order to develop geo-engineering, so that the empty land inland has weather that isn’t Hellish, thus making further conquest of the territory of North America for settlements worthwhile, but you have to watch those researchers closely as they’d be the type to rebel and lead back to the previous anti-children regime.

      (That all pretty much came from reflecting on learning that when my parents bought the house me and my brother mostly grew up in, in 1973, property prices were low because a pro rent control city council had been elected, and then I thought of “How do you recreate 1973?”, and then I got Bonapartist and carried away. Still I like the story).

  39. Aapje says:

    Dutch ‘populist’ politician Thierry Baudet wrote a book review/essay of Houellebecq’s Sérotonine that mirrors discussions we had here about low reproduction, replacement, atomization, the emptiness of promiscuity, etc. It’s a direct attack on individualism/liberalism and a defense of communitarian or even (proto-)fascist ideals: sacrificing individual desires and liberation for the group.

    The response by the Dutch press was very disappointing, accusing him of being against abortion and euthanasia and of wanting to force women back into the kitchen. The first two are misreadings and the last is an exaggeration (he believes that women (and men) are happier with a more traditional role). My main issue, however, is that these accusations miss the forest for the trees by not addressing the gist of the essay. Then again, anti-individualism/liberalism is outside the Dutch Overton Window and probably not even within the inferential window (in other words, most/all of the critics cannot understand him and can thus do no better than to rage against decontextualized parts of the essay whose purpose within the argument they don’t understand).

    Baudet’s party became the biggest in the recent provincial elections and it’s quite likely that he’ll repeat this performance in the European elections that will be held today (in my country).

    • Radu Floricica says:

      Good read, thank you. I very much agree with the diagnosis, but slightly disagree with the cause and very much disagree with the treatment.

      Personally, I’d say the cause is the lack of need for a husband’s resources in raising kids. We have a mixture of contraceptives, women working and earning, welfare and child support that take away the need to look for and maintain a relationship. Probably the starkest difference between older books and our times is when a man is considered “a catch” – it went from having a good career, to “doctor or lawyer” some 20 years ago, to … probably millionaire now?

      As for the cure… I think we already know that a too restrictive/conservative/religious society is bad. Society is often wrong, and one that forcefully imposes its rules will end up cutting a lot of innocent heads. I’d go in the other direction: promote genuine diversity. Accept that some women want to have an abortion, but stop shoving it in everybody’s faces like it’s a merit badge – at the very least, it’s a sign of very poor planning. Be ok and accepting with people that chose to live religious lives, and stop treating them like they’re in a cult. Etc.
      It’s not easy. An obvious dilemma: how do you treat differently people who want their children to go to church and people who want their children to skip vaccines? But I still think it’s better than trying to turn back time.

      There’s also a more professional treatment of the topic, btw. It ends up rather pessimistic – these things are here to stay, and the only major benefit we get (both men and women) is much greater access to sex – at least if we keep fit.

      From the amorous point of view, Véronique belonged, as we all do, to a sacrificed generation. She had certainly been ca­pable of love; she would have wished to still be capable of it, I’ll say that for her; but it was no longer possible. A scarce, artificial and belated phenomenon, love can only blossom under certain mental conditions, rarely conjoined, and totally opposed to the freedom of morals that characterizes the modern era. Véronique had known too many discothèques, too many lovers; such a way of life impoverishes a human being, inflicting sometimes serious and always irreversible damage. Love as a kind of innocence and as a capacity for illusion, as an aptitude for epitomizing the whole of the other sex in a single loved being rarely resists a year of sexual immo­rality, and never two. In reality, the successive sexual experiences accumulated during adolescence undermine and rapidly destroy all possibility of projection of an emotional and ro­mantic sort.

      I actually had a dispute on reddit about this – I think on purplepill? I did some deep digging in google scholar and expected to prove the same thing… and I ended up with the reverse. Living the “discotheque” life is indeed very bad for women, associated with depression and substance abuse – that much was confirmed in multiple studies. However, it is 100% temporary, and more interestingly there was absolutely no hint that it harms capacity for pair bonding. The only correlation I found was between number of sexual partners and likelihood of divorce, but I don’t think it’s the same thing. For one thing it’s correlation not causation, but mostly because what the numbers show is that very low counts (0-1 previous sexual partners) are associated with low divorce rates, but there is much less correlation afterwards. So it’s more like “growing up religious or in a small town leads to longer marriages”, and not at all “going from 5 partners to 30 partners fucks you up”.

      • Faza (TCM) says:

        The only correlation I found was between number of sexual partners and likelihood of divorce, but I don’t think it’s the same thing. For one thing it’s correlation not causation, but mostly because what the numbers show is that very low counts (0-1 previous sexual partners) are associated with low divorce rates, but there is much less correlation afterwards. So it’s more like “growing up religious or in a small town leads to longer marriages”, and not at all “going from 5 partners to 30 partners fucks you up”

        My guess, based on experience and observation, is that the two issues have a common cause and it is roughly this: unrealistic expectations of fulfillment.

        The matter has come up before in these threads, so a brief recap will suffice: modern sexual mores, as far as women are concerned, are at least partly driven by the successive waves of feminist thought we’ve seen. The message, roughly summarized, is that women deserve a rewarding (materially and spiritually) career, a partner who will love them unconditionally and contribute – equally, for preference – to both household income and household chores/family raising, sexual gratification, etc. etc.

        In other words, what women are hearing is that they can have it all – and if they aren’t getting it all, someone’s doing them wrong.

        If you’ve absorbed such a message, your go-to response when you find your circumstances don’t actually offer it all is to change your circumstances – hence, divorce.

        Me, I’m cynical, so my view of love that can last is more along these lines.

        Of course, there’s also a second explanation that also fits my observations: the free-wheeling lifestyle often attracts a very specific type of woman, who is primarily identifiable by her poor judgement in men; the kind that will, without fail, pick out the abusive asshole (sometimes downright criminal) in the room and go with him. Such women seldom have fulfilling relationships, for obvious reasons.

        • Radu Floricica says:

          Hmm. If you want to point a specific part of the message that’s most harmful, I think that would be the timing. I’ve had several conversations that went like “yes, that’s the type of guy I want to marry… after I’m 30”. It’s not like their compass is totally broken, they still know what makes a good husband (vs a good lover). It’s just that they’re told they can wait.

          Except it’s not 100% true. I’m guessing peak marriageable age for a woman would be 25-27, and I think they also know it – they’re absolutely untouchable for a relationship at that age. They could be looking for the best possible husband then, and they’d get him. But they’re postponing it until well after 30, when two things conspire to make their life miserable: their expectations have only grown, and the kind of guy they could have married at 26 is… well… looking at 26 year olds. Plus just plain old habit – they’ve already had a decade of the same kind of life. So they start looking – in theory – but in practice the mix of expectations/possibility and not trying very hard makes things go slow. And it takes just another 5 years to completely skew the equation.

          This would be their trap. Ours… well, the essay describes it pretty well.

          • Faza (TCM) says:

            Is timing a part of the message, though? I’ll admit to not being a devoted follower of popular feminist messaging, but I don’t think I’ve ever encountered any advocacy for delay in starting a family. Quite the contrary: women waiting too long to have kids seems a commonly recognized problem, as far as feminist-aligned media are concerned.

            I expect that what you describe is more a matter of women thinking along the lines of “yes, I want to find a husband and have kids after I’ve done this, that, and the other”. It’s not entirely irrational, given that having kids would probably put the kibosh on those things.

          • albatross11 says:

            A fundamental fact here is that women and men age differently w.r.t. both attractiveness and potential for children. A 25-year-old woman is close to her peak attractiveness, and also is very likely to be able to have two or three kids without a lot of difficulty or expensive medical intervention. A 35-year-old woman is well past her peak attractiveness, and her window of fertility is closing soon.

            There are obvious evolutionary reasons why sexual attractiveness corresponds pretty well to potential to have lots of healthy kids. But it’s important to realize that this is biology, not morality.

            Playing the field until you’re 35-40 and then settling down works a lot better for men than for women. Similarly, maxing out your career commitment/earning potential until you’re 35-40 (made partner, got tenure, finished your residency and fellowship) and then starting a family works a lot better for men than for women.

            We wrap these bare biological facts in a lot of ideological baggage and moralizing, but they’re just the way our biology works. Perhaps in a few years, we’ll be able to do something about that with artificial wombs/rejuvenation treatments, but right now, biology constrains women’s choices in these areas a lot more than it does men’s choices. And that means that women who eventually want a husband and children probably would do better to try to start working on that in their mid to late 20s.

          • Radu Floricica says:

            @albatross11

            Speaking of moralizing, I chose my words carefully: 25-27 I consider to be peak marriageable age. Strictly from a sexual point of view it’s probably 16, or whatever age 90% of guys stop seeing them as children.

            @Faza (TCM)

            To be honest, I don’t follow feminist media – but I don’t remember any instance of saying marriage is good 😀

          • March says:

            @albatross11,

            I don’t know, having 1 or 2 kids still counts as ‘having a family,’ and the odds for any single woman are pretty good even if you start thinking about that at 30.

            The problem isn’t an individual one (unless you’re one of the people with bad luck) but a societal one.

            Also, the framing is so short term! “Women, your best fertile years are in your 20s, so get hitched young and have kids young. And the perspective for the other 60 or so years of your life? Meh. You’ll always be ‘behind’ compared to women who waited and basically all the men, so don’t even bother.”

            There are two competing stories for your 20s – you should have your kids but you should also establish yourself professionally. If you don’t do that, your chances at either are shot forever. The kids thing has a biological reality behind it, but there are plenty of policy ideas that could help change the second story to “and there are plenty of opportunities to establish yourself professionally once the kids are in school.”

            I’m skeptical about the odds of a 22-year-old woman landing a good husband, though, especially if society is going to promote big age skews. In terms of fertility, sure. But in terms of mutual respect and trade-offs? 22-year-olds of any gender aren’t the best strategists, and power differentials (which 10 years and all the earning capacity definitely bring) don’t always bring out the best in people.

          • March says:

            @Radu Floricica,

            There’s gotta be a difference between ‘peak perky breasts and butts you can bounce a quarter off’ and ‘peak marriageable age’.

            16-year-olds make worse moms, have worse pregnancy side effects, make worse partners. Sure, they may be hot if you swing that way. But marrying strictly from the sexual point of view is no good for anything.

          • albatross11 says:

            I think there’s a huge cultural support element to this, too. In some subcultures (LDS/Mormons being the obvious example), there are lots of men and women ready to marry in their early 20s, including many of the most desirable ones. In most of the US, though, there aren’t all that many men and women looking to marry at 22, and the ones who are probably aren’t the most desirable ones overall.

            One thing that’s really striking is how many women would *really* like some kind of intellectually challenging, professional part-time work, so they can be involved in their kids’ lives without having to put their kids in full-time daycare and only get to see them on nights and weekends. That looks like a big failure of the market to exploit a resource–smart professional women with lots of education and ability who would jump at the chance to have a 15-30 hour a week job that was reasonably family-friendly and predictable. I’m not sure why we don’t see more of that.

          • Le Maistre Chat says:

            There’s gotta be a difference between ‘peak perky breasts and butts you can bounce a quarter off’ and ‘peak marriageable age’.

            16-year-olds make worse moms, have worse pregnancy side effects, make worse partners.

            Aristotle said a woman shouldn’t have her first baby until she’s 20. There’s probably no biological reason to delay it later than that.
            Which isn’t to say I’d actually encourage my daughters to do that, but if we could change society it should be in the direction of shortening the K-Bachelor’s education track by at least two years (look at how young boys used to enter OxBridge!) and restructuring the role of higher degrees. If you’re going to get a doctorate, maybe that should be seen as something like a vocation (you used to have to take Holy Orders to be an adult academic!)

          • albatross11 says:

            I’m not sure exactly what age is optimal for childbearing in biological terms, but surely by 18 or so you’ve hit it. My guess is that an 18 year old will do the physical parts of parenting better than a 28 year old (the baby will likely be healthier, there will be lower risks of complications, she’ll recover from childbirth faster), but probably won’t do the mental parts as well, since she won’t be as mature.

            You can imagine some society in which everyone has their children and does their schooling between 18-25, and then has their professional career afterwards–when their kids are mostly in school and they’re still pretty young. I don’t know how we’d get there or whether such a society would work very well, but I imagine there would be more and healthier kids there.

          • albatross11 says:

            Le Maistre Chat:

            Lots of smart kids start taking college classes while still in high school; they just call those AP classes. High school is set up largely for babysitting; I bet we could shift the whole getting-a-BA/BS process a year or two younger without any noticeable loss in actual learning or competence. That would basically give a million or so kids another two years of adult life from now on, so it’s hard to imagine it wouldn’t be a positive thing.

          • John Schilling says:

            16-year-olds make worse moms, have worse pregnancy side effects, make worse partners.

            Marrying at 16 doesn’t, except in the case of shotgun weddings, mean being a 16-year-old mother. It means roughly being a 19-year-old mother, a 22-year-old mother, a 25-year-old mother, and a 28-year-old mother.

            That actually looks pretty good if the other half of your society’s women are going to be 35-year-old “wait can I still get married and be a mother at this age?” I’d probably tweak it a few years higher, but you definitely want marriage to lead the best motherhood years by a fair margin.

          • Randy M says:

            It’s not entirely irrational, given that having kids would probably put the kibosh on those things.

            It may not be irrational, but it is partly misinformed, because those other things also put the kibosh on having kids, or at least interfere with them to a similar extent. I think a lot of women succeed in not thinking about how fertility diminishes with age, and sex appeal along with it, and that choosing a partner is not an instantaneous process.
            A lot of pregnant women we encounter are quite offended and surprised to be considered “geriatric” when expecting at 35.
            It really is important to consider priorities and put the higher ones first.

          • Randy M says:

            My guess is that an 18 year old will do the physical parts of parenting better than a 28 year old (the baby will likely be healthier, there will be lower risks of complications, she’ll recover from childbirth faster), but probably won’t do the mental parts as well, since she won’t be as mature.

            If only there were a pool of older women around looking for an opportunity to put their wisdom and mental capabilities to use, like mentoring a daughter/niece/neighbor in motherhood.
            This is where I start being sympathetic to leftist critiques of capitalism as harming people by transforming everything to a commercial transaction.

            That would basically give a million or so kids another two years of adult life from now on, so it’s hard to imagine it wouldn’t be a positive thing.

            Our entire society is orientated around prolonging and recapturing childhood. Good luck selling that “jump start on adulthood” thing.

          • rlms says:

            Marrying at 16 doesn’t, except in the case of shotgun weddings, mean being a 16-year-old mother. It means roughly being a 19-year-old mother, a 22-year-old mother, a 25-year-old mother, and a 28-year-old mother.

            I’m skeptical about this, but the only data I can find is from 1960 (although for what it’s worth it does claim the majority of married 16-year-old girls then were pregnant).

          • John Schilling says:

            (although for what it’s worth it does claim the majority of married 16-year-old girls then were pregnant).

            Right, and I don’t think that anyone is arguing for a reproductive norm of unmarried teenaged girls getting pregnant at 16. The question is one of when young women ought to get deliberately married with the expectation of getting pregnant later.

            16 is probably too young on the grounds of our society not letting anyone have any practice with adult decisionmaking at that age, and then hitting them with a major adult decision and transformation two years later. But 16 isn’t a whole lot too young, and we need to be clear that “what age should women get married” is not the same as “what age should women on average give birth”.

          • albatross11 says:

            I think the main problem with marrying at 16 is that you’re likely to change and grow a great deal between, say, 16-25, so that you may very well have quite different ideas and beliefs and desires and goals at 23 than you did at 16, and that’s pretty likely to lead to either an unhappy marriage or a divorce. If you married at 16 and had kids at 18 and 20, deciding at 23 or 25 that you married the wrong person and bailing out is pretty rough on those two kids.

          • albatross11 says:

            Randy M:

            That lack is a failure of our society, but I don’t think it’s much of a failure of capitalism. We’ve become more and more atomized and mobile, and we have smaller average family size, so it’s a lot less common now for a young mother to have two sisters, a sister-in-law and a mother and mother-in-law to turn to for advice and assistance, just across town or a short drive away. Instead, it’s likely to be a mother and mother-in-law each in different faraway states, plus a sister-in-law with cats instead of kids and a couple distant cousins you see once in a while.

          • albatross11 says:

            rlms:

            In a society where young marriage is discouraged, I’d expect a large fraction of 16-year-old brides to be pregnant–there’s got to be a reason to violate that norm, and pregnancy is one pretty good reason. I’d expect it to be very different in a society where marriage at 16 was commonplace and unremarkable.

          • Jaskologist says:

            @albatross11

            That is a very common objection, but one that I think has things completely backwards. People don’t change into some unforeseeable form from 16-25, such that we can only wait until the end of the process to see what slot to put them into. Rather, they are most plastic during those years, and so are shaped much more readily by their environment. Put them in an environment of partying and hooking up and they will shape into something more fit for that. Put them in a marriage and that flexibility will shape them into a match for the other person. Adjusting to marriage requires a lot of flexibility.

            tl;dr: The blacksmith doesn’t look at red-hot iron and say “Oh it’s going to change so much, I’ll wait until it cools down to work on it.”

          • The Nybbler says:

            I think the reason we don’t have much “intellectually challenging, professional part-time work” is simply because in the current environment it’s not cost-effective for employers. _Lots_ of people would like that sort of thing, not just young mothers.

          • albatross11 says:

            Jaskologist:

            So, what evidence can you offer for this claim?

          • Randy M says:

            That lack is a failure of our society, but I don’t think it’s much of a failure of capitalism.

            I don’t think the Marxists have a better solution in practice, but it does seem like capitalism provides a tendency towards devaluing what isn’t measured monetarily.
            The woman who works part time so she can help raise her grandkids or whatever is contributing to society in significant ways, but it doesn’t increase the GDP in the same way that working full time and putting them in day care would.

            I think the reason we don’t have much “intellectually challenging, professional part-time work” is simply because in the current environment it’s not cost-effective for employers. _Lots_ of people would like that sort of thing, not just young mothers.

            Wasn’t this what our high-tech economy was supposed to produce?
            Can we fix this by shifting our tax and regulatory practices? In theory, it should not be much more burdensome to have two workers at twenty hours each than one at forty, right?

          • albatross11 says:

            The Nybbler:

            Maybe it’s not workable, but I have my doubts. A lot less of the world is optimized than is imagined by the strong form of the efficient markets hypothesis.

            I could easily imagine some clever person doing for bored stay-at-home parents what Uber did for unused cars/drivers with spare time and AirBnB did for unused rooms in houses. If they manage, they’ll make a zillion dollars, and they’ll deserve it. There are a *lot* of women with advanced degrees and first-rate minds who decided to stay home with their kids and now would like some interesting work to do, but who aren’t all that excited by either unpaid volunteer work or low-wage file-clerk/sales-clerk type work. Finding a way to get 5% of them working in something like their field would create a lot of wealth, and siphoning off a small fraction of that wealth would make you filthy rich.

          • March says:

            @Jaskologist,

            Except we’re talking about hooking these 16-year-olds up with men who have completed an education and are either on the verge of or already in promising careers, so they can support a family with kids in 2 or 3 years.

            That is, guys whose window of peak plasticity is closing or has already closed. Not to mention, guys who have spent their peak plasticity years in party environments. (Even if there were fewer hookups.) That doesn’t sound like the start of a peer relationship. If the woman is always molding to the man and the man not to the woman, you’re in full helpmeet territory.

          • albatross11 says:

            There’s a kind of error common among many pro-market types–they turn to the market as a guide to morality, rather than simply a guide to what tradeoffs are available/will work. I think that’s really what you’re running into here–the market can tell you that if you want a McMansion and two 40K cars in the driveway, you’ll need to live in a suburb with a two-hour daily commute and both parents will have to work full time. It can’t tell you whether you should prefer the McMansion to a smaller house and older cars + staying home with your kids until they’re in high school.

          • Randy M says:

            @albatross11
            Mind, you have to channel all that educated girl power into something profitable–which means a one-size-fits-all approach that utilizes the skills of the non-working nurse, lawyer, history professor, and editor probably isn’t feasible.

          • The Nybbler says:

            @albatross11

            What Uber and AirBnB did was cheat. They either found ways around the regulations designed to prevent them from existing, or found efficient ways to allow people to violate those regulations. Uber hasn’t made a profit yet, and in both areas the regulators are catching up (there’s a moratorium on new “ride share” cars in NYC, for instance, and AirBnB is totally illegal there). I don’t expect someone to find an employment startup that works the same way, as labor law is far better tested than those other regulations.

            (and this has nothing to do with the strong EMH)

          • baconbits9 says:

            One thing that’s really striking is how many women would *really* like some kind of intellectually challenging, professional part-time work, so they can be involved in their kids’ lives without having to put their kids in full-time daycare and only get to see them on nights and weekends.

            If you want intellectually challenging work that is flexible around your schedule you can go and get a master’s degree. If you want something that pays you well at the same time then you are SOL for fairly obvious reasons. Why would I pay you to do an interesting and flexible job when a huge share of the workforce would love that job? What extra are you bringing by being part time? Any intellectually challenging job that sees improved performance with experience (roughly 100% of them) would be better off hiring 1 full time worker over 2 part time workers.

            Part time workers simply aren’t very productive compared to full time workers.

          • dick says:

            If you want intellectually challenging work that is flexible around your schedule you can go and get a master’s degree. If you want something that pays you well at the same time then you are SOL for fairly obvious reasons…

            I don’t think you’ve thought this out all the way. Adding a part-time person to a team of full-timers is challenging in a lot of ways, e.g. meeting scheduling, but those employers who figure out how to do it get much better candidates than they would otherwise because the job-seeker:job ratio is much higher. I don’t have a citation, but anecdotally I’ve seen it happen in academia and government jobs. I’ve lobbied (unsuccessfully) to add part-timer slots for programmers, and the reasons I got shot down were 100% to do with institutional challenges (e.g. “we’re not really sure how we would handle the benefits”) and 0% to do with concerns about the applicants.

            And what does a Masters degree have to do with any of this? Is that a reference to a specific job or field? I don’t have a specific job in mind, but am using “professional job” to mean the sorts of jobs where typically everyone is full-time (which is why we’re discussing this, obviously) – programmer, accountant, sales, etc.

            Part time workers simply aren’t very productive compared to full time workers.

            Are you saying that the sort of people who want part-time jobs are less productive than the sort that want full-time, or that a given person is more productive in FT than PT roles? Or something else? Both of those seem trivially wrong so I assume I’m missing something.

          • Aapje says:

            The Netherlands actually has a culture where women can have that “15-30 hour a week job that [is] reasonably family-friendly and predictable.” In fact, Dutch women typically get those part-time jobs before they have children and keep those jobs after their children leave the home. They do work more when not having a partner or kids than when they do, but substantially less than men in the same situation.

            Then they spend the time they work less with caring, cleaning, shopping, etc. When they do get children, they spend 10 hours more per week on caring, which mostly detracts from their spare time and ‘personal care’ (sleeping, getting dressed, eating*).

            * Dutch women spend more time on each of these than men

            The consequence of this is that The Netherlands has one of the highest gender earnings gaps and lowest percentages of women at the top (which requires longer working hours). This in turn results in feminist anger & increasingly strong calls for discrimination against men.

          • albatross11 says:

            There are some jobs for which being full-time is necessary for full productivity. For example, I suspect that programmers benefit from being full-time and even working extra-long hours, because there’s some overhead to getting your brain around a programming problem, and you’re more efficient if you can minimize the fraction of your worktime that you’re spending on the overhead instead of on the full-productivity solving of the problem. OTOH, there are many other jobs for which this doesn’t obviously seem to be true. Editing, software testing, reviewing grant proposals, providing medical advice/services via phone or internet or in-person (for people with medical training), tutoring/teaching–all those seem to be things that can be done part-time without much loss of productivity. My impression is that they’re not done so often part-time because there’s administrative overhead for doing them–a school needs a teacher there every day from 8-4 to babysit kids, it’s easier to handle salary and benefits for a full-time employee, it’s easier to supervise people who all come in at the same time and leave at the same time, etc.

            The interesting thing about Uber and Airbnb is that they did cheat by violating local laws/regulations, but that those local laws/regulations were 99% successful rent-seeking that locked shitty service in place and prevented innovation or competition. As best I can tell, Uber has made the world a much better place overall–it screwed over some cab companies and drivers who were previous beneficiaries of the rent-seeking regulations, but made riders far better off, delivered massive improvements in quality of service, broke the dumb pattern that kept you from calling a cab in NYC, etc. The new drivers were also made better off. If you could sum it all up, it was a massive win for mankind.

            I wish we could have about a dozen more instances of “cheating” that led to the world becoming far better in similar ways.

          • The Nybbler says:

            @Aapje

            Tellingly, it doesn’t seem to have done squat for the Netherlands’ fertility rate.

          • Aapje says:

            @dick

            In programming and many other jobs, it’s well known that coordinating and sharing/acquiring information is quite costly. So two people doing a job together are not twice as efficient as one person doing one job. Furthermore, the part time worker will fall behind in job experience more and more. Fixed employee costs are also relatively more expensive for the part time worker.

            The returns to long working hours have increased (see the graph about a third of the way into the article), suggesting that the advantages of long work hours are becoming more important.

          • albatross11 says:

            I wonder how much the fixed costs per worker are the result of policy rather than something inherent.

          • baconbits9 says:

            Editing, software testing, reviewing grant proposals, providing medical advice/services via phone or internet or in-person (for people with medical training), tutoring/teaching–all those seem to be things that can be done part-time without much loss of productivity.

            A lot of them can, but you don’t get most of the intellectually stimulating/well paying portions of the equation then. Again, the question is: if you had an intellectually stimulating, well paying and flexible job opening, why would you prefer a part time worker (and for what types of positions?), when people who want to work full time are going to be beating down your door for those positions.

          • albatross11 says:

            How about the way markets usually work: You can get those things done cheaper and better by offering part-time jobs, because there’s a large pool of people who would happily take those jobs at below-current-market rates, or without benefits (if their husband has medical insurance, say).

            Perhaps you’re right and the administrative overhead/hassles/loss of productivity would be so great that it would swamp this effect. But I doubt it, and I’d love to see someone give it a try. I genuinely think this is a place where there’s a significant amount of low-hanging fruit lying around waiting to be picked.

          • Aapje says:

            @albatross11

            I wonder how much the fixed costs per worker are the result of policy rather than something inherent.

            You can make policy to reduce these costs, but these often make things less nice for the part-time worker and sometimes for all workers. For example, quite a few offices now have flexible working spots, which workers tend to hate. Even then, you need fewer desks & chairs for full-time employees than for part-time employees that work the same hours in total, unless the latter are going to stagger their working hours perfectly, which they won’t.

            Other costs are simply fixed. Your administrative costs for salary payments aren’t going to be significantly lower for a person who works 20 hours than for one who works 40 hours.

            But I doubt it, and I’d love to see someone give it a try.

            Again, the statistics show that employers have started paying a premium for overtime, where they didn’t do so in the past.

            Either employers are idiots who pay those who are more productive less than those who are not as productive, or this reflects an increased productivity (for modern jobs) from longer working hours.

          • baconbits9 says:

            How about the way markets usually work: You can get those things done cheaper and better by offering part-time jobs, because there’s a large pool of people who would happily take those jobs at below-current-market rates, or without benefits (if their husband has medical insurance, say).

            Right, that is kind of what I am getting at: the women in question have to sacrifice one of the three (interesting, flexible or well paying). If you drop the well paying then it’s going to get close to neutral in dollar terms once they pay for the child care, meal prep, etc. that working is costing them, and if you go for moms whose kids just started school then you are looking at people who have probably been out of the industry for 5-10 years.

          • Aapje says:

            Also, note that women seem to be substantially less interested in a higher wage compared to men, relative to other benefits, which makes sense given the incentives that exist. So if you want to attract women to part-time jobs, it probably works way better to keep the perks of a full-time job at the expense of the salary, rather than cut the perks and keep the salary high(er).

          • dick says:

            I suspect that programmers benefit from being full-time and even working extra-long hours, because there’s some overhead to getting your brain around a programming problem…

            Yes, that is a plausible-sounding reason why part-timers might be less productive than full-timers. I can think of plausible-sounding reasons why the opposite might be true as well. But, lobbing plausible-sounding arguments back and forth isn’t very useful – “I can think of a plausible-sounding argument for this” is the absolute minimum amount of evidence for anyone arguing anything.

            I suspect that the actual story is somewhere in between, and very dependent on the company and its policies. I’m not aware of any research on it, but compare it to a very similar question which has gotten lots of research, the question of whether remote employees are as productive as on-site. I think it’s fair to say that the correct answer to that is, “Yes, if it’s the right kind of job and you have done the infrastructural and process changes necessary to make remote employees successful, otherwise no.”

            Anyway, all I came in here to say is that the reason part-time professional work is so rare is primarily because it presents organizational challenges to the employer, and not because of a (real or presumed) deficiency in productivity. And the upside I’m proposing – the idea that you get better candidates when you have a wider pool – is a lot more than plausible-sounding.

          • AG says:

            the idea that you get better candidates when you have a wider pool

            A wider pool is more work to winnow down the candidates. More money spent on HR labor, more money spent on non-HR people not doing their normal jobs because they’re reviewing candidates, more money spent on flying candidates in, more money lost to the labor not being performed by the new hire because the process takes longer.

            Are the returns on employee quality worth all of that?
            Credentialism has been a way to make the whole process more convenient by narrowing the pool, and if the returns on rejecting it were that good, then we should be seeing companies that don’t use it out-competing the dinosaurs.

          • acymetric says:

            I mean…there are places available for this kind of thing. Some of these may be duplicates of each other or obsolete because I haven’t looked at them in a long time, but off the top of my head (I’m sure there are many others):

            Odesk
            Freelancer
            Upwork
            Guru
            Fiverr

            You might argue that these aren’t accessible enough, or aren’t properly optimized, but it seems like this is already a thing, isn’t it?

          • dick says:

            Are the returns on employee quality worth all of that?

            Companies certainly seem to think so: they’re turning down 90+% of applicants while paying recruiters $20K+ for candidates that get hired. That would seem to be evidence that they place a very high value on finding slightly better candidates.

            Credentialism has been a way to make the whole process more convenient by narrowing the pool, and if the returns on rejecting it were that good, then we should be seeing companies that don’t use it out-competing the dinosaurs.

            Yes, and that happened; it’s still the case that a good young programmer doesn’t need a degree to get a good job. You just don’t hear about it much these days, because it’s much less common for good young programmers to not have them.

            [How does this differ from freelancers on Fiverr and such?]

            Freelance work is sometimes a good option for the sort of people that would like there to be more part-time professional jobs, but it’s also a very different sort of work and a lot of people don’t like it for various reasons.

          • Viliam says:

            Is timing a part of the message, though? I’ll admit to not being a devoted follower of popular feminist messaging, but I don’t think I’ve ever encountered any advocacy for delay in starting a family.

            I suppose no one says explicitly “women should have kids after 30, ideally even later”, but it is a consequence of insisting that women need a university degree and a career. Completing the degree takes some time, getting promoted in the job also takes some time, and suddenly you wake up and realize it’s your 30th birthday.

            If a woman completes university and wants to have kids before she is 30, she simply doesn’t have time to climb high enough up the career ladder. And the popular feminist message is that she should.

            (There is an alternative strategy: Have kids immediately after university, and slowly start your career when you are circa 35. You still have enough time left to get up the ladder and ultimately realize how pointless it is, and you will no longer face the choice between continuing your career or having a family. Somehow, no one talks about this option publicly.)

          • AG says:

            @acymetric and dick:

            Programming seems to be the exception that proves the rule.

          • albatross11 says:

            What I have seen of hiring/HR in various organizations does not leave me with the impression that this whole process is super-optimized. I think most of the world is satisficing, not optimizing[1], and thus that there are likely to be substantial gains available for people who find better ways to do things.

            [1] That is, instead of tweaking the parameters of the system until we get to optimal performance, we tweak the parameters of the system until we get to acceptable performance, and then mostly stop messing around with it.

        • DinoNerd says:

          @albatross11

          When I was in college, back in the dark ages, my proposed solution to some of these issues was for childbearing to be done at the physically optimum age – and childREARING to be done by someone else. In my thought experiment, daughters promptly handed their newborn (or close to it) children to their own mothers to raise – those mothers being presumably more mature etc. – and having had a chance to do the things that many people delay children for (getting established in careers, travel, etc.)

          • Randy M says:

            Is it age that provides the maturity for better childrearing, or experience?
            And how good is a first time mother at age 45 going to be at waking up with the baby for feeding and changing and so on?

          • DinoNerd says:

            Is it age that provides the maturity for better childrearing, or experience?

            No idea, actually. But it seemed plausible at the time.

          • March says:

            The study I read about that (ages ago, though) compared age. First-time mothers of all ages, not a first-time 25-year-old mom vs a 35-year-old with 10 years of parenting experience.

            The thing with handing the babies to the grandparents is that you immediately add 20+ years to the age of the primary caretakers, while 5-10 might be a better idea.

          • DinoNerd says:

            The trouble with adding only 10 years is that it’s often not enough to get to a senior career position. I.e. 30 is too young to slack off in the rat race, and you’re still potentially too poor to afford child care etc. while continuing to work at top ambition/focus. (You may not even have managed to retire your student debt by then.)

            OTOH in my thought experiment there was no reason to wait until age 21 for a woman to have the first of the children her mother was going to raise – she could start as soon as her body was mature enough not to have extra risk factors in pregnancy. (Yes, I know that would be utterly taboo in our culture… I was thinking in terms of science fiction at the time.)

          • acymetric says:

            Seems like the problem would be getting this started, as one of the generations is going to end up getting squeezed (having had to raise both their children and their children’s children) to start the pattern.

      • Aapje says:

        @Radu Floricica

        Personally, I’d say the cause is the lack of need of a husband’s resources in raising kids.

        I strongly disagree with this. Raising children has gotten immensely expensive. The American middle class now spends about a quarter million dollars per child. Very interesting in this respect are the disaggregated inflation numbers, which strongly suggest that parenthood expenses have very high inflation, while childlessness has become cheaper.

        Women on their own don’t actually tend to have these resources, especially with the strong tendency for women to choose low-earning professions. For many women, child care expenses are about equal to their income, so their work is effectively a hobby with their salary being little more than reimbursement of expenses. The women who do earn (a lot) more seem to have expectations that are based on a two-earner household. This shows in partner expectations, where high-earning women demand that their partners are better providers than low-earning women do. This is different from high-earning men, who have lower expectations that their partners provide than lower-earning men do. It also shows in fertility, which is negatively correlated with income. There is no point where income is so high that people actually have as many children as they say they want. Instead, it seems that the gap between income and an acceptable investment in the kids is larger the more income people have.

        Note that when people/mothers are asked about why they don’t/didn’t have kids, the main answers are finances and the lack of an (acceptable) partner.

        It has been an issue for a long time that wages tend to increase by age, while people’s need for money peaks when they raise children & women have a limited fertility window. So it makes a lot of sense that people would then try to have children at the end of that fertility window (and for women to partner with slightly older men). This is increasingly the case as relationships have become less stable. With high marriage rates and low divorce, gambling on partner potential made more sense than it does now. However, waiting a long time to have children is a very risky strategy. In Dutch we have a saying which boils down to: delaying things often causes them to never be done. Even if it doesn’t cause childlessness, it still tends to cause people to have fewer children than they would want.

        Traditional norms for women (get married young and have children soon) may be the only effective way to counter the incentives on women to delay having children and thereby reduce the risks to mothers and children of late motherhood, to bring fertility closer to people’s desires, etc. However, a norm of having children young without stronger pressure to get better relationship stability can cause the issues we see most prominently in black Americans: lots of single mothers and patchwork families.

        PS. Note that increased schooling and increases in lifespan & delayed pensions have shifted the period where people earn money, which makes it harder to have children at a young age.

        It ends up rather pessimistic – these things are here to stay, and the only major benefit we get (both men and women) is much greater access to sex

        Studies pretty clearly show that sex happens mostly within long term relationships. People are spending an increasing part of their life without a long term relationship and thus presumably have less sex. They do have more sexual partners, but whether that’s a benefit is debatable.

        I agree that the correlation of promiscuity with divorce is likely to be a selection effect.

        • Radu Floricica says:

          @Aapje

          I don’t think we disagree as much as you think.

          Personally, I’d say the cause is the lack of need of a husband’s resources in raising kids. We have a mixture of contraceptives, women working and earning, welfare and child support that take away the need to look for and maintain a relationship.

          Only one of my reasons was women earning more (and they do earn more than in the time of lifelong marriages, even adjusting for inflation. Work may be a hobby now, but 100 years ago the alternative to a husband or extended family could well have been miserable poverty).

          Contraception makes it possible to wait, and to focus on having sex/fun. Biologically, all women want to have sex with hot guys – and there’s nothing wrong with that as long as it’s part of a balanced strategy. But make hot guys available without the need for restraint, and you end up just like a population eating fast food every day: fat.

          The other two reasons were how somebody else is paying the bills. It may or may not provide a comfortable lifestyle, but there is no danger of starving even without a job. And if you divorce the right guy…

          I pretty much agree with the rest of your comment, except “It also shows in fertility, which is negatively correlated with income”. That’s explained very well by the correlation between education and income on one hand, and education and number of children, on the other.

          I especially liked the disaggregated inflation numbers. It explains a lot. I could go out on a limb and say that it makes a stable relationship even less appealing: it can’t offer optimum conditions for raising children, and less-optimum conditions are available in other forms. But that’s a stretch.

        • March says:

          For many women, child care expenses are about equal to their income, so their work is effectively a hobby with their salary being little more than reimbursement of expenses.

          At a particular time, perhaps.

          If you’re a woman who has her first kid at 25 and her fourth at 35 (with a nod both to the ‘get an education!’ and the ‘have many kids and start young!’ sides of the aisle), they’ll all be in school full time by the time you’re 40.

          That leaves you easily another 25 years before you’re no longer considered ‘working age’.

          Say you’re in one of those ‘hobby’ careers. Either way, whether you quit working until the youngest is in school/work part-time to cancel out some daycare/work full time to cancel out loads of daycare, you’re looking at 15 years of similarly high expenses. (Daycare is expensive, quitting your job is expensive.)

          But if you keep working, at least you’ll have a work history to fall back on/jump off from by the time the kids don’t need you around as much. Which, perhaps unfortunately, matters a lot.

          —-

          Personally, I blame increased mobility for a lot.

          ‘High school sweetheart’ relationships are an awesome idea if it can be assumed that you both stay in the same city after high school. Ditto for college relationships. Staying in the city both parents were born in also means you’re likely to have two sets of grandparents around and a bunch of extended family, which is great for young families.

          Instead, lots of people leave town to go to college, and a 5-year LDR isn’t great for 17-year-olds. Besides, Skype sex doesn’t get anyone pregnant. In college, you meet guys and girls from different cities/states/countries, almost guaranteeing that even if you stay together and move ‘home’, you only have 1 set of grandparents around. More likely that you’re going to end up moving elsewhere, though. And also more likely the college relationship will fail, since both partners may want to make that first good career step where they are most likely to succeed, not as a follower.

          Then you’re in the current situation, where 26/27-year-olds start looking for LTRs; even if they’re lucky and find someone quickly (which is harder if you’re in a city where you don’t know anyone), they still take a year or two to figure out if they want to get married, a year of wedding planning, perhaps a year of ‘enjoying being newlyweds’, then kids. (Or they skip the whole marriage thing and just hang out in a LTR for a couple of years and then have kids.) And if you’re unlucky, the grandparents also move around for work.

          And, tying it back with part 1 of this comment: there’s not much else to do BUT work if you’re in your 40s living somewhere without relatives around. If you all live in an extended family situation, it makes sense for a woman to not ever have a career, since there is plenty of work around in caring for elders, niblings, grandkids, family homes and gardens, what have you. If it’s just the nuclear family, how else are you going to translate your efforts into value for the family except through paid work? Besides, you’ve got 4 kids’ worth of college expenses coming up.

          The incentives are all wrong.

          • Aapje says:

            @March

            Yes, staying in the work force creates better future earnings, even if not today. However, my point was that there is a mismatch here: the choice for the woman to work generates (real) income before there are children and when the children are no longer dependent*, but not so much when young children need to be taken care of. So at the time when they can use the double earner income the most, they effectively are a single earner household. This encourages waiting until that single, male income is high enough and/or until a certain wealth has been built up (or a lack of (college) debt).

            * Although the mother may increase her working hours when the kid is older, but still in school & college, so then the parents would still have much higher costs due to their children, but would already benefit from being a two-earner (or probably actually 1.5 earner) household.

            I also agree with you that increased education & mobility, combined with fairly logical other incentives, mean children are going to be delayed for the well-educated even if they prioritize children. However, a lot of people don’t do that, so then it very often becomes a race against the clock, once they decide they are ready for children.

          • March says:

            Small kids are cheap, though, as long as you don’t go the private-violin-toddler-wunderkind route. If you make it through the first decade of parenthood on effectively 1 full-time salary, you’re all set to pour the extras into the expanding kid expenses without too much resentment once the daycare costs let up. 😉

          • Thomas Jorgensen says:

            The hilarious answer to this is, of course, for colleges to exercise their prerogative to give preferences for anything under the damn sun they please as long as it is not racially or gender biased, and make “applying as a couple” worth several hundred SAT points, or a ticket to free housing.

          • albatross11 says:

            This ties in with Steve Sailer’s ideas about affordable family formation as an important policy goal. (And one that likely benefits Republicans–places with more affordable family formation costs tend to vote Republican.)

            In world #1, the public schools are generally pretty good and safe, there’s economic opportunity for your kids if they study hard and go to State U (or just graduate high school and get a job at a factory or something), housing prices close to work/school are reasonable, free stuff like parks and libraries are plentiful and well-policed and safe, the crime rate + social convention makes it acceptable to let your kids walk to school/play outside till dark, families tend to stay in the same area so there’s usually free babysitting available from Grandma and Grandpa, etc.

            In world #2, lots of public schools are a mess, so if you want your kids to get a decent education/not get beaten up by thugs at Gangland High, you shell out a bunch of extra money for a private school or you move to a super-expensive suburb that has good schools because only rich peoples’ kids go there. The economy is increasingly winner-take-all, so your kids may not have much future if they don’t get into a top university and a super-elite career track. Houses in decent neighborhoods are incredibly expensive, so that most families need both parents working full time to afford their house payments. Parks and libraries have mostly been taken over by homeless people and small-time criminals, so moms with young kids have to spend time with their kids in paid venues. Crime rate and social convention makes letting your kid walk to school risky (either he will get messed with or you’ll get a visit from a social worker). Families are all atomized so Grandma lives in California, Grandpa lives in Florida with his second wife, your one sibling has cats instead of kids so there are no cousins to play with, etc.

            It seems almost inevitable that world #1 has a lot more kids being born into it than world #2–at every step of the process, kids cost more (in money, time, life disruption) in world #2.

          • Nick says:

            This ties in with Steve Sailer’s ideas about affordable family formation as an important policy goal. (And one that likely benefits Republicans–places with more affordable family formation costs tend to vote Republican.)

            My impression is that Rubio is trying to be the guy for this. Notice for instance his recent paid family leave plan. I would personally love for good, successful policy in this area to be pushed, stuff that helps families and boosts the birth rate, but 1) I’m not a wonk so I don’t know whether his or others’ plans are good, and 2) I’m not sure how much support we can expect from the White House or other Republicans.

          • Le Maistre Chat says:

            @Nick: Rubio’s relatively young, isn’t he? He can try to seize the White House in 2028.

          • BlindKungFuMaster says:

            In world #1, the public schools are generally pretty good and safe, there’s economic opportunity for your kids if they study hard and go to State U (or just graduate high school and get a job at a factory or something), housing prices close to work/school are reasonable, free stuff like parks and libraries are plentiful and well-policed and safe, the crime rate + social convention makes it acceptable to let your kids walk to school/play outside till dark, families tend to stay in the same area so there’s usually free babysitting available from Grandma and Grandpa, etc.

            Except, that’s a pretty decent description of the Germany I grew up in and it seems your conclusion is just dead wrong.

          • albatross11 says:

            Interesting. I’m assuming that the driver here is partly economic, but it’s possible it’s mainly cultural–if your culture is pronatalist, you’ll have lots of kids despite economic barriers; if not, you mostly won’t have lots of kids despite favorable economic conditions.

          • BlindKungFuMaster says:

            @albatross11:
            From what I have read the main driver is cultural (or possibly just aggregate individual preferences). The gap between kids you want and kids you have isn’t usually all that big. In Europe it seems to be around 0.3 kids, so if the numbers mentioned earlier in the thread are correct that gap is somewhat larger for the US.

            That’s one of the problems: Allowing people to have all the kids they want to have wouldn’t even reach replacement fertility in many European countries (and I expect that to be the future of the US as well).

      • Thomas Jorgensen says:

        I do not even think this is a traditional upbringing thing. This is “Your first relationship worked”. If you have been sanding each other’s edges down since you were seventeen, you are not very likely to divorce under any circumstances whatsoever, and this is, in fact, quite common. 30 percent of everybody? I can’t remember the source for that, but it was high, and not overly correlated with anything else – this happens to gays/lesbians too, and that causes a pretty large subgroup who are not overly invested in the gay “community”, because, well, they don’t date.

    • DeWitt says:

      I don’t have a way to know what Baudet’s personal beliefs in anything are, but the man is cowardly. People criticise him, and he defends himself with ‘it’s juuuust the book’s viewpoints’; people get upset about policy proposals, and he says he’s oooonly commenting on things, it’s not a policy proposal. That’d be fine and well if he were still a journalist paid to be very smug about himself, but taking politicians’ beliefs and statements to be clues to their preferred policies is hardly something new, and the man pitching a fit every time someone takes issue with his words got old years ago. By now it’s downright embarrassing.

      • MorningGaul says:

        It’s a review essay (or it’s titled as such, and spends a lot of time commenting on Houellebecq’s books), so “it’s just the book’s viewpoints” seems like a valid response to ideological criticism.

        • March says:

          True.

          Also true:
          – This is an upcoming politician, head of a party that is now suddenly the biggest in the country.
          – His party’s actual policy proposals still need lots of hammering out, since they’re so new and have never had influence on this scale before.
          – The previous party that rocketed to prominence like this got destroyed through infighting within a couple of years, and people on all sides are worried about (or looking forward to) that happening again.
          – He is known to take political inspiration from literature and express himself in roundabout ways.
          – Nobody forced him to read all those books or write this essay. (And even if he had been, that doesn’t mean that essay had to come out like that. Personally, I can’t stand Houellebecq, so if I had been forced to read all those books and write an essay tying their message in with current society, I’d have written a very different essay.)

          In this context, it’s a bit disingenuous to claim it’s ‘just’ a book review.

          If this guy just really likes writing book reviews and doesn’t want the country or the world to think they can learn anything about his views from them, perhaps he should just sit on them for the next two years or so and give the country a chance to learn about his views through his work in politics.

          • Radu Floricica says:

            I have no idea who the guy is, but… I think this essay is a pretty straightforward way of expressing an opinion on complex issues. For one thing – he’s a politician, so it’s pretty unfair to single him out for not being straightforward on a controversial topic. It pretty much comes with the job. For another, it’s a complex topic and it’s very very likely that expressing his opinions in short sentences would make it incredibly easy for opponents to attack them (misquotes, taking things out of context etc). Plus just the plain fact that his true opinions may not be easy to express in short sentences.

          • March says:

            Oh, I’m not singling him out for not being straightforward.

            I’m singling him out for what DeWitt said:

            People criticise him, and he defends himself with ‘it’s juuuust the book’s viewpoints’; people get upset about policy proposals, and he says he’s oooonly commenting on things, it’s not a policy proposal.

            This essay is totally his opinions on complex issues. So people read his essay to learn more about his opinions. But then he says ‘those aren’t MY opinions.’

          • DeWitt says:

            The essay isn’t nominally representative of Baudet’s views, as he very quickly went ‘uhhh I totally support abortion, promise, you guys.’ If he had written it with a full endorsement of it indeed being an essay on his political views, that’d be one thing. Writing long-winded essays of the kind he’s been writing and pretending they’re totally just thought experiments is already a little dubious for most people. When you’re also fronting a major political party, you shouldn’t really fault people for mistrusting half-hearted denials that they represent your actual views.

          • DeWitt says:

            If our host were to only ever go on about demotism and what have you, then yes, he would be a demotist. As it stands, though, he has argued against the topic very vehemently, hasn’t spoken on the topic in years, has a slew of posts of highly different philosophies, and bans people of the related ideology.

            Baudet, in the meantime, writes essays or think pieces like the one we’re speaking of constantly; if he’s ever cared to write a defence of anyone to his left, I’m not seeing it.

            Why should I believe him when he claims these aren’t his beliefs when his entire work points to the cop-out being a lie?

          • Conrad Honcho says:

            he very quickly went ‘uhhh I totally support abortion, promise, you guys.’

            There are a significant number of people who are both pro-life and pro-choice: they think abortion is wrong and wouldn’t make that choice themselves, but don’t want to make it illegal for others who choose differently.

          • albatross11 says:

            Conrad:

            I think there are also a substantial number of people (probably most people) who are broadly either:

            a. Pro-Life, but would be okay with some early-term abortion and some exceptions for special cases.

            b. Pro-Choice, but would be okay with some restrictions for late-term abortion and particular stuff like sex-selection via abortion.

            Those aren’t the easy places to debate from or to signal your purity to the troops from, but they’re fairly common in practice.

          • DeWitt says:

            B. is already the default position here, so that one’s not really the issue.

          • Deiseach says:

            There are a significant number of people who are both pro-life and pro-choice: they think abortion is wrong and wouldn’t make that choice themselves, but don’t want to make it illegal for others who choose differently.

            Oh yeah, the “I am personally against but…” crowd.

            And nobody believes them; the pro-life side notices that it’s awfully convenient that being all “Imma keep abortion legal” just so happens to enable them to get elected, and the pro-choice side knows that they’re only paying lip service to “Imma not pro-abortion, honest guize!”

            Imagine this in any other context: “I am personally opposed to slavery/paederasty/oil companies raping Mother Gaia but…” and see who is convinced the one saying this is acting out of deep principle rather than worldly convenience.

            Granted, there are some people who do hold such principles: “I am personally opposed to recreational drugs, but I am against criminalising people for possessing and using such”, but they tend to be private individuals not making public statements.

          • yodelyak says:

            @Deiseach
            My 2c, but the “I’m against it personally but do not think the government should have the power to use compulsion to try to prevent it” position is a principled position w/r/t abortion, and one some politicians sincerely hold. Anyway, it’s often been my position.
            I have held the “but not the government” view at many points in my life as my level of trust in government institutions has waxed and waned. Some of those times were periods when I was privately checking with friends whether I could succeed in running for minor office. I have never been pro-abortion in my personal life or personal advocacy, outside the usual hard cases list (save the life of the mother, extreme fetal deformity, rape, incest).

          • Clutzy says:

            @yodelyak

            Not really. Unless you are an anarchist. If you would oppose state regulation/intervention with regard to stopping intentional activity directed towards ending human life, what state intervention can you support?

          • brad says:

            From the pro-choice side, I have no problem with a politician thinking abortion is morally wrong but I’m suspicious as to why he is sharing that little factoid with me. Does he consider himself a moral authority figure?

      • Aapje says:

        @DeWitt

        Baudet has never been a journalist. He was a columnist, which my (center-left) newspaper stridently argues is not the same thing (and thus large parts of the newspaper don’t have to adhere to journalistic standards, according to my newspaper, despite many readers apparently being unable, or unwilling, to distinguish between columnists and journalists).

        Anyway, my experience is that it’s generally very unproductive to debate people who are merely scanning your words for objectionable content and who (thus) read what you say in bad faith. The ‘debate’ rapidly veers away from the main argument, and from the common case, to focus on minutiae (and then on minutiae of minutiae). It tends to produce toxoplasma: the debate centers on the 1% of your opinion where a large inferential distance and/or a conflict of preferences produces lots of anger in the other person, while the 99% of your beliefs where you could find common ground or mutual understanding is ignored. In the absence of sufficiently objectionable content, the other person usually simply stops responding, rather than agreeing.

        The typical debate then goes like this:
        Me: Claim with argument A1-A100.
        Other: Argument A62 is horrible because [misreading]
        Me: You misread me, let me explain what I mean in more detail with argument B1-B100.
        Other: Argument B37 is horrible because [misreading]

        Me: Wait, we started off debating the mating preferences of warblers and are now discussing societal disapproval of nose picking. How did we get here? Why am I having this conversation? Why am I still awake at 05:00?

        I’m personally a bit of a sucker in this respect, personality wise, letting myself get drawn into such debates, but I completely understand why others would shut that down ASAP.

        What you are asking is for Baudet to mold himself into the kind of politician who is unwilling to do in public what most people need to do to evolve their opinions: posit something they are not perfectly sure about and whose limits or reasonableness they don’t yet know, so that other people push back with arguments they didn’t come up with themselves. The actual policies they favor then come from drawing conclusions from this investigation of possibilities.

        Most politicians learn to investigate possibilities away from the public, which makes them very vulnerable to only getting feedback from people with similar biases, partially negating the very benefit of getting outside opinions. Or worse, they completely lose a sense of self and mostly become slaves to ingroup or societal opinion (see prime minister Rutte for an example).

        Baudet’s style may reflect his voters: there seems to be strong agreement among them that society is going in the wrong direction, but no clear answer as to what alternative direction is better.

        • DeWitt says:

          What you are asking is for Baudet to mold himself into the kind of politician who is unwilling to do in public what most people need to do to evolve their opinions: posit something they are not perfectly sure about and whose limits or reasonableness they don’t yet know, so that other people push back with arguments they didn’t come up with themselves.

          No, I don’t want him to be one more politician lying through his teeth in every appearance that he makes – a tall order, admittedly.

          If he didn’t want to get bogged down, he could say so. He could refuse interviews with the usual suspects and find someone amenable enough to his party not to screw him over for the interview. Instead, he writes an essay which fits very neatly into all he’s ever spoken about, only to very unconvincingly profess not to believe in it.

          Again, cowardice.

          • Aapje says:

            I don’t see him denying that he has this ideology, though. He doesn’t deny that he thinks it’s better for women to be in more traditional roles, for people to have fewer abortions, or for people to be less eager to opt for euthanasia.

            However, that doesn’t mean that he wants to make laws to reduce choice. Baudet is very (right) libertarian.

            Your confusion is what I mean when I argue that due to inferential distance, lots of people can’t understand his position. To them, there are only two possibilities:
            – in favor of abortion and legalization
            – against abortion and legalization

            Then they get confused when there is someone with the third option:
            – against abortion, but in favor of people having choices

            When Baudet argues for that first part, these people (and you) pattern match him to those who want to ban abortion and thus assume that he wants laws against abortion. Then when he says that he doesn’t want laws against abortion, Baudet gets pattern matched to those who are in favor of abortion.

            So then Baudet’s statements get perceived as constantly flip-flopping, when in reality, he has a consistent position: I won’t ban it, but I’ll tell you that you are ruining your life.

            Of course, you can criticize Baudet for not making this more clear, but in his defense, those who don’t believe a third option is possible need a major shift in their worldview to accommodate it, which is very hard to achieve.

          • DeWitt says:

            I don’t believe him, is the issue. It’s very easy to claim tolerance, or to claim you don’t want to ban such things, only to let that commitment slide later. To disapprove of something you don’t want banned is very normal; I suspect it’s the default position on simple matters like smoking or slinging random insults at people. Baudet’s commitment to keeping abortion legal, however, I find extremely dubious, and I don’t blame anyone for doubting him similarly.

          • Aapje says:

            It certainly is true that people do not always say what they really believe, although this can be anti-deceptive as well.

            Being the leader of a movement that tries to create change means that you try to achieve an agenda, but this agenda is usually not the (full) agenda of the leader. Leadership is often a mandate to define & execute a common agenda, not a mandate to execute a personal agenda.

            Since observers often have difficulty distinguishing the common from the personal agenda, leaders often hide the latter, which then actually results in a more accurate assessment of the movement by observers.

            A fairly common but quite unfair habit of the leader’s opponents is to examine the leader for (real or misinterpreted) evidence of their personal rather than common agenda, and then to claim that this is the real agenda of the leader, if not of the entire movement/party.

            This is essentially the same mechanism that commonly feeds conspiracy theories, where real or false evidence about people who are considered to be leaders or to have great influence is used to argue for a conspiracy.

            Of course, conspiracies & deception are sometimes real, but Western democracies are designed to make them very hard to execute, so I think that a great eagerness to assume they might exist is unwarranted.

      • J Mann says:

        DeWitt, do you have a link to some sample exchanges?

        At an abstract level, I think that:

        1. If someone writes a book review (or “essay”), that means they think the things they discuss about the book are worth considering, but not necessarily that they agree with them.

        1.a. On the other hand, things the review author didn’t discuss about the book are murkier. It might be that they were so important that no reasonable author would ignore them, but then we’re down a rabbit hole of whether we’re right that they’re that important, and if so, what this specific review author intended by ignoring them.

        2. The opinions the author expresses specifically in the review are the author’s.

        Point 1 can be tricky – IMHO, it’s not unreasonable in the abstract to say “I thought it was important that people consider the argument, but I’m not sure where I come out yet” or “I do like some aspects of the underlying book’s points, but don’t have an opinion on/openly oppose that one.”

  40. Le Maistre Chat says:

    Interesting science, amusing headline:

    The universe may be a billion years younger than we thought. Scientists are scrambling to figure out why.

    How dare you be younger than I thought, universe! I don’t know if I can ever trust you again!