Slate Star Codex


Links 4/2019

[Epistemic status: I have not independently verified each link. On average, about two of the links in each links post turn out to be wrong or misleading, as found by commenters. I correct these as I see them, but can’t guarantee I will have caught them all by the time you read this.]

List Of Places With “Silicon” In The Name Because They Are Branding Themselves As The Next Silicon Valley. I knew the UK had a “Silicon Roundabout”, but I didn’t know there was a Silicon Bayou, Silicon Taiga, Silicon Fen, and even a Welsh “Cwm Silicon”.

You probably knew that sperm count and fertility were declining rapidly and mysteriously. Now scientists have narrowed down the search for a cause by finding that something similar is happening in dogs.

Econ professor on Twitter picks apart a bad study in JAMA trying to claim that car accidents spike on 4/20 because weed.

Did you know: Saudi prince Khaled bin Alwaleed participated in a trophy hunt in South Africa in the 1990s. He regretted his actions, went vegan, became a major investor in vegan food companies around the world, and is now opening his own chain of vegan restaurants across the Middle East.

The Lancet publishes a great article on antidepressant tapering arguing that tapers should be much slower, and should be hyperbolic rather than linear. Great article, though I think it underemphasizes that 90% of people have no problem with antidepressant tapering no matter what you do, and you don’t have to put people on a drawn-out year-long taper unless they fail the usual regimen.

Some people have different theories of consciousness? And they’re going to try to test them? By experiment? Using an adversarial collaboration? Pretty weird.

Some effective altruists suggest you save up to donate later. Others warn against “value drift”, eg later you might drop out of effective altruism and not care at all. Now they have empirical data: of 22 people who were donating 10% of their income five years ago (or doing other equivalent work), only 8 continue to do so today. Moral of the story: if you’re going to put something off until later, keep in mind it will be a different you with different values who decides what to do with it.

From Less Wrong: a good post explaining how exponential curves like Moore’s Law can be best understood as a series of S-curves on top of each other.

Study suggests victimized employees are vulnerable to being seen as bullies themselves, with real bullies given passes. It concludes that we must be extra-vigilant against “victim-blaming”. This might be the right lesson from a god’s-eye view, but seems like exactly the wrong lesson when implemented by people like the subjects of the study, which it will be. The words “victim-blaming” are what people use to shut down discussion about whether you might be wrong about who the victim vs. the bully is. If studies show people are frequently wrong about this, then launching a campaign to shut down that discussion just means preventing anyone from questioning or correcting frequently-wrong people.

I was on board with the narrative that “prostitution leads to human trafficking” was a lie spread by anti-sex-worker authoritarians like Kamala Harris. But a study suggests that legalized prostitution does increase human trafficking and that “the scale effect dominates the substitution effect”. Interested to hear pro-legalization people’s perspective on this. [EDIT: here is a critique]

Reddit survey of drug users: How Much Better/Worse Would Your Life Be If [Various Drugs] Ceased To Exist? Tobacco, heroin, and synthetic cannabinoids do worst; LSD, amphetamines, and the Internet do best.

This is a great interpretation and modern translation of “Yankee Doodle”. Bonus fact: “macaroni” meant “high fashion” because everything Italian was considered cool at the time.

Against over-emphasizing behavioral economics.

I honestly thought this study had been done a long time ago, but I guess it hadn’t been, and now it is: e-cigarettes are definitely more helpful for smoking cessation than normal nicotine replacement.

A really good and deep exploration of cost disease in subway construction, though with only partial applicability for cost disease in other things.

This blog on brain size (warning: some racist language elsewhere on the blog) has a weird obsession with the size of Oprah’s head, and claims she is probably the largest-headed woman in the world.

Warning from Gwern: magnesium supplementation may make you slightly less intelligent/functional, to a degree you will never notice unless you test it. See also this r/nootropics thread.

I don’t claim to 100% understand this, but it looks like it’s an app for designing replicable experiments, so I guess I have to link it.

Nutrition scientist Stephan Guyenet (author of The Hungry Brain, reviewed here on SSC) and his colleagues are launching Red Pen Reviews, a site where top nutritionists review and grade the latest books on nutrition.

Servant Of The People was a popular Ukrainian TV comedy about a mild-mannered schoolteacher who gets elected President of Ukraine after making a video about politics that goes viral. Earlier this week, actor Volodymyr Zelensky, who played the starring role, was elected President of Ukraine in real life.

Website TheSpiritLevelDelusion has been critiquing popular-in-the-media book The Spirit Level from Day 1. Now, ten years later, they demonstrate that, using the book’s own methodology, none of the trends it highlights have continued to hold, potentially because they were p-hacked to fit the data as it existed when the book was published.

Related: Scott Aaronson reviews how John Horgan’s article The Death Of Proof has fared over the 25 years since it was published. Summary: not well; proof continues to be an important part of math, and Horgan admits he was wrong on this one.

Burger King introduces a vegetarian Whopper made with the Impossible Burger; McDonalds remains the main holdout against new farm animal welfare standards. If you’re a meat-eater who supports animal welfare, consider switching your fast food business to Burger King for a while.

French European Affairs minister denies viral rumor that she named her cat “Brexit” because “it wakes me up meowing like crazy every morning because it wants to go out, but as soon as I open the door, it just sits there undecided and then looks angry when I put it outside.”

The big politics news is, of course, the Mueller Report, and how much its finding of no illegal collusion between Trump and Russia discredits a media that had been talking rather a lot about how much illegal collusion between Trump and Russia there definitely was. The “it does discredit the media” case is made most strongly by Matt Taibbi in Russiagate is WMDGate Times A Million; for the “it doesn’t discredit the media” perspective, see eg The Atlantic‘s The Mueller Probe Was An Unmitigated Success. I prefer the pro-discreditation narrative, just because the media will never face any negative consequences for things like the constant hyping of The Spirit Level and anything else that agrees with their biases, over and over again, times ten million. So when Fate tempts us with a remote chance that the media might actually face some negative publicity for getting something wrong, I am 100% in favor of everyone being as angry and punitive as possible, even though honestly I didn’t follow the whole Mueller thing and find it hard to pay attention to. If you want people with more reasonable opinions on this, you can check the first and second Mueller Report threads from The Motte.

On March 24, Donald Trump tweeted “Good Morning, Have A Great Day!” You can learn a lot about our society by reading the 67,000 ensuing comments.

I recently learned about SketchyMed, a site that makes mnemonic-device-esque videos to help medical students study. Eliezer once claimed that the only place it looked like our civilization was really exerting effort was predicting stock prices; this makes me think that “studying for medical licensing exams” is a second example. May be worth watching to see the technique in action even if you are not a med student. See eg their video about salmonella.

Deep roots: which European ethnicity settled each area of the United States centuries ago determines how much inequality it has today, with the level of inequality in the American region corresponding to the level in the European home country. Appears to be a cultural rather than purely genetic effect since it holds for black people in each area as well. See the study and the article describing it.

Related: Noah Smith twittereviews “Replenished Ethnicity”, a book theorizing that continuing Mexican immigration prevented Mexicans from fully assimilating, and now that Mexican immigration has slowed, we should expect Mexicans to assimilate to the same degree other groups like the Irish did.

Education can probably increase IQ, but only up until age 20. Does this mean the effect is something real about brain development, and not just that education helps you ace IQ tests? And some good Twitter discussion.

More on cost disease / wage decoupling / civilizational decline / housing crisis: as per these tweets, the average rent in NYC went from 15% of average income in 1950 to 65% today.

In India, private companies can do surgical procedures for 2% to 3% the amount American hospitals charge, apparently with equally good or better outcomes. Now India’s universal health care plan is trying to cut costs further and scale the model to the entire population.

Previous studies found that variation in height across European countries was primarily due to differential selection across ethnic groups. A new paper finds that these studies were confounded by methodological error and we don’t know whether there was selection or not. Relevant partly because height is determined similarly to IQ and other socially relevant traits.

US monthly budget deficit is now largest in history.

I’ve been a big supporter of “housing-first” policies on homelessness (ie don’t try to force homeless people to become model citizens, just give them housing), so I guess I owe it to present this article pushing back against them. It argues that some small fraction of homeless people are loud or violent or defecate in inappropriate places, and that cities which have tried housing them have found that when they put them in nice apartment buildings, they ruin the apartment for everyone else (including other homeless people). One obvious solution is giving houses in nice apartment buildings to everyone except that small fraction, but I think people worry that looks too much like trying to separate the “deserving” and “undeserving” poor, so it’s politically difficult. The actual “solution” that DC (the city profiled here) has proposed is to mandate that any apartment building that accepts homeless people must provide them with lots of on-site social services. This sounds like a great way to ensure no apartment building ever accepts homeless people ever again (or that they get ghetto-ified from normal apartments with a healthy mixture of different classes into a few slums that specialize in meeting onerous requirements).

Related, from The Motte: Dueling GoFundMe Campaigns Highlight A San Francisco NIMBY Battle (one is to fight against an attempt to open a homeless shelter in a nice area, the other is to fight for the shelter).

New paper analyzes data from an unnamed online dating website (realistically, OKCupid), discusses differences among cities.

If you were into astronomy thirty years ago, you’re probably familiar with the Nemesis theory: the sun has a brown dwarf partner whose orbit sometimes sends deadly comets hurtling at Earth causing regular extinctions. But I hadn’t realized that the theory has since fallen apart as scientists failed to find it with sky surveys that should have been good enough to find it if it existed (also, extinctions don’t seem to happen that regularly).

538 analyzes their past predictions, finds they have nearly perfect calibration.

In all these years of people using “BUT WHO WOULD BUILD THE ROADS?” as their knockdown objection to libertarianism, I never realized that the Nordic countries already have privately funded roads and they work great.

The government knows how much tax you owe well enough to arrest you if you try to cheat them, so how come they can’t just tell you that number and save you the trouble of preparing your taxes? They could, but the companies that make tax preparation software have good lobbyists and have gotten Congress to ban them from doing that. Now they’re trying to enshrine this system permanently. Related: TurboTax is blocking search engines from indexing what little help they are legally required to provide.

Boston Corbett, the man who killed Lincoln’s assassin, was a colorful character. (Puritanism level: literally named “Boston”)

The Azolla Event was a time 50 million years ago when so many freshwater ferns grew in the Arctic Ocean that when they sank into the sea, it locked up a substantial fraction of Earth’s carbon, caused an anti-greenhouse effect, and initiated an ice age. Very, very related: Rogue Geoengineer Dumps Iron Into The Pacific to Create Massive Algal Bloom (from 2012, but rogue geoengineering is always in fashion).

There will be a free Introduction To Effective Altruism workshop in Berkeley from May 18 – 19.

80,000 Hours Podcast interviews two economists working on charter cities (transcript available at the bottom). It looks like Zambia is going to go ahead with one.

VR researcher Hamish Todd lists his predictions for the future of VR/AR/MR (no individual confidence levels, but he predicts globally that 80% of them will be right).

I’m not saying Unsong was necessarily right about everything, but a spacecraft called Beresheet (Hebrew name for the Book Of Genesis) just crashed into the moon.


1960: The Year The Singularity Was Cancelled

[Epistemic status: Very speculative, especially Parts 3 and 4. Like many good things, this post is based on a conversation with Paul Christiano; most of the good ideas are his, any errors are mine.]


In the 1950s, an Austrian scientist discovered a series of equations that he claimed could model history. They matched past data with startling accuracy. But when extended into the future, they predicted the world would end on November 13, 2026.

This sounds like the plot of a sci-fi book. But it’s also the story of Heinz von Foerster, a mid-century physicist, cybernetician, cognitive scientist, and philosopher.

His problems started when he became interested in human population dynamics.

(the rest of this section is loosely adapted from his Science paper “Doomsday: Friday, 13 November, A.D. 2026”)

Assume a perfect paradisiacal Garden of Eden with infinite resources. Start with two people – Adam and Eve – and assume the population doubles every generation. In the second generation there are 4 people; in the third, 8. This is that old riddle about the grains of rice on the chessboard again. By the 64th generation (ie after about 1500 years) there will be 18,446,744,073,709,551,616 people – ie about a billion times the number of people who have ever lived in all the eons of human history. So one of our assumptions must be wrong. Probably it’s the one about the perfect paradise with unlimited resources.
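The chessboard arithmetic is easy to check with a few lines of Python:

```python
# Start with Adam and Eve and double the population every generation.
population = 2
for generation in range(2, 65):   # generations 2 through 64
    population *= 2
print(population)                 # 18446744073709551616, ie 2**64
```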

Okay, new plan. Assume a world with a limited food supply / limited carrying capacity. If you want, imagine it as an island where everyone eats coconuts. But there are only enough coconuts to support 100 people. If the population reproduces beyond 100 people, some of them will starve, until they’re back at 100 people. In the second generation, there are 100 people. In the third generation, still 100 people. And so on to infinity. Here the population never grows at all. But that doesn’t match real life either.

But von Foerster knew that technological advance can change the carrying capacity of an area of land. If our hypothetical islanders discover new coconut-tree-farming techniques, they may be able to get twice as much food, increasing the maximum population to 200. If they learn to fish, they might open up entirely new realms of food production, increasing population into the thousands.

So the rate of population growth is neither the double-per-generation of a perfect paradise, nor the zero-per-generation of a stagnant island. Rather, it depends on the rate of economic and technological growth. In particular, in a closed system that is already at its carrying capacity and with zero marginal return to extra labor, population growth equals productivity growth.

What causes productivity growth? Technological advance. What causes technological advance? Lots of things, but von Foerster’s model reduced it to one: people. Each person has a certain percent chance of coming up with a new discovery that improves the economy, so productivity growth will be a function of population.

So in the model, the first generation will come up with some small number of technological advances. This allows them to spawn a slightly bigger second generation. This new slightly larger population will generate slightly more technological advances. So each generation, the population will grow at a slightly faster rate than the generation before.

This matches reality. The world population barely increased at all in the millennium from 2000 BC to 1000 BC. But it doubled in the fifty years from 1910 to 1960. In fact, using his model, von Foerster was able to come up with an equation that predicted the population near-perfectly from the Stone Age until his own day.

But his equations corresponded to something called hyperbolic growth. In hyperbolic growth, a feedback cycle – in this case population causes technology causes more population causes more technology – leads to growth increasing rapidly and finally shooting to infinity. Imagine a simplified version of von Foerster’s system where the world starts with 100 million people in 1 AD and a doubling time of 1000 years, and the doubling time decreases by half after each doubling. It might predict something like this:

1 AD: 100 million people
1000 AD: 200 million people
1500 AD: 400 million people
1750 AD: 800 million people
1875 AD: 1600 million people

…and so on. This system reaches infinite population in finite time (ie before the year 2000). The real model that von Foerster got after analyzing real population growth was pretty similar to this, except that it reached infinite population in 2026, give or take a few years (his pinpointing of Friday November 13 was mostly a joke; the equations were not really that precise).
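This runaway schedule is easy to simulate. Here is a short Python sketch using the illustrative numbers above (not von Foerster’s actual fitted equation):

```python
# Simplified hyperbolic schedule: 100 million people in 1 AD, an initial
# doubling time of 1000 years, and the doubling time halving after each
# doubling.
year, population, doubling_time = 1.0, 100e6, 1000.0
while doubling_time >= 1:      # stop once doublings take under a year
    print(f"{year:7.1f} AD: {population / 1e6:,.0f} million people")
    year += doubling_time
    population *= 2
    doubling_time /= 2

# The years pile up just short of ~2000 AD while the population runs away:
# infinite population in finite time.
```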

What went wrong? Two things.

First, as von Foerster knew (again, it was kind of a joke), the technological advance model isn’t literally true. Even in the Garden of Eden, population can’t do more than double every generation, so the Garden of Eden scenario acts as an upper bound on his hyperbolic model.

Second, contra all previous history, people in the 1900s started to have fewer kids than their resources could support (the demographic transition). Couples started considering the cost of college, and the difficulty of maternity leave, and all that, and decided that maybe they should stop at 2.5 kids (or just get a puppy instead).

Von Foerster published his paper in 1960, which ironically was the last year that his equations held true. Starting in 1961, population left its hyperbolic growth path. It is now expected to stabilize by the end of the 21st century.


But nobody really expected the population to reach infinity. Armed with this story, let’s look at something more interesting.

This (source) might be the most depressing graph ever:

The horizontal axis is years before 2020, a random year chosen so that we can put this in log scale without negative values screwing everything up. This is an arbitrary choice, but you can also graph it with log GDP as the horizontal axis and find a similar pattern.

The vertical axis is the amount of time it took the world economy to double from that year, according to this paper. So for example, if at some point the economy doubled every twenty years, the dot for that point is at twenty. The doubling time decreases throughout most of the period being examined, indicating hyperbolic growth.
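For concreteness, here is one (hypothetical) way to compute that quantity from a series of annual GDP figures; the `doubling_time` helper and the 5%-growth series are made up for illustration:

```python
# "Doubling time at year t": how long until GDP reaches twice its year-t value.
def doubling_time(gdp_by_year, t):
    target = 2 * gdp_by_year[t]
    for later in range(t + 1, max(gdp_by_year) + 1):
        if gdp_by_year[later] >= target:
            return later - t
    return None  # never doubled within the data

# A toy series: steady 5% growth doubles in ~15 years (ln 2 / ln 1.05 ≈ 14.2).
gdp = {year: 100 * 1.05 ** (year - 2000) for year in range(2000, 2040)}
```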

Hyperbolic growth, as mentioned before, shoots to infinity at some specific point. On this graph, that point is represented by the doubling time reaching zero. Once the economy doubles every zero years, you might as well call it infinite.

For all of human history, economic progress formed a near-perfect straight line pointed at the early 21st century. Its destination varied by a century or two now and then, but never more than that. If an ancient Egyptian economist had modern techniques and methodologies, he could have made a graph like this and predicted it would reach infinity around the early 21st century. If a Roman had done the same thing, using the economic data available in his own time, he would have predicted the early 21st century too. A medieval Burgundian? Early 21st century. A Victorian Englishman? Early 21st century. A Stalinist Russian? Early 21st century. The trend was really resilient.
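Why would every era’s extrapolation agree? For a pure hyperbola N(t) = C / (t* - t), the doubling time observed at year t is exactly (t* - t) / 2, so the trend line points at the same date no matter when you draw it. A quick check (the constants here are hypothetical, chosen only for illustration):

```python
T_STAR = 2026.0   # hypothetical singularity date

def population(t):
    return 1e11 / (T_STAR - t)   # hyperbolic growth; constant chosen arbitrarily

for t in [0.0, 1000.0, 1500.0, 1900.0]:
    d = (T_STAR - t) / 2         # claimed doubling time at year t
    # Doubling exactly halves the remaining time, so this holds at every t:
    assert abs(population(t + d) - 2 * population(t)) < 1e-3
```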

In 2005, inventor Ray Kurzweil published The Singularity Is Near, claiming there would be a technological singularity in the early 21st century. He didn’t refer to this graph specifically, but he highlighted this same trend of everything getting faster, including rates of change. Kurzweil took the infinity at the end of this graph very seriously; he thought that some event would happen that really would catapult the economy to infinity. Why not? Every data point from the Stone Age to the Atomic Age agreed on this.

This graph shows the Singularity getting cancelled.

Around 1960, doubling times stopped decreasing. The economy kept growing. But now it grows at a flat rate. It shows no signs of reaching infinity; not soon, not ever. Just constant, boring 2% GDP growth for the rest of time.
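Flat growth means a constant doubling time. At a steady 2% per year, for instance, the economy doubles about every 35 years, and that number never changes:

```python
import math

growth = 0.02                                    # constant annual GDP growth
doubling_time = math.log(2) / math.log(1 + growth)
print(f"{doubling_time:.0f} years")              # ~35 years, every year, forever
```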


Here von Foerster has a ready answer prepared for us: population!

Economic growth is a function of population and productivity. And productivity depends on technological advancement and technological advancement depends on population, so it all bottoms out in population in the end. And population looked like it was going to grow hyperbolically until 1960, after which it stopped. That’s why hyperbolic economic growth, ie progress towards an economic singularity, stopped then too.

In fact…

This is a really sketchy graph of per capita income doubling times. It’s sketchy because until 1650, per capita income wasn’t really increasing at all. It was following a one-step-forward one-step-back pattern. But if you take out all the steps back and just watch how quickly it took the steps forward, you get something like this.

Even though per capita income tries to abstract out population, it displays the same pattern. Until 1960, we were on track for a singularity where everyone earned infinite money. After 1960, the graph “bounces back” and growth rates stabilize or even decrease.

Again, von Foerster can explain this to us. Per capita income grows when technology grows, and technology grows when the population grows. The signal from the end of hyperbolic population growth shows up here too.

To make this really work, we probably have to zoom in a little bit and look at concrete reality. Most technological advances come from a few advanced countries whose population stabilized a little earlier than the world population. Of the constant population, an increasing fraction are becoming researchers each year (on the other hand, the low-hanging fruit gets picked off and technological advance becomes harder with time). All of these factors mean we shouldn’t expect productivity growth/GWP per capita growth/technological growth to exactly track population growth. But on the sort of orders-of-magnitude scale you can see on logarithmic graphs like the ones above, it should be pretty close.

So it looks like past predictions of a techno-economic singularity for the early 21st century were based on extrapolations of a hyperbolic trend in technology/economy that depended on a hyperbolic trend in population. Since the population singularity didn’t pan out, we shouldn’t expect the techno-economic singularity to pan out either. In fact, since population in advanced countries is starting to “stagnate” relative to earlier eras, we should expect a relative techno-economic stagnation too.

…maybe. Before coming back to this, let’s explore some of the other implications of these models.


The first graph is the same one you saw in the last section, of absolute GWP doubling times. The second graph is the same, but limited to Britain.

Where’s the Industrial Revolution?

It doesn’t show up at all. This may be a surprise if you’re used to the standard narrative where the Industrial Revolution was the most important event in economic history. Graphs like this make the case that the Industrial Revolution was an explosive shift to a totally new growth regime:

It sure looks like the Industrial Revolution was a big deal. But Paul Christiano argues your eyes may be deceiving you. That graph is a hyperbola, ie corresponds to a single simple equation. There is no break in the pattern at any point. If you transformed it to a log doubling time graph, you’d just get the graph above that looks like a straight line until 1960.

On this view, the Industrial Revolution didn’t change historical GDP trends. It just shifted the world from a Malthusian regime where economic growth increased the population to a modern regime where economic growth increased per capita income.

For the entire history of the world until 1000 AD, GDP per capita was the same for everyone everywhere during all historical eras. An Israelite shepherd would have had about as much stuff as a Roman farmer or a medieval serf.

This was the Malthusian trap, where “productivity produces people, not prosperity”. People reproduce to fill the resources available to them. Everyone always lives at subsistence level. If productivity increases, people reproduce, and now you have more people living at subsistence level. OurWorldInData has an awesome graph of this:

As of 1500, places with higher productivity (usually richer farmland, but better technology and social organization also help) had higher population density. But GDP per capita was about the same everywhere.

There were always occasional windfalls from exciting discoveries or economic reforms. For a century or two, GDP per capita would rise. But population would always catch up again, and everyone would end up back at subsistence.

Some people argue Europe broke out of the Malthusian trap around 1300. This is not quite right. 1300s Europe achieved above-subsistence GDP, but only because the Black Plague killed so many people that the survivors got a windfall by taking their land.

Malthus predicts that this should only last a little while, until the European population bounces back to pre-Plague levels. This prediction was exactly right for Southern Europe. Northern Europe didn’t bounce back. Why not?

Unclear, but one answer is: fewer people, more plagues.

Broadberry 2015 mentions that Northern European culture promoted later marriage and fewer children:

The North Sea Area had an advantage in this area because of its approach to marriage. Hajnal (1965) argued that northwest Europe had a different demographic regime from the rest of the world, characterised by later marriage and hence limited fertility. Although he originally called this the European Marriage Pattern, later work established that it applied only to the northwest of the continent. This can be linked to the availability of labour market opportunities for females, who could engage in market activity before marriage, thus increasing the age of first marriage for females and reducing the number of children conceived (de Moor and van Zanden, 2010). Later marriage and fewer children are associated with more investment in human capital, since the women employed in productive work can accumulate skills, and parents can afford to invest more in each of the smaller number of children because of the “quantity-quality” trade-off (Voigtländer and Voth, 2010).

This low birth rate was happening at the same time plagues were raising the death rate. Here’s another amazing graph from OurWorldInData:

British population maxes out around 1300 (?), declines substantially during the Black Plague of 1348-49, but then keeps declining. The List Of English Plagues says another plague hit in 1361, then another in 1369, then another in 1375, and so on. Some historians call the whole period from 1348 to 1666 “the Plague Years”.

It looks like through the 1350 – 1450 period, population keeps declining, and per capita income keeps going up, as Malthusian theory would predict.

Between 1450 and 1550, population starts to recover, and per capita incomes start going down, again as Malthus would predict. Then around 1560, there’s a jump in incomes; according to the List Of Plagues, 1563 was “probably the worst of the great metropolitan epidemics, and then extended as a major national outbreak”. After 1563, population increases again and per capita incomes decline again, all the way until 1650. Population does not increase in Britain at all between 1660 and 1700. Why? The List declares 1665 to be “The Great Plague”, the largest in England since 1348.

So from 1348 to 1650, Northern European per capita incomes diverged from the rest of the world’s. But they didn’t “break out of the Malthusian trap” in a strict sense of being able to direct production toward prosperity rather than population growth. They just had so many plagues that they couldn’t grow the population anyway.

But in 1650, England did start breaking out of the Malthusian trap; population and per capita incomes grow together. Why?

Paul theorizes that technological advance finally started moving faster than maximal population growth.

Remember, in the von Foerster model, the growth rate increases with time, all the way until it reaches infinity in 2026. The closer you are to 2026, the faster your economy will grow. But population can only grow at a limited rate. In the absolute limit, women can only have one child per nine months. In reality, infant mortality, infertility, and conscious decision to delay childbearing mean the natural limits are much lower than that. So there’s a theoretical limit on how quickly the population can increase even with maximal resources. If the economy is growing faster than that, Malthus can’t catch up.
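A toy simulation can show this regime change. All the constants below are invented for illustration (a sketch of the idea, not a calibrated model): technology grows in proportion to population, carrying capacity is proportional to technology, and population chases the carrying capacity but can grow at most a few percent per year.

```python
MAX_GROWTH = 0.03    # biological cap on annual population growth (hypothetical)
TECH_RATE = 1e-5     # per-person contribution to technology growth (hypothetical)

population, technology = 100.0, 1.0
incomes = []
for year in range(300):
    technology *= 1 + TECH_RATE * population    # more people -> faster advance
    capacity = 1000.0 * technology              # technology sets the food supply
    desired = capacity / population - 1         # Malthusian pull toward capacity
    population *= 1 + min(desired, MAX_GROWTH)  # growth is biologically capped
    incomes.append(capacity / population)       # per-capita resources
```

In this sketch, per-capita income sits at subsistence for roughly the first century while population catches up to capacity, then rises without bound once technological growth permanently outruns MAX_GROWTH: the shift from a Malthusian regime to a modern one.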

Why would this happen in England and Holland in 1650?

Lots of people have historical explanations for this. Northern European population growth was so low that people were forced to invent labor-saving machinery; eventually this reached a critical mass, we got the Industrial Revolution, and economic growth skyrocketed. Or: the discovery of America led to a source of new riches and a convenient sink for excess population. Or: something something Protestant work ethic printing press capitalism. These are all plausible. But how do they sync with the claim that absolute GDP never left its expected trajectory?

I find the idea that the Industrial Revolution wasn’t a deviation from trend fascinating and provocative. But it depends on eyeballing a lot of graphs that have had a lot of weird transformations done to them, plus writing off a lot of outliers. Here’s another way of presenting Britain’s GDP and GDP per capita data:

Here it’s a lot less obvious that the Industrial Revolution represented a deviation from trend for GDP per capita but not for GDP.

These British graphs show less of a singularity signature than the worldwide graphs do, probably because we’re looking at them on a shorter timeline, and because the Plague Years screwed everything up. If we insisted on fitting them to a hyperbola, it would look like this:

Like the rest of the world, Britain was only on a hyperbolic growth trajectory when economic growth was translating into population growth. That wasn’t true before about 1650, because of the plague. And it wasn’t true after about 1850, because of the Demographic Transition. We see a sort of fit to a hyperbola between those points, and then the trend just sort of wanders off.

It seems possible that the Industrial Revolution was not a time of abnormally fast technological advance or economic growth. Rather, it was a time when economic growth outpaced population growth, causing a shift from a Malthusian regime where productivity growth always increased population at subsistence level, to a modern regime where productivity growth increases GDP per capita. The world remained on the same hyperbolic growth trajectory throughout, until the trajectory petered out around 1900 in Britain and around 1960 in the world as a whole.


So just how cancelled is the singularity?

To review: population growth increases technological growth, which feeds back into the population growth rate in a cycle that reaches infinity in finite time.

But since population can’t grow infinitely fast, this pattern breaks off after a while.

The Industrial Revolution tried hard to compensate for the “missing” population; it invented machines. Using machines, an individual could do an increasing amount of work. We can imagine making eg tractors as an attempt to increase the effective population faster than the human uterus can manage. It partly worked.

But the industrial growth mode had one major disadvantage over the Malthusian mode: tractors can’t invent things. The population wasn’t just there to grow the population, it was there to increase the rate of technological advance and thus population growth. When we shifted (in part) from making people to making tractors, that process broke down, and growth (in people and tractors) became sub-hyperbolic.

If the population stays the same (and by “the same”, I just mean “not growing hyperbolically”) we should expect the growth rate to stay the same too, instead of increasing the way it did for thousands of years of increasing population, modulo other concerns.

In other words, the singularity got cancelled because we no longer have a surefire way to convert money into researchers. The old way was more money = more food = more population = more researchers. The new way is just more money = send more people to college, and screw all that.

But AI potentially offers a way to convert money into researchers. Money = build more AIs = more research.

If this is true, then once AI comes around – even if it isn’t much smarter than humans – as long as the computational power you can invest into researching a given field increases with the amount of money you have, hyperbolic growth is back on. Faster growth rates mean more money, which means more AIs researching new technology, which means even faster growth rates, and so on to infinity.

Presumably you would eventually hit some other bottleneck, but things could get very strange before that happens.

OT126: Ovum Thread

Happy Easter and Passover! This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit or the SSC Discord server – and also check out the SSC Podcast. Also:

1. Matt Arnold is resuming his work on an Unsong audiobook. Existing chapters here, Patreon to support him here.

2. An NYC rationalist group is holding a special spring Slate Star Codex meetup Saturday, May 4th, 3-7 PM, at 180 Maiden Lane in Manhattan. All readers in and around NYC are invited. There will also apparently be some sort of optional networking thing; see here for details.

Posted in Uncategorized | Tagged | 825 Comments

Highlights From The Comments On College Admissions

HalTheWise discusses a factor I missed (until I sneakily edited it in, so you may have read the later version that included it):

One very powerful contributor that Scott did not mention is that in many cases schools are directly or indirectly incentivized to have a low admission rate. U.S. News & World Report released the first national college ranking in 1983, and donors and board members at various schools have increasingly been using national rankings performance, which directly includes low admission rates, as a measure of how well a school is doing.

These rankings and metrics also heavily incentivize having a high yield (a large fraction of admitted students end up attending), which for a fixed-size applicant pool also encourages accepting as few people as possible. This has led to the death of safety schools, because they would rather reject a high-performing student than admit them and have them not attend.

These factors might also be a driving force behind the rise of the Common App, since schools are trying to get as many applicants as possible, even if it hurts the quality of their pool.

kaakitwitaasota points out that consulting is an exception to the “where you go to school doesn’t matter” principle:

A lot of top firms these days won’t even look at you if you didn’t go to the “right” college. My mother did her MBA at Northeastern, and recently had lunch with an old classmate who ended up at a top consulting firm. My mother’s classmate’s résumé would end up in the trash unread these days–Northeastern isn’t considered good enough.

So while it’s probably true on the macro level that smart kids will do just fine anywhere they end up, there is a subset of extremely prestigious, extremely well-paid jobs which will not even look at you if you didn’t get into the right institution at the age of 18–which, in practice, means that the élite are chosen on the basis of who they were at the age of 14-17. When viewed in those terms, it’s completely nuts.

I’d heard this before; my impression is that a big part of consulting is having prestigious-looking people tell you what you want to hear. If what they’re actually hiring for is prestige rather than competence per se, that could make it a special case.

prunesquallor on UCs:

I believe UCs are more competitive because of cost. Personally, I got into a number of strong private schools, and chose a UC because it is significantly cheaper. In the past, this would not have been as much of a factor, because college was affordable. I’m not sure if this applies to public schools in other states. The UCs are the best public schools in the country and are able to compete with high level private schools.

This makes sense, but I’m not sure exactly what the model is. UCs are cheap, so many people across the country apply to them? (but I gave data showing out-of-staters were only 14% of UC students, plus it may not be cheap for out-of-staters). UCs are cheap, so many Californians apply? Aren’t public universities always cheap for people in the state? Maybe cheap UCs mean more top-performing Californians apply to UCs instead of private out-of-state colleges?

BlindKungFuMaster writes about a factor that could explain stories of exceptional students being rejected from everywhere they apply:

It could be the case that college admissions became more random as kids apply more often and the metrics become more vague. I.e. there used to be x acceptable applicants, of which 50% were somewhat randomly selected. Now there are 2x acceptable applicants, of which 25% are somewhat randomly selected. Then there are just many more kids being unlucky and missing out on all their choices, though it hasn’t really been getting harder to get in. Of course, individually the remedy is applying to even more colleges.

This might increase perceptions of selectivity – if one of these bright students posts their sob story, everyone else will just think that standards are very high.

A few people chimed in with concerns about the Dale and Krueger paper. rlms wrote:

From the abstract:

“We find that the return to college selectivity is sizeable for both cohorts in regression models that control for variables commonly observed by researchers, such as student high school GPA and SAT scores. However, when we adjust for unobserved student ability by controlling for the average SAT score of the colleges that students applied to, our estimates of the return to college selectivity fall substantially and are generally indistinguishable from zero.”

So college selectivity *is* significant even after controlling for student quality as measured by SAT scores. It only ceases to be significant when you also control for some vague measure of ambition as signalled by the average SAT score of all the colleges they applied to.

And reasoner cites a 2009 Overcoming Bias post finding that a previous Dale & Krueger paper on this subject was misinterpreted to say college didn’t matter when it really did matter (Marginal Revolution also covered this). The Dale & Krueger paper I posted was an update to that one that said no, really, college doesn’t seem to matter. But I haven’t had time to look at it closely myself, so this shifts my priors a little bit.

aesthesia is also doubtful:

If I remember correctly, the Dale and Krueger paper is somewhat limited in its conclusions. Their sample was weighted toward the high end of the achievement spectrum, so it really says something like: conditional on being accepted to Harvard, there’s not much difference in lifetime earnings between actually attending Harvard and instead choosing to attend UC Berkeley. There aren’t a lot of students admitted to Harvard who instead choose to go to the University of Southern North Dakota at Hoople, so we don’t really know what happens to them.

This is a somewhat personal issue for me: I read summaries of Dale and Krueger’s earlier work when applying for college and decided not to bother applying to selective schools, and just went to a middling state university, thinking that going somewhere more selective wouldn’t make a difference in my future life. I’m no longer confident that was the right move. I believe I would have learned more and made better and stronger connections had I gone somewhere more difficult to get into.

But eqdw’s experience bears Dale & Krueger out:

Hello. I work at a major company involved in job search, job ads, hiring, etc. And I would like to share with you something from a quarterly status update I saw the other day.

The status update presentation had a slide outlining the results of a user study we did. We surveyed employers to find out which details they consider most vs. least important when making a hiring decision.

Out of something like 20 different options surveyed, “where the candidate went to college” was rated dead last in importance. “Formatting of resume” was rated as more important than “where they went to college” for making a hiring decision.

This would seem to confirm conclusion #6 above.

Freddie de Boer wants to remind us (another point I stealthily edited into the post later on) that we really are talking about a small subset of institutions here:

I ran the numbers myself several years ago. Out of 3000+ accredited two- and four-year colleges, something like ~150 reject more students than they accept. The large majority accept almost every student who applies. And of course only half of the population will ever enroll in college and only a third will ever finish. The people who say this is a niche problem are correct.

Gossage Vardebian on money:

Certainly the idea of getting into the best school you can is a further manifestation of the competitiveness discussed above. But that requires another factor, the “and damn the expenses” factor. I went to State U because I wasn’t good enough to get into U of Ivy, but also because I couldn’t afford it. Well, hell, the me of 1980 couldn’t afford 2019 State U either, so hey, might as well go for Harvard too. And if my parents had enough money for State U, well then Harvard is that much less financially onerous, isn’t it? This is also a reason why more kids these days apply to out-of-state State universities, which in turn are admitting more out-of-state kids – the difference in cost is not as large as in the past. It used to be that the idea of going into tens of thousands of dollars of debt for college was insane – literally nobody I know did that. People would do it for med school or law school, and that’s it. I’m sure there were examples from other demographics where that happened, but now of course it is commonplace, as a quick Google search will confirm. Part of this is the whole “you must get into the best school possible” mindset, but part is just a “that’s just the way it is now” resignation. The mental leap to take on that kind of debt has been conditioned into families now.

Anthony adds:

The debt thing is a big change from when I went to college, but it’s driven by policy more than by parents.

The government encouraged student loans because those would get paid back, while subsidizing tuition at the student end or the college end wouldn’t (directly); and since they were paid back, they had very little budget cost. With a relatively significant college premium, people were willing to take on fairly large amounts of debt (and were not very likely to default). This allowed colleges to spend more, driving up the sticker price, making loans more necessary for students who weren’t poor enough to get direct subsidies or rich enough to afford the higher price.

Unfortunately, this is a politically very difficult problem to unwind. Even though the best policy would be to stop subsidizing student loans entirely and make them dischargeable in bankruptcy, which would hugely limit the availability of student loans, that’s never going to happen.

Peffern relates their own experience:

I think point 3.4 is the most important, based on anecdotal evidence.

I’m in undergrad at a top school right now.

In high school, I was a good student – perfect SAT, good APs, reasonable GPA, etc. Despite these accomplishments, none of it felt particularly effortful – I’m good at math and can structure a coherent argument, so taking the SAT was mostly just getting a good night’s sleep and studying vocabulary for a week.

I also did a lot of extracurriculars, and the work and stress load from those absolutely destroyed me. I’m not even talking about the “starting homework at 11pm” kind of workload; I mean the social aspect, the cutthroat politics, the status games, and the showmanship. It’s not that I think those things are necessarily bad in and of themselves, but they’re infinitely more difficult than classwork. I’m incredibly busy these days with class and I’m not even on the same order of magnitude of stress as I was in HS.

High school students are vicious bastards. When you take the AP calculus exam, or the SAT, or even just the final for some class you take, you are only really competing against the teacher, the test, and yourself. When you do extracurriculars, you are competing against horrible entitled jerks with rich parents who make your life miserable. I would take a hundred AP exams before doing another pointless extracurricular.

I don’t know if it explains the college enrollment statistics but it certainly explains the outrage, pessimism, and anger of people my age over the process. I spent what was for past generations an exciting and important time of my life locked in a box, sanding off all aspects of myself that didn’t perfectly resemble an Ivy League student, just to get beaten out by some kid whose father got him an internship somewhere prestigious. That does things to people.

meh finds reason for doubting my thoughts on the Common App:

The Pew article you link has a different conclusion about the Common App:

“The expansion of the Common Application, which makes it easier for students to apply to multiple schools, doesn’t appear to be behind the increase in application volume. The Common App, as it’s called, is accepted by nearly 800 colleges and universities in the United States and several dozen overseas. Of the 1,364 institutions in our sample, 729 accept the Common App along with (or in some cases instead of) their own application forms; the other 635 use their own forms. Although one might suspect that the ease of applying to multiple schools via the Common App would result in stronger growth in application volume among those schools, there was almost no difference in 2002-2017 growth rates between the schools that used the Common App and those that didn’t.”

But adds:

But is their reasoning sound? Isn’t it possible the Common App increases applications uniformly among all schools, not just ones using the Common App?

Consider if pre-Common App, I am willing to fill out 10 applications, so I apply to 10 schools. Now with the Common App, say half of them accept it, so I only need to fill out 6 applications. I am still willing to fill out 4 more applications, so I look at my 11th choice. If they take the Common App, I apply to them for free. If not, I fill out an application, and am still willing to fill out 3 more applications. Does this lead to a similar increase in applications for both Common App and non Common App schools?
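meh's thought experiment can be made concrete with a toy simulation (every number and rule here is invented for illustration, not drawn from real admissions data): students walk down a random preference list, spend one "form" per non-member application, and a single shared form covers all Common App member schools. In this toy world, applications rise at member and non-member schools alike:

```python
import random

# Toy version of meh's model (all parameters invented for illustration):
# each student will fill out at most FORM_BUDGET application forms.
# A non-member school costs one form; the Common App costs one form the
# first time it is used and then covers every member school for free.

random.seed(0)
N_SCHOOLS = 40
FORM_BUDGET = 10
N_STUDENTS = 10_000

def total_applications(members):
    """Applications received per school when students walk down a random
    preference list, applying wherever their form budget allows."""
    counts = [0] * N_SCHOOLS
    for _ in range(N_STUDENTS):
        prefs = random.sample(range(N_SCHOOLS), N_SCHOOLS)
        forms_used = 0
        filed_common = False
        for school in prefs:
            if school in members:
                if filed_common:
                    counts[school] += 1          # free: shared form already filed
                elif forms_used < FORM_BUDGET:
                    forms_used += 1
                    filed_common = True
                    counts[school] += 1
            elif forms_used < FORM_BUDGET:
                forms_used += 1
                counts[school] += 1
    return counts

members = set(range(0, N_SCHOOLS, 2))            # half the schools join
before = total_applications(set())               # world without the Common App
after = total_applications(members)

member_growth = sum(after[s] for s in members) / sum(before[s] for s in members)
others = [s for s in range(N_SCHOOLS) if s not in members]
other_growth = sum(after[s] for s in others) / sum(before[s] for s in others)
print(f"member schools: {member_growth:.2f}x, non-members: {other_growth:.2f}x")
```

This is the extreme version of "apply for free" (students apply to every member school once the shared form is filed), but it illustrates meh's point: the freed-up form budget flows to non-member schools too, so comparing growth rates between member and non-member schools would not isolate the Common App's effect.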

And alexhutcheson has a more complicated model for why applications per student might have increased:

I think this analysis is excellent, but I don’t think the Common App explains the increase in applications-per-admission. There’s another significant factor that causes students to apply to dozens of schools: No one can accurately forecast what a given school will cost anymore.

At some point (I believe in the late ’00s, but could be wrong), elite schools started to extend their financial aid programs to include students from middle-class families. Here’s a press release from Harvard in 2007. Yale and Princeton followed quickly, and most other elite schools seem to have done similar things, albeit with more constrained resources.

Prior to this change, a student from a family with middle-class income or above could know with reasonable certainty what a given college would cost: they would expect to pay the tuition, fees, etc. listed on the brochures. After this change, they would have no idea until after they were admitted. They might get a generous financial aid package that brings the cost down to the price of their local state university, or they might get nothing except loans. The systems used to determine financial aid packages are opaque and not well-publicized, so the outcome is unpredictable.

In 2009, I applied to a broad sample of 14 schools on the east coast. They were a mix of “elite” private and flagship public universities. My parents were comfortably middle-class. I was lucky enough to get into most of them, and so I had the opportunity to compare financial offers. In maybe 1 case out of 12, I would have had to pay the full sticker price. The rest would have been heavily “discounted”, but the discount varied widely between schools, from ~5% off the total cost of attendance, to 60% off, to 100% off (full ride). For the private universities the “discount” came from a mix of “need-based aid” and merit scholarships, while for the public universities it was exclusively from merit scholarships.

In this system, you don’t know what the financial offer is until you get in, and there are possible windfalls from a generous offer, but it’s difficult to predict in advance what that offer will be. The incentives here are obvious: students who are conscious about the cost of their education have a strong incentive to “play the lottery” by applying to as many schools as is feasible for them, while biasing towards schools that are known to provide generous financial packages.

faoiseam says that APs might be a good objective way to track the increase in competition over time:

The College Board has lots of data on AP exams since 1996. They track how many people take exams, in what grade, by race and result, which is enough to get a sense of how people did. They also keep track of AP Scholars. In 2002, 1,738 people were National AP Scholars (8 APs with a score of 4 or more), enough to get you into HYPSM. This increased to 7k by 2005, 16k by 2010, and is now 35k, not enough to get you into a top 20 school. By this measure, college entry has got more competitive.

The number of schools offering APs has increased in that period, from 10k in 2000, to 17k in 2010, to 20k now. Looking at individual subjects, the two most taken are AP Lit and AP Calc. 5s in Lit increased tenfold from 2000 (6,978/105k) to now (61k/580k). For AP Calc BC, in 2000 we have 5k 5s from 15k exams, in 2010 39k 5s from 79k exams, and now 56k 5s from 139k exams. As these are the most common exams, this shows that while the number of schools barely changed in the last 10 years, the number of 5s has doubled.

In general, things have gotten twice as hard since 2010 (as measured by AP exams), 5 times as hard since 2005, and 10 to 20 times since 2000. The AP curriculum has stayed pretty stable, so that does not explain the change. The number of schools has been pretty steady since 2010, though it did double since 2000. This does not explain all of a factor of 2, as the 10k schools that offered APs in 2000 were the better schools anyway.

This makes me question why the US does not use AP exams for college admission. If, as I estimate, the number of people who get n 5s doubles as n increases by 1, this suggests that four 4s and four 5s should be enough to get into HYPSM. The UCs should require eight 4s, or seven 4s and one 5, etc.

This could be done by computer when AP results come out, delaying admission until the Summer of senior year. It would be really nice to have a clear bar that was required for entry, as opposed to the current opaque system.

Steve Sailer fills in a hole in my data:

I wrote a column back in 2013 documenting that famous colleges were not expanding their freshmen class sizes at anywhere close to population growth over the last generation:

Consider the growth rate of Harvard, the world’s richest university. The number of undergraduates in its class of 1986 was 1,722. After a quarter of a century, during which the US population grew by 75 million, Harvard’s class of 2011 was 1,726: an increase of four.

This is not to say that Harvard isn’t expanding: Faculty and grad students are up, and non-teaching staff skyrocketed.

Similarly, Yale’s undergraduate student body has been the same size since 1978. Five years ago, the second-richest college announced a proposal for adding a couple of dormitories, but construction won’t proceed until another $300 million is raised.

In 2010, MIT unveiled plans to expand undergrad enrollment by six percent, which would only get it back to where it was in the 1990s.

Among the most prominent colleges, Princeton is the only one over the last generation to have actually succeeded in boosting enrollment (and that by only about ten percent) after it opened the Whitman residential college in 2007.

…and gives some fictional evidence:

Robert Heinlein’s 1950s juvenile novels are full of scenes where people ask the hero what he’s going to do for college and he says, “I dunno, I guess the day after Labor Day I’ll drive down to State and sign up for some classes.” (That was pretty much what UCLA was like when Heinlein went there for a while in the 1930s.) But at the end of “Have Space Suit, Will Travel,” the hero gets accepted into MIT when the head of the CIA, or whatever, pulls some strings.

This concept of heading down to the college the day it opens to sign up for classes reminds me of an anecdote I rejected for the main post because it was too unbelievable. Wikipedia quotes Supreme Court Justice Byron White saying he went to Yale for law school because he was planning to go to Harvard, but got sick enough on the train to Boston that he decided to get off at New Haven and go to Yale instead. But I haven’t been able to find any source better than Wikipedia for this, so I don’t know if it’s true. Does it at least mean that this was plausible enough in the 1930s (when White was going to college) to make a believable urban legend?

graeme on HackerNews gives the Canadian perspective:

You may be interested in the view from up North. Joseph Heath, a Canadian academic, has commented on how the top three Canadian schools teach more students than the top ten American schools. I’ll excerpt:


“But of course there’s a reason that it’s so difficult to get into Yale – it’s because Yale has only 5,400 students, in a country of over 315 million people! By contrast, McGill has over 30,000, and University of Toronto has 67,000 undergraduates, serving a country of only 35 million people. That means there’s roughly one spot at Yale for every 58,000 Americans, compared to one spot at McGill for every 1,100 Canadians. No wonder American life is more competitive.

Furthermore, all of the best schools in the United States are tiny. Here is a list of the top 10, as ranked by U.S. News & World Report, along with the number of students (undergraduate, I believe):

Princeton: 5,336

Harvard: 6,658

Yale: 5,405

Columbia: 6,068

Stanford: 7,063

Chicago: 5,590

Duke: 6,655

MIT: 4,503

Upenn: 9,682

CIT: 997

Dartmouth: 4,193

That means the top 10 universities in the United States – a country of over 315 million people – at any given time are educating a grand total of only 62,150 students.

By contrast, here are the rough numbers of undergraduates at the top 3 Canadian universities:

McGill: 30,000

UBC: 47,500

UofT: 67,000

So the top 3 Canadian schools are at any given time educating a grand total of 144,500 students – more than twice the total of the top 10 U.S. schools. (In fact, the University of Toronto alone has more student capacity than the top 10 U.S. schools combined.) The United States has almost exactly 9 times the population of Canada, so in order to have the same sort of capacity in higher education, the top 27 schools in the United States would have to have 1.3 million students.”
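Heath's arithmetic holds up; here's a quick sanity check using the enrollment figures exactly as quoted (note that the quoted "top 10" list actually contains 11 schools, presumably because of ties in the rankings):

```python
# Quick check of Joseph Heath's arithmetic, using the figures as quoted.
us_pop, ca_pop = 315_000_000, 35_000_000

top_us = {"Princeton": 5336, "Harvard": 6658, "Yale": 5405, "Columbia": 6068,
          "Stanford": 7063, "Chicago": 5590, "Duke": 6655, "MIT": 4503,
          "Upenn": 9682, "CIT": 997, "Dartmouth": 4193}
top_ca = {"McGill": 30_000, "UBC": 47_500, "UofT": 67_000}

print("US top-school seats:", sum(top_us.values()))      # 62,150 as quoted
print("Canadian top-3 seats:", sum(top_ca.values()))     # 144,500 as quoted
print("Americans per Yale spot:", us_pop // top_us["Yale"])      # ~58,000
print("Canadians per McGill spot:", ca_pop // top_ca["McGill"])  # ~1,100
print("US seats needed for parity:", sum(top_ca.values()) * us_pop // ca_pop)
```

The last line reproduces the "1.3 million students" figure: the Canadian top-3 capacity scaled up by the roughly 9:1 population ratio.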

This suggests a model where people have only so much cognitive real estate devoted to remembering what the good schools are. If your top three schools are very large, many people can be in the “cognitive elite”; if your top three schools are small, the elite will seem more selective. Seems like a stupid way to run an educational system 🙁

jumpinjacksplash offers a model of how an arms race would work:

I think what people may be missing is that time is a thing:

In a world where there are 10 places at Harvard every year but 9 applicants, everyone who can read Latin gets in, and no one bothers to study harder than it takes to meet the minimum requirement.

In a world where there are 10 places at Harvard but 11 applicants, the worst Latin-reader doesn’t get in. But someone who realises they’re the worst Latin reader will practice more, and someone else will become the worst. That person will then practice enough to make someone else the worst, who then practices a bit more. Given enough time, the standard keeps rising as everybody needs to stay ahead of everybody else; eventually everyone hits the ceiling of Latin literacy and starts taking up the violin, then competes on violin skills until they all have to start climbing Everest.

The only equilibrium is where everyone exerts maximum effort on top of their abilities. It takes a long time for this to become even remotely necessary (I still don’t think we’re close to being there). But a world where the first 18 years of your life consists of maximum effort to get into college is a dystopian nightmare, hence as we trend towards it everything gets constantly worse.

This also explains the anecdotal people who expect to swim into Berkeley but get rejected from UCLA: their parents think they’ve massively overshot what’s required, but they’ve failed to realise how far down the slope we’ve slid.

I failed in my original goal for this piece, which was to present an account of competition getting tougher across society. There wasn’t enough data, so I fell victim to a streetlight effect where I concentrated on colleges, and then another streetlight effect where I concentrated on Harvard (which is the best-documented college). But the reasons why competition is getting tougher at Harvard probably don’t generalize, so I ruined my chance to have a really good post about modern competition. Still, a few people stuck to the original spirit and discussed increasing competition as it was showing itself in their own field.

anechoicmedia from the subreddit on TV:

This reminds me of a depressing perspective relayed by Brady Haran on a podcast. An acquaintance had described getting started in the TV business back in what would probably have been the 1980s or 90s. Like most young people of the pre-computer world, he arrived with no independent filmmaking experience. As he describes it, he presented himself to a major TV station, applied for a job, and was accepted into an apprenticeship of sorts, in which they taught him all the ways of TV-news-making from the ground up, which he then went on to excel at.

Brady, himself a BBC veteran, remarked that such a thing could never happen today. Entry positions are ferociously competitive. Young adults applying today can be expected to have had a quality camera in their hands since childhood. There are no apprenticeships; new hires are expected to show up with fundamental knowledge already in hand.

But more than that, the modern ability to signal your ~passion~ via independent YouTube/Vimeo presence means you are really expected to show up with an entire history of volunteer video production going back years. How can you be worthy of working in the BBC if you haven’t been using the tools at your disposal to produce content for years already? You must not really want it — you’d be laughed out of the office without an extensive list of “extracurriculars” in hand. The ladder of opportunity had been yanked quite a few rungs upward.

Something like this may also be at work with the expectation that “real programmers”, especially younger ones, have many side projects, open source contributions, etc.

And jaghataikhan on consulting again:

This reminds me of my field (management consulting). It’s pretty competitive to get into the big three (McKinsey/Bain/BCG), so the level of preparation in applicants is also a runaway arms race. One of my buddies who went to MIT said he’d known people who’d prepped for case interviews for two years!

In contrast, I remember seeing some pre-recession recruiting videos at my firm where there was a girl from Yale saying something to the effect of “I didn’t even know what management consulting was before I interviewed!”

I can’t tell you how quaint/laughable that sounds vs. the extreme levels of competition for the same jobs today. Basically if you don’t have a top resume/school/marks/extracurrics/internships/cases/etc from day 1 (and still a matter of luck btw unless your dad is a partner or something), don’t even bother applying. And I’m certain the level of competition is skyrocketing daily.

KingWalrax gives the only thing I’ve heard that could possibly be a general explanation of the phenomena, though I don’t know if it’s right:

Ex-YCombinator President Sam Altman wrote a decent essay explaining how zero-sum existential competition and conflict arise from a lack of baseline growth in any given system. Whatever you think of Tech/YC, it’s a good read; Sam is a smart guy and he’s on the money here.

GDP growth is always spiky/noisy, and highs average out with lows, making it difficult to discuss overall performance. But Sam’s graphs paint a pretty decent picture: we haven’t had a year of 10% GDP growth since the 1980s, but our recession-year performance (i.e. 2008) is as bad as ever.

Your essay here already tied the broader economic landscape in to College a few times (Farming & Sources of employment in pre-1800s New England). It strikes me as relevant again here in this post-WW2 era.

Growth declined. Therefore: Zero-Sum Competition increased. The only other option was for individual actors to settle for lower individual outcomes, and you wrote the best essay out there on why that doesn’t happen.

I’ve seen conflicting data on whether long-term growth has really decreased and when that started, and you can see some discussion of this in the comments. If it has, it would explain a lot, and would mean there’s still room for things to get worse.

Increasingly Competitive College Admissions: Much More Than You Wanted To Know

0: Introduction

Acceptance rates at top colleges have declined by about half over the past decade or so, raising concern about intensifying academic competition. The pressure of getting into a good university may even be leading to suicides at elite high schools.

Some people have dismissed the problem, saying that a misplaced focus on Harvard and Yale ignores that most colleges are easier to get into than ever. For example, from The Atlantic, Is College Really Harder To Get Into Than It Used To Be?:

If schools that were once considered “safeties” now have admissions rates as low as 20 or 30 percent, it appears tougher to get into college every spring. But “beneath the headlines and urban legends,” Jim Hull, senior policy analyst at the National School Board Association’s Center for Public Education, says their 2010 report shows that it was no more difficult for most students to get into college in 2004 than it was in 1992. While the Center plans to update the information in the next few years to reflect the past decade of applicants, students with the same SAT and GPA in the 90’s basically have an equal probability of getting into a similarly selective college today.

Their link to the report doesn’t work, so I can’t tell if this was ever true. But it doesn’t seem true today. From Pew:

The first graph shows that admission rates have decreased at 53% of colleges, and increased at only 31%. The second graph shows that the decreases were mostly at very selective schools, and the increases were mostly at less selective schools. We shouldn’t exaggerate the problem: three-quarters of US students go to non-selective colleges that accept most applicants, and there are more than enough of these for everyone. But if you are aiming for a competitive school – not just Harvard and Yale, but anywhere in the top few hundred institutions – the competition is getting harder.

This matches my impression of “facts on the ground”. In 2002, I was a senior at a California high school in a good neighborhood. Most of the kids in my class wanted to go to famous Ivy League universities, and considered University of California colleges their “safety schools”. The idea of going to Cal State (California’s middle- and lower-tier colleges) felt like some kind of colossal failure. But my mother just retired from teaching at a very similar school, and she says nowadays the same demographic of students would kill to get into a UC school, and many of them can’t even get into Cal States.

The stories I hear about this usually focus on how more people are going to college today than ever, but there’s still only one Harvard, so there’s increasing competition for the same number of spots.

As far as I can tell, this is false.

The college attendance rate is the same today as it was in 2005. If you’ve seen graphs that suggest the opposite, they were probably graphs of the total number of Americans with college degrees, which only proves that more people are getting degrees today than in the 1940s or whenever it was that the oldest generation still alive went to college.

(in fact, since the birth rate is declining, this means the absolute number of college-goers is going down).

I’ll go further. Harvard keeps building more dorms and hiring more professors, so there are the same number of Harvard spots per American today as there were ten years ago, twenty years ago, and all the way back to the 1800s:

I want to look into this further and investigate questions like:

– How did we get to this point? Have college admissions always been a big deal? Did George Washington have to freak out about getting into a good college? What about FDR? If not, why not?

– Is academia really more competitive now than in the past? On what time scale? At what levels of academia? Why is this happening? Will it stop?

– Is freaking out about college admissions the correct course of action?

1. A Harvard-Centric History Of College Admissions

For the first two centuries of American academia, there was no competition to get into college. Harvard admitted…

(Harvard is by far the best-documented college throughout most of this period, so I’ll be focusing on them. No, Ben Casselman, you shut up!)

…Harvard admitted anyone who was fluent in Latin and Greek. The 1642 Harvard admission requirements said:

When any schollar is able to read Tully [Cicero] or such like classicall Latine Authore ex tempore & make and speake true Latin in verse and prose, suo (ut aiunt) Marte, and decline perfectly the paradigmes of Nounes and Verbes in the Greek tongue, then may hee bee admitted into the Colledge, nor shall any claim admission before such qualifications.

Latin fluency sounds impressive to modern ears, like the sort of test that would limit admission to only the classiest of aristocrat-scholars. But knowledge of classical languages in early Massachusetts was shockingly high, even among barely-literate farmers. In 1647, in between starving and fighting off Indian attacks, the colony passed a law that every town of at least 100 families must have a school that taught Latin and Greek (it was called The Old Deluder Satan Law, because Puritans). Even rural families without access to these schools often taught classical languages to their own children. Mary Baker Eddy, who grew up in early 19th-century rural New Hampshire, wrote that:

My father was taught to believe that my brain was too large for my body and so kept me much out of school, but I gained book-knowledge with far less labor than is usually requisite. At ten years of age I was as familiar with Lindley Murray’s Grammar as with the Westminster Catechism; and the latter I had to repeat every Sunday. My favorite studies were natural philosophy, logic, and moral science. From my brother Albert I received lessons in the ancient tongues, Hebrew, Greek, and Latin. My brother studied Hebrew during his college vacations.

By the standards of the time, Harvard admission requirements were tough but fair, and well within the reach of even poorer families. More important, they were only there to make sure students were prepared for the coursework (which was in Latin). They weren’t there to ration out a scarce supply of Harvard spots. In fact this post, summarizing Jerome Karabel’s Chosen, says that “there was no class size limit, because Harvard was trying to compete with Oxford and Cambridge for size”. They wanted as many students as they could get; their only limit was the number of qualified applicants.

These policies continued through the 19th century, with changes only in the specific subjects being tested. In the late 1700s they added some math; in the early 1800s they added some science. You can find a copy of the 1869 Harvard entrance exam here. It’s pretty hard – but it had an 88% pass rate (surely at least in part because you wouldn’t take it unless you were prepared) and everyone who passed was guaranteed a spot at Harvard. Some documents from Tufts around this time suggest their procedure was pretty similar. Some other documents suggest that if you went to a good high school, they assumed you were prepared and let you in without requiring the exams.

When did this happy situation end? Information on this topic is hard to find. I can’t give specific sources, but I get the impression that at the very end of the 19th century, there was a movement to standardize college admissions. At first this just meant making sure every college had the same qualification exams, so that one school wasn’t asking about Latin and another about Greek. This culminated in the creation of the College Board in 1899, which administered an admission test that acted as a sort of great-great-grandfather of the SAT. Very gradually, so gradually that nobody at the time really remarked on it, this transitioned from making sure students were ready, to rationing out scarce spots. By about 1920, the transition was basically complete, so that nobody was surprised when people talked about “how colleges should decide who to accept” or questions like that. If you can find more on this transition, please contact me.

Acceptance was originally based entirely on your score on the qualifying exam. But by the 1920s, high-scorers on this exam were disproportionately Jewish. Although Jews were only about 2% of the US population, they were 21% of Harvard’s 1922 class (for more on why this might happen, read my post The Atomic Bomb Considered As Hungarian High School Science Fair Project). In order to arrest this trend, Harvard and other top colleges decided to switch from standardized testing to an easier-to-bias “holistic admissions” system that would let them implement a de facto Jewish quota.

Quota proponents not only denied being anti-Semitic but argued they were actually trying to fight anti-Semitism; if the student body became predominantly Jewish, this might inflame racial tensions against Jews. Harvard president Abbott Lowell, the quotas’ strongest proponent, said: “The anti-Semitic feeling among the students is increasing, and it grows in proportion to the increase in the number of Jews. If their number should become 40% of the student body, the race feeling would become intense”. Was he just trying to rationalize his anti-Semitism? I don’t think so. I doubt modern Harvard officials are anti-Asian in any kind of a hateful sense, but they enforce Asian quotas all the same. What would they say if you asked them why? Maybe that if a country full of whites, blacks, and Latinos had predominantly Asian elite colleges, that might make (as Lowell put it) “the race feeling become intense”. I see no reason to think that 1920s officials were thinking any differently than their modern counterparts.

Whatever the reasons, by the mid-1920s the Jewish quota was in place and Harvard had switched to holistic admissions. But Lowell and his contemporaries emphasized that the new policies were never meant to make Harvard selective. “It is neither feasible nor desirable to raise the standards of the College so high that none but brilliant scholars can enter…the standards ought never to be so high for serious and ambitious students of average intelligence.”

We’ll talk later about how this utopian dream of top-notch education for anyone with a foreskin failed. But before we get there, a more basic question: how come Harvard wasn’t overrun with applicants? If the academic requirements were within reach of most smart high-schoolers, how come there was no need to ration spots?

Below, I discuss a few possibilities in more depth.

1.1: Historical Tuition Fees

Were early American colleges so expensive that everyone except aristocrats was priced out?


(sources: 1, 2, 3, 4, 5, 6)

I find very conflicting accounts of colonial tuition prices. But after the Revolution, tuition stayed stable at about a third of median income until about 1990, when it increased to 1.5x median income. In other words: relative to income, historical tuition costs were about a fifth of what they are today. Some good universities seem to have not had tuition at all – Stanford had a $0 price tag for its first 35 years.
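As a quick sanity check on that “about a fifth” claim, here is the arithmetic spelled out (the one-third and 1.5x multiples are the figures above; the calculation itself is mine):

```python
# Tuition as a multiple of median income, per the figures above.
historical = 1 / 3  # pre-1990: tuition was about a third of median income
modern = 1.5        # today: tuition is about 1.5x median income

ratio = historical / modern
print(f"historical tuition burden relative to today: {ratio:.2f}")  # ~0.22, roughly a fifth
```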

Even when tuition existed, historical accounts suggest it wasn’t especially burdensome for most college students, and record widespread effort to accommodate people who couldn’t pay. The first Harvard scholarship was granted in the 1640s. There are occasional scattered references to people showing up at Harvard without enough money to pay and being given jobs as servants to college officials or other students to help cover costs; in America, Ralph Waldo Emerson took advantage of this kind of program; in Britain, Isaac Newton did.

If you were a poor farmer who couldn’t get a scholarship and didn’t want to work as a servant, sometimes colleges were willing to accept alternative forms of payment. According to The Billfold:

Harvard tuition — which ran about fifty-five pounds for the four-year course of study — was paid the same way [in barter], most commonly in wheat and malt. The occasional New England father sent his son to Cambridge with parsnips, butter, and, regrettably for all, goat mutton. A 141-pound side of beef covered a year’s tuition.

1.2: Discrimination

Early colleges only admitted white men. Did this reduce the size of the applicant pool enough to give spots to all white men who applied?

I don’t think racial discrimination can explain much of the effect. Throughout the 19th century, America hovered around 85% white. New England, where most Harvard applicants originated, may have been 95% to 99% white – see eg this site which says Boston was 1.3% black in 1865; non-black minorities were probably a rounding error. So there’s not much room for racial discrimination to reduce the applicant pool.

The exclusion of women from colleges in the 1800s was less extensive than generally believed:

(source: unprincipled sketchy attempt to combine this with this to get one measure that covers the entire period)

For every woman in college in 1890, there were about 1.3 men; this is no larger a gender gap than exists today, though in the reverse direction. How come you never hear about this? Many of the women were probably in teacher-training colleges or some other gendered institution; until the early 1900s, none of them were at Harvard. But after gender integration, the women’s colleges were usually annexed to the nearest men’s college, turning them into a single institution. Under these circumstances, it doesn’t seem that likely that integration had a huge effect on admissions selectivity. Also, admitting women can only double the size of the applicant pool, but 1800s colleges seemed much more than twice as easy to get into.

Overall I don’t think this was a major part of the difference either.

1.3: Lack Of Degree Requirement For Professional Schools

Nowadays college is competitive partly because people expect it to be their ticket to a good job. But in the 19th century, there was little financial benefit to a college degree.

Suppose you wanted to become a doctor. Most medical schools accepted students straight out of secondary school, without a college degree. In fact, most medical schools accepted all “applicants”, the same as Harvard. Like Harvard, there was sometimes a test to make sure you knew Greek and Latin (the important things for doctors!) but after that, you were in.

(This article has some great stories about colonial and antebellum US medical education. Anyone who wanted could open up a medical school; profit-motive incentivized them to accept everybody. Medical schooling was so profitable that the bottleneck became patients; since there were no regulations requiring medical students to see patients, less scrupulous schools tended to skip this part. Dissection was a big part of the curriculum, but there were no refrigerators, so fresh corpses became a hot commodity. Grave robbing was a real problem, sparking small-scale wars between medical schools and their local towns. “In at least 2 instances, the locals actually raided the school to obtain a body. In 1 case, the school building was destroyed by fire, and in another, 2 people, a student and a professor, were killed.” There were no requirements for how long medical schools should last, so some were as short as nine months. But there were also no requirements for who could call themselves a doctor, so students would sometimes stay until they got bored, then drop out and start practicing anyway. Tuition was about $100 per year, plus cost of living and various hidden fees; by my estimates that’s about half as much (as percent of an average doctor’s salary) as medical school tuition today. This situation continued until the Gilded Age, when medical schools started professionalizing themselves a little more.)

Or suppose you wanted to be a lawyer. The typical method was called “reading law”, which meant you read some law textbooks, served an apprenticeship with a practicing lawyer, and then started calling yourself a lawyer (in some states you also needed a letter from a court testifying to your “good moral character”). Honestly the part where you apprenticed with a practicing lawyer was more like a good idea than a requirement. It’s not completely clear to me that you needed to do anything other than read enough law textbooks to feel comfortable lawyering, and then go lawyer. Most lawyers did not have a college degree.

Abraham Lincoln, a lawyer himself, advised a law student:

If you are absolutely determined to make a lawyer of yourself the thing is more than half done already. It is a small matter whether you read with any one or not. I did not read with any one. Get the books and read and study them in their every feature, and that is the main thing. It is no consequence to be in a large town while you are reading. I read at New Salem, which never had three hundred people in it. The books and your capacity for understanding them are just the same in all places.

Levi Woodbury, the 30th US Supreme Court Justice (appointed 1846), was the first to attend any kind of formal law school. James Byrnes, the 81st Supreme Court Justice (appointed 1941), was the last not to attend law school. It’s apparently still technically possible in four states (including California) to become a lawyer by reading law, but it’s rare and not very encouraged.

The ease of entering these professions helps explain why there was no oversupply of Harvard applicants. But then why wasn’t there an oversupply of doctors and lawyers? We tend to imagine that of course you need strict medical school admissions, because some kind of unspecified catastrophe would happen if any qualified person who wanted could become a doctor. Did these open-door policies create a glut of professionals?

No. There were fewer doctors and lawyers per capita than there are now.

Did it drive down salaries for these professions?

I don’t have great numbers on lawyer salaries, but based on this chart from 1797 Britain and this chart from 1900s America, I get the impression that throughout this period lawyers made about 3-5x as much as unskilled laborers, 3-4x as much as clerks and teachers, and about the same as doctors. This seems to match successful modern lawyers, and probably exceed average modern lawyers. This may be because unskilled laborers now earn a minimum wage and teachers have unions, but in any case the 19th-century premium for legal work seems to have been at least as high and probably higher.

The same seems true of doctor salaries. The paper above estimates physician salaries at $600 per year, during a time when agricultural laborers might have been making $100 and clerks and teachers twice that.
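Spelling out the multiples implied by those estimates (the dollar figures are the paper’s, the division is mine):

```python
# Annual salary estimates from the paper cited above (19th-century US).
physician = 600  # estimated physician salary
laborer = 100    # agricultural laborer
clerk = 200      # clerks and teachers ("twice that")

print(physician / laborer)  # 6.0 — physicians at ~6x unskilled labor
print(physician / clerk)    # 3.0 — and ~3x clerks/teachers, matching the lawyer multiples above
```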

I conclude that letting any qualified person become a doctor or a lawyer, without gatekeeping, did not result in a glut of doctors and lawyers, and did not drive down salaries for those professions beyond levels we would find reasonable today.

1.4: Conclusions

So why weren’t there gluts of would-be college students, doctors, and lawyers? I can’t find any single smoking gun, but here are some possibilities.

Throughout this period, between 60% and 80% of Americans were farmers. Unless you were wealthy or urban, the question of “what career do you want in order to actualize your potential” didn’t come up. You were either going to be a farmer, or else you had some specific non-farm pathway in mind that you could pursue directly instead of getting a college degree to “keep your options open”.

Since rural children were expected to work on the farm, there was no protracted period of educational unproductivity. There was no assumption that your kids weren’t going to be earning anything until age 18 and so you might as well protract their unproductivity until age 22. That meant that paying to send your child to Boston or wherever, and to support him in a big-city lifestyle for four years, was actually a much bigger deal than the tuition itself. This article claims that in 1816, tuition itself was only about 10% of the expenses involved in sending a child to college (granted, poor people pinching pennies could get by for much less than the hypothetical well-off student analyzed here, but I think the principle still holds).

Another limiting factor may have been that there was ample opportunity outside of college and the professions, in almost every area. Twelve US presidents, including George Washington, did not go to college. Benjamin Franklin, everyone’s model of an early American polymath genius, did not go to college. Of the ten richest people in American history (mostly 19th-century industrialists), as far as I can tell only two of them went to college. Aside from the obvious race and gender discrimination, the 19th century was a lot closer to real meritocracy than today’s credentialist fake meritocracy; people responded rationally by ignoring credentials and doing meritorious things.

2. How Did The Zero-Competition Regime Transition To The Clusterf**k We Have Today?

Here is a graph of Harvard admission rates over time, based mostly on these data:

During the early part of the 1900s, Harvard was still in the 19th-century equilibrium of admitting most qualified non-Jewish applicants. In the 1940s, the admission rate dropped from 95% to 25%. Most sources I read attribute this to the GI Bill, a well-intentioned piece of legislation that encouraged returning WWII veterans to get a college education. So many vets took the government up on the offer that Harvard was overwhelmed for the first time in its history.

But this isn’t the whole story.

You’ve seen this before – this is percent of Americans (by gender) to graduate college. It’s sorted by birth cohort, which means 1920 on the x-axis corresponds to the people who were in college in the 1940s – eg our GIs. The GI Bill is visible on this graph – around 1920, there is a spike in attendance for men but not women, which is the pattern we would predict from GIs. But it only takes college graduation rate from 10% to 15% (compared to its current 40%). And after the GI Bill, the college graduation rate starts dropping again – as we would expect of a one-time shock from a one-time war. And between 1955 and 1960, Harvard admissions rebound to about 40% of applicants.

The big spike in college attendance rates – and a corresponding dip in Harvard admission percentage – takes place in the 1938 to 1952 birth cohort. Why are all these people suddenly going to college? They’re dodging the draft. A big part of the increase in college admissions was people taking advantage of the college loophole to escape getting sent to Vietnam.

Again, this is a one-time shock, and mostly applies to men. So how come we see a quadrupling of college graduation during this period affecting men and women alike?

A standard narrative says that work has gotten more difficult over the past century, and so workers need more education. I’ve always found this hard to believe. In other countries, students still go to medical school and law school without a separate college degree first. Programming is a classic example of a high-skilled complicated modern profession, but many programmers dropped out of college, many others didn’t attend at all, and many programming “boot camps” are opening up offering to teach programming skills outside the context of a college education. And in many of the jobs that do require college education, the education is irrelevant to their work. Both of my adult jobs – as an English teacher and as a doctor – required me to have a college degree in order to apply. But my college education was relevant to neither (I’m a philosophy major). The degree requirement seemed like more of a class barrier / signaling mechanism than an assertion that only people who knew philosophy could make good teachers and doctors. I realize I’m making a strong claim here, and I don’t have space to justify it fully – for more on this, read my Against Tulip Subsidies and SSC Gives A Graduation Speech – or better yet, Bryan Caplan’s The Case Against Education.

If increasing need for skills didn’t cause increasing college attendance, what did? Again, this is based off of idiosyncratic beliefs I don’t have the space to justify (again, read Caplan) but it could be a sort of self-reinforcing signaling cycle. Once the number of people in college reached a certain level, it led to a well-known social expectation that intelligent and conscientious men would have college degrees, which made college a sign of intelligence and, conversely, not having been to college a sign of stupidity. If only 10% of smart/hard-working people have been to college, not having a college degree doesn’t mean someone isn’t smart/hard-working; if 90% of smart/hard-working people have been to college, not having a college degree might call their intelligence and work ethic into question. This cycle meant that after the shocks of the mid-1900s, there was a strong expectation of a degree in the knowledge professions, which forced women and later generations of men to continue going to college to keep up. The government’s decision to provide an endless stream of supposedly-free college loans exacerbated the problem and sabotaged the only natural roadblock that could have stopped it.

At the same time, several factors were coming together to discourage hunch-based “I like the cut of his jib” style hiring practices. Community ties were becoming weaker, so hirers typically wouldn’t have social contacts with potential hirees. Family businesses whose owners could hire based on hunches were giving way to large corporations where interviewers would have to justify their hiring decisions to higher-ups. Increasing concern about racism was raising awareness that hunch-based hiring tended to discriminate against minorities, and the advent of the discrimination lawsuit encouraged hiring based on objective criteria so you could prove you rejected someone for reasons other than race. The Supreme Court decision Griggs v. Duke Power may or may not have played a role by making it legally risky for corporations to give prospective hires aptitude tests. All of this created a “perfect storm” where employers needed some kind of objective criteria to evaluate potential new hires, and all the old criteria weren’t cutting it anymore. The rise of the college degree as a signal for intelligence, and the increased sorting of people by college selectivity, fit into this space perfectly.

Once society established that knowledge-worker jobs needed college degrees, the simultaneous rises in automation, globalization, and inequality made knowledge-worker jobs increasingly necessary to earn a living, completing the process.

If my story were true, this would suggest college attendance would not have risen so quickly in other countries that didn’t have these specific factors. I don’t have great cross-country data, but here’s what I can find:

College attendance in the UK supposedly remained very low until a 1992 act designed to encourage it, but it looks like part of that is just them reclassifying some other schools as colleges. I don’t know how it really compared to the US and I welcome information from British readers who know more than I do about this. Through the rest of the world, college attendance lagged North America by a long time, but the continent-wide categories probably combine countries at different levels of economic development. I don’t really know about this one.

Moving on: the graphs in the Introduction show that college attendance has been stable since about 2005. Why did the rise stop? These articles point out a few relevant trends.

First, the economy is usually to blame for this kind of thing. There was a slight increase in attendance during the 2008 recession, and a slight decrease during the recent boom. But over the course of the cycle, it still seems like the increase in college attendance has slowed or stopped overall, in a way that wasn’t true of past business cycles.

Second, birth rates are decreasing, which means fewer college-aged kids. The national population is still increasing, mostly because of immigrants, but many immigrants are adults without much past education, so they’re not as significant a contribution to the college population.

Third, the price of college keeps going up. I’m surprised to hear this as a contribution to declining attendance, because I thought it was the glut of students that kept prices high, but maybe both factors affect each other.

Fourth, for-profit colleges are falling apart.

In some cases, the government has shut them down for being outright scams. In other cases, potential students have wised up, realized they are outright scams, and stopped being interested in attending them. These colleges advertised to (some would say “preyed on”) people who weren’t able to get into other colleges, so their collapse looks like a fall in the college enrollment/graduation rate.

These are all potentially relevant, but they seem kind of weak to me: the sort of thing that explains the year-to-year trend, but not why the great secular movement in favor of more college has stopped.

Maybe it’s just reached a natural ceiling. Seventy percent of high school graduates are now going to college. The remaining 30% may disproportionately include people with serious socioeconomic or health problems that make going to college very hard for them.

Also, keep in mind that only about 60% of college students graduate in anywhere near the expected amount of time. Some economists have come up with rational-college-avoidance models where people who don’t expect to be able to graduate from college don’t waste their money trying.

3. If Number Of Students Applying To College Has Been Constant Or Declining Over The Past Ten Years, Why Are Admissions To Top Colleges So Much More Competitive?

To review: over the past ten years, the number of US students applying to college has gone down (the number applying to four-year private colleges has stayed about the same). But Harvard’s acceptance rates have decreased by half, with similar cuts across other top schools, and more modest cuts across most good and moderately-good colleges. There’s also a perception of much greater pressure on students to have perfect academic records before applying. Why?

3.1: Could the issue be increasing number of international students?

This would neatly match the evidence of constant US numbers vs. increasing selectivity.

Harvard equivocates between a few different definitions of “international student”, but I think it’s comparing apples to apples when it says the Class of 2013 was 10% foreign citizens and the Class of 2022 is 12%. These two classes bound the time period we’re worrying about, and this doesn’t seem like a big change. Also, across all US colleges international student enrollments seem to be dropping, not increasing. Some of this may have to do with strict Trump administration visa policies, or with international perceptions of increasing US hostility to foreigners.

Since fewer international students are applying in general, and even top schools show only a trivial increase, this probably isn’t it.

3.2: Could the issue be more race-conscious admission policies?

Might top colleges be intensifying affirmative action and their preference for minorities and the poor, thus making things harder for the sort of upper-class white people who write news articles about the state of college admissions? Alternatively, might colleges be relaxing their restrictions on high-achieving Asians, with the same result?

This matches the rhetoric colleges have been putting out lately, but there aren’t many signs it’s really happening. Harvard obsessively chronicles the race of its student body, and the class of 2010 and class of 2022 have the same racial composition. The New York Times finds that whites are actually better represented at colleges (compared to their percent of the US population) than they were 35 years ago, although Asians are the real winners.

The Times doesn’t explain why this is happening. It may be due to weakening affirmative action, including bans by several states. Or it may be because of a large influx of uneducated Mexican immigrants who will need a few more generations of assimilation before their families attend college at the same rate as whites or previous generations of Latinos.

What about Asians? There was a large increase in Asian admissions, but it was mostly before this period. The Ivy League probably has some kind of unofficial Asian quota which has been pretty stable over the past decade. Although the Asian population continues to grow, and their academic achievement continues to increase, this probably just increases intra-Asian competition rather than affecting people of other races.

3.3: Could the issue be increasing number of applications per student?

Here’s an interesting fact – even though no more Americans or foreigners are applying to colleges today vs. ten years ago, Harvard is receiving twice as many applications – from about 20,000 to more than 40,000. How can this be?

The average college student is sending out many more applications.

I am not Harvard material. But when I was looking at colleges, my mother pressured me to apply to Harvard. “Come on!” she said. “It will just take a few hours! And who knows? They might accept you! You’ll never get in if you don’t try!”

Harvard did not accept me. But my mother’s strategy is growing in popularity. Part of this might be genuine egalitarianism. Maybe something has gone very right, and the average American really does believe he or she has a shot at the Ivy League. But part of it may also be a cynical ploy by colleges to improve their rankings in US News and other similar college guides. These rankings are partly based on how “selective” they are, ie what percent of students they turn away. If they encourage unqualified candidates to apply, they can turn those unqualified candidates away, and then they appear more “selective” and their ranking goes up.

But increased application volume is mostly driven by an increasingly streamlined college admissions process, including the Common Application. I didn’t like my mother’s advice, because every college application I sent in required filling in new forms, telling them my whole life story all over again, and organizing all of it into another manila envelope with enclosed check. It was like paying taxes, except with essay questions. And there was a good chance you’d have to do it all over again for each institution you wanted to apply for. Now that’s all gone. 800 schools accept the Common Application, including the whole Ivy League. From the Times again:

Six college applications once seemed like a lot. Submitting eight was a mark of great ambition. For a growing number of increasingly anxious high school seniors, figures like that now sound like just a starting point…

For members of the class of 2015 who are looking at more competitive colleges, their overtaxed counselors say, 10 applications is now commonplace; 20 is taking on a familiar ring; even 30 is not beyond imagining. And why stop there? Brandon Kosatka, director of student services at the Thomas Jefferson School for Science and Technology in Alexandria, Va., recently worked with a student who wanted a spot in a music conservatory program. To find it, she applied to 56 colleges. A spokeswoman for Naviance, an online tool that many high school students and their counselors use to keep track of applications, said one current user’s “colleges I’m applying to” tab already included 60 institutions. Last year the record was 86, she said.

Does this mean increasing competitiveness is entirely an illusion? Suppose in the old days, each top student would apply to either Harvard or Yale. Now each top student applies to both Harvard and Yale, meaning that both colleges get twice as many applicants. Since each of them can only admit the same number of students, it looks like their acceptance rate has been cut in half. But neither one has really become more competitive!

This can’t quite be it. After all, in the first case, Yale would expect 100% of accepted students to attend. In the second, Yale would know that about 50% of accepted students would choose Harvard instead, so it would have to accept twice as many students, and the acceptance rate per application wouldn’t change.

But if more people are following my mother’s strategy of applying to Harvard “just in case” even when they’re not Harvard material, then this could be an important factor. If the number of people who aren’t Harvard material but have mothers who imagine they are is twice as high as the number of people who really are Harvard material, then Harvard’s applications will triple. Harvard won’t accept these extra applicants, so it doesn’t need to admit any more students to compensate; its acceptance rate simply falls to a third of what it was, even though nothing has changed for qualified applicants. Here there really is an illusion of increasing competition.
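For concreteness, here’s that arithmetic as a toy calculation in Python (all numbers made up; this is an illustration of the illusion, not real Harvard data):

```python
# Toy model of the "just in case" application illusion. Class size and the
# pool of qualified applicants stay fixed; only junk applications change.

QUALIFIED = 2000   # applicants who are genuinely Harvard material
SPOTS = 2000       # size of the admitted class (assume every admit enrolls)

def acceptance_rate(unqualified_applications):
    total_applications = QUALIFIED + unqualified_applications
    return SPOTS / total_applications

before = acceptance_rate(0)              # nobody applies "just in case"
after = acceptance_rate(2 * QUALIFIED)   # twice as many no-hope applicants
                                         # as qualified ones

print(f"{before:.0%} -> {after:.0%}")    # 100% -> 33%
# Applications tripled and the acceptance rate fell to a third, but a
# qualified applicant's odds of admission never changed.
```

The acceptance rate is a third of what it was, yet every qualified applicant still gets in.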

Finally, this process could increase sorting. Suppose that, for the first time in history, a Jewish mother had an accurate assessment of her son’s intellectual abilities, I really was Harvard material, and I was unfairly selling myself short. If the existence of a Common Application lets more people apply to Harvard “just in case”, and if the Harvard admissions committee is good at their job, then the best students will get more efficiently matched with the best institutions. In the past, Harvard might have been losing a lot of qualified applicants to unjustified pessimism; now all those people will apply and the competition will heat up.

And in the past, I think a lot of people, including really smart people, just went to the nearest halfway-decent state college to their house. Partly this was out of humility. Partly it was because people cared about family and community more. And partly it was because college wasn’t yet viewed as the be-all and end-all of your value as a human being, where you had to get into the Ivy League or else your life was over. If all these people are now trying to get into Harvard, that will increase competition too.

Can we measure this?

This is the best I can do. It shows that over the past ten years, the number of students at public universities who come from in-state has dropped by 5%. This is probably related to sorting – people working on sorting themselves efficiently will go to the best school they can get into rather than just the closest one in their state. But it’s not a very dramatic difference. I suspect, though I can’t prove, that this is hiding a larger change at the very top of the distribution.

3.4: Could the issue be that students are just trying harder?

Imagine the exact same students applying to the exact same schools. But in 2009, they take it easy and start studying for their SATs the night before, and in 2019, they all have private tutors and are doing five extracurricular activities. College admissions will seem more competitive in 2019.

Any attempt to measure this will be confounded by reverse causation – increased effort might or might not cause increased selectivity, but increased selectivity will definitely cause increased effort. I’m not sure how to deal with this.

If studying harder improves SAT scores, these could be a proxy for how much effort students are putting in. They changed the test in 2016 in a way that makes scores hard to compare, but we can at least compare scores from earlier years. Scores declined between 2005 and 2015 in both math and reading. This may be because more students are taking the SAT (1.5 million in 2008 vs. 2.1 million in 2018), so test-takers are a less selected population. This is kind of surprising given that college enrollment is stable or declining, but it could be that as part of pro-equality measures, schools are pressuring more low-achieving kids to take the SATs in order to “have a chance at college”, but those students don’t really end up attending. In support of this theory, scores are declining most quickly among blacks, Hispanics, and other poorer minority groups who may not have taken the SAT in earlier years; they are stable among whites, and increasing among Asians (increasing numbers of whom may be high-achieving Chinese immigrants). At least, this is the best guess I can come up with for why this pattern is happening. But it means SATs are useless as a measure of whether students are “trying harder”.

Why might students be trying harder? If there’s a ten year lag between things happening and common knowledge that the things have happened, the explosion of college attendance during the 1990s, with an ensuing increase in competitiveness, might have finally percolated down to the average student in the form of advice that getting into college is very hard and they should work to be more competitive. In addition, the Internet is exposing new generations of neurotic parents to messages that unless their child is perfect they will never get into college and probably die alone in a ditch.

Further, the decline of traditional criteria might be causing an increasing emphasis on extracurriculars, which take a harder toll on students. Because of grade inflation, colleges are no longer counting high school grades as much as they used to; because meritocracy is passé, they’re no longer paying as much attention to the SAT. This implies increased emphasis on extracurriculars – things like student government, clubs, internships, charitable work, and the like. Despite popular misconceptions, the SAT is basically an IQ test, and doesn’t really reward obsessive freaking out and throwing money at the problem. But getting the right set of extracurriculars absolutely rewards obsessively freaking out and throwing money at the problem. Maybe twenty years ago, you just played the IQ lottery and hoped for the best, whereas now you work yourself ragged trying to become Vice-President of the Junior Strivers Club.

But all of this is just speculation; I really don’t know how to get good data on these subjects.

3.5: Are funding cuts reducing the number of college spots available?

Some people argue that cuts in public education are reducing the number of positions available at public universities, meaning the same number of students are competing for fewer spots. This source confirms large cuts in public funding:

These universities have tried to compensate by increasing tuition (or increasing the percent of out-of-state students, who pay higher tuition). It looks like they’ve done this on a pretty much one-to-one basis, so that they’re actually getting more money per student now than they did when public funding was higher.

And from California:

It’s not clear that declining state support affected enrollment at all. Colleges just raised their prices by a lot.

In 2007, 2.8x as many students were in public universities compared to private ones. In 2017, the ratio was 2.9. If the problem were limited availability of public universities to absorb students, we might expect the percent of students at public universities to go down. This doesn’t seem to be happening.

Overall it doesn’t look like funding cuts to public universities mattered very much here.

3.6: Conclusions?

The clearest reason for increasing academic competition in the past ten years is the increasing number of applications per person, enabled by the online Common Application. This has doubled the number of applications sent to top colleges like Harvard despite the applicant pool staying the same size. Some of this apparent increased competition is a statistical illusion, but parts of it may be real due to increased sorting.

Other reasons may include increased common knowledge of intense competition making everyone compete more intensely, and decreased use of hard-to-game metrics like the SAT in favor of easy-to-game metrics like extracurriculars.

4. What Has Been Happening Beyond The College Level?

Competition is intensifying.

Between 2006 and 2016, the number of applicants to US medical schools increased by 35% (note change in number of applicants, not number of applications).

In a different statistic covering different years, the number of people enrolled at medical school increased 28% from 2002 to 2017. These two numbers aren’t directly comparable, but by eyeballing them we get the impression that the number of spots is increasing more slowly than the number of applicants, probably much more slowly.

As predicted, the MCAT (the med school version of the SAT) scores necessary for admission have been increasing over time.

This is also the impression I have been getting from doctors I know who work in the medical school and residency admissions process. I got to interview some aspiring residents a few years ago for a not-even-all-that-impressive program, and they were fricking terrifying.

Law schools keep great data on this (thanks, law schools!). US News just tells us outright that law schools are less competitive than in 2008, even at good programs. Here’s the graph:

And despite it feeling like lawyers are everywhere these days, law school attendance has really only grown at the same rate as the population since 1970 or so, and dropped over the past decade. This may be related to word getting out that being a lawyer is no longer as lucrative a career as it used to be.

Unlike law schools, graduate school basically fails to keep any statistics whatsoever, and anything that might be happening at the graduate level is a total mystery. We know the number of PhDs granted:

…and that’s about it.

Part of what inspired me to write this post was listening to a famous scientist (can’t remember who) opine that back when he was a student in the 1940s, he kind of wandered into science, found a good position at a good lab, worked up the ranks to become a lab director, and ended up making great discoveries. He noted that this was unthinkable today – you have to be super-passionate to get into science grad school, and once you’re in you have to churn out grant proposals and be the best of the best to have any shot at one day having a lab of your own. I’ve heard many people say things like this, but I can’t find the evidence that would put it into perspective. If anyone knows more about the history of postgraduate education and work in the sciences, please let me know.

I’m also interested in this because it would further help explain undergraduate competition. If more people were gunning for med school and grad school, it would be more important to get into a top college in order to have a good chance of making it in. Since increasing inequality and returns to education have made advanced-degree jobs more valuable relative to bachelors-only jobs, this could explain another fraction of academic competitiveness. But aside from the medical school data, I can’t find evidence that this is really going on.

5. Is Freaking Out Over College Admissions Correct?

Dale and Krueger (2011) examine this question, using lifetime earnings as a dependent variable.

In general, they find no advantage from attending more selective colleges. Although Harvard students earn much more than University of Podunk students, this is entirely explained by Harvard only accepting the highest-ability people. Conditional on a given level of ability, people do not earn more money by going to more selective colleges.

A subgroup analysis did find that people who started out disadvantaged did gain from going to a selective college, even adjusted for pre-existing ability. Blacks, Latinos, and people from uneducated families all gained from selective college admission. The paper doesn’t speculate on why. One argument I’ve heard is that colleges, in addition to providing book-learning, help induct people into the upper class by teaching upper-class norms, speech patterns, etc, as well as by ensuring people will have an upper-class friend network. This may be irrelevant if you’re already in the upper class, but useful if you aren’t.

A second possibility might be that college degrees are a signal that helps people overcome statistical discrimination. Studies have shown that requiring applicants to share drug test results or criminal histories usually increases black applicants’ chances of getting hired. This is probably because biased employers assume the worst about blacks (that they’re all criminal drug addicts), and so letting black applicants prove that they’re not criminal drug addicts puts them on more equal footing with white/Asian people. In the same way, if employers start with an assumption of white/Asian competence and black/Latino incompetence, selective college attendance might not change their view of whites/Asians, but might represent a major update to their view of blacks/Latinos.

Dale and Krueger also find that the value of college did not increase during the period of their study (from 1976 to 1989).

Does this mean that at least whites and Asians can stop stressing out about what colleges they get into?

What if you want to go to medical or law school? I can’t find an equally rigorous study, but sites advising prospective doctors tell them that the college they went to matters less than you’d think. The same seems true for aspiring lawyers. As usual, there is no good data for graduate schools.

What if you want to be well-connected and important?

From here, the percent of members of Congress who went to Ivy League colleges over time, by party:

Only about 8% of Congresspeople went to Ivy League colleges, which feels shockingly low considering how elite they are in other ways. The trend is going up among Democrats but not Republicans. There is obviously a 40-50 year delay here and it will be a long time before we know how likely today’s college students are to get elected to Congress. But overall this looks encouraging.

On the other hand, presidents and Supreme Court Justices are overwhelmingly Ivy. Each of the last five presidents went to an Ivy League school (Clinton went to Georgetown for undergrad, but did his law degree at Yale). Every current Supreme Court justice except Clarence Thomas went to an Ivy for undergrad, and all of them including Thomas went to an Ivy for law school. But there’s no good way to control for whether this is because of pre-existing ability or because the schools helped them succeed.

Tech entrepreneurs generally went to excellent colleges. But here we do have a hint that this was just pre-existing ability: many of them dropped out, suggesting that neither the coursework nor the signaling value of a degree was very important to them. Bill Gates, Mark Zuckerberg, and Larry Ellison all dropped out of top schools; Elon Musk finished his undergrad, but dropped out of a Stanford PhD program after two days. This suggests that successful tech entrepreneurs come from the population of people smart enough to get into a good college, but don’t necessarily benefit from the college itself.

Overall, unless people come from a disadvantaged background, there’s surprisingly little evidence that going to a good college as an undergraduate is helpful in the long term – except possibly for a few positions like President or Supreme Court justice.

This doesn’t rule out that it’s important to go to a good institution for graduate school; see this paper. In many fields, a prestigious graduate school is almost an absolute requirement for becoming a professor. But there doesn’t seem to be an undergrad equivalent of this.

Digression: UC schools

I mentioned at the beginning the universal perception in California that UCs are much harder to get into. I know this is the perception everywhere, but it seems much worse in California. Sure, it’s anecdotal evidence, but the anecdotes all sound like this:

My friend’s daughter got 3.85 GPA, had 5 AP classes in high school, was on competitive swimming team, volunteered 100+ hours, was active in school activities, yet she got rejected by all 4 UCs that she applied to. And these were not even the highest tier of UCs, not Berkeley. She did not apply for more schools and thought that UC San Diego and UC Santa Cruz were her safe choices. The whole family is devastated.

The data seem to back this up. Dashed line is applications, dotted line is admissions, solid line is enrollments:

…but I don’t know how much of this is just more applications per person, like everywhere else.

Why should UC schools be hit especially hard? I assumed California’s population was growing faster than the rest of the country’s, but this doesn’t seem true: both California and the US as a whole grew 13% between 1990 and 2000, when the cohort attending college between 2008 and 2018 would have been born.

The Atlantic points out that, because of budget cuts, UC schools are admitting more out-of-state students (who have to pay higher tuition), lowering the number of spots available to Californians. But is this really that big an effect?

It looks like nonresidents went from 6% to 12% over the space of a decade. That shouldn’t screw things up so badly.

I’m really not sure about this. One possibility is that California’s schools are remarkably good. On one list of best colleges, four of the top ten schools are UCs, plus you get to live in California instead of freezing to death in New England. Since the college admissions crisis is concentrated at the top schools, California has been hit especially hard.

I’m not satisfied with this explanation; let me know if you know more.

6. Conclusions

1. There is strong evidence for more competition for places at top colleges now than 10, 50, or 100 years ago. There is medium evidence that this is also true for upper-to-medium-tier colleges. It is still easy to get into medium-to-lower-tier colleges.

2. Until 1900, there was no competition for top colleges, medical schools, or law schools. A secular trend towards increasing admissions (increasing wealth + demand for skills?) plus two shocks from the GI Bill and the Vietnam draft led to a glut of applicants that overwhelmed schools and forced them to begin selecting applicants.

3. Changes up until ten years ago were because of a growing applicant pool, after which the applicant pool (both domestic and international) stopped growing and started shrinking. Increased competition since ten years ago does not involve applicant pool size.

4. Changes after ten years ago are less clear, but the most important factor is probably the ease of applying to more colleges. This causes an increase in applications-per-admission which is mostly illusory. However, part of it may be real if it means students are stratifying themselves by ability more effectively. There might also be increased competition just because students got themselves stuck in a high-competition equilibrium (ie an arms race), but in the absence of data this is just speculation.

5. Medical schools are getting harder to get into, but law schools are getting easier to get into. There is no good data for graduate schools.

6. All the hand-wringing about getting into good colleges is probably a waste of time, unless you are from a disadvantaged background. For most people, admission to a more selective college does not translate into a more lucrative career or a higher chance of admission to postgraduate education. There may be isolated exceptions at the very top, like for Supreme Court justices.

I became interested in this topic partly because there’s a widespread feeling, across the political spectrum, that everything is getting worse. I previously investigated one facet of this – that necessities are getting more expensive – and found it to be true. Another facet is the idea that everything is more competitive and harder to get into. My parents’ generation tells stories of slacking off in high school, not worrying about it too much, and knowing they’d get into a good college anyway. Millennials tell stories of an awful dog-eat-dog world where you can have perfect grades and SAT scores and hundreds of hours of extracurriculars and still get rejected from everywhere you dreamed of.

I don’t really have a strong conclusion here. At least until ten years ago, colleges were harder to get into because more people were able to (or felt pressured to) go to college. The past ten years are more complicated, but might be because of increased stratification by ability. Is that good or bad? I’m not sure. I still don’t feel like I have a great sense of what, if anything, went wrong, whether our parents’ rose-colored picture was accurate, or whether there’s anything short of reversing all progress towards egalitarianism that could take us back. I’m interested to get comments from people who understand this area better than I do.

Pain As Active Ingredient In Dating

Reciprocity is a simple dating site, created by some friends of mine. You sign up and see a list of all your Facebook friends who also signed up. You can put a checkmark next to their name to indicate you want to date them (they can’t see this). If you both checkmark each other, then the site reveals you’ve matched.

This seemed like an obvious great idea. But I started to hear a lot of stories like the following: “I checkmarked Alice’s name on Reciprocity, and the system didn’t notify me that there was a match, so I assumed Alice didn’t like me. Later I asked her out in person, and she said yes and we had a great time.”

I always figured Alice was just a jerk who was ruining the system for everyone else. After all, the whole premise was to incentivize honesty. Checkmark the names of people you honestly want to date. If they don’t want to date you, they never hear about it, and you’re no worse off. If they do want to date you, the system will let you know, and you can arrange a date. If your pattern of checkmarks doesn’t really match who you want to date, you’re just screwing yourself and everyone else over for no reason.

A few months ago, someone asked me out on a date and I said yes. And I realized I hadn’t checkmarked them on Reciprocity. This caused a crisis of self-loathing. What’s wrong with me? Why would I go against my own incentives and ruin things for everyone else?

I asked a friend, who admitted she had done the same thing. Her theory was that asking someone on a date (with all of its accompanying awkwardness and difficulty) was a stronger signal of interest than ticking a checkbox. And potentially there’s a grey zone of people who you would only date if you thought they liked you more than a certain amount. And asking them in person is hard enough to be a costly signal that you like them at least that amount, but ticking a checkbox isn’t.

This argument rings true to me. And it’s the only explanation I’ve got for why people would act in this self-defeating way.

I wrote before about systems where bureaucracy is the active ingredient, ie the very annoyingness of what you’re doing helps send the signal that makes the system work. The dating situation seems similar. Pain is the active ingredient. You can create clever dating sites that remove the pain. Sometimes it will work: lots of people have gotten great dates on Reciprocity. But other times people just won’t ask each other out.

Probably this story has the same takeaway as Seeing Like A State – you don’t fully understand social systems, so be careful if you think you can improve on them.

Short Book Reviews April 2019

I. Method of Levels

Timothy Carey’s Method Of Levels teaches a form of psychotherapy based on perceptual control theory.

The Crackpot List is specific to physics. But if someone were to create one for psychiatry, Method of Levels would score a perfect 100%. It somehow manages to do okay on the physics one despite not discussing any physics.

The Method of Levels is the correct solution to every psychological problem, from mild depression to psychosis. Therapists may be tempted to use something other than the Method of Levels, but they must overcome this temptation and just use the Method of Levels on everybody. Every other therapy is about dismissing patients as “just crazy”, but the Method of Levels tries to truly understand the patient. Every other therapy is about the therapist trying to change the patient, but the Method of Levels is about the patient trying to change themselves. The author occasionally just lapses into straight-up daydreams about elderly psychologists sitting on the porch, beating themselves up that they were once so stupid as to believe in psychology other than the Method of Levels.

This book isn’t just bad, it’s dangerous. One vignette discusses a patient whose symptoms clearly indicate the start of a manic episode. The author recommends that instead of stigmatizing this person with a diagnosis of bipolar or pumping them full of toxic drugs, you should use the Method of Levels on them. This is a good way to end up with a dead patient.

I like perceptual control theory. I share the author’s hope that it could one day be a theory of everything for the brain. But even if it is, you can’t use theories of everything to do clinical medicine. Darwin discovered a theory of everything for biology, but you can’t reason from evolutionary first principles to how to treat a bacterial infection. You should treat the bacterial infection with antibiotics. This will be in accordance with evolutionary principles, and there will even be some cool evolutionary tie-ins (fungi evolved penicillin as a defense against bacteria). But you didn’t discover penicillin by reasoning from evolutionary first principles. If you tried reasoning from evolutionary first principles, you might end up trying to make the bacteria mutate into a less dangerous strain during the middle of an osteomyelitis case or something. Just use actually existing clinical medicine and figure out the evolutionary justification for it later.

Or maybe a better metaphor is germ theory, a theory of everything specifically targeted to treatable diseases. But fifty years elapsed between Pasteur and penicillin, penicillin alone didn’t treat every germ, we still have some germs we can’t treat, and lots of things like cancer turned out not to be germs at all. You can’t jump straight from a theory of everything – even a good, correct theory of everything – to “now we have solved all problems and here’s the one technique for everything.”

On the other hand, most existing psychotherapy is placebo-ish, and first principles can sometimes be a useful guide. So as long as we are careful to dismiss the part where we throw out all existing medicine, and dismiss the part where we use this for patients having a manic episode, we can very tentatively look at the Method of Levels and what it suggests for patients having garden-variety psychological conflict.

Perceptual control theory says that minds primarily control perceptions. This is true on very low levels, like the hypothalamus controlling (its sensors’ perception of) temperature to 98.6 F. Theoretically it may be true on very high levels, like trying to control (perceived) social status or risk. If two control systems are accidentally trying to control the same variable at different levels, then both of them expend all their energy fighting each other and can’t control anything else. For example, if your house has one thermostat (with associated AC and heater) trying to keep the temperature at 65, and another thermostat (with its own associated AC and heater) trying to keep the temperature at 75, then one thermostat will keep the heat on all the time, the other will keep the AC on all the time, and the temperature will end up at 70 with a gigantic electrical bill.
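The two-thermostat standoff can be sketched in a few lines of Python (set points and gain are invented illustration values, not anything from PCT itself):

```python
# Two control systems fighting over one variable. Thermostat A targets 65,
# thermostat B targets 75; each proportionally pushes the temperature
# toward its own set point.

def step(temp, gain=0.1):
    push_a = gain * (65 - temp)   # A runs its AC (negative push)
    push_b = gain * (75 - temp)   # B runs its heater (positive push)
    return temp + push_a + push_b, abs(push_a) + abs(push_b)

temp = 70.0
for _ in range(100):
    temp, effort = step(temp)

print(temp, effort)   # 70.0 1.0
# The temperature sits frozen at the midpoint while both systems burn
# energy flat out forever: the "gigantic electrical bill".
```

The equilibrium is stable but maximally wasteful, which is exactly the point of the analogy.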

In the same way, MoL understands intrapsychic conflict as competing control systems. Suppose a gay man is living in a conservative household that stigmatizes homosexuality. He’s trying to control the amount of sex/romance he has at some level that keeps his libido happy. He’s also trying to control his community standing at some level that keeps his sociometer happy. These are conflicting goals; the more he pursues a relationship, the less the community will like him, and vice versa. He will probably feel conflicted inside and not know what to do.

PCT believes the brain has a natural reorganizing process that keeps control systems running smoothly. Powers’ description of this sounds a lot like how we think of learning in neural nets; the brain randomly changes neural weights in a specific control system, with changes that lower the control system’s error getting reinforced, until the system is running smoothly again. If there’s intrapsychic conflict, this reorganization process must not be working.
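A minimal sketch of that reorganization idea: random trial-and-error on a control system's parameters, where only error-reducing changes stick. The error function, step size, and target here are placeholders of mine, not anything from Powers:

```python
import random

def reorganize(error_fn, weights, steps=5000, step_size=0.1, seed=0):
    """Randomly perturb weights; keep a perturbation only if error drops."""
    rng = random.Random(seed)
    best_err = error_fn(weights)
    for _ in range(steps):
        candidate = [w + rng.gauss(0, step_size) for w in weights]
        err = error_fn(candidate)
        if err < best_err:  # "reinforced": the change is kept
            weights, best_err = candidate, err
        # otherwise the random change is discarded and another is tried
    return weights, best_err

# Hypothetical example: tune two weights so the system's output hits a target.
target = 10.0
error = lambda w: (3.0 * w[0] + 2.0 * w[1] - target) ** 2
weights, err = reorganize(error, [0.0, 0.0])
```

This is essentially stochastic hill-climbing; unlike gradient-based learning, it needs no knowledge of *why* a change helped, which is part of what makes the reorganization story attractive as a biological hypothesis.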

MoL says the goal of therapy is to activate this reorganization process. The most likely reason it’s failing is that the patient is trying to reorganize the specific control systems that are in conflict, whereas what really needs to be reorganized is the higher-level control system that controls both of them. For example, our hypothetical gay patient shouldn’t be trying to reorganize his sex drive or his need for community belonging. He should be trying to reorganize some higher-level system that determines both of them, maybe his desire for a high quality of life. The “quality of life” control system determines the set point values for both the “sex drive” and the “need for community belonging” control systems, so if it could give them some value where they don’t conflict, the patient’s problem would be solved.
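Under stated assumptions (a fixed "budget" of 10 units that the two lower-level systems must split between them, and made-up setpoint numbers), the point about reorganizing at the higher level can be put in a few lines:

```python
# Each lower-level system's error is the gap between its setpoint and what it
# perceives. The world couples them: pursuing one goal lowers the other, so
# the perceived levels must split a fixed budget. All numbers are invented.

def min_total_error(setpoint_a, setpoint_b, budget=10):
    """Best achievable combined error over every way of splitting the budget."""
    return min(abs(setpoint_a - s) + abs(setpoint_b - (budget - s))
               for s in range(budget + 1))

min_total_error(8, 8)   # 6 -> with both setpoints at 8, conflict is unavoidable
min_total_error(6, 4)   # 0 -> reorganized setpoints can both be satisfied at once
```

No amount of effort at the lower level fixes the first case; only the higher-level move of changing the setpoints themselves can drive the total conflict to zero.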

Reorganization is guided by awareness, so the therapist needs to move the patient’s awareness from the control systems that are experiencing the conflict, up to the higher-level control system that’s secretly producing the conflict. MoL’s suggestion is to talk about the conflict with the patient, and especially about the patient’s experience of talking about the conflict. So if the patient starts telling you about how he doesn’t know how to balance his homosexuality and his desire to fit in, you can prompt him to continue with questions like:

“What comes to mind when you think about not fitting in with your community?”
“What is your experience of wishing you could be more open about your sexuality like?”
“How does talking about this make you feel?”

Eventually the patient may have what the book calls an “up-a-level-event”, where instead of talking they look like they’re kind of lost in thought, or they close their eyes for a moment, or laugh nervously for no reason. At this point they’re becoming dimly aware of the higher level that’s guiding their lower-level conversations. The therapist should pounce on this and ask questions like:

“I see you closed your eyes for a moment just then. Is there something in particular you were thinking about?”
“You looked lost in thought for a second – why was that?”
“Can you tell me more about what made you laugh just then?”

The patient might then say something like “I was just thinking about how weird it was that I care so much about what my parents think about me when I don’t even respect them”, and then the therapist should keep going on this new topic. Now the patient’s awareness is on this higher level, and so the high-level control system can reorganize itself. Maybe eventually the reorganization that works is to give the pursue-your-sexuality system a higher set point, and the care-about-community system a lower set point, which looks like the patient deciding that he should not worry so much about what community members think about him.

You might have noticed from the first set of questions that this sounds a lot like what therapists do already. Carey does suggest that insofar as current therapies work, it’s because they’re already doing MoL-ish things. He suggests that his book offers more of an account of why they work, and a way to focus on the useful things instead of the chaff.

In particular, a lot of MoL – asking patients how they feel, trying to bring their awareness from the past content to the present process, worrying a lot about small gestures – sounds like psychodynamic therapy, at least the watered-down version of it most people today use. But it’s a lot more comprehensible than most attempts to teach psychodynamics, which never seem to hang together or have concrete suggestions for what you should do at any given time. If all this book ends up giving me is a way to do psychodynamics a little bit more cohesively, I’ll consider it worth my time.

I’m not sure how well its theoretical backing holds up. I always considered the very high-level PCT claims to be the weakest part of the theory, and this not only relies upon them but goes several steps beyond them. The idea of the reorganizing process is an interesting one. But right now it’s got about as much empirical testing as, well, Freud. Still, some of the ideas discussed here seem lucid in a way Freud’s didn’t, so I’ll have to think about them more.

I wouldn’t recommend this book to anyone else right now. The first few chapters are embarrassing enough, and bad enough, that I wouldn’t trust people to discount them sufficiently even if I warned them how important it was to do so.

II. How To Read Lacan

Why did I read How To Read Lacan by Slavoj Zizek?

I could answer this question on many levels. For example, the theological level: maybe I committed some sin in a past life. Maybe I was predestined to unhappiness. Maybe, having given me free will, God is no longer able to save me from my own bad choices.

On a more practical level: I’m trying to learn more about leftism, I’m trying to learn more about continental philosophy, and I’m trying to learn more about psychoanalysis. I figured I might as well get it all out of the way at once.

I was expecting this to be incomprehensible, but I was pleasantly surprised how good a writer Zizek was. He explains everything clearly, in down-to-earth prose interspersed with mildly funny Slovenian jokes that illustrate his points.

(Lacan himself is completely incomprehensible, to the point where he might as well be speaking Martian, but this book wisely avoided quoting Lacan except where absolutely necessary).

Despite being very readable, this book never really came together. Each chapter consisted of a Lacan quote, followed by Zizek’s interpretations and thoughts. The thoughts were always things like “Sometimes the act of communication itself can communicate something” or “We are never truly engaged with another person, even during sex”. These are always kind of reasonable, Zizek always does a good job proving them and relating them to mildly funny Slovenian jokes, and I came away agreeing with all of them. But I don’t feel like I understand how any of them cohere together into an object called “Lacanianism”, and none of them really seemed like a very surprising revelation, which is one reason this doesn’t get a full book review.

My main takeaway from this is that I should forget Lacan and try to read Zizek directly. Does anyone have recommendations for good starting points?

III. The Steerswoman

The Steerswoman is popular in the rationalist community, and now I see why. The titular steerswomen are a rationalist sect devoted to understanding the world around them. They especially like geography – going to the borders of the known world and filling in the edges of the map – but also just seek knowledge in general. Anyone can ask a steerswoman any question, and the steerswoman must answer. But everyone has to answer any question asked of them by a steerswoman, or else the organization blacklists them and no steerswoman will answer their questions ever again.

The steerswomen live in a not-very-fleshed-out medieval fantasy world surrounding an inland sea. Although there are standard fantasy governments like dukes and chieftains, real power is held by wizards. No one knows anything about them, not even how many of them there are, where they come from, or how they do their magic. The book centers around the inevitable conflict between the nosy steerswomen and the mysterious wizards, and particularly around one steerswoman and her Barbarian™ traveling companion who stumble across a wizardly secret.

This book is from the 80s and had a very 80s feel to it. Compared to more modern fantasy, it’s shorter and feels more bare-boned. There are no two hundred different characters to keep track of, no romantic subplots, no lavish description of random political things that happen in minor towns. Just a woman and her Barbarian™ friend going on a basic standard-issue quest, with the whole thing starting and finishing in less time than it would take George RR Martin to describe the minor clan that controls an out-of-the-way fortress.

Some people called this book feminist, but I found it refreshingly apolitical. Most (though not all) of the steerswomen are women, but the book got a relatively boring explanation out of the way quickly and didn’t come back to it. Most of the characters’ genders were not too important to their personality, and the book did not obsess over gender issues. There is a part in a utopian society where one of the men teases one of the women about how much he wants to have sex with her, and the woman laughs it off, and the man keeps teasing, and this is clearly meant to signal how the society is utopian and everyone is very open and friendly with each other. The 80s were simpler times.

I’ve only read the first book of this long series. Overall I found it fun, but didn’t feel like it spoke deeply to anything within me. The book’s rationalists were discussed shallowly enough that it feels like decent cheerleading for rationality, but nothing you can’t find somewhere else. Although the steerswomen’s question-answering gimmick was cute, I spent more time worrying about the holes in it (can’t you just get someone else to ask steerswomen your questions? how can a worldwide organization in a medieval society keep an effective blacklist? really the world-building here was not that good) than feeling like the real world needed something similar (after all, we have Google). I’ll probably try to read the next few books in the series and update this if it gets better.

OT125: Opentathlon Thread

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit or the SSC Discord server – and also check out the SSC Podcast. Also:

1. Those of you who don’t use ad-blocker may notice some more traditional Google-style sidebar ads. I’m experimenting to see how much money they make vs. how much they annoy people. If you are annoyed by them, please let me know.

2. Someone is doing one of those tag your location on a map things for SSC users. If you sign up, you may want to include some identifying details or contact information, since right now most of the tags don’t seem very helpful unless people are regularly checking their accounts on the site.

3. I’m considering a “culture war ban” for users who make generally positive contributions to the community but don’t seem to be able to discuss politics responsibly. This would look like me emailing them saying “You’re banned from discussing culture war topics here for three months” and banning them outright if they break the restriction. Pros: I could stop users who break rules only in the context of culture war topics without removing them from the blog entirely. Cons: I would be tempted to use it much more than I use current bans, it might be infuriating for people to read other people’s bad politics but not be able to respond, I’m not sure how to do it without it being an administrative headache for me. Let me know what you think.

Posted in Uncategorized | Tagged | 1,127 Comments

Classified Thread 7

This is the…monthly? bimonthly? occasional?…classified thread. Post advertisements, personals, and any interesting success stories from the last thread.

Posted in Uncategorized | Tagged | 241 Comments

Social Censorship: The First Offender Model

LJ Zigerell (h/t Marginal Revolution) studies public support for eugenics. He finds that about 40% of Americans support some form of eugenics. The policies discussed were very vague, like “encouraging poor criminals to have fewer children” or “encouraging intelligent people to have more children”; they did not specify what form the encouragement would take. Of note, much of the lack of support for eugenics stemmed from a belief that it would not work; people who believed the qualities involved were heritable were much more likely to support programs to select for them. For example, of people who thought criminality was completely genetic, a full 65% supported encouraging criminals to have fewer children.

I was surprised to hear this, because I thought moral opposition to eugenics was basically universal. If a prominent politician tentatively supported eugenics, it would provoke a media firestorm and they would get shouted down. This would be true even if they supported the sort of generally mild, noncoercive policies the paper seems to be talking about. How do we square that with a 40% support rate?

I think back to a metaphor for norm enforcement I used in an argument against Bryan Caplan:

Imagine a town with ten police officers, who can each solve one crime per day. Left to their own devices, the town’s criminals would commit thirty muggings and thirty burglaries per day (for the purposes of this hypothetical, both crimes are equally bad). They also require different skills; burglars can’t become muggers or vice versa without a lot of retraining. Criminals will commit their crime only if the odds are against them getting caught – but since there are 60 crimes a day and the police can only solve ten, the odds are in their favor.

Now imagine that the police get extra resources for a month, and they use them to crack down on mugging. For a month, every mugging in town gets solved instantly. Muggers realize this is going to happen and give up.

At the end of the month, the police lose their extra resources. But the police chief publicly commits that from now on, he’s going to prioritize solving muggings over solving burglaries, even if the burglaries are equally bad or worse. He’ll put an absurd amount of effort into solving even the smallest mugging; this is the hill he’s going to die on.

Suppose you’re a mugger, deciding whether or not to commit the first new mugging in town. If you’re the first guy to violate the no-mugging taboo, every police officer in town is going to be on your case; you’re nearly certain to get caught. You give up and do honest work. Every other mugger in town faces the same choice and makes the same decision. In theory a well-coordinated group of muggers could all start mugging on the same day and break the system, but muggers aren’t really that well-coordinated.

The police chief’s public commitment solves mugging without devoting a single officer’s time to the problem, allowing all officers to concentrate on burglaries. A worst-crime-first enforcement regime has 60 crimes per day and solves 10; a mugging-first regime has 30 crimes per day and solves 10.
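The arithmetic of the two regimes can be checked with a toy model, assuming (as in the hypothetical) that a criminal acts only when the chance of being caught is under one half. The function and its parameters are mine; the numbers come from the scenario above:

```python
# 30 muggers, 30 burglars, police can solve 10 crimes/day (per the hypothetical).
# Each criminal commits their crime only if their catch probability is < 1/2.

def crimes_per_day(muggers=30, burglars=30, capacity=10, muggings_first=False):
    if muggings_first:
        # Commitment: any mugging gets solved before any burglary, so a lone
        # mugger faces a ~100% catch rate and every mugger stays home.
        active_muggings = 0
        burglary_catch = capacity / burglars          # all 10 solves hit burglary
    else:
        active_muggings = muggers                     # catch chance 10/60 < 1/2
        burglary_catch = capacity / (muggers + burglars)
    active_burglaries = burglars if burglary_catch < 0.5 else 0
    return active_muggings + active_burglaries

crimes_per_day(muggings_first=False)  # 60 crimes committed, 10 solved
crimes_per_day(muggings_first=True)   # 30 crimes committed, the same 10 solved
```

The commitment changes no one’s resources, only everyone’s expectations – which is why it collapses the moment a test mugging visibly goes unsolved.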

But this only works if the police chief keeps his commitment. If someone tests the limits and commits a mugging, the police need to crack down with what looks like a disproportionate amount of effort – the more disproportionate, the better. Fail, and muggers realize the commitment was fake, and then you’re back to having 60 crimes a day.

I think eugenics opponents are doing the same thing as the police here: they’re trying to ensure certainty of punishment for the first offender. They’ve established a norm of massive retaliation against the first person to openly speak out in favor of eugenics, so nobody wants to be the first person. If all of the 40% of people who support eugenics spoke out at once, they would probably all be fine. But they don’t, so they aren’t.

Why aren’t we in the opposite world, where the people who support eugenics are able to threaten the people who oppose it and prevent them from speaking out? I think just because the opponents coordinated first. In theory one day we could switch to the opposite equilibrium.

I think something like this happened with gay rights. In c. 1969, people were reluctant to speak out in favor of gay rights; in 2019, people are reluctant to speak out against them. Some of that is genuinely changed minds; I don’t at all want to trivialize that aspect. But some of it seems to have just been that in 1969, it was common knowledge that the anti-gay side was well-coordinated and could do the massive-retaliation thing, and now it’s common knowledge that the pro-gay side is well-coordinated and can do the massive retaliation thing. The switch involved a big battle and lots of people massively retaliating against each other, but it worked.

Maybe everyone else already realized something like this. But it changes the way I think about censorship. I’m still against it. But I used to have an extra argument against it, which was something like “If eugenics is taboo, that means there must be near-universal opposition to eugenics, which means there’s no point in keeping it taboo, because even if it wasn’t taboo eugenicists wouldn’t have any power.” I no longer think that argument holds water. “Taboo” might mean nothing more than “one of two equally-sized sides has a tenuous coordination advantage”.

(in retrospect I was pretty dumb for not figuring this out, since it’s pretty much the same argument I make in Can Things Be Both Popular And Silenced? The answer is obviously yes – if Zigerell’s paper is right, eugenics is both popular and silenced – but the police metaphor explains how.)

The strongest argument against censorship is still that beliefs should be allowed to compete in a marketplace of ideas. But if I were pro-censorship, I might retort that one reason to try to maintain my own side’s tenuous coordination advantage is that if I relax even for a second, the other side might be able to claw together its own coordination advantage and censor me. This isn’t possible in the “one side must be overwhelmingly more powerful” model of censorship, but it’s something that the “tenuous coordination advantage” model has to worry about. The solution would be some sort of stable structural opposition to censorship in general – but the gay rights example shows that real-world censors can’t always expect that to work out for them.

In order to make moderation easier, please restrict yourself to comments about censorship and coordination, not about eugenics or gay rights.

Posted in Uncategorized | Tagged | 293 Comments