Slate Star Codex


List Of Passages I Highlighted In My Copy Of Ages Of Discord

Turchin has some great stories about unity vs. polarization over time. For example in the 1940s, unity became such a “problem” that concerned citizens demanded more partisanship:

Concerned about electoral torpor and meaningless political debate, the American Political Science Association in 1946 appointed a committee to examine the role of parties in the American system. Four years later, the committee published a lengthy (and alarmed) report calling for the return of ideologically distinct and powerful political parties. Parties ought to stand for distinct sets of politics, the political scientists urged. Voters should be presented with clear choices.

I have vague memories of similar demands in the early ’90s; everyone was complaining that the parties were exactly the same and the “elites” were rigging things to make sure we didn’t have any real choices.

On the other hand, partisanship during the Civil War was pretty intense:

Another indicator of growing intraelite conflict was the increasing incidence of violence and threatened violence in Congress, which reached a peak during the 1850s. The brutal caning that Representative Preston Brooks of South Carolina gave to Senator Charles Sumner of Massachusetts on the Senate floor in 1856 is the best known such episode, but it was not the only one. In 1842, after Representative Thomas Arnold of Tennessee “reprimanded a pro-slavery member of his own party, two Southern Democrats stalked towards him, at least one of whom was armed with a bowie knife…calling Arnold a ‘damned coward,’ his angry colleagues threatened to cut his throat ‘from ear to ear'” (Freeman 2011). According to Senator Hammond, “The only persons who do not have a revolver and a knife are those who have two revolvers” (quoted in Potter 1976:389). During a debate in 1850, Senator Henry Foote of Mississippi pulled a pistol on Senator Thomas Hart Benton of Missouri (Freeman 2011).

In another bitter debate, a New York congressman inadvertently dropped a pistol (it fell out of his pocket), and this almost precipitated a general shootout on the floor of Congress (Potter 1976: 389).

Turchin places the peak of US unity and cooperation around 1820, and partly credits the need to stand together against Indians:

A particularly interesting case is eighteenth-century Pennsylvania (the following discussion follows closely the text in Turchin 2011:30-31). Initially, European settlers were divided by a number of ethnic and religious boundaries (Silver 2008). The English found it difficult to cooperate with the Germans and the Irish, and each ethnic group was further divided into feuding sectarian groups: Quakers against Anglicans, German Lutherans against Moravians and Mennonites. Yet, by the end of the eighteenth century, the European settlers had forged a common identity (“white people”) in opposition to the natives. As Nancy Shoemaker (2004) shows, these “metaethnic” labels (Whites versus Reds) were not evoked as soon as settlers and natives came into contact. Rather, during the course of the eighteenth century Europeans and Indians gradually abandoned an initial willingness to recognize in each other a common humanity. Instead, both sides developed new stereotypes of the Other, rooted in the conviction that they were peoples fundamentally at odds, by custom and even by nature (Shoemaker 2004).

The evolution of civic organizations reflected this expanding definition of common identity. Clubs with ethnic and denominational membership criteria appeared in Pennsylvania during the 1740s. These associations represented what Putnam (2000) called “bonding” rather than “bridging” social capital. For example, the St. Andrew’s Society was narrowly focused on helping the Scots, while Deutsche Gesellschaft did the same for the Germans. However, as settler-native warfare intensified, especially during the second half of the eighteenth century, the focus of civic organizations gradually shifted to charity for any victims of Indian attacks, without regard for their ethnicity or religious denomination (Silver 2008). The social scale of cooperation took a step up. Of course, there were definite limits to this new “bridging” social capital: the Indians were most emphatically excluded; in fact, the integration of “white people” developed explicitly in opposition to the Indians.

Although the above description applies to pre-revolutionary Pennsylvania, a very similar dynamic obtained on the Northwestern frontier in Ohio after the Revolution (Griffin 2007). As Griffin notes, for white Americans “Indians existed as cultural glue, since the hatred of them was fast becoming a basis of order.”

This passage stood out to me because modern racial commentators focus on “whiteness” as an idea that evolved in opposition to (and to justify oppression of) blacks. But the Indian theory makes some sense too, especially because Northerners would have more exposure to Indians than they did to black people. But I notice I’ve never heard anyone else talk about this, and most of the history books I’ve read treat Indians as too weak to be an important enemy or have much of a place in the early American consciousness.

One factor leading to greater polarization was “elite overproduction”, here represented by more office-seekers than federal offices. This was apparently a well-known problem in early America:

Despite the increase in government posts, the supply was overwhelmed by demand for such positions. A horde of office-seekers nearly turned Jackson’s inauguration into a riot. Abraham Lincoln once said, “Were it believed that vacant places could be had at the North Pole, the road there would be lined with dead Virginians” (quoted in Potter 1976:432). And, most dramatically (although in a later period), President James Garfield was assassinated by a rejected office-seeker in 1881.

And so on. Some of Turchin’s measures of cooperation vs. polarization are a bit odd. But I have to respect the big-picture-ness of someone who will literally just look at the occurrence of the word “cooperation” in various books:

It is interesting that “culture-metric” data support Fraser’s subjective perception [of declining cooperation between business and labor]. For example, the frequency of the word “cooperation” in the corpus of American books grew rapidly during the Progressive Era and somewhat less so during the New Deal (Figure 12.3). After reaching a peak in 1940, there was a minor decline during the 1950s, followed by an increase toward the second peak of 1975. After 1975, however, the frequency of this word went into a sustained decline.

Google Ngram is an imperfect instrument with which to trace cultural shifts. One problem is that the same word (eg, “capitalism”) can be used with either positive or negative valence, and Ngram does not allow one to separate these different meanings. “Cooperation”, however, is rarely used in the negative sense. Because of its predominantly positive valence, its overall frequency should provide us with a proxy for how much a society values cooperative values. Checking different variants (cooperation, Cooperation, cooperative, etc) yields the same overall rise-fall dynamics during the twentieth century (and up to 2008, where the current Google book database stops).

Furthermore, a more specific phrase, “labor-business cooperation” again traces out the same secular cycle, although with significant differences during some decades (eg, the 1920s). Finally, “corporate greed” with its predominantly negative valence is another check on the validity of this result, and it is reassuring that during the twentieth century its frequency moved in the opposite direction from the two positive terms (to show this parallelism more clearly, Figure 12.3 plots “corporate greed” on an inverse scale).


There is an interesting parallel…between the Great Depression and the 1970s Bear Market. Both periods of economic hardship (although it goes without saying that the Great Depression was a much more severe crisis) were broadly interpreted as empirical evidence against the prevailing economic doctrine – the naked, laissez faire capitalism in the first instance, more cooperative relations between business and labor in the second. Yet it is much more likely that the primary mechanism, responsible for long-term economic decline/stagnation in each case, was the negative phase of the Kondratiev cycle, perhaps supplemented by exogenous shocks (eg, the 1973 oil embargo). Yet in each case a prolonged period of economic troubles helped to delegitimize the prevailing ideological regime (Chapter 9).

Thanks for reminding me there’s yet another cycle I need to study, one that supposedly determines the rate of technological advances. Maybe that’s my next book review.

Book Review: Ages Of Discord


I recently reviewed Secular Cycles, which presents a demographic-structural theory of the growth and decline of pre-industrial civilizations. When land is plentiful, population grows and the economy prospers. When land reaches its carrying capacity and income declines to subsistence, the area is at risk of famines, diseases, and wars – which kill enough people that land becomes plentiful again. During good times, elites prosper and act in unity; during bad times, elites turn on each other in an age of backstabbing and civil strife. It seemed pretty reasonable, and authors Peter Turchin and Sergey Nefedov had lots of data to support it.

Ages of Discord is Turchin’s attempt to apply the same theory to modern America. There are many reasons to think this shouldn’t work, and the book does a bad job addressing them. So I want to start by presenting Turchin’s data showing such cycles exist, so we can at least see why the hypothesis might be tempting. Once we’ve seen the data, we can decide how turned off we want to be by the theoretical problems.

The first of Turchin’s two cyclic patterns is a long cycle of national growth and decline. In Secular Cycles’ pre-industrial societies, this pattern lasted about 300 years; in Ages of Discord’s picture of the modern US, it lasts about 150:

This summary figure combines many more specific datasets. For example, archaeologists frequently assess the prosperity of a period by the heights of its skeletons. Well-nourished, happy children tend to grow taller; a layer with tall skeletons probably represents good times during the relevant archaeological period; one with stunted skeletons probably represents famine and stress. What if we applied this to the modern US?

Average US height and life expectancy over time. As far as I can tell, the height graph is raw data. The life expectancy graph is the raw data minus an assumed constant positive trend – that is, given that technological advance is increasing life expectancy at a linear rate, what are the other factors you see when you subtract that out? The exact statistical logic must be buried in Turchin’s source (Historical Statistics of the United States, Carter et al 2004), which I don’t have and can’t judge.

This next graph is the median wage divided by GDP per capita, a crude measure of income equality:

Lower values represent more inequality.

This next graph is median female age at first marriage. Turchin draws on research suggesting this tracks social optimism. In good times, young people can easily become independent and start supporting a family; in bad times, they will want to wait to make sure their lives are stable before settling down:

This next graph is Yale tuition as a multiple of average manufacturing worker income. To some degree this will track inequality in general, but Turchin thinks it also measures something like “difficulty of upward mobility”:

This next graph shows DW-NOMINATE’s “Political Polarization Index”, a complicated metric occasionally used by historians of politics. It measures the difference in voting patterns between the average Democrat in Congress and the average Republican in Congress (or for periods before the Democrats and Republicans, whichever two major parties there were). During times of low partisanship, congressional votes will be dominated by local or individual factors; during times of high partisanship, they will be dominated by party identification:

I’ve included only those graphs which cover the entire 1780 – present period; the book includes many others that only cover shorter intervals (mostly the more recent periods when we have better data). All of them, including the shorter ones not included here, reflect the same general pattern. You can see it most easily if you standardize all the indicators to the same scale, match the signs so that up always means good and down always means bad, and put them all together:

Note that these aren’t exactly the same indicators I featured above; we’ll discuss immigration later.
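The standardize-and-average step described above is simple enough to sketch in code. Everything here is illustrative: the indicator names and numbers are made up for demonstration (Turchin's actual series span two centuries), and signs are assumed to have already been flipped so that up always means "good":

```python
import statistics

def standardize(series):
    """Convert a data series to z-scores (mean 0, population std 1)."""
    mu = statistics.mean(series)
    sigma = statistics.pstdev(series)
    return [(x - mu) / sigma for x in series]

# Hypothetical indicator series sampled at the same years. Signs are
# chosen so that higher always means "better" (e.g. polarization has
# already been multiplied by -1).
indicators = {
    "relative_wage":    [0.9, 1.1, 1.3, 1.1, 0.8],
    "stature":          [170, 172, 174, 173, 171],
    "neg_polarization": [-0.8, -0.5, -0.3, -0.5, -0.9],
}

standardized = {k: standardize(v) for k, v in indicators.items()}

# The "average" curve: mean of the standardized indicators at each year.
average = [statistics.mean(vals) for vals in zip(*standardized.values())]
```

Because each standardized series has mean zero, the averaged curve does too; what survives is the shared rise-and-fall shape that the individual indicators have in common.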

The “average” line on this graph is the one that went into making the summary graphic above. Turchin believes that after the American Revolution, there was a period of instability lasting a few decades (eg Shays’ Rebellion, Whiskey Rebellion) but that America reached a maximum of unity, prosperity, and equality around 1820. Things gradually got worse from there, culminating in a peak of inequality, misery, and division around 1900. The reforms of the Progressive Era gradually made things better, with another unity/prosperity/equality maximum around 1960. Since then, an increasing confluence of negative factors (named here as the Reagan Era trend reversal, but Turchin admits it began before Reagan) has been making things worse again.


Along with this “grand cycle” of 150 years, Turchin adds a shorter instability cycle of 40-60 years. This is the same 40-60 year instability cycle that appeared in Secular Cycles, where Turchin called it “the bigenerational cycle”, or the “fathers and sons cycle”.

Timing and intensity of internal war in medieval and early modern England, from Turchin and Nefedov 2009.

The derivation of this cycle, explained on pages 45 – 58 of Ages of Discord, is one of the highlights of the book. Turchin draws on the kind of models epidemiologists use to track pandemics, thinking of violence as an infection and radicals as plague-bearers. You start with an unexposed vulnerable population. Some radical – patient zero – starts calling for violence. His ideas spread to a certain percent of people he interacts with, gradually “infecting” more and more people with the “radical ideas” virus. But after enough time radicalized, some people “recover” – they become exhausted with or disillusioned by conflict, and become pro-cooperation “active moderates” who are impossible to reinfect (in the epidemic model, they are “inoculated”, but they also have an ability without a clear epidemiological equivalent to dampen radicalism in people around them). As the rates of radicals, active moderates, and unexposed dynamically vary, you get a cyclic pattern. First everyone is unexposed. Then radicalism gradually spreads. Then active moderation gradually spreads, until it reaches a tipping point where it triumphs and radicalism is suppressed to a few isolated reservoirs in the population. Then the active moderates gradually die off, new unexposed people are gradually born, and the cycle starts again. Fiddling with all these various parameters, Turchin is able to get the system to produce 40-60 year waves of instability.

To check this empirically, Turchin tries to measure the number of “instability events” in the US over various periods. He very correctly tries to use lists made by others (since they are harder to bias), but when people haven’t catalogued exactly the kind of instability he’s interested in over the entire 1780 – present period, he sometimes adds his own interpretation. He ends up summing riots, lynchings, terrorism (including assassinations), and mass shootings – you can see his definition for each of these starting on page 114; the short version is that all the definitions seem reasonable but inevitably include a lot of degrees of freedom.

When he adds all this together, here’s what happens:

Political instability / violent events show three peaks, around 1870, 1920, and 1970.

The 1870 peak includes the Civil War, various Civil War associated violence (eg draft riots), and the violence around Reconstruction (including the rise of the Ku Klux Klan and related violence to try to control newly emancipated blacks).

The 1920 peak includes the height of the early US labor movement. Turchin discusses the Mine War, an “undeclared war” from 1920-1921 between bosses and laborers in Appalachian coal country:

Although it started as a labor dispute, it eventually turned into the largest armed insurrection in US history, other than the Civil War. Between 10,000 and 15,000 miners armed with rifles fought thousands of strike-breakers and sheriff’s deputies, called the Logan Defenders. The insurrection was ended by the US Army. While such violent incidents were exceptional, they took place against a background of a general “class war” that had been intensifying since the violent teens. “In 1919 nearly four million workers (21% of the workforce) took disruptive action in the face of employer reluctance to recognize or bargain with unions” (Domhoff and Webber, 2011:74).

Along with labor violence, 1920 was also a peak in racial violence:

Race-motivated riots also peaked around 1920. The two most serious such outbreaks were the Red Summer of 1919 (McWhirter 2011) and the Tulsa (Oklahoma) Race Riot. The Red Summer involved riots in more than 20 cities across the United States and resulted in something like 1,000 fatalities. The Tulsa riot in 1921, which caused about 300 deaths, took on an aspect of civil war, in which thousands of whites and blacks, armed with firearms, fought in the streets, and most of the Greenwood District, a prosperous black neighborhood, was destroyed.

And terrorism:

The bombing campaign by Italian anarchists (“Galleanists”) culminated in the 1920 explosion on Wall Street, which caused 38 fatalities.

The same problems – labor unrest, racial violence, terrorism – repeated during the 1970s spike. Instead of quoting Turchin on this, I want to quote this Status 451 review of Days of Rage, because it blew my mind:

“People have completely forgotten that in 1972 we had over nineteen hundred domestic bombings in the United States.” — Max Noel, FBI (ret.)

Recently, I had my head torn off by a book: Bryan Burrough’s Days of Rage, about the 1970s underground. It’s the most important book I’ve read in a year. So I did a series of running tweetstorms about it, and Clark asked me if he could collect them for posterity. I’ve edited them slightly for editorial coherence.

Days of Rage is important, because this stuff is forgotten and it shouldn’t be. The 1970s underground wasn’t small. It was hundreds of people becoming urban guerrillas. Bombing buildings: the Pentagon, the Capitol, courthouses, restaurants, corporations. Robbing banks. Assassinating police. People really thought that revolution was imminent, and thought violence would bring it about.

One thing that Burrough returns to in Days of Rage, over and over and over, is how forgotten so much of this stuff is. Puerto Rican separatists bombed NYC like 300 times, killed people, shot up Congress, tried to kill POTUS (Truman). Nobody remembers it.

The passage speaks to me because – yeah, nobody remembers it. This is also how I feel about the 1920 spike in violence. I’d heard about the Tulsa race riot, but the Mine War and the bombing of Wall Street and all the other stuff was new to me. This matters because my intuitions before reading this book would not have been that there were three giant spikes in violence/instability in US history located fifty years apart. I think the lesson I learn is not to trust my intuitions, and to be a little more sympathetic to Turchin’s data.

One more thing: the 1770 spike was obviously the American Revolution and all of the riots and communal violence associated with it (eg against Tories). Where was the 1820 spike? Turchin admits it didn’t happen. He says that because 1820 was the absolute best part of the 150 year grand cycle, everybody was so happy and well-off and patriotic that the scheduled instability peak just fizzled out. Although Turchin doesn’t mention it, you could make a similar argument that the 1870 spike was especially bad (see: the entire frickin’ Civil War) because it hit close to (though not exactly at) the worst part of the grand cycle. 1920 hit around the middle, and 1970 during a somewhat-good period, so they fell in between the nonissue of 1820 and the disaster of 1870.


I haven’t forgotten the original question – what drives these 150 year cycles of rise and decline – but I want to stay with the data just a little longer. Again, these data are really interesting. Either some sort of really interesting theory has to be behind them – or they’re just low-quality data cherry-picked to make a point. Which are they? Here are a couple of spot-checks to see if the data are any good.

First spot check: can I confirm Turchin’s data from independent sources?

Here is a graph of average US height over time which seems broadly similar to Turchin’s.

Here is a different measure of US income inequality over time, which again seems broadly similar to Turchin’s. Piketty also presents very similar data, though his story places more emphasis on the World Wars and less on the labor movement.

The Columbia Law Review measures political polarization over time and gets mostly the same numbers as Turchin.

I’m going to consider this successfully checked; Turchin’s data all seem basically accurate.

Second spot check: do other indicators Turchin didn’t include confirm the pattern he detects, or did he just cherry-pick the data series that worked? Spoiler: I wasn’t able to do this one. It was too hard to think of measures that should reflect general well-being and that we have 200+ years of unconfounded data for. But here are my various failures:

– The annual improvement in mortality rate does not seem to follow the cyclic pattern. But isn’t this more driven by a few random factors like smoking rates and the logic of technological advance?

– Treasury bonds maybe kind of follow the pattern until 1980, after which they go crazy.

– Divorce rates look kind of iffy, but isn’t that just a bunch of random factors?

– Homicide rates, with the general downward trend removed, sort of follow the pattern, except for the recent decline?

– USD/GBP exchange rates don’t show the pattern at all, but that could be because of things going on in Britain?

The thing is – really I have no reason to expect divorce rates, homicide rates, exchange rates etc to track national flourishing. For one thing, they may just be totally unrelated. For another, even if they were tenuously related, there are all sorts of other random factors that can affect them. The problem is, I would have said this was true for height, age at first marriage, and income inequality too, before Turchin gave me convincing-sounding stories for why it wasn’t. I think my lesson is that I have no idea which indicators should vs. shouldn’t follow a secular-cyclic pattern and so I can’t do this spot check against cherry-picking the way I hoped.

Third spot check: common sense. Here are some things that stood out to me:

– The Civil War is at a low-ish part of the cycle, but by no means the lowest.

– The Great Depression happened at a medium part of the cycle, when things should have been quickly getting better.

– Even though there was a lot of new optimism with Reagan, continuing through the Clinton years, the cycle does not reflect this at all.

Maybe we can rescue the first and third problem by combining the 150 year cycle with the shorter 50 year cycle. The Civil War was determined by the 50-year cycle having its occasional burst of violence at the same time the 150-year cycle was at a low-ish point. People have good memories of Reagan because the chaos of the 1970 violence burst had ended.

As for the second, Turchin is aware of the problem. He writes:

There is a widely held belief among economists and other social scientists that the 1930s were the “defining moment” in the development of the American politico-economic system (Bordo et al 1998). When we look at the major structural-demographic variables, however, the decade of the 1930s does not seem to be a turning point. Structural-demographic trends that were established during the Progressive Era continued through the 1930s, although some of them accelerated.

Most notably, all the well-being variables went through trend reversals before the Great Depression – between 1900 and 1920. From roughly 1910 to 1960 they all increased roughly monotonically, with only one or two minor fluctuations around the upward trend. The dynamics of real wages also do not exhibit a breaking point in the 1930s, although there was a minor acceleration after 1932.

By comparison, he plays up the conveniently-timed (and hitherto unknown to me) depression of the mid-1890s. Quoting Turchin quoting McCormick:

No depression had ever been as deep and tragic as the one that lasted from 1893 to 1897. Millions suffered unemployment, especially during the winters of 1893-4 and 1894-5, and thousands of ‘tramps’ wandered the countryside in search of food […]

Despite real hardship resulting from massive unemployment, well-being indicators suggest that the human cost of the Great Depression of the 1930s did not match that of the “First Great Depression” of the 1890s (see also Grant 1983:3-11 for a general discussion of the severity of the 1890s depression). Furthermore, while the 1930s are remembered as a period of violent labor unrest, the intensity of class struggle was actually lower than during the 1890s depression. According to the US Political Violence Database (Turchin et al. 2012) there were 32 lethal labor disputes during the 1890s that collectively caused 140 deaths, compared with 20 such disputes in the 1930s with a total of 55 deaths. Furthermore, the last lethal strike in US labor history was in 1937…in other words, the 1930s was actually the last uptick of violent class struggle in the US, superimposed on an overall declining trend.

The 1930s Depression is probably remembered (or rather misremembered) as the worst economic slump in US history, simply because it was the last of the great depressions of the post-Civil War era.

Fourth spot check: Did I randomly notice any egregious errors while reading the book?

On page 70, Turchin discusses “the great cholera epidemic of 1849, which carried away up to 10% of the American population”. This seemed unbelievably high to me. I checked the source he cited, Kohn’s “Encyclopedia Of Plague And Pestilence”, which did give that number. But every other source I checked agreed that the epidemic “only” killed between 0.3% – 1% of the US population (it did hit 10% in a few especially unlucky cities like St. Louis). I cannot fault Turchin’s scholarship in the sense of correctly repeating something written in an encyclopedia, but unless I’m missing something I do fault his common sense.

Also, on page 234, Turchin interprets the percent of medical school graduates who get a residency as “the gap between the demand and supply of MD positions”, which he ties into a wider argument about elite overproduction. But I think this shows a limited understanding of how the medical system works. There is currently a severe undersupply of doctors – try getting an appointment with a specialist who takes insurance in a reasonable amount of time if you don’t believe me. Residencies aren’t limited by organic demand. They’re limited because the government places so many restrictions on them that hospitals don’t sponsor them without government funding, and the government is too stingy to fund more of them. None of this has anything to do with elite overproduction.

These are just two small errors in a long book. But they’re two errors in medicine, the field I know something about. This makes me worry about Gell-Mann Amnesia: if I notice errors in my own field, how many errors must there be in other fields that I just didn’t catch?

My overall conclusion from the spot-checks is that the data as presented are basically accurate, but that everything else is so dependent on litigating which things are vs. aren’t in accordance with the theory that I basically give up.


Okay. We’ve gone through the data supporting the grand cycle. We’ve gone through the data and theory for the 40-60 year instability cycle. We’ve gone through the reasons to trust vs. distrust the data. Time to go back to the question we started with: why should the grand cycle, originally derived from the Malthusian principles that govern pre-industrial societies, hold in the modern US? Food and land are no longer limiting resources; famines, disease, and wars no longer substantially decrease population. Almost every factor that drives the original secular cycle is missing; why even consider the possibility that it might still apply?

I’ve put this off because, even though this is the obvious question Ages of Discord faces from page one, I found it hard to get a single clear answer.

Sometimes, Turchin talks about the supply vs. demand of labor. In times when the supply of labor outpaces demand, wages go down, inequality increases, elites fragment, and the country gets worse, mimicking the “land is at carrying capacity” stage of the Malthusian cycle. In times when demand for labor exceeds supply, wages go up, inequality decreases, elites unite, and the country gets better. The government is controlled by plutocrats, who always want wages to be low. So they implement policies that increase the supply of labor, especially loose immigration laws. But their actions cause inequality to increase and everyone to become miserable. Ordinary people organize resistance: populist movements, socialist cadres, labor unions. The system teeters on the edge of violence, revolution, and total disintegration. Since the elites don’t want those things, they take a step back, realize they’re killing the goose that lays the golden egg, and decide to loosen their grip on the neck of the populace. The government becomes moderately pro-labor and progressive for a while, and tightens immigration laws. The oversupply of labor decreases, wages go up, inequality goes down, and everyone is happy. After everyone has been happy for a while, the populists/socialists/unions lose relevance and drift apart. A new generation of elites who have never felt threatened come to power, and they think to themselves “What if we used our control of the government to squeeze labor harder?” Thus the cycle begins again.

But at other times, Turchin talks more about “elite overproduction”. When there are relatively few elites, they can cooperate for their common good. Bipartisanship is high, everyone is unified behind a system perceived as wise and benevolent, and we get a historical period like the 1820s US golden age that historians call The Era Of Good Feelings. But as the number of elites outstrips the number of high-status positions, competition heats up. Elites realize they can get a leg up in an increasingly difficult rat race by backstabbing against each other and the country. Government and culture enter a defect-defect era of hyperpartisanship, where everyone burns the commons of productive norms and institutions in order to get ahead. Eventually…some process reverses this or something?…and then the cycle starts again.

At still other times, Turchin seems to retreat to a sort of mathematical formalism. He constructs an extremely hokey-looking dynamic feedback model, based on ideas like “assume that the level of discontent among ordinary people equals the urbanization rate x the age structure x the inverse of their wages relative to the elite” or “let us define the fiscal distress index as debt ÷ GDP x the level of distrust in state institutions”. Then he puts these all together into a model that calculates how the level of discontent affects and is affected by the level of state fiscal distress and a few dozen other variables. On the one hand, this is really cool, and watching it in action gives you the same kind of feeling Seldon must have had inventing psychohistory. On the other, it seems really made-up. Turchin admits that dynamic feedback systems are infamous for going completely haywire if they are even a tiny bit skew to reality, but assures us that he understands the cutting edge of the field and how to make them not do that. I don’t know enough to judge whether he’s right or wrong, but my priors are on “extremely, almost unfathomably wrong”. Still, at times he reminds us that the shifts of dynamic feedback systems can be attributed only to the system in its entirety, and that trying to tell stories about or point to specific factors involved in any particular shift is an approximation at best.
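For what it's worth, the two index definitions quoted above can be written out directly. These are illustrative stand-ins with made-up input numbers, not Turchin's calibrated model, but they show how crude the building blocks are:

```python
def mass_discontent(urbanization, youth_share, relative_wage):
    """Discontent = urbanization rate x age structure x inverse of
    workers' wages relative to the elite (as quoted above)."""
    return urbanization * youth_share * (1.0 / relative_wage)

def fiscal_distress(debt, gdp, distrust):
    """Fiscal distress = (debt / GDP) x distrust in state institutions
    (as quoted above)."""
    return (debt / gdp) * distrust

# Hypothetical inputs: a 70% urban country with a 25% youth cohort,
# wages at 60% of their historical relation to elite incomes, debt at
# 105% of GDP, and distrust at 0.6 on a 0-1 scale.
discontent = mass_discontent(0.70, 0.25, 0.60)
distress = fiscal_distress(105, 100, 0.6)
```

The model's real content is not these multiplications but the feedback loops wiring a few dozen such variables together, which is exactly where the "haywire" worry bites.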

All of these three stories run into problems almost immediately.

First, the supply of labor story focuses pretty heavily on immigration. Turchin puts a lot of work into showing that immigration follows the secular cycle patterns; it is highest at the worst part of the cycle, and lowest at the best parts:

In this model, immigration is a tool of the plutocracy. High supply of labor (relative to demand) drives down wages, increases inequality, and lowers workers’ bargaining power. If the labor supply is poorly organized, comes from places that don’t understand the concept of “union”, don’t know their rights, and have racial and linguistic barriers preventing them from cooperating with the rest of the working class, well, even better. Thus, periods when the plutocracy is successfully squeezing the working class are marked by high immigration. Periods when the plutocracy fears the working class and feels compelled to be nice to them are marked by low immigration.

This position makes some sense and is loosely supported by the long-term data above. But isn’t this one of the most-studied topics in the history of economics? Hasn’t it been proven almost beyond doubt that immigrants don’t steal jobs from American workers, and that since they consume products themselves (and thus increase the demand for labor) they don’t affect the supply/demand balance that sets wages?

It appears I might just be totally miscalibrated on this topic. I checked the IGM Economic Experts Panel. Although most of the expert economists surveyed believed immigration was a net good for America, they did say (50% agree to only 9% disagree) that “unless they were compensated by others, many low-skilled American workers would be substantially worse off if a larger number of low-skilled foreign workers were legally allowed to enter the US each year”. I’m having trouble seeing the difference between this statement (which economists seem very convinced is true) and “you should worry about immigrants stealing your job” (which everyone seems very convinced is false). It might be something like – immigration generally makes “the economy better”, but there’s no guarantee that these gains are evenly distributed, and so it can be bad for low-skilled workers in particular? I don’t know, this would still represent a pretty big update, but given that I was told all top economists think one thing, and now I have a survey of all top economists saying the other, I guess big updates are unavoidable. Interested in hearing from someone who knows more about this.

Even if it’s true that immigration can hurt low-skilled workers, Turchin’s position – which is that increased immigration is responsible for a very large portion of post-1973 wage stagnation and the recent trend toward rising inequality – sounds shocking to current political sensibilities. But all Turchin has to say is:

An imbalance between labor supply and demand clearly played an important role in driving real wages down after 1978. As Harvard economist George J. Borjas recently wrote, “The best empirical research that tries to examine what has actually happened in the US labor market aligns well with economic theory: An increase in the number of workers leads to lower wages.”

My impression was that Borjas was an increasingly isolated contrarian voice, so once again, I just don’t know what to do here.

Second, the plutocratic oppression story relies pretty heavily on the idea that inequality is a unique bad. This fits the zeitgeist pretty well, but it’s a little confusing. Why should commoners care about their wages relative to elites, as opposed to their absolute wages? Although median-wage-relative-to-GDP has gone down over the past few decades, absolute median wage has gone up – just a little, slowly enough that it’s rightly considered a problem – but it has gone up. Since modern wages are well above 1950s wages, in what sense should modern people feel like they are economically badly off in a way 1950s people didn’t? This isn’t a problem for Turchin’s theory so much as a general mystery, but it’s a general mystery I care about a lot. One answer is that the cost disease is fueled by a Baumol effect pegged to per capita income (see part 3 here), and this is a way that increasing elite wealth can absolutely (not relatively) immiserate the lower classes.

Likewise, what about The Spirit Level Delusion and other resources showing that, across countries, inequality is not particularly correlated with social bads? Does this challenge Turchin’s America-centric findings that everything gets worse along with inequality levels?

Third, the plutocratic oppression story meshes poorly with the elite overproduction story. In elite overproduction, united elites are a sign of good times to come; divided elites means dysfunctional government and potential violence. But as Pseudoerasmus points out, united elites are often united against the commoners, and we should expect inequality to be highest at times when the elites are able to work together to fight for a larger share of the pie. But I think this is the opposite of Turchin’s story, where elites unite only to make concessions, and elite unity equals popular prosperity.

Fourth, everything about the elite overproduction story confuses me. Who are “elites”? This category made sense in Secular Cycles, which discussed agrarian societies with a distinct titled nobility. But Turchin wants to define US elites in terms of wealth, which follows a continuous distribution. And if you’re defining elites by wealth, it doesn’t make sense to talk about “not enough high-status positions for all elites”; if you’re elite (by virtue of your great wealth), by definition you already have what you need to maintain your elite status. Turchin seems aware of this issue, and sometimes talks about “elite aspirants” – some kind of upper class who expect to be wealthy, but might or might not get that aspiration fulfilled. But then understanding elite overproduction hinges on what makes one non-rich person a commoner vs. another an “elite aspirant”, and I don’t remember any clear discussion of this in the book.

Fifth, what drives elite overproduction? Why do elites (as a percent of the population) increase during some periods and decrease during others? Why should this be a cycle rather than a random walk?

My guess is that Ages of Discord contains answers to some of these questions and I just missed them. But I missed them after reading the book pretty closely to try to find them, and I didn’t feel like there were any similar holes in Secular Cycles. As a result, although the book had some fascinating data, I felt like it lacked a clear and lucid thesis about exactly what was going on.


Accepting the data as basically right, do we have to try to wring some sense out of the theory?

The data cover a cycle and a half. That means we only sort of barely get to see the cycle “repeat”. The conclusion that it is a cycle and not some disconnected trends is based only on the single coincidence that it was 70ish years from the first turning point (1820) to the second (1890), and also 70ish years from the second to the third (1960).

A parsimonious explanation would be “for some reason things were going unusually well around 1820, unusually badly around 1890, and unusually well around 1960 again.” This is actually really interesting – I didn’t know it was true before reading this book, and it changes my conception of American history a lot. But it’s a lot less interesting than the discovery of a secular cycle.

I think the parsimonious explanation is close to what Thomas Piketty argued in his Capital In The Twenty-First Century. Inequality was rising until the World Wars, because that’s what inequality naturally does given reasonable assumptions about growth rates. Then the Depression and World Wars wiped out a lot of existing money and power structures and made things equal again for a little while. Then inequality started rising again, because that’s what inequality naturally does given reasonable assumptions about growth rates. Add in a pinch of The Spirit Level – inequality is a mysterious magic poison that somehow makes everything else worse – and there’s not much left to be explained.
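Piketty’s mechanism is easy to sketch: if the return on existing wealth (r) exceeds the growth rate of everything else (g), capital’s share of the pie drifts upward automatically. The 5% and 1.5% figures below are just illustrative defaults, and the two-variable “economy” is a cartoon, not anyone’s actual model.

```python
# A cartoon of the r > g intuition: old fortunes compound at r while the
# rest of the economy grows at g, so capital's share of the total drifts up.
# All numbers are illustrative defaults, not Piketty's empirical estimates.

def capital_share(years, wealth=1.0, income=1.0, r=0.05, g=0.015):
    for _ in range(years):
        wealth *= 1 + r    # returns on existing wealth
        income *= 1 + g    # economy-wide growth
    return wealth / (wealth + income)

start = capital_share(0)    # begins at an even 0.5 split
later = capital_share(70)   # one 70ish-year "secular cycle" later
```

Over a single 70-year cycle the share climbs from half to over nine-tenths, no conspiracy required, just compounding, which is why “inequality naturally rises absent a shock” is such a parsimonious story.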

(some exceptions: why was inequality decreasing until 1820? Does inequality really drive political polarization? When immigration corresponds to periods of high inequality, is the immigration causing the inequality? And what about the 50 year cycle of violence? That’s another coincidence we didn’t include in the coincidence list!)

So what can we get from Ages of Discord that we can’t get from Piketty?

First, the concept of “elite overproduction” is one that worms its way into your head. It’s the sort of thing that was constantly in the background of Increasingly Competitive College Admissions: Much More Than You Wanted To Know. It’s the sort of thing you think about when a million fresh-faced college graduates want to become Journalists and Shape The Conversation and Fight For Justice and realistically just end up getting ground up and spit out by clickbait websites. Ages of Discord didn’t do a great job breaking down its exact dynamics, but I’m grateful for its work bringing it from a sort of shared unconscious assumption into the light where we can talk about it.

Second, the idea of a deep link between various indicators of goodness and badness – like wages and partisan polarization – is an important one. It forces me to reevaluate things I had considered settled, like that immigration doesn’t worsen inequality, or that inequality is not a magical curse that poisons everything.

Third, historians have to choose what events to focus on. Normal historians usually focus on the same normal events. Unusual historians sometimes focus on neglected events that support their unusual theses, so reading someone like Turchin is a good way to learn parts of history you’d never encounter otherwise. Some of these I was able to mention above – like the Mine War of 1920 or the cholera epidemic of 1849; I might make another post for some of the others.

Fourth, it tries to link events most people would consider separate – wage stagnation since 1973, the Great Stagnation in technology, the decline of Peter Thiel’s “definite optimism”, the rise of partisan polarization. I’m not sure exactly how it links them or what it has to say about the link, but link them it does.

But the most important thing about this book is that Turchin claims to be able to predict the future. The book (written just before Trump was elected in 2016) ends by saying that “we live in times of intensifying structural-demographic pressures for instability”. The next bigenerational burst of violence is scheduled for about 2020 (realistically +/- a few years). It’s at a low point in the grand cycle, so it should be a doozy.

What about beyond that? It’s unclear exactly where he thinks we are right now in the grand cycle. If the current cycle lasts exactly as long as the last one, we would expect it to bottom out in 2030, but Turchin never claims every cycle is exactly as long. A few of his graphs suggest a hint of curvature, suggesting we might currently be in the worst of it. The socialists seem to have gotten their act together and become an important political force, which the theory predicts is a necessary precursor to change.

I think we can count the book as having made correct predictions if violence spikes in the very near future (is the current number of mass shootings enough to satisfy this requirement? I would have to see it graphed using the same measurements as past spikes), and if sometime in the next decade or so things start looking like there’s a ray of light at the end of the tunnel.

I am pretty interested in finding other ways to test Turchin’s theories. I’m going to ask some of my math genius friends to see if the dynamic feedback models check out; if anyone wants to help, let me know how I can help you (if money is an issue, I can send you a copy of the book, and I will definitely publish anything you find on this blog). If anyone has ideas for other indicators that should be correlated with the secular cycle, and ideas about how to find them, I’m interested in that too. And if anyone thinks they can explain the elite overproduction issue, please enlighten me.

I ended my review of Secular Cycles by saying:

One thing that strikes me about [Turchin]’s cycles is the ideological component. They describe how, during a growth phase, everyone is optimistic and patriotic, secure in the knowledge that there is enough for everybody. During the stagflation phase, inequality increases, but concern about inequality increases even more, zero-sum thinking predominates, and social trust craters (both because people are actually defecting, and because it’s in lots of people’s interest to play up the degree to which people are defecting). By the crisis phase, partisanship is much stronger than patriotism and radicals are talking openly about how violence is ethically obligatory.

And then, eventually, things get better. There is a new Augustan Age of virtue and the reestablishment of all good things. This is a really interesting claim. Western philosophy tends to think in terms of trends, not cycles. We see everything going on around us, and we think this is some endless trend towards more partisanship, more inequality, more hatred, and more state dysfunction. But Secular Cycles offers a narrative where endless trends can end, and things can get better after all.

This is still the hope, I guess. I don’t have a lot of faith in human effort to restore niceness, community, and civilization. All I can do is pray the Vast Formless Things accomplish it for us without asking us first.

Meetups Everywhere 2019

Last autumn we organized meetups in 85 different cities (and one ship!) around the world. Some of the meetup groups stuck around or reported permanent spikes in membership, which sounds like a success, so let’s do it again.

For most cities: If you’re willing to host a meetup for your city, then decide on a place, date, and time, and post it in the comments here, along with an email address where people can contact you. Then please watch the comments in case I need to ask you any questions. If you’re not sure whether your city has enough SSC readers to support a meetup, see the list of people by city at the bottom of this post. There may be more of us than you think – last year we were able to support meetups in such great megalopolises as Norman, Oklahoma and Wellington, New Zealand. But I would prefer people not split things up too much – if you’re very close to a bigger city, consider going there instead of hosting your own.

If you want a meetup for your city, please err in favor of volunteering to organize – the difficulty level is basically “pick a coffee shop you like, tell me the address, and give me a time”; it would be dumb if nobody got to go to meetups because everyone felt too awkward and low-status to volunteer.

For especially promising cities in the US: I am going to try to attend your meetups. My very tentative schedule looks like this:

Friday 9/20: Boston
Saturday 9/21: NYC
Sunday 9/22: Philly
Monday 9/23: DC
Thursday 9/26: Ann Arbor
Saturday 9/28: Chicago
Sunday 9/29: Austin
Tuesday 10/1: Portland
Wednesday 10/2: Seattle
Friday 10/4: Fairbanks
Thursday 10/10: Berkeley
Friday 10/11: Orange County

If you are in one of these cities and want to host a meetup, please schedule it for the evening of the relevant day. If that’s impossible, let me know and I might be able to reschedule. I will announce these ones on the blog, and in the past that’s meant they can get very big (100+ people in the biggest cities) – you might want to hold it in a house, park, or classroom (not a cafe or small apartment). If you have a great location but need money, email me and I might be able to help.

Small-print rules for organizers

1. In a week or so, I’ll make another post listing the details for each city so people know where to go. I don’t guarantee I’ll have the post with times and addresses up until September 9, so please choose a day after that. The weekend of September 21st and 22nd might be one good choice.

2. In the past, the best venues have been ones that are quiet(ish) and have lots of mobility for people to arrange themselves into circles or subgroups as desired. Private houses have been pretty good. Same with food courts. Cafes and restaurants have gone okay, as have empty fields (really). Bars don’t seem to have worked very well at all.

3. Usually only about a quarter of people who express interest actually attend. If your city has fewer than 20 people on the big list, don’t offer to organize unless you’re okay with a good chance of only one or two other people showing up.

4. If more than one person volunteers to organize, I’ll pick one of them. Priority will be given to people I know well, people who have organized meetups before, and (especially) an existing SSC/LW/EA meetup group in the city. If you run an existing SSC/LW/EA meetup group and you want to organize your city’s SSC meetup, please mention that in the post so I can give you precedence.

5. If you have an existing meetup group, you can just tell me what you’re already doing and when your next meetup is. But try to have the one you list here be some kind of “welcome, SSC people” meetup or otherwise low-barrier-to-entry. And please give me a firm date and time commitment instead of “tell people to check our mailing list to find out where the meeting will be that week”.

6. If you’re formally volunteering to organize a meetup, please respond with an unambiguous statement to this effect, the exact address, the exact time, and the date (+ contact details if possible), preferably in bold. I’m not going to count someone as offering to organize a meetup unless they do this. Please don’t post “I hope someone agrees to organize a meetup in my city”. Just offer to organize the meetup! Again, please include an exact time, exact date, and exact address with your offer to host. Please don’t post vague speculation about how you might want to host at some point – just offer to host and give me the information I need. If it turns out there’s someone better, don’t worry, they’ll also offer and I’ll choose them.

7. Mingyuan is Director Of Meetups and might be asking you some questions; I vouch for her and you should give her any information she needs.

Thanks, and see (some of) you soon!


Book Review: Reframing Superintelligence

Ten years ago, everyone was talking about superintelligence, the singularity, the robot apocalypse. What happened?

I think the main answer is: the field matured. Why isn’t everyone talking about nuclear security, biodefense, or counterterrorism? Because there are already competent institutions working on those problems, and people who are worried about them don’t feel the need to take their case directly to the public. The past ten years have seen AI goal alignment reach that level of maturity too. There are all sorts of new research labs, think tanks, and companies working on it – the Center For Human-Compatible AI at UC Berkeley, OpenAI, Ought, the Center For The Governance Of AI at Oxford, the Leverhulme Center For The Future Of Intelligence at Cambridge, etc. Like every field, it could still use more funding and talent. But it’s at a point where academic respectability trades off against public awareness at a rate where webzine articles saying CARE ABOUT THIS OR YOU WILL DEFINITELY DIE are less helpful.

One unhappy consequence of this happy state of affairs is that it’s harder to keep up with the field. In 2014, Nick Bostrom wrote Superintelligence: Paths, Dangers, Strategies, giving a readable overview of what everyone was thinking up to that point. Since then, things have been less public-facing, less readable, and more likely to be published in dense papers with a lot of mathematical notation. They’ve also been – no offense to everyone working on this – less revolutionary and less interesting.

This is one reason I was glad to come across Reframing Superintelligence: Comprehensive AI Services As General Intelligence by Eric Drexler, a researcher who works alongside Bostrom at Oxford’s Future of Humanity Institute. This 200 page report is not quite as readable as Superintelligence; its highly-structured outline form belies the fact that all of its claims start sounding the same after a while. But it’s five years more recent, and presents a very different vision of how future AI might look.

Drexler asks: what if future AI looks a lot like current AI, but better?

For example, take Google Translate. A future superintelligent Google Translate would be able to translate texts faster and better than any human translator, capturing subtleties of language beyond what even a native speaker could pick up. It might be able to understand hundreds of languages, handle complicated multilingual puns with ease, do all sorts of amazing things. But in the end, it would just be a translation app. It wouldn’t want to take over the world. It wouldn’t even “want” to become better at translating than it was already. It would just translate stuff really well.

The future could contain a vast ecosystem of these superintelligent services before any superintelligent agents arrive. It could have media services that can write books or generate movies to fit your personal tastes. It could have invention services that can design faster cars, safer rockets, and environmentally friendly power plants. It could have strategy services that can run presidential campaigns, steer Fortune 500 companies, and advise governments. All of them would be far more effective than any human at performing their given task. But you couldn’t ask the presidential-campaign-running service to design a rocket any more than you could ask Photoshop to run a spreadsheet.

In this future, our AI technology would have taken the same path as our physical technology. The human body can run fast, lift weights, and fight off enemies. But the automobile, crane, and gun are three different machines. Evolution had to cram running-ability, lifting-ability, and fighting-ability into the same body, but humans had more options and were able to do better by separating them out. In the same way, evolution had to cram book-writing, technology-inventing, and strategic-planning into the same kind of intelligence – an intelligence that also has associated goals and drives. But humans don’t have to do that, and we probably won’t. We’re not doing it today in 2019, when Google Translate and AlphaGo are two different AIs; there’s no reason to write a single AI that both translates languages and plays Go. And we probably won’t do it in the superintelligent future either. Any assumption that we will is based more on anthropomorphism than on a true understanding of intelligence.

These superintelligent services would be safer than general-purpose superintelligent agents. General-purpose superintelligent agents (from here on: agents) would need a human-like structure of goals and desires to operate independently in the world; Bostrom has explained ways this is likely to go wrong. AI services would just sit around algorithmically mapping inputs to outputs in a specific domain.

Superintelligent services would not self-improve. You could build an AI researching service – or, more likely, several different services to help with several different aspects of AI research – but each of them would just be good at solving certain AI research problems. It would still take human researchers to apply their insights and actually build something new. In theory you might be able to automate every single part of AI research, but it would be a weird idiosyncratic project that wouldn’t be anybody’s first choice.

Most important, superintelligent services could help keep the world safe from less benevolent AIs. Drexler agrees that a self-improving general purpose AI agent is possible, and assumes someone will build one eventually, if only for the lulz. He agrees this could go about the way Bostrom expects it to go, ie very badly. But he hopes that there will be a robust ecosystem of AI services active by then, giving humans superintelligent help in containing rogue AIs. Superintelligent anomaly detectors might be able to notice rogue agents causing trouble, superintelligent strategic planners might be able to develop plans for getting rid of them, and superintelligent military research AIs might be able to create weapons capable of fighting them off.

Drexler therefore does not completely dismiss Bostromian disaster scenarios, but thinks we should concentrate on the relatively mild failure modes of superintelligent AI services. These may involve normal bugs, where the AI has aberrant behaviors that don’t get caught in testing and cause a plane crash or something, but not the unsolvable catastrophes of the Bostromian paradigm. Drexler is more concerned about potential misuse by human actors – either illegal use by criminals and enemy militaries, or antisocial use to create things like an infinitely-addictive super-Facebook. He doesn’t devote a lot of space to these, and it looks like he hopes these can be dealt with through the usual processes, or by prosocial actors with superintelligent services on their side (thirty years from now, maybe people will say “it takes a good guy with an AI to stop a bad guy with an AI”).

This segues nicely into some similar concerns that OpenAI researcher Paul Christiano has brought up. He worries that AI services will be naturally better at satisfying objective criteria than at “making the world better” in some vague sense. Tasks like “maximize clicks to this site” or “maximize profits from this corporation” are objective criteria; tasks like “provide real value to users of this site instead of just clickbait” or “have this corporation act in a socially responsible way” are vague. That means AI may asymmetrically empower some of the worst tendencies in our society without giving a corresponding power increase to normal people just trying to live enjoyable lives. In his model, one of the tasks of AI safety research is to get AIs to be as good at optimizing vague prosocial tasks as they will naturally be at optimizing the bottom line. Drexler doesn’t specifically discuss this in Reframing Superintelligence, but it seems to fit the spirit of the kind of thing he’s concerned about.


I’m not sure how much of the AI alignment community is thinking in a Drexlerian vs. a Bostromian way, or whether that is even a real dichotomy that a knowledgeable person would talk about. I know there are still some people who are very concerned that even programs that seem to be innocent superintelligent services will be able to self-improve, develop misaligned goals, and cause catastrophes. I got to talk to Dr. Drexler a few years ago about some of this (although I hadn’t read the book at the time, didn’t understand the ideas very well, and probably made a fool of myself); at the time, he said that his work was getting a mixed reception. And there are still a few issues that confuse me.

First, many tasks require general intelligence. For example, an AI operating in a domain with few past examples (eg planning defense against a nuclear attack) will not be able to use modern training paradigms. When humans work on these domains, they use something like common sense, which is presumably the sort of thing we have because we understand thousands of different domains from gardening to ballistics and this gives us a basic sense of how the world works in general. Drexler agrees that we will want AIs with domain-general knowledge that cannot be instilled by training, but he argues that this is still “a service”. He agrees these tasks may require AI architectures different from any that currently exist, with relatively complete world-models, multi-domain reasoning abilities, and the ability to learn “on the fly” – but he doesn’t believe those architectures will need to be agents. Is he right?

Second, is it easier to train services or agents? Suppose you want a good multi-domain reasoner that can help you navigate a complex world. One proposal is to create AIs that train themselves to excel in world simulations the same way AlphaGo trained itself to excel in simulated games of Go against itself. This sounds a little like the evolutionary process that created humans, and agent-like drives might be a natural thing to come out of this process. If agents were easier to “evolve” than services, agentic AI might arise at an earlier stage, either because designers don’t see a problem with it or because they don’t realize it is agentic in the relevant sense.

Third, how difficult is it to separate agency from cognition? Natural intelligences use “active sampling” strategies at levels as basic as sensory perception, deciding how to direct attention in order to best achieve their goals. At higher levels, they decide things like which books to read, whose advice to seek out, or what subdomain of the problem to evaluate first. So far AIs have managed to address even very difficult problems without doing this in an agentic way. Can this continue forever? Or will there be some point at which intelligences with this ability outperform those without it?

I think Drexler’s basic insight is that Bostromian agents need to be really different from our current paradigm to do any of the things Bostrom predicts. A paperclip maximizer built on current technology would have to eat gigabytes of training data about various ways people have tried to get paperclips in the past so it can build a model that lets it predict what works. It would build the model on its actually-existing hardware (not an agent that could adapt to much better hardware or change its hardware whenever convenient). The model would have a superintelligent understanding of the principles that had guided some things to succeed or fail in the training data, but wouldn’t be able to go far beyond them into completely new out-of-the-box strategies. It would then output some of those plans to a human, who would look them over and make paperclips 10% more effectively.

The very fact that this is less effective than the Bostromian agent suggests there will be pressure to build the Bostromian agent eventually (Drexler disagrees with this, but I don’t understand why). But this will be a very different project from AI the way it currently exists, and if AI the way it currently exists can be extended all the way to superintelligence, that would give us a way to deal with hostile superintelligences in the future.


All of this seems kind of common sense to me now. This is worrying, because I didn’t think of any of it when I read Superintelligence in 2014.

I asked readers to tell me if there was any past discussion of this. Many people brought up Robin Hanson’s arguments, which match the “ecosystem of many AIs” part of Drexler’s criticisms but don’t focus as much on services vs. agents. Other people brought up discussion under the heading of Tool AI. Combine those two strains of thought, and you more or less have Drexler’s thesis, minus some polish. I read some of these discussions, but I think I failed to really understand them at the time. Maybe I failed to combine them, focused too much on the idea of an Oracle AI, and missed the idea of an ecosystem of services. Or maybe it all just seemed too abstract and arbitrary when I had fewer examples of real AI systems to think about.

I’ve run this post by a couple of other people, who push back against it. They say they still think Bostrom was right on the merits and superintelligent agents are more likely than superintelligent services. Many brought up Gwern’s essay on why tool AIs are likely to turn into agent AIs and this post by Eliezer Yudkowsky on the same topic – I should probably reread these, reread Drexler’s counterarguments, and get a better understanding. For now I don’t think I have much of a conclusion either way. But I think I made a mistake of creativity in not generating or understanding Drexler’s position earlier, which makes me more concerned about how many other things I might be missing.

Open Thread 135

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit or the SSC Discord server – and also check out the SSC Podcast. Also:

1. I’m going to experiment with not giving open threads punnish titles for a while. I worry that people unfamiliar with the blog don’t realize that “Opangolin Thread” or “Opentecost Thread” are open threads, and just get confused and go away.

2. I will not write a sponsored post for your company. Stop asking this. If you email me about this I will report you as a spammer.


Don’t Fear The Simulators

From the New York Times: Are We Living In A Computer Simulation? Let’s Not Find Out.

It lists the standard reasons for thinking we might be in a simulation, then brings up some proposals for testing the hypothesis (for example, the cosmic background radiation might look different in simulations and real universes). But it suggests that we not do that, because if we learn we’re in a simulation, that might ruin the simulation and cause the simulators to destroy the universe.

But I think a little more thought suggests we don’t have anything to worry about.

In order to notice we had discovered our simulated nature, the simulators would have to have a monitor watching us. We should expect this anyway. Although humans may run some simulations without monitoring them carefully, the simulators have no reason to be equally careless; if they can simulate billions of sentient beings, their labor costs are necessarily near zero. Such a monitor would have complete instantaneous knowledge of everything happening in our universe, and since anyone who can simulate a whole planet must have really good data processing capabilities, it would be able to understand and act upon the entire content of its omniscient sensorium. It would see the fall of each sparrow, record the position of every atom, have a level of situational awareness that gods could only dream of.

What I’m saying is, it probably reads the New York Times.

That means it knows these experiments are going to happen. If it cares about the results, it can fake them. Assuming for some reason that it messed up designing the cosmic background radiation (why are we assuming this, again?), it can correct that mistake now, or cause the experimental apparatus to report the wrong data, or do one of a million other things that would prevent us from learning we are in a simulation.

The Times’ argument requires that simulators are so powerful that they can create entire universes, so on-top-of-things that they will know the moment we figure out their game – but also so incompetent that they can’t react to a warning published several years in advance in America’s largest newspaper.

There’s another argument for the same conclusion: the premises of the simulation argument suggest this isn’t the simulators’ only project. Each simulator civilization must simulate thousands or millions of universes. Presumably we’re not the first to think of checking the cosmic background radiation. Do you think the simulators just destroy all of them when they reach radio-wave-technology, and never think about fixing the background radiation mismatch or adding in some fail-safe to make sure the experiments return the wrong results?

For that matter, this is probably a stage every civilization goes through, including whatever real civilization we are supposed to simulate. What good is a simulation that can replicate every aspect of the real world except its simulation-related philosophy? The simulators probably care a lot about simulation-related philosophy! If they’re going around simulating universes, they have probably thought a lot about whether they themselves are a simulation, and simulation-related philosophy is probably a big part of their culture. They can’t afford to freak out every time one of their simulations starts grappling with simulation-related philosophy. It would be like freaking out when a simulation developed science, or religion, or any other natural part of cultural progress.

Some other sources raise concern that we might get our simulation terminated by becoming too computationally intensive (maybe by running simulations of our own). I think this is a more serious concern. But by the time we need to think about it, we’ll have superintelligences of our own to advise us on the risk. For now, I think we should probably stop worrying about bothering the simulators (see also the last section here). If they want us alive for some reason, we probably can’t cause them enough trouble to change that.

Maybe Your Zoloft Stopped Working Because A Liver Fluke Tried To Turn Your Nth-Great-Grandmother Into A Zombie

Or at least this is the theory proposed in Brain Evolution Through The Lens Of Parasite Manipulation by Marco del Giudice.

The paper starts with an overview of parasite manipulation of host behavior. These are the stories you hear about toxoplasma-infected rats seeking out cats instead of running away from them, or zombie ants climbing stalks of grass so predators will eat them. The parasite secretes chemicals that alter host neurochemistry in ways that make the host get eaten, helping the parasite transfer itself to a new organism.

Along with rats and ants, there is a dizzying variety of other parasite manipulation cases. They include parasitic wasps who hack spiders into forming protective webs for their pupae, parasitic flies that cause bees to journey far from their hive in order to spread fly larvae more widely, and parasitic microorganisms that cause mosquitoes to draw less blood from each victim (since that forces the mosquitoes to feed on more victims, and so spread the parasite more widely). Parasitic nematodes make their ant hosts turn red, which causes (extremely stupid?) birds to mistake them for fruit and eat them. Parasitic worms make crickets seek water; as the cricket drowns, the worms escape into the pond and begin the next stage of their life cycle. Even mere viruses can alter behavior; the most famous example is rabies, which hacks dogs, bats, and other mammals into hyperaggressive moods that usually result in them biting someone and transmitting the rabies virus.

Even our friendly gut microbes might be manipulating us. People talk a lot about the “gut-brain axis” and the effect of gut microbes on behavior, as if this is some sort of beautiful symbiotic circle-of-life style thing. But scientists have found that gut microbes trying to colonize fruit flies will hack the flies’ food preferences to get a leg up – for example, a carb-metabolizing microbe will secrete hormones that make the fly want to eat more carbs than fat in order to outcompete its fat-metabolizing rivals for gut real estate; there are already papers speculating that the same processes might affect humans. Read Alcock 2014 and you will never look at food cravings the same way again.

But del Giudice thinks this is just the tip of the iceberg. Throughout evolutionary history, parasites have been trying to manipulate host behavior and hosts have been trying to avoid manipulation, resulting in an eons-long arms race. The equilibrium is what we see today: parasite manipulation is common in insects, rare in higher animals, and overall of limited importance. But in arms race dynamics, the current size of the problem tells you nothing about the amount of resources invested in preventing the problem. There is zero problem with war between Iran and Saudi Arabia right now, but both sides have invested billions of dollars in military supplies to keep their opponent from getting a leg up. In the same way, just because mammals usually avoid parasite behavior manipulation now doesn’t mean they aren’t on a constant evolutionary war footing.

So if you’re an animal at constant risk of having your behavior hijacked by parasites, what do you do?

First, you make your biological signaling cascades more complicated. You have multiple redundant systems controlling every part of behavior, and have them interact in ways too complicated for any attacker to figure out. You have them sometimes do the opposite of what it looks like they should do, just to keep enemies on their toes. This situation should sound very familiar to anyone who’s ever studied biology.

Del Giudice compares the neurosignaling of the shrimp-like gammarids (small, simple, frequently hijacked by parasites) to rats (large, complex, hard to hijack). Gammarids have very simple signaling: high serotonin means “slow down”, low serotonin means “speed up”. The helminths that parasitize gammarids secrete serotonin, and the gammarids slow down and get eaten, transferring the parasite to a new host. Biologists can replicate this process; if they inject serotonin into a gammarid, the gammarid will slow down in the same way.

Toxoplasma hijacks rats and makes them fearless enough to approach cats. Dopamine seems to be involved somehow. But researchers injecting dopamine into rats don’t get the same result; in fact, this seems to make rats avoid cats more. Maybe toxoplasma started by increasing dopamine, rats evolved a more complicated signaling code, and toxoplasma cracked the code and now increases dopamine plus other things we don’t understand yet.

Aside from the brain, the immune system is the most important target to secure, so this theory should predict that immune signaling will also be unusually inscrutable. Again, this situation should sound very familiar to anyone who’s ever studied biology.

Second, you have a bunch of feedback loops and flexibility ready to deploy at any kind of trouble. If something makes dopamine levels go up, you decrease the number of dopamine receptors, so that overall dopaminergic neurotransmission is the same as always. If something is making you calmer than normal, you have some other system ready to react by making you more anxious again.

Del Giudice makes the obvious connection to psychopharmacology. Many psychoactive drugs build tolerance quickly: for example, heroin addicts constantly need higher and higher doses to get their “hit”. Further, tolerance builds in a pattern weirdly similar to antibody response – it takes a while to build up a cocaine tolerance, and you lose it over time if you don’t use cocaine, but the body “remembers” the process and a single hit of cocaine years later is sufficient to bring you back up to the highest tolerance level you’ve ever had.

The standard explanation for tolerance is that it’s an attempt to maintain homeostasis against the sort of conditions that can cause natural variation in neurotransmitter levels. I never questioned this before. But why is the body prepared to suddenly have all its serotonin reuptake transporters inhibited? Is that something that frequently happens, out in nature? I guess maybe plant toxins could do that, but then how come the body is prepared to deal with this for months or years?

While not denying the value of these standard explanations, Del Giudice thinks defense against parasite behavior manipulation may also play a role. Remember, gammarids absolutely have parasites that try to increase their serotonin levels as a prelude to getting them killed. Is it that surprising that a lot of different animal lineages would develop a reaction of “If something other than normal cognition has started increasing your serotonin levels, it’s a trap and you need to get them back down again”? Does that explain why SSRIs don’t work for some people, or randomly stop working, or need frequent dose escalation?

Third, you encode messages in the timing of pulses. This is a central feature of neuroendocrine communication – an intense pulse of testosterone at 6 AM means something different from tonically high testosterone all day. Parasites cannot do pulses. Remember, these parasites are usually microscopic. Each parasite can only produce a minuscule quantity of neurotransmitter or hormone. Only colonies of thousands or millions of parasites can produce enough chemicals to affect host signaling. And parasites cannot communicate or coordinate with each other, so there’s no way for them to produce lots of testosterone one minute and none at all the next. That means that when a hormone arrives in a pulse, or better yet a complicated pattern of pulses, that’s a pretty reliable sign that it’s coming from a real gland.

Fourth, you exploit your individuality. The immune system already does this; there are some genes called the major histocompatibility complex that are designed to be especially variable, such that most people (except identical twins) will have different MHCs. These help the immune system differentiate self from other. Because they have such high individual variability, pathogens can’t just evolve around the MHC; they would have to undergo an entire evolutionary process for each new host they invade.

Del Giudice wonders if parasite-host arms races created pressure for increased human variability. SSRIs will make some people less depressed. But some people will get more depressed. A few will even get suicidal. A very few will flip out and become psychotic, or improve much more quickly than the textbooks say should be possible and feel completely reborn on day 3, or have something else even weirder happen. I always assumed God just hated psychiatrists and wanted them to be miserable. But another possibility is that extreme individual variability in neurosignaling pathways is a defense against parasite manipulation. If the effects of serotonin are unpredictable for any individual, no parasite species can devise a universally valid mechanism for controlling its hosts.

Fifth, you let the parasites become part of the furniture. If everybody in your ecosystem is infected with a parasite that raises serotonin, you just evolve a tonically lower serotonin level, and then it all cancels out. This one seems a little bit weird to me – surely this isn’t the stable equilibrium? But:

A downside of preemptive strategies is evolved dependence (de Mazancourt et al. 2005): if brain physiology and behavior are designed to function optimally when the parasite is present, the absence of the parasite will lead to inappropriate or fitness-reducing behaviors (Weinersmith and Earley 2016; see also Johnson and Foster 2018).

I think this is meant to hint at the “hygiene hypothesis”, ie our immune systems are screwed up because we are not getting exposed to the parasites it was built to expect. Suppose lots of parasites try to downregulate the immune system (which sounds logical enough), and the body doesn’t know which ones it’s going to get but expects it to follow a Poisson distribution around some mean. Then it might just upregulate the activity of the immune system that same amount. If you get rid of all the parasites, then your immune system is just set too high and you get autoimmune disorders.

(in case you had the same question I did – yes, the parasitologist Kelly Weinersmith cited above is the same Kelly Weinersmith who co-wrote Soonish with Zach Weinersmith of SMBC fame.)

Sixth, you use antiparasitic drugs as neurotransmitters. This is the kind of murderous-yet-clever solution I expect of evolution, and it does not disappoint. Several neurotransmitters, including neuropeptide Y, neurokinin A, and substance P, are pretty good antimicrobials. The assumption has always been that the body kills two birds with one stone, getting its signaling done and also having some antimicrobials around to take out stray bacteria. But Del Giudice proposes that this is to prevent parasites from hijacking the signal; any parasite that tried to produce or secrete an antiparasitic drug would die in the process.

Dopamine is mildly toxic. The body is usually pretty good at protecting itself, but the mechanism fails under stress; this is why too much methamphetamine rots your brain. Why would you use a toxic chemical as a neurotransmitter? For the same reason you would use antiparasitic drugs – because you want to kill anything smaller than you that tries to synthesize it.

People always talk about the body as a beautiful well-oiled machine. But sometimes the body communicates with itself by messages written with radioactive ink on asbestos-laced paper, in the hopes that it’s killing itself slightly more slowly than it’s killing anyone who tries to send it fake messages. Honestly it is a miracle anybody manages to stay alive at all.

All these features together are a pretty effective way of dealing with parasite manipulation. There are a few parasites that can manipulate human behavior – rabies definitely, toxoplasma maybe – but overall we are remarkably safe.

Del Giudice argues that a combination of factors make it easy for parasites to manipulate insects but not large vertebrates. First, insects are small, so you only need a few parasites to produce an insect-sized level of neurotransmitter. Second, insects are so simple that usually one neurotransmitter maps nicely to one behavior; they are too small to support multiple redundant systems or complicated signal cascades. Del Giudice writes:

Although parasites can evolve subtler and more indirect means of manipulation, their computational capabilities are ultimately limited by their size. As the size and complexity of the host’s brain increase relative to the parasite, the disparity may become so extreme that the host is able to “outcompute” its adversary, making complex manipulations effectively impossible. The parasite may still be able to alter the host’s behavior in nonspecific ways (e.g., sickness, brain damage), but is unable to induce the kind of coordinated pattern required for trophic transmission or bodyguard manipulation. Although this argument is admittedly speculative, it is consistent with the fact that complex behavioral manipulations have not been documented in larger, warm-blooded animals (see Lafferty and Kuris 2002).

Finally, almost nothing eats humans, so there aren’t a lot of parasites interested in using us as a vehicle to get to their definitive hosts. If parasites want anything from us, it’s probably STIs wishing we had more risky sex; accordingly, Del Giudice obliquely cites Greg Cochran’s controversial hypothesis that homosexuality may be related to parasites hijacking sexual machinery.

But let’s take a step back: is any of this true?

The strongest evidence against is the dog that didn’t bark. Some systems look heavily defended against parasite manipulation, but others don’t. Amphetamines raise dopamine effectively and without significant tolerance buildup (see part IV here for a defense of this claim); antipsychotics lower dopamine equally effectively and consistently. Since dopamine is one of the most lucrative systems for parasites to hijack, it’s surprising to find it so easy to affect. And what about immune function? Externally administered corticosteroids decrease immune activity and make the body more vulnerable to infection; why don’t parasites secrete them? Why don’t we have some counter against them? These systems look consistent with an evolutionary history in which we don’t expect any threat from parasite manipulation and don’t need to defend ourselves very hard.

But also: homeostasis might be the most basic activity of all living things. Every bodily system can be modeled as a striving for homeostasis in some domain or other, even high-level cognitive functions. So it’s not clear that tolerance to psychiatric drugs needs a complicated evolutionary explanation beyond just “if you increase serotonin, your body is going to try to decrease it again, because that’s what bodies do“.

So I’m not sure how much of an effect this really had. It’s an interesting theory. But whether it explains some things, nothing, or everything, it’s too early to say.

But I like this paper because it takes the complexity of biology seriously. There’s a sense that science is stagnating, and biology is one of the worst offenders. In the 1800s and early 1900s, we were pinning down our mastery of anatomy, discovering all the major hormone systems, learning about microbes and inventing antibiotics. It seemed like the same kind of thing as physics, where you could go out into the world, observe things, and make difficult but fundamentally straightforward discoveries. But for the past fifty years, it’s been kind of a mess. Despite some amazing work by amazing people, we still don’t even understand questions as basic as what depression is. Everything seems bogged down in a million different opaque signaling cascades that fight off any effort to untangle or shift them.

Del Giudice offers a seductive explanation: the perceived perversity of the human blueprint is absolutely real. Parts of it – the parts most involved in health and disease – were sculpted by evolution to be as hard as possible to understand or affect. This makes me feel better about how often the drugs I prescribe fail in surprising ways.

Attempted Replication: Does Beef Jerky Cause Manic Episodes?

Last year, a study came out showing that beef jerky and other cured meats could trigger mania in bipolar disorder (paper, popular article). It was a pretty big deal, getting coverage in the national press and affecting the advice psychiatrists (including me) gave their patients.

The study was pretty simple: psychiatrists at a mental hospital in Baltimore asked new patients if they had ever eaten any of a variety of foods. After getting a few hundred responses, they compared answers to controls and across diagnostic categories. The only hit that came up was that people in the hospital for bipolar mania were more likely to have said they ate dry cured meat like beef jerky (odds ratio 3.49). This survived various statistical comparisons and made some biological sense.

The methodology was a little bit weird, because they only asked if they’d ever had the food, not if they’d eaten a lot of it just before becoming sick. If you had beef jerky once when you were fourteen, and ended up in the psych hospital when you were fifty-five, that counted. Either they were hoping that “ever had beef jerky at all” was a good proxy for “eats a lot of beef jerky right now”, or that past consumption produced lasting changes in gut bacteria. In any case, they found a strong effect even after adjusting for confounders and doing the necessary Bonferroni corrections, so it’s hard to argue with success.

Since the study was so simple, and already starting to guide psychiatric practice, I decided to replicate it with the 2019 Slate Star Codex survey.

In a longer section on psychiatric issues, I asked participants “Have you ever been hospitalized for bipolar mania?”. They could answer “Yes, many times”, “Yes, once”, or “No”. 3040 people answered the question, of whom 26 had been hospitalized once, 13 many times, and 3001 not at all.

I also asked participants “How often do you eat beef jerky, meat sticks, or other similar nitrate-cured meats?”. They could answer “Never”, “Less than once a year”, “A few times a year”, “A few times a month”, “A few times a week”, or “Daily or almost daily”. 5,334 participants had eaten these at least once; 2,363 had never eaten them.

(for the rest of this post, I’ll use “beef jerky” as shorthand for this longer and more complicated question)

Power calculation: the original study found an odds ratio of 3.5; because the percent of my sample who had been hospitalized for mania was so low, OR ≈ RR; I decided to test for an odds ratio of 3. About 1.2% of non-jerky-eaters had been hospitalized for mania, so I used this site to calculate necessary sample size with Group 1 as 1.2%, Group 2 as 3.6% (= 1.2 × 3), enrollment ratio of 0.46 (ratio of the 921 jerky-never-eaters to 2015 jerky eaters), alpha of 0.05, and power of 80%. It recommended a total sample of 1375, well below the 2974 people I had who answered both questions.
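For readers who want to check the arithmetic, the power calculation can be sketched with the textbook two-proportion formula. This is a simplified equal-allocation version (the online calculator also handles the 0.46 enrollment ratio, which pushes the requirement toward its answer of 1375), so treat it as a ballpark check rather than a reproduction:

```python
import math

def sample_size_two_proportions(p1, p2):
    """Per-group n for a two-sided two-proportion z-test,
    alpha = 0.05, power = 80%, equal group sizes."""
    z_alpha = 1.959964  # standard normal quantile for alpha/2 = 0.025
    z_beta = 0.841621   # standard normal quantile for power = 0.80
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

n_per_group = sample_size_two_proportions(0.012, 0.036)
total = 2 * n_per_group  # roughly 1300 total, well under the 2974 available
```

Either way, the required sample comes out far below the number of survey respondents who answered both questions.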

Of 932 jerky non-eaters, 11 were hospitalized for mania, or 1.2%. Of 2042 jerky-eaters, 27 were hospitalized for mania, or 1.3%. Odds ratio was 1.12, chi-square statistic was 0.102, p = 0.75. The 95% confidence interval was (.55, 2.23). So there was no significant difference in mania hospitalizations between jerky-eaters and non-eaters.

I also tried to do the opposite comparison, seeing if there was a difference in beef jerky consumption between people with a history of hospitalization for mania and people without such a history. I recoded the “beef jerky” variable into a very rough estimate of how many times per year people ate jerky (“never” = 0, “daily” = 400, etc). The rough estimate wasn’t very principled, but I came up with my unprincipled system before looking at any results. People who had never been hospitalized for mania ate beef jerky an average of 16 times per year; people who had been hospitalized ate it an average of 8 times per year. This is the opposite of the direction predicted by the original study, and was not significant.

I tried looking at people who had a bipolar diagnosis (which requires at least one episode of mania or hypomania) rather than just people who had been hospitalized for bipolar mania. This gave me four times the sample size of bipolar cases, but there was still no effect. 63% of cases (vs. 69% of controls) had ever eaten jerky, and cases on average ate jerky 15 times a year (compared to 20 times for controls). Neither of these findings was significant.

Why were my survey results so different from the original paper?

My data had some serious limitations. First, I was relying on self-report about mania hospitalization, which is less reliable than catching manic patients in the hospital. Second, I had a much smaller sample size of manic patients (though a larger sample size of controls). Third, I had a different population (SSC readers are probably more homogeneous in terms of class, but less homogeneous in terms of nationality) than the original study, and did not adjust for confounders.

There were also some strengths to this dataset. I had a finer-grained measure of beef jerky consumption than the original study. I had a larger control group. I was able to be more towards the confirmatory side of confirmatory/exploratory analysis.

Despite the limitations, there was a pretty striking lack of effect for jerky consumption. This is despite the dataset being sufficiently well-powered to confirm other effects that are classically known to exist (for example, people hospitalized by mania had higher self-rated childhood trauma than controls, p < 0.001). This is an important finding and should be easy to test by anyone with access to psychiatric patients or who is surveying a large population. I urge other people (hint to psychiatry residents reading this blog who have to do a research project) to look into this further. I welcome people trying to replicate or expand on these results. All of the data used in this post are freely available and can be downloaded here.

Book Review: Secular Cycles


There is a tide in the affairs of men. It cycles with a period of about three hundred years. During its flood, farms and businesses prosper, and great empires enjoy golden ages. During its ebb, war and famine stalk the land, and states collapse into barbarism.

[Graph: Chinese population over time]

At least this is the thesis of Peter Turchin and Sergey Nefedov, authors of Secular Cycles. They start off Malthusian: due to natural reproduction, population will keep increasing until it reaches the limits of what the land can support. At that point, everyone will be stuck at subsistence level. If any group ever enjoys a standard of living above subsistence level, they will keep reproducing until they are back down at subsistence.

Standard Malthusian theory evokes images of a population stable at subsistence level forever. But Turchin and Nefedov argue this isn’t how it works. A population at subsistence will always be one meal away from starving. When a famine hits, many of them will starve. When a plague hits, they will already be too sickly to fight it off. When conflict arrives, they will be desperate enough to enlist in the armies of whichever warlord can offer them a warm meal.

These are not piecemeal events, picking off just enough of the population to bring it back to subsistence. They are great cataclysms. The Black Plague killed 30% – 60% of Europeans; the Antonine Plague of Rome was almost as deadly. The Thirty Years War killed 25% – 40% of Germans; the Time of Troubles may have killed 50% of Russia’s population.

Thus the secular cycle. When population is low, everyone has more than enough land. People grow rich and reproduce. As time goes on, the same amount of farmland gets split among more and more people. Wages are driven down to subsistence. War, Famine, and Pestilence ravage the land, with Death not far behind. The killings continue until population is low again, at which point the cycle starts over.

This applies mostly to peasants, who are most at risk of starving. But nobles go through a related process. As a cycle begins, their numbers are low. As time goes on, their population expands, both through natural reproduction and through upward mobility. Eventually, there are more nobles than there are good positions…

(this part confused me a little. Shouldn’t the number of good positions scale with population? ie if one baron rules 1,000 peasants, the number of baronial positions should scale with the size of a society. I think T&N hint at a few answers. First, some positions are absolute rather than relative, eg “King” or “Minister of the Economy”. Second, noble numbers may sometimes increase faster than peasant numbers, since nobles have more food and better chances to reproduce. Third, during boom times, the ranks of nobles are swelled through upward mobility. Fourth, conspicuous consumption is a ratchet effect: during boom times, the expectations of nobility should gradually rise. Fifth, sometimes the relevant denominator is not peasants but land: if a noble only has one acre of land, it doesn’t matter how many peasants he controls. Sixth, nobles usually survive famines and plagues pretty well, so after those have done their work, there are far fewer peasants but basically the same number of nobles. All of these factors contribute to excess noble population – or as T&N call it, “elite overproduction”)

…and the nobles form “rival patronage networks” to fight for the few remaining good spots. The state goes from united (or at least all nobles united against the peasants) to divided, with coalitions of nobles duking it out (no pun intended). This can lead either to successful peasant rebellion, as some nobles support the peasants as part of inter-noble power plays, or just to civil war. Although famine and plague barely affect nobles, war affects them disproportionately – both because they were often knights or other front-line soldiers, and because killing the other side’s nobles was often a major strategic goal (think Game of Thrones). So a civil war usually further depletes the already-depleted peasant population, and finally depletes noble populations, leading to a general underpopulation and the beginning of the next cycle.

Combine these two processes, and you get the basic structure of a secular cycle. There are about a hundred years of unalloyed growth, as peasant and noble populations rebound from the last disaster. During this period, the economy is strong, the people are optimistic and patriotic, and the state is strong and united.

After this come about fifty years of “stagflation”. There is no more room for easy growth, but the system is able to absorb the surplus population without cracking. Peasants may not have enough land, but they go to the city in search of jobs. Nobles may not have enough of the positions they want, but they go to college in order to become bureaucrats, or join the retinues of stronger nobles. The price of labor reaches its lowest point, and the haves are able to exploit the desperation of the have-nots to reach the zenith of their power. From the outside, this period can look like a golden age: huge cities buzzing with people, universities crammed with students, ultra-rich nobles throwing money at the arts and sciences. From the inside, for most people it will look like a narrowing of opportunity and a hard-to-explain but growing sense that something is wrong.

After this comes a crisis. The mechanisms that have previously absorbed surplus population fail. Famine and disease ravage the peasantry. State finances fall apart. Social trust and patriotism disappear as it becomes increasingly obvious that it’s every man for himself and that people with scruples will be defeated or exploited by people without.

After this comes the depression period (marked “intercycle” on the graph above, but I’m going to stick with the book’s term). The graph makes it look puny, but it can last 100 to 150 years. During this period, the peasant population is low, but the noble population is still high. This is most likely an era of very weak or even absent state power, barbarian invasions, and civil war. The peasant population is in a good position to expand, but cannot do so because wars keep killing people off or forcing them into walled towns where they can’t do any farming. Usually it takes a couple more wars and disasters before the noble population has decreased enough to reverse elite overproduction. At this point the remaining nobles look around, decide that there is more than enough for all of them, and feel incentivized to cooperate with the formation of a strong centralized state.

This cycle is interwoven with a second 40-60 year process that T&N call the “fathers-and-sons cycle” or “bigenerational cycle”. The data tend to show waves of disorder about every 40-60 years. During the “integrative trend” (T&N’s term for the optimistic growth and stagflation phases), these can just be minor protests or a small rebellion that is easily crushed. During the “disintegrative trend” (crisis + depression), they usually represent individual outbreaks of civil war. For example, during the Roman Republic, the violence around the death of Tiberius Gracchus in 133 BC was relatively limited, because Rome had not yet entered its crisis phase. 40 years later, in the depths of the crisis phase, there was a second outbreak of violence (91 – 82 BC) including the Social War and Sulla’s wars, which escalated to full-scale (though limited) civil war. 40 years later there was a third outbreak (49 – 27 BC) including Caesar and Augustus’s very large civil wars. After that the new integrative trend started and further violence was suppressed.

In Secular Cycles, T&N mostly just identify this pattern from the data and don’t talk a lot about what causes it. But in some of Turchin’s other work, he applies some of the math used to model epidemics in public health. His model imagines three kinds of people: naives, radicals, and moderates. At the start of a cycle, most people are naive, with a few radicals. Radicals gradually spread radicalism, either by converting their friends or provoking their enemies (eg a terrorist attack by one side convinces previously disengaged people to join the other side). This spreads like any other epidemic. But as violence gets worse, some people convert to “moderates”, here meaning not “wishy-washy people who don’t care” but something more like “people disenchanted with the cycle of violence, determined to get peace at any price”. Moderates suppress radicals, but as the moderates die off, most people are once again naive and the cycle begins anew. Using various parameters for his model, Turchin claims this predicts the forty-to-sixty year cycle of violence observed in the data.
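Turchin’s published model is a system of differential equations fit to historical data; the equations and parameter values below are my own guesses, not his. But even a toy discrete-time sketch shows the basic mechanism: radicalism spreads by contact, violence turns radicals into moderates, and moderates eventually die off and are replaced by naives.

```python
# Toy "radicalization epidemic" in the spirit of Turchin's model.
# All functional forms and parameter values are invented for
# illustration -- Turchin's actual model differs in detail.

def simulate(years=300, a=0.3, b=1.0, c=0.02):
    """Track population fractions: naive n, radical r, moderate m.

    a: rate at which radicals convert naives (contact process)
    b: rate at which violence (~ r^2) turns radicals into moderates
    c: rate at which moderates die off and are replaced by naives
    """
    n, r, m = 0.99, 0.01, 0.0
    history = []
    for _ in range(years):
        radicalized = a * n * r    # naive -> radical, by contact
        disenchanted = b * r * r   # radical -> moderate, via violence
        turnover = c * m           # moderate -> naive, generational
        n += turnover - radicalized
        r += radicalized - disenchanted
        m += disenchanted - turnover
        history.append(r)
    return history

rs = simulate()
peak_year = max(range(len(rs)), key=lambda t: rs[t])
print(f"radical fraction peaks at {rs[peak_year]:.2f} in year {peak_year}")
```

With these made-up numbers you get one wave: the radical fraction surges over a couple of decades, then the moderates it breeds beat it back down toward a lower equilibrium. Reproducing the sustained 40-60 year recurrence Turchin reports would require his actual parameterization.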

So this is the basic thesis of Secular Cycles. Pre-industrial history operates on two cycles: first, a three-hundred year cycle of the rise-and-fall of civilizations. And second, a 40-60 year cycle of violent disorder that only becomes relevant during the lowest parts of the first cycle.


This is all in the first chapter of the book! The next eight chapters are case studies of eight different historical periods and how they followed the secular cycle model.

For example, Chapter 7 is on the Roman Empire. It starts with Augustus in 27 BC. The Roman Republic has just undergone a hundred years of civil war, from the Gracchi to Marius to Sulla to Pompey to Caesar to Antony. All of this decreased its population by 30% from its second-century peak. That means things are set to get a lot better very quickly.

The expansion phase of the Empire lasted from Augustus (27 BC) to Nerva (96 AD), followed by a stagflation phase from Nerva to Antoninus Pius (165 AD). Throughout both phases, the population grew – from about 40 million in Augustus’ day to 65 million in Antoninus’. Wheat prices stayed stable until Nerva, then doubled from the beginning of the second century to its end. Legionary pay followed the inverse pattern, staying stable until Nerva and then decreasing by a third before 200. The finances of the state were the same – pretty good until the late second century (despite occasional crazy people becoming Emperor and spending the entire treasury building statues of themselves), but cratering during the time of Marcus Aurelius and Commodus (who debased the denarius down to only 2 g silver).

Throughout expansion and stagflation, the Empire was relatively peaceful (the “Pax Romana”). Sure, occasionally a crazy person would become Emperor and they would have to kill him. There was even one small civil war which lasted all of a year (69 AD). But in general, these were isolated incidents.

Throughout the expansion phase, upward mobility was high and income inequality relatively low. T&N measure this as how many consuls (the highest position in the Roman governmental hierarchy) had fathers who were also consuls. This decreased throughout the first century – from 46% to 18% – then started creeping back up during the stagflation phase to reach 32% at the end of the second century.

The crisis phase began in 165 AD at the peak of Rome’s population and wealth. The Antonine Plague ravaged the Empire, killing 30% of the population. Fifteen years later, the century-long dominance of the Good Emperors ended, and Commodus took the throne. Then he was murdered and Pertinax took the throne. Then he was murdered and Didius Julianus took the throne. Then he was murdered and Septimius Severus took the throne.

Now we are well into the disintegrative trend, and the shorter 40-60 year cycle comes into play. Septimius Severus founds a dynasty that lasts 41 years, until Severus Alexander (the grandson of Septimius Severus’ sister-in-law; it’s complicated) is assassinated by his own soldiers in Germany. This begins the Crisis Of The Third Century, a time of constant civil war, mass depopulation, and economic collapse. The Five Good Emperors of the second century ruled 84 years between them (average of 17 years per emperor). The fifty-year Crisis included 27 emperors, for an average of less than 2 years per emperor.

Finally, in 284, Emperor Diocletian ended the civil wars, re-established centralized authority, and essentially refounded the Roman Empire – a nice round 310 years after Augustus did the same. T&N mark this as the end of a secular cycle and the beginning of a new integrative trend.

T&N are able to tell this story. But they don’t just tell the story. They are able to cite various statistics to back themselves up. The Roman population statistics. The price of wheat and other foodstuffs. The average wages for laborers. They especially like coin hoards – the amount of buried treasure from a given period discovered by treasure-hunters – because they argue you only bury your money during times of instability, so this forms a semi-objective way of measuring how unstable things are.

They are at their best when presenting very broad summary statistics. For example, Roman industry produced vast amounts of lead, which entered the atmosphere and settled into the Greenland ice sheet. Here is Roman lead output per year as measured in ice cores:

This shows four peaks for the four cycles T&N identify in Rome: the Kingdom, the Republic, the Early Empire of Augustus (Principate, the one described above), and the Late Empire of Diocletian (Dominate). It even shows a sawtooth-y pattern corresponding to the shorter bigenerational cycles.

Or here is building activity in Rome, measured by how many buildings archaeologists have found from a given time:

This is a little less perfect (why is there a big gap in the middle of the Principate? I guess Augustus is a hard act to follow, building-wise) but it still looks good for the cycle theory.

And here is an Index Of Political Instability, which “combines measures of duration, intensity, and scale of political instability events, coded by a team of professional historians”:

Rome is the one on top. Instability clearly peaks during the crisis-depression phases between T&N’s secular cycles – again with a sawtooth pattern representing the bigenerational cycles.


Seeing patterns in random noise is one of the basic human failure modes. Secular Cycles is so prima facie crackpottish that it should require mountains of data before we even start wondering if it might be true. I want to make it clear that the book – plus Turchin individually in some of his other books and papers – provides these mountains. I can’t show every single case study, graph, and table in this book review. But the chapter above on the Roman Principate included 25 named figures and graphs, plus countless more informal presentations of data series, from “real wages of agricultural laborers in Roman Egypt during the second century” to “mean annual real land rents for wheat fields in artabas per aroura, 27 BC to 268 CE” to “imperial handouts per reign-year” to “importation of African red slip ware into the Albegna Valley of Etruria, 100 – 600”. And this is just one chapter, randomly chosen. There are seven others just like this. This book understands the burden of proof it is under, and does everything it can to meet it.

Still, we should be skeptical. How many degrees of freedom do T&N have, and is it enough to undermine their case?

First, they get some freedom in the civilizations they use as case studies. They could have searched through every region and period and cherry-picked eight civilizations that rose and fell over periods of three hundred years. Did they? I don’t think so. The case studies are England, France, Rome, and Russia. These are some of the civilizations of greatest interest to the English-speaking world (except Russia, which makes sense in context because the authors are both Russian). They’re also some of the civilizations best-studied by Anglophone historians and with the most data available (the authors’ methodology requires having good time-series of populations, budgets, food production, etc).

Also, it’s not too hard to look at the civilizations they didn’t study and fill in the gaps. The book barely mentions China, but it seems to fit the model pretty well (“the empire united longs to divide; divided longs to unite”). In fact, taking the quotation completely seriously – the empire was first united during the Qin Dynasty starting in 221 BC, which lasted only 20 years before seguing into the Han Dynasty in 202 BC. The Han expanded and prospered for about a century, had another century of complicated intrigue and frequent revolt, and then ended in disaster in the first part of the first century, with a set of failed reforms, civil war, the sack of the capital, some more civil war, peasant revolt, and even more civil war. The separate period of the Eastern Han Dynasty began in 25 AD, about 240 years after the beginning of the Qin-Han cycle. The Eastern Han also grew and prospered for about a hundred years, then had another fifty years of simmering discontent, then fell apart in about 184 AD, with another series of civil wars, peasant rebellions, etc. This was the Three Kingdoms Period that “the empire united longs to divide, divided longs to unite” was written to describe. It lasted another eighty years until 266 AD, after which the Jin Dynasty began. The Jin Dynasty was kind of crap, but it lasted another 180 years until 420, followed by 160 years of division, followed by the Sui and Tang dynasties, which were not crap. So I don’t think it takes too much pattern-matching to identify a Western-Han-to-Eastern-Han Cycle of 240 years, followed by an Eastern-Han-to-Jin Cycle of 241 years, followed by a Jin-to-Sui/Tang-Cycle of 324 years.

One could make a more hostile analysis. Is it really fair to lump the Western Jin and Eastern Jin conveniently together, but separate the Western Han and Eastern Han conveniently apart? Is it really fair to call the crappy and revolt-prone Jin Dynasty an “integrative trend” rather than a disintegrative trend that lasted much longer than the theory should predict? Is it really fair to round off cycles of 240 and 320 years to “basically 300 years”?

I think the answer to all of these is “T&N aren’t making predictions about the length of Chinese dynasties, they’re making predictions about the nature of secular cycles, which are correlated with dynasties but not identical to them”. If I had the equivalent of ice-core lead readings for China, or an “instability index”, or time series data for wages or health or pottery importation or so on, maybe it would be perfectly obvious that the Eastern and Western Han defined two different periods, but the Eastern and Western Jin were part of the same period – the same way one look at the ice-core lead data for Rome shows that the Julio-Claudian dynasty vs. the Flavian Dynasty is not an interesting transition.

A secondary answer might be that T&N admit all sorts of things can alter the length of secular cycles. They tragically devote only a few pages to “Ibn Khaldun cycles”, the theory of 14th-century Arab historian Ibn Khaldun that civilizations in the Maghreb rise and fall on a one hundred year period. But they discuss it just enough to say their data confirm Ibn Khaldun’s observations. The accelerated timescale (100 vs. 300 years) is because the Maghreb is massively polygynous, with successful leaders having harems of hundreds of concubines. This speeds up the elite overproduction process and makes everything happen in fast-forward. T&N also admit that their theory only describes civilizations insofar as they are self-contained. This approximately holds for hegemons like Rome at its height, but fails for eg Poland, whose history is going to be much more influenced by when Russia or Germany decides to invade than by the internal mechanisms of Polish society. Insofar as external shocks – whether climatic, foreign military, or whatever else – affect a civilization, secular cycles will be stretched out, compressed, or just totally absent.

This sort of thing must obviously be true, and it’s good T&N say it, but it’s also a free pass to add as many epicycles as you need to explain failure to match data. All I can say looking at China is that, if you give it some wiggle room, it seems to fit T&N’s theories okay. The same is true of a bunch of other civilizations I plugged in to see if they would work.

Second, T&N get some degrees of freedom based on what statistics they use. In every case, they present statistics that support the presence of secular cycles, but they’re not the same statistics in every case. On the one hand, this is unavoidable; we may not have good wage data for every civilization, and levels of pottery importation might be more relevant to ancient Rome than to 19th-century Russia. On the other hand, I’m not sure what prevents them from just never mentioning the Instability Index if the Instability Index doesn’t show what they want it to show.

Here are some random Rome-related indicators I found online:

None of them show the same four-peaked Kingdom-Republic-Principate-Dominate pattern as the ones Secular Cycles cites, or the ones Turchin has online.

Third, a lot of the statistics themselves have some degrees of freedom. A lot of them are things like “Instability Index” or “Index of Social Well-Being” or “General Badness Index”. These seem like the kind of scores you can fiddle with to get the results you want. Turchin claims he hasn’t fiddled with them – his instability index is taken from a 1937 paper I haven’t been able to find. But how many papers like that are there? Am I getting too conspiratorial now?

Likewise, we don’t have direct access to the budget of the Roman Empire (or Plantagenet England, or…). Historians have tried to reconstruct it based on archaeology and the few records that have survived. T&N cite these people, and the people they cite are at the top of their fields and say what T&N say they say. But how much flexibility did they have in deciding which estimate of the Roman budget to cite? Is there enough disagreement that they could cite the high estimate for one period and the low estimate for another, then prove it had gone down? I don’t know (though a few hours’ work ought to be enough to establish this).

I wish I could find commentary by other academics and historians on Secular Cycles, or on Turchin’s work more generally. I feel like somebody should either be angrily debunking this, or else throwing the authors a ticker-tape parade for having solved history. Neither is happening. The few comments I can find are mostly limited to navel-gazing about whether history should be quantitative or qualitative. The few exceptions I can find are blog posts by people I already know and respect urging me to read Turchin five years ago, advice I am sorry for not taking. If you know of any good criticism, please tell me where to find it.

Until then, my very quick double-checking suggests T&N are pretty much on the level. But there could still be subtler forms of overfitting going on that I don’t know enough about history to detect.


If this is true, does it have any implications for people today?

First, a very weak implication: it makes history easier to learn. I was shocked how much more I remembered about the Plantagenets, Tudors, Capetians, etc after reading this book, compared to reading any normal history book about them. I think the secret ingredient is structure. If history is just “one damn thing after another”, there’s no framework for figuring out what matters, what’s worth learning, what follows what else. The secular cycle idea creates a structure that everything fits into neatly. I know that the Plantagenet Dynasty lasted from 1154 – 1485, because it had to, because that’s a 331-year secular cycle. I know that the important events to remember include the Anarchy of 1135 – 1153 and the War of the Roses from 1455 – 1487, because those are the two crisis-depression periods that frame the cycle. I know that after 1485 Henry Tudor took the throne and began a new age of English history, because that’s the beginning of the integrative phase of the next cycle. All of this is a lot easier than trying to remember these names and dates absent any context. I would recommend this book for that reason alone.

Second, I think this might give new context to Piketty on inequality. T&N describe inequality as starting out very low during the growth phase of a secular cycle, rising to a peak during the stagflation phase, then dropping precipitously during the crisis. Piketty describes the same: inequality rising through the peaceful period of 1800 to 1900, dropping precipitously during the two World Wars, then gradually rising again since then. This doesn’t make a huge amount of sense, since I’m not sure you can fit the post-industrial world into secular cycles. But I notice Piketty seems to think of this as a one-off event – inequality has been rising forever, broken only by the freak crisis of the two World Wars – and it’s interesting to read T&N talk about the exact same process recurring again and again throughout history.

Finally, and most important: is there any sense in which this is still going on?

The easiest answer would be no, there isn’t. The secular cycles are based around Malthusian population growth, but we are now in a post-Malthusian regime where land is no longer the limiting resource. And the cycles seem to assume huge crises killing off 30% to 50% of the population, but those don’t happen anymore in First World countries; the Civil War was the bloodiest period of US history, and even it only killed 2% of Americans. Even Germany only lost about 15% of its population in World Wars I + II.

But Turchin has another book, Ages Of Discord, arguing that the cycles are still operating today. I have bought it and started it and will report back when I’m done.

Even without a framework, this is just interesting to think about. In popular understanding of American history, you can trace out optimistic and pessimistic periods. The national narrative seems to include a story of the 1950s as a golden age of optimism. Then everyone got angry and violent in the early 1970s (the Status 451 review of Days Of Rage is pretty great here, and reminds us that “people have completely forgotten that in 1972 we had over nineteen hundred domestic bombings in the United States”). Then everything suddenly got better once Reagan declared “morning in America” in the 1980s, with an era of optimism and good feelings lasting through the Clinton administration. Then things started to turn bad sometime around Bush II. And now everybody hates each other, and fascists and antifa are fighting in the streets, and people are talking about how “civility” and “bipartisanship” are evil tools of oppression, and PredictIt says an avowed socialist has a 10% chance of becoming president of the US. To what extent is this narrative true? I don’t know, but it’s definitely the narrative.

One thing that strikes me about T&N’s cycles is the ideological component. They describe how, during a growth phase, everyone is optimistic and patriotic, secure in the knowledge that there is enough for everybody. During the stagflation phase, inequality increases, but concern about inequality increases even more, zero-sum thinking predominates, and social trust craters (both because people are actually defecting, and because it’s in lots of people’s interest to play up the degree to which people are defecting). By the crisis phase, partisanship is much stronger than patriotism and radicals are talking openly about how violence is ethically obligatory.

And then, eventually, things get better. There is a new Augustan Age of virtue and the reestablishment of all good things. This is a really interesting claim. Western philosophy tends to think in terms of trends, not cycles. We see everything going on around us, and we think this is some endless trend towards more partisanship, more inequality, more hatred, and more state dysfunction. But Secular Cycles offers a narrative where endless trends can end, and things can get better after all.

Of course, it also offers a narrative where sometimes this process involves the death of 30% – 50% of the population. Maybe I should read Turchin’s other books before speculating any further.

OT134: Open Zed

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit or the SSC Discord server – and also check out the SSC Podcast. Also:

1. 26 teams have signed up for the adversarial collaboration contest so far! But don’t feel overwhelmed; if people flake out at the same rate as last year, there will still only be 10 or so final entries. I’m curious why the second post was so much more successful at encouraging signups than the first. Was it the rule that only people with A-M names could propose? The rule that nobody could post non-proposal comments in the comments section? Or did people just need more time?

2. I’ve been taking more advantage of a feature where any comment that more than three users report gets removed until I can check it over for appropriateness. Most of these comments are inappropriate but not worth banning people for, so I usually just keep them removed and take no further action. I know people don’t like moderator actions without transparency, but I don’t have enough time/energy to moderate in a transparent way and so you are stuck with this for now. Sorry.

3. Related – I want to remind people that it’s almost never a good choice to go too general. If a post like Against Against Billionaire Philanthropy is getting too many comments like “This proves that government is bad at everything” or “You are a free market ideologue too blinded to see that the free market has killed millions of people”, something has gone wrong, and it’s probably me not banning enough people. Feel free to report posts like this, though I may not ban all of them. I might crack down harder on this in the future; for now, re-read Arguments From My Opponent Believes Something.

4. Two new sidebar ads this month. 21st Night is a study program that combines spaced repetition with error logging. Sparrow is a charity app that links automatic donations to events in your life – for example, you can set it to donate 10% of your restaurant bills to ending world hunger.
