[Epistemic status: Very speculative, especially Parts 3 and 4. Like many good things, this post is based on a conversation with Paul Christiano; most of the good ideas are his, any errors are mine.]
I.
In the 1950s, an Austrian scientist discovered a series of equations that he claimed could model history. They matched past data with startling accuracy. But when extended into the future, they predicted the world would end on November 13, 2026.
This sounds like the plot of a sci-fi book. But it’s also the story of Heinz von Foerster, a mid-century physicist, cybernetician, cognitive scientist, and philosopher.
His problems started when he became interested in human population dynamics.
(the rest of this section is loosely adapted from his Science paper “Doomsday: Friday, 13 November, A.D. 2026”)
Assume a perfect paradisiacal Garden of Eden with infinite resources. Start with two people – Adam and Eve – and assume the population doubles every generation. In the second generation there are 4 people; in the third, 8. This is that old riddle about the grains of rice on the chessboard again. By the 64th generation (ie after about 1500 years) there will be 18,446,744,073,709,551,616 people – ie roughly two hundred million times the number of people who have ever lived in all the eons of human history. So one of our assumptions must be wrong. Probably it’s the one about the perfect paradise with unlimited resources.
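Just to make the arithmetic concrete, here is a minimal sketch of the unchecked-doubling scenario (the 25-year generation length and the ~100 billion humans-ever-lived figure are rough outside estimates, not numbers from von Foerster's paper):

```python
EVER_LIVED = 100e9  # rough estimate of all humans who have ever lived

population = 2  # generation 1: Adam and Eve
for generation in range(2, 65):
    population *= 2  # doubles every generation

print(f"generation 64: {population:,} people")         # 2**64, about 1.8e19
print(f"years elapsed at ~25y/generation: {64 * 25}")  # about 1,600 years
print(f"multiple of everyone who ever lived: {population / EVER_LIVED:.1e}")
```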
Okay, new plan. Assume a world with a limited food supply / limited carrying capacity. If you want, imagine it as an island where everyone eats coconuts. But there are only enough coconuts to support 100 people. If the population reproduces beyond 100 people, some of them will starve, until they’re back at 100 people. In the second generation, there are 100 people. In the third generation, still 100 people. And so on to infinity. Here the population never grows at all. But that doesn’t match real life either.
But von Foerster knew that technological advance can change the carrying capacity of an area of land. If our hypothetical islanders discover new coconut-tree-farming techniques, they may be able to get twice as much food, increasing the maximum population to 200. If they learn to fish, they might open up entirely new realms of food production, increasing population into the thousands.
So the rate of population growth is neither the double-per-generation of a perfect paradise, nor the zero-per-generation of a stagnant island. Rather, it depends on the rate of economic and technological growth. In particular, in a closed system that is already at its carrying capacity and with zero marginal return to extra labor, population growth equals productivity growth.
What causes productivity growth? Technological advance. What causes technological advance? Lots of things, but von Foerster’s model reduced it to one: people. Each person has a certain percent chance of coming up with a new discovery that improves the economy, so productivity growth will be a function of population.
So in the model, the first generation will come up with some small number of technological advances. This allows them to spawn a slightly bigger second generation. This new slightly larger population will generate slightly more technological advances. So each generation, the population will grow at a slightly faster rate than the generation before.
This matches reality. The world population barely increased at all in the millennium from 2000 BC to 1000 BC. But it doubled in the fifty years from 1910 to 1960. In fact, using his model, von Foerster was able to come up with an equation that predicted the population near-perfectly from the Stone Age until his own day.
But his equations corresponded to something called hyperbolic growth. In hyperbolic growth, a feedback cycle – in this case, population growth drives technological growth, which drives further population growth – makes growth speed up faster and faster until it finally shoots to infinity. Imagine a simplified version of von Foerster’s system where the world starts with 100 million people in 1 AD and a doubling time of 1000 years, and the doubling time halves after each doubling. It might predict something like this:
1 AD: 100 million people
1000 AD: 200 million people
1500 AD: 400 million people
1750 AD: 800 million people
1875 AD: 1600 million people
…and so on. This system reaches infinite population in finite time (ie before the year 2000). The real model that von Foerster got after analyzing real population growth was pretty similar to this, except that it reached infinite population in 2026, give or take a few years (his pinpointing of Friday November 13 was mostly a joke; the equations were not really that precise).
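Here is a minimal sketch of that simplified schedule (the toy illustration above, not von Foerster's actual fitted equation):

```python
# Each doubling takes half as long as the one before, so the doubling
# dates converge: 1000 + 500 + 250 + ... -> 2000 AD. Infinite population
# in finite time.
year, population, doubling_time = 0, 100e6, 1000.0
while doubling_time >= 1:
    year += doubling_time
    population *= 2
    doubling_time /= 2
    print(f"{year:7.1f} AD: {population:,.0f} people")
```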
What went wrong? Two things.
First, as von Foerster knew (again, it was kind of a joke) the technological advance model isn’t literally true. The Garden of Eden scenario operates as an upper bound on his hyperbolic model: even in the Garden of Eden, population can’t do more than double every generation.
Second, contra all previous history, people in the 1900s started to have fewer kids than their resources could support (the demographic transition). Couples started considering the cost of college, and the difficulty of maternity leave, and all that, and decided that maybe they should stop at 2.5 kids (or just get a puppy instead).
Von Foerster published his paper in 1960, which ironically was the last year that his equations held true. Starting in 1961, population left its hyperbolic growth path. It is now expected to stabilize by the end of the 21st century.
II.
But nobody really expected the population to reach infinity. Armed with this story, let’s look at something more interesting.
This (source) might be the most depressing graph ever:
The horizontal axis is years before 2020, a random year chosen so that we can put this in log scale without negative values screwing everything up. This is an arbitrary choice, but you can also graph it with log GDP as the horizontal axis and find a similar pattern.
The vertical axis is the amount of time it took the world economy to double from that year, according to this paper. So for example, if at some point the economy doubled every twenty years, the dot for that point is at twenty. The doubling time decreases throughout most of the period being examined, indicating hyperbolic growth.
Hyperbolic growth, as mentioned before, shoots to infinity at some specific point. On this graph, that point is represented by the doubling time reaching zero. Once the economy doubles every zero years, you might as well call it infinite.
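For the mathematically inclined, a quick sketch of why hyperbolic growth shows up as a straight line on these log-log axes (using the simplest possible hyperbola, not the paper's exact functional form):

```latex
% Simplest hyperbola, blowing up at time t^*:
P(t) = \frac{C}{t^{*}-t}
% P doubles exactly when the remaining time halves, so the doubling time is
D(t) = \frac{t^{*}-t}{2}
% and taking logs,
\log D(t) = \log\!\left(t^{*}-t\right) - \log 2
% i.e. a straight line of slope 1 in log doubling time
% versus log years before t^*.
```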
For all of human history, economic progress formed a near-perfect straight line pointed at the early 21st century. Its destination varied by a century or two now and then, but never more than that. If an ancient Egyptian economist had modern techniques and methodologies, he could have made a graph like this and predicted it would reach infinity around the early 21st century. If a Roman had done the same thing, using the economic data available in his own time, he would have predicted the early 21st century too. A medieval Burgundian? Early 21st century. A Victorian Englishman? Early 21st century. A Stalinist Russian? Early 21st century. The trend was really resilient.
In 2005, inventor Ray Kurzweil published The Singularity Is Near, claiming there would be a technological singularity in the early 21st century. He didn’t refer to this graph specifically, but he highlighted this same trend of everything getting faster, including rates of change. Kurzweil took the infinity at the end of this graph very seriously; he thought that some event would happen that really would catapult the economy to infinity. Why not? Every data point from the Stone Age to the Atomic Age agreed on this.
This graph shows the Singularity getting cancelled.
Around 1960, doubling times stopped decreasing. The economy kept growing. But now it grows at a flat rate. It shows no signs of reaching infinity; not soon, not ever. Just constant, boring 2% GDP growth for the rest of time.
Why?
Here von Foerster has a ready answer prepared for us: population!
Economic growth is a function of population and productivity. And productivity depends on technological advancement and technological advancement depends on population, so it all bottoms out in population in the end. And population looked like it was going to grow hyperbolically until 1960, after which it stopped. That’s why hyperbolic economic growth, ie progress towards an economic singularity, stopped then too.
In fact…
This is a really sketchy graph of per capita income doubling times. It’s sketchy because until 1650, per capita income wasn’t really increasing at all. It was following a one-step-forward one-step-back pattern. But if you take out all the steps back and just watch how quickly it took the steps forward, you get something like this.
Even though per capita income tries to abstract out population, it displays the same pattern. Until 1960, we were on track for a singularity where everyone earned infinite money. After 1960, the graph “bounces back” and growth rates stabilize or even decrease.
Again, von Foerster can explain this to us. Per capita income grows when technology grows, and technology grows when the population grows. The signal from the end of hyperbolic population growth shows up here too.
To make this really work, we probably have to zoom in a little bit and look at concrete reality. Most technological advances come from a few advanced countries whose population stabilized a little earlier than the world population. Of the constant population, an increasing fraction are becoming researchers each year (on the other hand, the low-hanging fruit gets picked off and technological advance becomes harder with time). All of these factors mean we shouldn’t expect productivity growth/GWP per capita growth/technological growth to exactly track population growth. But on the sort of orders-of-magnitude scale you can see on logarithmic graphs like the ones above, it should be pretty close.
So it looks like past predictions of a techno-economic singularity for the early 21st century were based on extrapolations of a hyperbolic trend in technology/economy that depended on a hyperbolic trend in population. Since the population singularity didn’t pan out, we shouldn’t expect the techno-economic singularity to pan out either. In fact, since population in advanced countries is starting to “stagnate” relative to earlier eras, we should expect a relative techno-economic stagnation too.
…maybe. Before coming back to this, let’s explore some of the other implications of these models.
III.
The first graph is the same one you saw in the last section, of absolute GWP doubling times. The second graph is the same, but limited to Britain.
Where’s the Industrial Revolution?
It doesn’t show up at all. This may be a surprise if you’re used to the standard narrative where the Industrial Revolution was the most important event in economic history. Graphs like this make the case that the Industrial Revolution was an explosive shift to a totally new growth regime:
It sure looks like the Industrial Revolution was a big deal. But Paul Christiano argues your eyes may be deceiving you. That graph is a hyperbola, ie corresponds to a single simple equation. There is no break in the pattern at any point. If you transformed it to a log doubling time graph, you’d just get the graph above that looks like a straight line until 1960.
On this view, the Industrial Revolution didn’t change historical GDP trends. It just shifted the world from a Malthusian regime where economic growth increased the population to a modern regime where economic growth increased per capita income.
For the entire history of the world up until about 1650, GDP per capita was roughly the same for everyone everywhere. An Israelite shepherd would have had about as much stuff as a Roman farmer or a medieval serf.
This was the Malthusian trap, where “productivity produces people, not prosperity”. People reproduce to fill the resources available to them. Everyone always lives at subsistence level. If productivity increases, people reproduce, and now you have more people living at subsistence level. OurWorldInData has an awesome graph of this:
As of 1500, places with higher productivity (usually richer farmland, though better technology and social organization also helped) had higher population density. But GDP per capita was about the same everywhere.
There were always occasional windfalls from exciting discoveries or economic reforms. For a century or two, GDP per capita would rise. But population would always catch up again, and everyone would end up back at subsistence.
Some people argue Europe broke out of the Malthusian trap around 1300. This is not quite right. 1300s Europe achieved above-subsistence GDP, but only because the Black Plague killed so many people that the survivors got a windfall by taking their land.
Malthus predicts that this should only last a little while, until the European population bounces back to pre-Plague levels. This prediction was exactly right for Southern Europe. Northern Europe didn’t bounce back. Why not?
Unclear, but one answer is: fewer births, more plagues.
Broadberry 2015 mentions that Northern European culture promoted later marriage and fewer children:
The North Sea Area had an advantage in this area because of its approach to marriage. Hajnal (1965) argued that northwest Europe had a different demographic regime from the rest of the world, characterised by later marriage and hence limited fertility. Although he originally called this the European Marriage Pattern, later work established that it applied only to the northwest of the continent. This can be linked to the availability of labour market opportunities for females, who could engage in market activity before marriage, thus increasing the age of first marriage for females and reducing the number of children conceived (de Moor and van Zanden, 2010). Later marriage and fewer children are associated with more investment in human capital, since the women employed in productive work can accumulate skills, and parents can afford to invest more in each of the smaller number of children because of the “quantity-quality” trade-off (Voigtländer and Voth, 2010).
This low birth rate was happening at the same time plagues were raising the death rate. Here’s another amazing graph from OurWorldInData:
British population maxes out around 1300 (?), declines substantially during the Black Plague of 1348-49, but then keeps declining. The List Of English Plagues says another plague hit in 1361, then another in 1369, then another in 1375, and so on. Some historians call the whole period from 1348 to 1666 “the Plague Years”.
It looks like through the 1350 – 1450 period, population keeps declining, and per capita income keeps going up, as Malthusian theory would predict.
Between 1450 and 1550, population starts to recover, and per capita incomes start going down, again as Malthus would predict. Then around 1560, there’s a jump in incomes; according to the List Of Plagues, 1563 was “probably the worst of the great metropolitan epidemics, and then extended as a major national outbreak”. After 1563, population increases again and per capita incomes decline again, all the way until 1650. Population does not increase in Britain at all between 1660 and 1700. Why? The List declares 1665 to be “The Great Plague”, the largest in England since 1348.
So from 1348 to 1650, Northern European per capita incomes diverged from the rest of the world’s. But they didn’t “break out of the Malthusian trap” in a strict sense of being able to direct production toward prosperity rather than population growth. They just had so many plagues that they couldn’t grow the population anyway.
But in 1650, England did start breaking out of the Malthusian trap; population and per capita incomes grow together. Why?
Paul theorizes that technological advance finally started moving faster than maximal population growth.
Remember, in the von Foerster model, the growth rate increases with time, all the way until it reaches infinity in 2026. The closer you are to 2026, the faster your economy will grow. But population can only grow at a limited rate. In the absolute limit, women can only have one child per nine months. In reality, infant mortality, infertility, and conscious decisions to delay childbearing mean the natural limits are much lower than that. So there’s a theoretical limit on how quickly the population can increase even with maximal resources. If the economy is growing faster than that, Malthus can’t catch up.
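Here is a toy version of that threshold argument (my own minimal sketch with made-up rates, not Paul's actual model): carrying capacity (technology) grows at rate g, while population grows at most r_max per year and never exceeds capacity.

```python
def income_per_capita(g, r_max=0.01, years=200):
    """Toy Malthus: capacity grows at rate g; population grows at most
    r_max per year and is capped at capacity (the subsistence ceiling).
    Returns capacity/population, where 1.0 = bare subsistence."""
    capacity = population = 1.0
    for _ in range(years):
        capacity *= 1 + g
        population = min(capacity, population * (1 + r_max))
    return capacity / population

print(income_per_capita(g=0.005))  # slow tech: ~1.0, Malthus catches up
print(income_per_capita(g=0.02))   # fast tech: incomes escape subsistence
```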
Why would this happen in England and Holland in 1650?
Lots of people have historical explanations for this. Northern European population growth was so low that people were forced to invent labor-saving machinery; eventually this reached a critical mass, we got the Industrial Revolution, and economic growth skyrocketed. Or: the discovery of America led to a source of new riches and a convenient sink for excess population. Or: something something Protestant work ethic printing press capitalism. These are all plausible. But how do they sync with the claim that absolute GDP never left its expected trajectory?
I find the idea that the Industrial Revolution wasn’t a deviation from trend fascinating and provocative. But it depends on eyeballing a lot of graphs that have had a lot of weird transformations done to them, plus writing off a lot of outliers. Here’s another way of presenting Britain’s GDP and GDP per capita data:
Here it’s a lot less obvious that the Industrial Revolution represented a deviation from trend for GDP per capita but not for GDP.
These British graphs show less of a singularity signature than the worldwide graphs do, probably because we’re looking at them on a shorter timeline, and because the Plague Years screwed everything up. If we insisted on fitting them to a hyperbola, it would look like this:
Like the rest of the world, Britain was only on a hyperbolic growth trajectory when economic growth was translating into population growth. That wasn’t true before about 1650, because of the plague. And it wasn’t true after about 1850, because of the Demographic Transition. We see a sort of fit to a hyperbola between those points, and then the trend just sort of wanders off.
It seems possible that the Industrial Revolution was not a time of abnormally fast technological advance or economic growth. Rather, it was a time when economic growth outpaced population growth, causing a shift from a Malthusian regime where productivity growth always increased population at subsistence level, to a modern regime where productivity growth increases GDP per capita. The world remained on the same hyperbolic growth trajectory throughout, until the trajectory petered out around 1900 in Britain and around 1960 in the world as a whole.
IV.
So just how cancelled is the singularity?
To review: population growth increases technological growth, which feeds back into the population growth rate in a cycle that reaches infinity in finite time.
But since population can’t grow infinitely fast, this pattern breaks off after a while.
The Industrial Revolution tried hard to compensate for the “missing” population; it invented machines. Using machines, an individual could do an increasing amount of work. We can imagine making eg tractors as an attempt to increase the effective population faster than the human uterus can manage. It partly worked.
But the industrial growth mode had one major disadvantage over the Malthusian mode: tractors can’t invent things. In the Malthusian mode, extra people didn’t just grow the population; they also increased the rate of technological advance, and thus the rate of population growth itself. When we shifted (in part) from making people to making tractors, that feedback loop broke down, and growth (in people and tractors) became sub-hyperbolic.
If the population stays the same (and by “the same”, I just mean “not growing hyperbolically”) we should expect the growth rate to stay the same too, instead of increasing the way it did for thousands of years of increasing population, modulo other concerns.
In other words, the singularity got cancelled because we no longer have a surefire way to convert money into researchers. The old way was more money = more food = more population = more researchers. The new way is just more money = send more people to college, and screw all that.
But AI potentially offers a way to convert money into researchers. Money = build more AIs = more research.
If this is true, then once AI comes around – even if it isn’t much smarter than humans – as long as the computational power you can invest into researching a given field increases with the amount of money you have, hyperbolic growth is back on. Faster growth rates mean more money, more money means more AIs researching new technology, which means even faster growth rates, and so on to infinity.
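In equation form, the loop is easy to state (a sketch under the simplest assumptions: the number of AI researchers R is proportional to output Y, and the growth rate is proportional to R):

```latex
% Researchers proportional to output: R = aY.
% Growth proportional to researchers:
\frac{dY}{dt} = c\,R\,Y = k\,Y^{2}, \qquad k = c\,a
% Solving,
Y(t) = \frac{Y_{0}}{1 - k\,Y_{0}\,t}
% which blows up at the finite time t = 1/(k Y_0): hyperbolic growth again.
```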
Presumably you would eventually hit some other bottleneck, but things could get very strange before that happens.
I predict the Singularity has not been cancelled. By the year 2100, the per-capita gross world product will exceed $10,000,000 (in year 2000 dollars).
Robert Lucas Jr. is wrong…per capita GWP growth will accelerate
You should be aware that England and Holland were dominant European Naval powers around that time.
See here: https://en.wikipedia.org/wiki/Dutch_West_India_Company
And here: https://en.wikipedia.org/wiki/East_India_Company
And don’t be misled by “company” — the East India Company was basically the first “joint stock corporation” that there was. Most of English GDP was touched by The Company, because:
The period you’re describing was the rise of (Colonial) Mercantilism. I.e. global trade networks were now unleashed thanks to technological advances. “GDP per Capita” in England, per your example, is leveraging the physical labor of an entire subcontinent (India) whose laborers do not show up in that population graph.
Prior to Mercantilism, a “conquered” people would be subsumed within the “Empire” and become taxed subjects, with “Roman Empire GDP per Capita” adjusted to include the entire empire.
But in these graphs you can see the GDP of England going up without reflecting the full population who “labored” to produce that GDP.
Couple that with the population-depleting-plagues you mentioned and boom: Trap escaped by virtue (or lack thereof) of non-citizen labor.
“Who is generating this Wealth — and how — and are they able to capture any of the surpluses they generate?” is always a good question to ponder.
(Sorry for long comment, this is a pet topic of mine and I like to write about it a lot)
Even with a constant population, you can sustain exponential growth of researchers within a given field for quite some time.
Presumably AI and ML in particular will enjoy exponential gains in research talent for decades to come.
I disagree that hyperbolic growth has to be canceled, though I do see bottlenecks I’m not sure how to overcome.
My essential argument is that the industrial revolution didn’t change the trend because it’s an essential part of the trend.
But I do see bottlenecks today and in many places don’t know exactly how to categorize them. E.g. bullshit jobs point towards reproduction overshooting to the point where today’s economy can _support_ all the people around, but doesn’t have the capacity to apply their skills in any meaningful way.
Is this because there truly are no capacities left? Did we just max out the economy’s resource consumption to the rate of resource production? If so, wouldn’t we be better off w/o spending money on bullshit jobs at all? Is this just the only socially acceptable way of implementing welfare programs? Are bullshit jobs just an answer to low-IQ-jobs dying out?
Simply because I like the maxing-out-resources view, I think we actually maxed out on energy production. We need better ways to produce more, cheaper energy. It fits nicely with my other view that there is no way around nuclear energy, and that bad PR for nuclear energy is a major culprit in why we don’t have cheap, “sustainable” fission reactors yet. At the same time, there is some hope for the next big leap coming from Princeton, claiming to develop low-mass, high-power fusion reactors. (They caught my eye because their reactor is just so incredibly simple, yet controls a highly complex system.)
Another bottleneck would be dry land surface on earth. I don’t see a lot of effort put into urbanization of sea surface and the Elon is a bit too far off to count. Vertical farming might be a big win, but right now it seems we might just increase efficiency with better controlled environments and GMOs?
But it might all just go to shit due to climate change anyway. We do live in interesting times.
If China and India continue developing economically and become able to generate researchers out of each of their billion person populations, then that could create a massive research boom. Is part of what we’re looking at and living in today just the waiting period before those two booms explode?
For the parts of history where technological growth is negligible—thus social change can run its course—some details are of crucial importance, and they lead to very different societies. So far, I believe I have understood the economics of three boundary conditions:
1) Disease-limited, non-Malthusian. In the “Cursed Paradise”, there is—relatively speaking—an abundance of land, thus enough food can be produced for the whole population with comparatively little labor. The main consequence is that single mothers can raise their children, paternal investment is not particularly useful, pair bonds are weak. Men spend their effort on competing with other men for status and to impress the women in “sports”, art, rhetoric, and raiding the neighboring tribes. (Cad-fast.) Evolutionarily, besides disease resistance, this condition rewards r-strategists—unlike the food-limited societies selecting for K-strategists—possibly leading to pygmies, if the pressure remains for hundreds of thousands of years.
2) Malthusian, labor-limited. Given the food production technology, the climate and the plants/animals used, marginal labor has positive returns all the way up to and even past the limit of how much labor any person can put forth. Enormous paternal effort is necessary all the way until children reach adulthood, but the marriage market is relatively flat and balanced. The (parts of) societies that work like this are quite drab, both because nobody has spare effort, and because nobody has much reason for zero-sum competitions. In practice, some nobility/empire diverts some effort into flashier things via taxation. (Dad-coy, domestic.)
3) Malthusian, land-limited. (Other forms of Magic Box of Food can work out similarly.) Given the food production tech, etc., there is zero yield to marginal labor beyond some point that is significantly less than the humanly possible maximum. (“You can’t herd livestock harder.”) Reproductive opportunity is limited by land ownership. This leads to a skew marriage market, where brides and their families compete for the best grooms. (Why doesn’t polygyny vent the pressure? Because only a vanishingly small proportion of men have so much land=food that they could feed more children than one wife can bear, but a second wife is an extra mouth to feed.) This competition ranges from paying dowries, through oppressing women to signal chastity (basically locking them indoors, and some escort guarding them anytime they venture onto the streets), to abominations such as (trigger warning) FGM (female genital mutilation) and honor killings, to keep the family honor a.k.a. the credibility of the promise that the other daughters of the family are chaste. (/trigger warning) On the upside, surplus labor is available, and there are dowries to pay, so there is a lot of art and other flashy showing-off. (Dad-coy, public.)
Von Foerster’s analysis is apparently robust to the fact that he assumed condition 3, while as far as I can tell, Europe and temperate East Asia (the typical subjects of this kind of analysis) were, in the bulk of the population (peasantry), much closer to condition 2 for most of their historical existence, with a type-3 nobility (Chinese foot binding, anyone?). In equilibrium, this mistake makes no difference (the labor supply being exhausted looks just like labor having zero marginal returns), but after plagues, the surplus of land frees up less labor in condition 3 than in condition 2.
_______________________________________________________________
If this comment is already so long, I might as well add some wild speculation on the demographic transition. Assuming that its primary cause is the high cost of additional children to working-out-of-home women—and seeing how we already use the public education system during the day—we could just complete the pattern, make them boarding schools and add pre-pre-pre-K until the entry age is 0. Or to be facetious about it, “Orphanages for everyone!”
Orphanages for everyone!
One of the features of many proposed communist utopias. Up to, past, and including The Dispossessed.
There are multiple ways to arrive at this idea. Communist utopians were mostly blank slatists, and thought this would equalize everyone. If someone believes nurture has limited effect, they aren’t very concerned about the perhaps worse shared-environmental effects ruining the people. (On the other hand, the things that we know to matter, notably disease and nutrient deficiency, are problems bureaucracy is extremely good at solving.) High Modernism contains two related approaches: one, “efficiencies of scale and specialization in everything”, figure out how to make childcare come in larger and more similar batches for a longer time; two, “move functions from the household to dedicated institutions”. Natalism and working mothers (-to-be) ask for ways to lower the additional cost of children, which is why we already have some parts of the system.
I think that if the end date for the time axis is increased, say to 2050, the recent data points won’t appear to have deviated from trend so much. The suggestion that the data show the singularity being cancelled may be somewhat sensitive to this choice.
“But the industrial growth mode had one major disadvantage over the Malthusian mode: tractors can’t invent things.”
The core of the article. What if they could?
Good article.
Warning – Point probably made by others before me, but hammering out preliminary thoughts in the 10 free minutes I have today
Let me grant the whole post for the sake of argument. It isn’t really that depressing. Okay, doubling time has remained static. But that is still exponential growth. A while back I idly ran numbers on current growth trends, and if things keep going just as they are, around about the year 2100, Botswana is basically Wakanda.
Even in my own field of science – molecular biology – I am staggered. Exponential increase in our abilities is the norm; half the tools now available for genetic engineering today would have been pure scifi when I was an undergrad (which isn’t that long ago…)
I think that what is sometimes underestimated here is the Giant Head Of Steam progress has going behind it. Pre-existing, proven tech can be exponential in its effects when it is just spread around or mixed up a bit.
Take your point about AI. Before we even get to general AI, we can do some pretty amazing things with AI as it stands – all that’s really required is getting the robotics people to talk with the biology people more than they do (which, again, is happening). Similarly, when someone invents a cool widget in the US, it can now be disseminated throughout the rest of the world, which has an exponential effect all on its own – in some cases leading to weird things, such as the women of Kibera needing remedial classes in keyboard and mouse use, since they’re so used to smartphones and touchscreens…
I’m religiously committed to intervening on points like this, so:
“…once AI comes around – even if it isn’t much smarter than humans…”
AI is already here, and it’s already much smarter than humans (in some ways, and lacks some other human capacities).
If someone wants to learn more about the economic theory side of this phenomenon (and is mathematically prepared), I’d strongly recommend reading Acemoglu’s Introduction to Modern Economic Growth. Honestly this article makes great empirical evidence for the theories given for balanced growth paths, labor shocks and endogenous growth in the book.
Isn’t that last part basically Robin Hanson’s “Age of Em” argument? “More ems = more growth/etc”, except you can substitute AI to some degree.
I’m skeptical of this, because birth rates have fallen world-wide (including in places that have not gone through heavy industrialization). I think it has a lot more to do with reducing early childhood mortality and increased access to reliable contraception – it’s not a coincidence that the countries with the highest fertility rates are the poorest, most conflict-prone parts of the planet, and even their rates are lower than some of the historical rates (Colonial America’s fertility rate was higher than the highest fertility rate country today).
One thing that could screw things up is climate change. If we have to spend an increasing amount of our GDP fixing climate disasters, that could soak up a lot of the advantages from AI.
There’s one big mathematical problem with the graphs: “Years before 2020” is doing most of the work toward the right side. Everything looks disappointing if you’re expecting the asymptote to be closer than it is. If you redraw those graphs with “Years before 2050”, the trend looks like it is still plausibly going, just with an outlier post-WW2 blip of superfast growth (similar to the outlier of 1300) that wore off.
(Not saying that this settles matters! Just that the graphs are less damning than they first appear.)
See the link to the log GDP graph.
I’ve written about this before, but it bears repeating:
Finite systems produce S-curves that resemble exponential growth… until they don’t. Don’t be fooled into fitting an exponential growth curve onto a finite system. You can only get infinite growth in infinite systems.
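For concreteness, the textbook example of this is logistic growth (a standard illustration, not necessarily the commenter's specific model):

```latex
% Logistic growth toward a carrying capacity K:
\frac{dP}{dt} = r\,P\left(1-\frac{P}{K}\right), \qquad
P(t) = \frac{K}{1 + A\,e^{-rt}}, \quad A = \frac{K-P_{0}}{P_{0}}
% While P << K this is indistinguishable from the exponential P_0 e^{rt};
% the S-curve's flattening only becomes visible as P approaches K.
```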
If population was going to drive growth, shouldn’t the trend in Africa be reversed? Their population has basically exploded because they got technology that spilled over from other parts of the world, and their population explosion hasn’t resulted in a tech explosion.
They’ve benefited from technology, but it hasn’t pulled them to the technological frontier where most new development occurs.
Under this model why would that matter?
The singularity always has been a somewhat far-fetched idea. In the end, we will end up limited by the same factor that has limited the growth and development of every organism in history: energy availability.
Anybody who is claiming that AI will become infinitely capable is claiming that its energy efficiency will also become infinite (energy is, after all, finite). I see very little reason to believe that this is possible, and even less reason to believe that it is likely.
Trivial objection: the AI could just capture energy that 2019!humanity isn’t using yet. There’s a bunch of calculations on stuff like crosswind kites (big drop in the cost of wind power, can be deployed in deep water), dynamic tidal power, building solar+HVDC into the Sahara, vortex engines… The theme is that on paper they work out, but “in theory, there is no difference between theory and practice” wariness and general Moloch problems make for glacially slow progress. An agenty AI could cut through most of these difficulties.
And, of course, it could always go out and start building a Dyson swarm.
Population Growth and Technological Change: One Million B.C. to 1990 by Kremer is extremely relevant.
Abstract:
Its explanation of the end of the growth trend is:
I understand using GDP as a crude proxy for comparing nations, but I’m skeptical about using it to compare fundamentally different modes of living and time periods.
If I invent a magical car that uses no gasoline and runs forever without getting old, and costs nothing to make, it should reduce the GDP because there are fewer economic transactions and less money changing hands, but obviously the car is a net good (I guess except for the people making the cars, but that’s more of a distribution question).
It seems like the real problem is that social scientists like to take a simple correlation – today’s more developed countries have higher GDPs than agrarian societies – and pretend that it corresponds to some kind of general underlying truth.
Another problem is that non-market production (especially home production) typically doesn’t get counted in GDP. For example, if you hire a cleaning service to scrub your bathtub, that counts in GDP, but if you do it yourself or make your kid do it, it doesn’t count. That’s mostly okay for modern developed countries, where market production is central enough to our concerns and overall prosperity that ignoring non-market production is mostly harmless.
In a subsistence agriculture society, on the other hand, to ignore home production is to ignore most of the goods and services produced by members of that society. A 21st century developed economy is still enormously richer than a subsistence agriculture economy, but the margin is somewhat smaller if you look at total production instead of just comparing market production.
One concern about your argument: global population has continued growing since 1960, and critically the share of the world population participating in scientific research has sky-rocketed. If what matters to hyperbolic growth is growing the population of available scientists, then why hasn’t accepting all the smart people from India and China into western graduate schools done the trick?
What you really want is a way of converting more money into more *brainpower*. And to keep doing that we don’t need AI; a method to increase human intelligence will suffice.
This is also safer from a value-preservation perspective — humans aren’t exactly friendly (or safe), but we at least have human values.
Of course, you also haven’t established that the singularity is a good thing. Perhaps we could stand to grow in wisdom and virtue before launching ourselves to the stars.
Wouldn’t the Malthusian Trap itself discourage technological advance? There’s not much time to innovate or research when you’re living at the subsistence level.
Also, China has been the most populous region/country/(culture?) of the world for at least the last two millennia, yet very few of the major advancements that have increased agricultural and industrial output of the last 500 years originated there. How does that fit in with the (admittedly crude) “more people means more technological advance” thesis?
China seems to have been technologically ahead for most of history, until the Great Divergence; the Great Divergence is admittedly really mysterious and often phrased as “given that China should have been ahead, why was Europe?” I discuss some reasons this might have happened in Part 3 of this post; some popular explanations are unique European customs of marrying later (therefore having fewer but better-educated kids) and Europe’s more limited population encouraging them to invent labor-saving machinery.
I know that it’s not in fashion to have explanations this detailed, but I think that you can also blame some things that don’t rise to the level of economic universalities.
Like, hand-writing and learning to read Chinese ideograms is not actually massively more difficult than hand-writing and learning to read alphabetic languages. But movable-type printing is much less useful if you have to have (and keep organized) several thousand character dies.
It may be that rice is simply less amenable to labor-saving agricultural inventions than wheat is. Etc.
Ian Morris says China was just slightly behind Europe until the fall of Rome, and then moved ahead and stayed ahead during the Dark Ages, in energy capture (aka food production). After that, Europe became way more effective in applying fuel as an input to food production and jumped back ahead.
Bah, my edit was eaten. Anyway, Morris says that Europe caught up around the 18th century, probably by the combination of colonization and the Renaissance. If there’s a great divergence, it seems to be that the Chinese emperors stopped exploring, while Europe sent Marco Polo there. Insert here the honorable and traditional blaming of Confucianism. Morris himself blames geography, saying that China has more to gain by exploring the Indian Ocean, while Europe has to develop serious navigation chops to go anywhere, but I feel that this is a little ad hoc/non-falsifiable, as you could explore America by navigating near the coast from China to the northeast.
Hold a moment. You say the doubling times in the first graph are taken from J Bradford DeLong’s paper ‘Estimates of World GDP, One Million B.C. – Present’. That paper doesn’t include doubling times, but does (as the name implies) include estimates of world real GDP in 1990 international dollars. Looking at the source data I see that you’ve used the second (ex-Nordhaus) set of estimates from pages 7-8 of the paper (except for some reason 1.431111111 has been entered for 10,000 BC rather than 1.38). It’s unclear why you’ve rejected DeLong’s preferred data set, but that’s an aside.
DeLong does not have any GDP data before 1820. What he has done is noted a relationship between population growth and GDP per capita in the period from the early nineteenth century to WW2. He has data for world population going back to a million BC and he extrapolates the observed relationship back (despite it failing after WW2), i.e. his GDP estimates before 1820 have been directly calculated from the global population. He even comments that he is “enough of a Malthusian” to do this. This incidentally is a different approach from that taken in the paper from which he takes his 1820-present GDP figures.
DeLong then makes a correction for the availability of new goods in later periods, but the graph has been drawn based on the “ex-Nordhaus” figures, i.e. without this correction, so that doesn’t matter.
It seems to me that this makes your conclusion circular: the data for global GDP have been calculated on the (explicitly Malthusian) assumption of a linear relationship between population growth and GDP per capita, and you’ve then used this to show that economic growth depends on population growth.
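A tiny illustration of that circularity (hypothetical numbers, not DeLong's actual series):

```python
# If pre-1820 "GDP" is constructed as a fixed function of population,
# then any later finding that GDP tracks population is just the
# assumption coming back out, not independent evidence.
def assumed_income(pop_millions):
    # A made-up Malthusian link between population and income per capita.
    return 90 * (1 + 0.001 * pop_millions)

population = [2, 4, 50, 170, 265, 600]  # millions; illustrative only
gdp = [p * assumed_income(p) for p in population]
print(gdp)  # "GDP" tracks population by construction, not by discovery
```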
Well, that at the very least explains the paragraph saying that the Romans could have extrapolated to the early 21st century, though I don’t know about the rest. I think Ian Morris had better data for prehistory/early history, but I also remember it being mixed into his own development index, and I don’t know if that is translatable or whether it suffered interpolations like here.
Nice catch. I was wondering how on earth they had come up with data for world GDP from 10000 BC, when even just measuring it for 3rd world countries today is a challenge, and arguably not a good method of measuring their standard of living.
I think you’ve pretty much explained how this works: loosely construct “GDP” based on population, plot that proxy version of population against actual population, then put them both on log axes and with a huge tolerance for error. Hey look, a straight line! It breaks down in the modern era because the numbers become small enough that you can actually notice the details, and because we have actual data instead of vague proxies.
I agree the DeLong data alone should not be used to prove Malthus; by my understanding, Malthus is already proved by many other things (and I included some evidence in here).
I agree that this alone cannot prove that economic growth is caused by population growth. The argument I meant to make here was that economic growth looks hyperbolic until 1960, and that given models that we might accept on armchair theorizing grounds, this would make sense since population works the same way.
It seems to me that you can’t say economic growth looks hyperbolic for the period 10,000 BC to 1820: all you can say is that population growth looks hyperbolic (as you’ve already observed in section 1) and if economic growth tracked population growth during pre-industrial times, then economic growth was also hyperbolic. In particular, it seems doubtful to extrapolate a trend backwards from 1820 and then note the absence of a discontinuity at the industrial revolution.
I think you’re right: none of the GDP data used prior to 1820 is directly estimated; instead it’s derived from a model of a population-to-GDP causal link that’s estimated from the 1820-1945 data (he does not say exactly when he stops after WW2, so I can’t tell).
So the fact that the pre-1820 GDP matches the trendline for 1820-1960 is not surprising at all: it would match by construction, regardless of (GDP) reality. All we’re seeing is consistent population growth.
I reran the numbers on my end using annualized rate rather than doubling-time (which uses more of the data), and while the overall trend is visible, and post 1960 (the last 6 data points) does diverge, there are also centuries-long periods of zero-to-negative GDP growth in the dataset.
Assuming the GDP estimates are accurate, you could also say the “singularity was cancelled” in 200 AD, 1250 AD, and 1340 AD. The trend resumed later. It’s just that we’re looking at the current time period under a microscope.
Looking at the last graph (total economic output in England since 1270) in isolation, I’d say that the Singularity was cancelled some time in the early 20th century, and World War I is the first suspect to leap to mind.
I can see three problems with this model, that cast doubt on its conclusions and may explain the post-1960 trend towards “stagnation”.
The first is that most technological advances are collaborative rather than individual efforts, and the scale of the necessary (or at least observed) collaboration increases as we move up the technology axis. And while 1960 may not mark the death of the “lone genius inventor” model, it may be about the time a majority of inventiveness shifted from lone geniuses and small groups to Big Science.
Second, for a gross economic effect of the sort being measured on these graphs, it is not sufficient for the technological advance to occur, it is necessary that it be broadly adopted across the global economy. The more people there are, the harder it is to convince all of them that the new way is the best way and the easier it is for an advance to wind up stuck in a local economic niche.
And finally, the model assumes that inventors are trying to invent things that promote material wealth and GDP growth, rather than trying to find new ways of ensuring positional or status growth. Not only is that not the case, but as an error it probably isn’t constant in time. Pre-1960, a large fraction of the human race suffered from absolute material deprivation, which highly motivates one to seek material improvements in life. Also, people who literally owe you their lives can probably be persuaded to devote some fraction of those lives to increasing your status. Post-1960, at least in the industrialized world where technological advances occur, most everyone is adequately fed, clothed, and sheltered, has access to health care that will almost certainly get them to threescore and ten with no major crises, has global reach w/re information and even personal travel, and is past the point of diminishing returns on the dollars-to-hedons curve. But status competition is still highly motivating.
And status competition impedes the spread of technological advance, because adopting the New and Better Way diminishes the status of the people at the apex of The Way Things Are Done Now. It probably also impedes research collaboration, to a small extent from its corrosive effect within teams but even more so by ensuring that the teams which are formed and funded are the ones with high-status rather than high-capability inventive leaders.
Nor does positing AI fix this, unless you imagine that AI scientists can write better grant proposals for (presumably still human) funding committees than can human scientists, that AI inventors are better at convincing humans to adopt their ideas than are other humans, and that AI will never ever be used for such base purposes as increasing the status of its human masters (or of the AIs themselves).
If the ceiling of physics is lower than we think, then maybe not so strange. I have a hunch the AI “Singularity” will be largely characterized by doing most already defined things much much faster, more so than discovering entirely new realms of existence.
We seem to be at a point in history where we’ve observed everything, we know what it all does, and we only lack the knowledge of how. Even things like dark energy and dark matter are well defined mysteries that cannot be captured in a bottle and technologically applied. It’s our models that contain holes. What remains hidden from our eyes is defined by the scope of what it hides within.
The spirituality of the Kurzweilian Singularity with its “waking the Universe” I imagine looking less like star beings communing with the fundamental soul of existence and more like robot space imperialism.
and
These two statements are in disagreement. If technological advance is mostly driven by population, as von Foerster’s model assumes, then why do we actually observe that most technological advance comes from developed countries far from the Malthusian trap? Why do 1 billion first-worlders produce more innovations than 6.5 billion second- and third-worlders? Why did plague-ridden 17th century Britain produce more innovations than India?
Good point. I think the idea is that the initial decline in population growth and lack of workers prompted a paradigm shift involving a turn to mechanisation and an increase in GDP growth per capita. But then you are totally right – no explanation is given (as far as I can see) for why this increase in GDP growth per capita prompted continued technological development and economic growth of the countries that are now the first world.
The missing link is I think what others have suggested above – that it is capital investment in innovation and technology that matters, and has far more of an acute effect than total population. Then you can argue that the initial growth and capital gains from the turn to mechanisation, as well as the increase in GDP per capita, enabled sustained investment in ‘researchers’ which drove continued development. You can also bring in colonialism (which is otherwise left out of the thesis) and show that the huge wealth transfers from Asia and America to Europe played a key role in pushing development.
While this is interesting and fun, I wouldn’t take it too seriously, because a lot of this relies on mixing good data with really bad data. Take your years-to-double-GDP graphs: the estimates from 10,000 years ago are not good data, and if those first two data points are off then everything looks different. Say instead of being around 10,000 years to double you are at 6,000 years; then the first dozen data points trace a line of an entirely different slope than the whole line, and it starts to look like a paradigm shift (not that the other 10 data points should be taken as great data) at 2,000 years ago. This interpretation would then lead you to conclude that the birth of Christ was a major shift in the world economy or something something this point in the Roman Empire etc. You always want to be skeptical when the worst data is exerting a large influence on your interpretations.
Also extrapolating backward gives you odd results. 100 years ago the doubling time was every 30 years, 1,000 years ago it was every 500 years, 10,000 years ago it was every 10,000 years. That would imply that 100,000 years ago you are looking at a doubling time of something like 200,000 years (right? I’m not great at this type of mental extrapolation). Modern humans are ~200,000 years old so we are still a few hundred thousand years away from the first doubling actually occurring, and our proto-human ancestors are 10 million years away from doubling.
It does remind me a little bit of the old “Hockeystick graph” from global warming that used old tree ring data, then switched to actual data (whereas tree rings don’t show nearly the same spike).
Slightly a side-point on the industrial revolution: in Britain, it was caused largely by having too many people. The process was:
1. Landowners start to view their land as an investment by either a) kicking all the peasants out and replacing them with sheep or b) kicking most of the peasants out and using “I could knock this out in my garage”-level improved tools (e.g. seed drills).
2. All the displaced people end up in cities, where enterprising city-dwellers build massive workshops and employ them all doing handicrafts. This is the key point; the first British factories weren’t mechanised.
3. Once you’ve got 200 people all sitting in one place doing the same thing, people start coming up with simple machines to make things go quicker that wouldn’t make sense in a 5-person workshop. That then creates a market for more complicated machines, once “industrial engineer” becomes an actual job.
4. Everyone else in the world sees what’s happening and copies the British, essentially starting the process at stage 3 but scrabbling around for people to work the factories: America used immigrants, Russia had big agrarian reforms etc.
My recollection is that this was taught as the elementary school version of history in British schools, but it seems that everyone else learns it as “some people made some machines, then found some workers” because that’s how it happened everywhere else.
The main point of this is that Britain would be a really poor place to look for an exponential economic boom from industrialisation, as it developed gradually out of handicrafts just after another GDP boom caused by the shift from post-feudalism to proto-agribusiness.
This is almost all backwards. You don’t suddenly start kicking people off land and grazing sheep; the sheep grazing happened as rural areas started emptying out, with people moving to urban areas to work in new or expanding industries. Then people started getting kicked off land eventually, for lots of reasons (like maintaining infrastructure being relatively more expensive, so you switch to low-infrastructure production). I think I have posted some of the commonly accepted numbers for these population areas in open threads before, but the gist is that over long periods of time (hundreds of years) the population growth was basically zero (with lots of variance); net out-migration of even 1% would make a lot of these places ghost towns, and the out-migration of the early IR was probably a lot higher than that.
The agricultural revolution happened before the industrial revolution, increasing productivity of farmers and thereby making many farmhands redundant. They left for the cities, worked the mines, etc.
Population of England and Wales almost doubled from 1700-1800, and the population of London rose by a similar amount. If memory serves, it’s not until the IR that urban population growth significantly outstrips total population growth.
Lots of the early industrial revolution happened in ‘overgrown villages’ and not in established cities. Mostly because guilds in cities had the power to restrict trade to their liking.
Eg Sheffield was a long established centre of metalworking. But the industrial revolution happened in Birmingham. Similar around Manchester and textiles.
But a lot of that was IR-, not AR-driven. The population was still 65% rural in 1800, and that was down to the low 20s by 1900. The population of GB roughly tripled in this period from 10+ to 30+ million, so roughly 6.5 million people were living in rural areas in 1800 and roughly 6.5 million were still living there in 1900. If you find numbers from the 1700s (harder to find) you don’t see this pattern: you see an increase in population and a shift toward urban relative to rural, but you also see a total increase in rural population; it’s just growing at a slower rate than the urban. This makes the argument that the AR was displacing workers difficult.
The AR was also not based on labor-saving devices. It wasn’t ‘one of you stay and drive the tractor, 9 of you are fired’; it was more an increase in productivity per acre, which occurred because of things that probably increased the demand for labor, not decreased it.
Huh? Then why were there people protesting about the enclosure movement kicking them off the land they’d previously been able to use, not to mention being evicted in favour of sheep?
Some people were, but it’s not a representative sample, and it was more of a small-scale after-effect of everything else. Scotland is the marginal land of Britain; the AR drew disproportionately more workers to higher-productivity rural areas, followed by the IR drawing them to higher-productivity urban and urbanizing areas. This weakened these areas (economically/politically/militarily), and then came the booting of people for sheep. One of the reasons that sheep took over is that they require very little labor, but lots of land, to produce. The falling effective population in a lot of these areas was a main, if not the main, cause of the explosion in the sheep population, not the other way around.
I’m not sure that’s right. The Highland clearances (definitely marginal UK-wide, but the main sheep thing), were in the second half of the 18th Century (the “Year of the Sheep” was 1792), immediately before the IR. In England, two of the agricultural revolution’s big eighteenth century inventions were the horse-drawn hoe and seed drill – both of these are clearly labour-savers (the other two, a plough that needed fewer horses and crop rotation, I would guess are labour-neutral – I can see the argument going the other way for crop rotation if you weren’t leaving land fallow).
London doubling in size is a big clue that something’s going on in the countryside – prior to modern sanitation, cities were population sinks.
Wait, a little less than a year ago Scott reviewed Capital in the Twenty-First Century by Piketty, who argued that the economy always grows at 1% to 1.5% a year no matter what since the Industrial Revolution. According to this post, not only was Piketty wrong, he was super-ultra wrong: the economy grows way faster than that, and even after the 1960 slowdown it is still growing at 2%.
Is Piketty using different metrics than von Foerster? Are their theories just disagreeing on a fundamental level? Am I so bad at statistics that I can’t see some obvious way to reconcile their theories?
Are you sure you read that right? My impression was that Piketty’s argument was that the rate of return will eventually surpass economic growth in the long run, not that the economic growth rate always has a certain value. Either way, the economy certainly doesn’t constantly grow at 1%. In recent years, economists fret about our economic growth not reaching 3% or higher.
Quoting from Scott’s review of Piketty:
So I’m also totally confused about this discrepancy. Is Piketty’s data (he was measuring from the Industrial Revolution on) just better than the really long-scale data presented here?
It’s not that: Piketty’s data is per-capita adjusted, IIRC, which will obviously make it much lower (and basically zero for much of the timescale discussed here).
Oops – should have seen that. Yes, that’s totally correct: it’s GDP per capita growing at a steady rate post Industrial Revolution, which makes perfect sense. Thanks.
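For anyone else who tripped on this: the reconciliation is just the standard small-rates approximation that headline growth decomposes into a per-capita part and a population part. With made-up round numbers:

$$g_{\text{GDP}} \;\approx\; g_{\text{GDP per capita}} + g_{\text{population}},$$

so 1.5% per-capita growth on top of 1% population growth gives roughly 2.5% headline growth – which is how Piketty’s ~1-1.5% per-capita figure and the ~2% total-growth figure here can both be right at once.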
How well can we distinguish hyperbolic from exponential growth in historical data? I understand that historical estimates for population size and GDP are quite sketchy (and GDP might not even be a good measure of economy size for pre-industrial societies where there is little trade compared to self-production), so can we really tell the two trends apart?
This doesn’t strike me as particularly ironic; rather it looks like a textbook example of interpolation-vs.-extrapolation failure, which suggests that von Foerster’s model was likely “overfitted” to the data available up to that point but didn’t capture the underlying dynamics of the phenomenon.
Doubtful. If many different curves fit the same points within the error bars, then I don’t think we can really talk of a resilient trend.
I’m not an expert at historical productivity and resources, but I am a scientist, and I’m suspicious of these plots that fit so far back in time.
It seems to me that as you go back further in time, the data should get noisier, since the error in our estimates should be a monotonically increasing function of how far back we look. But this implies that getting an accurate fit between the present and the past should be nearly impossible, even if the underlying distribution is identical (eg, there really is just one hyperbolic curve that defines everything), without running into problems of reverse causality.
In other words, if you plot the data and naively fit it to a curve, then that curve will be much more influenced by the more recent data, which is dense and precise, and only weakly influenced by the data deep in the past, which is sparse and noisy; but this is reverse causality. The present data should be a function of past data, not the reverse.
If you just extrapolate from past data, it should be so noisy that you would be very unlikely to get the right fit for the modern data, even if they are from the same underlying distribution, simply due to noise. You’re effectively trying to fit a distribution from its tail, which is very hard to do.
Is there something I’m missing here?
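One way to make this concrete is to actually fit both curves to noisy estimates and compare. Here’s a minimal sketch in Python (scipy’s curve_fit; the population figures are made-up round numbers standing in for real estimates, not a real dataset), fitting in log space so the sparse ancient points aren’t drowned out by the dense recent ones – which is exactly the reverse-causality worry above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative round numbers standing in for real estimates (millions of people).
years = np.array([-10000.0, -5000, -2000, -1000, 0, 1000, 1500, 1800, 1900, 1960])
pop_m = np.array([4.0, 5, 27, 50, 170, 265, 425, 990, 1650, 3000])

# Fit in log space, since errors on ancient estimates are multiplicative.
def log_exp(t, log_a, r):       # log of a * exp(r * t)
    return log_a + r * t

def log_hyp(t, log_c, t_star):  # log of c / (t_star - t)
    return log_c - np.log(t_star - t)

p_exp, _ = curve_fit(log_exp, years, np.log(pop_m))
p_hyp, _ = curve_fit(log_hyp, years, np.log(pop_m),
                     p0=[np.log(5e4), 2030.0],
                     bounds=([0.0, 1961.0], [np.inf, 5000.0]))

for name, f, p in [("exponential", log_exp, p_exp), ("hyperbolic", log_hyp, p_hyp)]:
    rms = np.sqrt(np.mean((np.log(pop_m) - f(years, *p)) ** 2))
    print(f"{name}: RMS log-error = {rms:.2f}")
```

On numbers shaped like these the hyperbola should fit noticeably better than the exponential, but the honest takeaway is how the residuals compare to the (large) error bars on the ancient points.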
The piece I’m missing in all this is the opacity (to me) of how GDP growth curves depend on decisions about the value of new technologies. Not even the wealthiest monarch could buy a refrigerator, or a phone, or a car in 1750. Those things didn’t exist. Now you can buy a large home refrigerator for some reasonable fraction of median monthly household income. I haven’t seen the principled argument for how you can turn “no refrigerators, at all” to “cheap, ubiquitous refrigerators” into a straight line, or a hyperbolic line, or any kind of line. And it seems that the choices that need to be made in evaluating the GDP-associated-value of thousands of novel products that have arisen over the last 1000 years are a fraught space of opportunities to fudge details to get exactly the curve shape that you want. Or, more specifically, to create straight continuous lines when maybe there should be a series of massive jump discontinuities coinciding with significant inventions.
I understand that economists need some way of estimating the value of money going back in time. It just raises red flags when you start running thought experiments. Take two time travelers, one starting in 1750 and one starting today. They swap places, the one from the future bringing $1000 into the past, the one from the past bringing $1000 into the future. (“Don’t you mean $1000 “real”, 2019 dollars?” Isn’t that the very question?) The standard calculations for GDP would ask how many bushels of cotton (or whatever) you could buy with that money. They wouldn’t ask how many smartphones you can buy in 1750, because you would get a divide-by-zero error when you compared (smartphones per $1000 in 2019 / smartphones per $1000 in 1750). Divide-by-zero errors seem like a serious problem in general.
refrigerators didn’t exist in 1750… but icehouses did.
refrigerators with [insert add-on feature here] didn’t exist before the first fridge with that feature… but the price/value isn’t utterly disconnected from the price of every other fridge that existed before that.
You can sort of go the other way: what’s the cash value of lost works of art if your time traveler brings lost paintings into the future?
There was no price for them the day before the time traveler arrived… but we can still get a decent figure from the occasions when lost works of art have resurfaced by more conventional means, like turning up in someone’s shed.
I have a strong intuition that a refrigerator is not just a better icebox. For one thing, an icebox cannot ever freeze meat. Are all food-preserving technologies the same thing? Is a refrigerator just a very high-tech form of salt?
Anyway, it’s easy to find something that’s truly discontinuous, that truly has no antecedent. A clock is not just a better sundial. A car is not just a better horse. Antibiotics are not just better leeches. There are notes of similarity but these similarities are more misleading than they are helpful.
If I try to steelman the position that I am criticizing, it would be something like this: You can’t say that the worldwide GDP per capita in 2019, in real dollars, is 50 times more than the worldwide GDP per capita in 1 A.D., in real dollars. It obviously makes no sense. Too many things are different and incommensurable. But you can say that worldwide GDP per capita in 2019 is 1.021 times higher than it was in 2018. And in 2018 it was 1.023 times higher than it was in 2017. And you can step backward like this steadily, and when you encounter things that look like discontinuities, like the sudden arrival of affordable refrigerators, you can do your level best to account for their impact on overall quality of life.
This is about where my steelman derails itself, though. Imagine that tomorrow somebody found in a cave somewhere trillions of magic lamps, each lamp containing a perfectly obedient and benevolent genie. These lamps would obviously be very cheap. But you would have to agree that “nominal GDP” would almost immediately cease to mean anything at all. You can think of a smartphone or a refrigerator as a kind of smaller, more restricted genie. It introduces a discontinuity which you are only pretending to “smooth out” because the market imposes a price on the new technology.
Why not?
I bite that bullet. Indoor plumbing is just a better chamber pot and servant to empty it out. An iPod is just a cheaper band of musicians that you pay to follow you around full-time and play music. Texting is just a better messenger pigeon.
The alternative is to give up on comparing the present to the past, which is absurd.
I would model refrigeration in terms of falling costs of food.
In terms of measuring ‘cost’, I would model it in terms of the number of hours a median laborer has to work to acquire a particular item of food, before or after taxes depending on your point of view.
Refrigeration and improved farming are both going to make the # of hours lower.
For things like transportation you can make similar calculations in terms of hours of labor needed to pay for a given length of transportation.
The challenging part comes from comparing luxuries. Automobile companies make new cars that are roughly the same in terms of purchasing power but have more features. However, absence of these adjustments will almost always result in an understatement of growth, so we still have some lower bound to work with.
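A toy version of that metric, just to pin down the unit conversion (all numbers hypothetical):

```python
def labor_hours(price, median_hourly_wage):
    """Cost of a good measured in hours of median labor (pre-tax)."""
    return price / median_hourly_wage

# Hypothetical round numbers, purely illustrative:
print(labor_hours(500, 25))    # a $500 fridge at a $25/hr median wage -> 20.0 hours
print(labor_hours(0.40, 25))   # a $0.40 lb of rice at the same wage -> ~1 minute
```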
I find this argument compelling. As a thought experiment, try to imagine some good, preferably a household object, that would be worth $100k and could be widely available, say to 50% of the US. The quick answer is a general-purpose robot, but that dodges the question by being “general”, i.e. all things at once.
I would suggest that in 1980, a modern phone, as seen on Star Trek, would have seemed like something worth $100k, but of course phones never cost that much. Mass production has changed the go-to-market strategy of firms. In the past, objects were first made bespoke for the rich, and thus started very expensive, so economists could see the value created (as they only understand prices). Since the 1960s, objects begin as mass-produced items and target a mass market, so they are priced accordingly. Cell phones were never expensive, despite having more computing power than a Cray 2. Self-driving cars will never be expensive. There will not be a time when the rich can buy one for over $1m.
I think this change in marketing strategy has broken how economists value objects. Why isn’t a flat-screen TV worth $100k? An old movie theater in your house used to cost that. Why isn’t access to all movies online worth millions? Buying a film library would have cost that in 1990. Why isn’t a modern antibiotic worth $100k? If there were few people who could use it, that is how much it would sell for, as that is how much life-saving drugs that treat few people cost.
It is hard to understand how much change there has been since the 60s, and how much of this change is in items that are made generally available to everyone, as opposed to things that can only be had by the rich. Even when items are sold to the rich at inflated prices, they are essentially identical to the cheap items the poor purchase. My fridge cost $15k, and honestly, is indistinguishable from the one I bought in Costco for $300 twenty years ago in function.
tl;dr: Progress has continued; it is just not valued as it was, because the benefits now start out broadly distributed. We are still on track for a singularity in 2020.
Not sure about the singularity. But eg Wikipedia and Google search and email (and especially Gmail) are services that we mostly consume for free, but would have been extremely expensive to replicate only a few decades ago.
This is related to economists discussing inflation as well.
And another reason to prefer nominal GDP targeting for monetary policy instead of inflation targeting.
Cell phones have absolutely been expensive. The first cellphones, in the mid-Eighties, ran upwards of $3000 (in Eighties money!), and those were evolutions of car phones (available since the Forties) which started out at truly ridiculous prices. They still cost around a thousand dollars by the late Nineties; prices wouldn’t come down to the few hundred we’re familiar with until the early- to mid-2000s, and the type of phone you’d get for that money then would be basically free now.
Some current iPhones are $1000 so, comparably, early cell phones were not expensive. The cost of the service, being able to contact people wherever you are, almost instantly, would have been huge 200 years earlier, and would have dominated warfare, if one side had the technology and the other did not.
Cell phones never cost as much as they could have cost had their marketeers tried to maximize the price, as opposed to maximizing total revenue.
Pointing to an current-gen iPhone and saying that it’s as expensive as a 1999 Nokia is kind of like pointing to a high-end workstation or gaming rig and saying it costs as much as a first-gen Apple Macintosh. It’s technically true but it glosses over huge differences in capability and niche; the closest equivalent of that Nokia runs about twenty bucks now, and even that will come loaded with a browser and email capability and a bunch of other crap that would have blown my mind in 1999 (not that I owned a cellphone then).
Saying that cellphones were never toys for the wealthy is simply not true. It is true that there were never hand-built bespoke cellphones, but that’s mostly true because the hardware was too physically large to be handheld when cell technology was in its bespoke phase (so it ended up in cars instead).
The name for the concept you guys are talking about is a current-weighted price index, or Paasche price index, as opposed to the GDP numbers you normally see, which are adjusted using a base-weighted, or Laspeyres, inflation index. PPI and CPI are both base-weighted.
A Paasche index takes into account, for example, how you compare free videos being watched on YouTube to VHS rentals in the past, or the cost of sending a letter by post to the cost of a fax transmission to the cost of an email today.
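To spell out the two formulas with a toy example (the prices and quantities are entirely hypothetical – the point is only that the two weightings diverge wildly once a near-free substitute like email takes over):

```python
# (price per message, messages sent) in each period, hypothetical numbers
base = {"letter": (0.30, 100), "fax": (1.00, 10), "email": (5.00, 1)}
curr = {"letter": (0.55, 20), "fax": (0.50, 2), "email": (0.00, 500)}

def laspeyres(base, curr):
    # current prices weighted by *base-period* quantities
    num = sum(curr[g][0] * base[g][1] for g in base)
    den = sum(base[g][0] * base[g][1] for g in base)
    return num / den

def paasche(base, curr):
    # current prices weighted by *current-period* quantities
    num = sum(curr[g][0] * curr[g][1] for g in curr)
    den = sum(base[g][0] * curr[g][1] for g in curr)
    return num / den

print(f"Laspeyres: {laspeyres(base, curr):.2f}")  # ~1.33: ignores the shift to free email
print(f"Paasche:   {paasche(base, curr):.3f}")    # ~0.005: email's huge quantity dominates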
Go one level of abstraction up. You aren’t buying a refrigerator, you’re buying fresh food. Or if that’s not enough, you buy “level of comfort”, which is comparable. Not item by item – I’m guessing a Roman patrician would have less ice cream in the summer but more of… whatever he had. It’s a fair guess he still came out ahead (like others said, he probably had an icebox). So you can find a level where his lifestyle is comparable to that of somebody who can afford a fridge.
In the end you’ll probably have to give up comparing fridges with iceboxes and will have to compare costs of lifestyles – fridge plus vacuum plus microwave equals small merchant with two slaves, for a total cost of $20000 per year or 5000 denarii per year.
I’m old enough to have beaten carpets as a child. Vacuum cleaners are far superior. Getting rid of a wood fire is a huge step forward.
I see the idea of comparing lifestyles, but this is very hard to do, as almost no-one remembers quite what it was like to live in the conditions that were common from 1500BC to 1600AD. During that time, there was essentially no change. The houses I saw in Akrotiri are just as nice (possibly nicer if murals are counted) as the houses from Northern Europe in 1600AD, one of which I grew up in.
That comparison difficulty is true, but you can approximate some of it by looking at how people live today in some very poor locations. Dirt floors, walking long distances to fetch potable water, etc…
I would advise caution when using any data pulled from OurWorldInData, they are a great example of the Garbage In, Garbage Out problem. They combine:
1) compiling data without regard for source quality
2) lacking the motivation or culture to do any sanity check of the results
3) ignoring people that point out mistakes
For a glaring example, see this page, “War and Peace”, especially figure 1. The Taiping Rebellion circle is quite small, smaller than the Holocaust’s. Best estimates for the casualties are 20 to 30 million (see Wikipedia), but the source they use reports 2 million, all military, with no civilian casualties.
And to stay in China: where are the other revolts? Where is the death toll of the Ming’s downfall?
To be fair, my impression is that the error bar for Taiping Rebellion casualty estimates is enormous. That said, 2 million does sound pretty damn questionable.
It’s been happening for a long time.
Anyhow, if there is no singularity of some sort this century, it would seem that dysgenic decline will bring a period of technological stagnation lasting centuries. Fertility rates will increase, and Malthusian conditions will be reimposed as the population drifts up to the carrying capacity of the industrial economy. Then technological progress will set off again.
PS. These sorts of models have come a long way since then (e.g. see cliodynamics). Intro/book review here.
While we do need a way to turn money into research that is better than colleges, I do not believe it has to be AI. My suspicion is that our research institutions simply don’t scale as well as many other institutions we have, so they’re increasingly inadequate to the amount of effort required to reach fruit that hangs less low.
We’re good at building institutions that nobody really fully understands. They work fine as long as each part of the system has somebody who understands it well enough. Maybe successful research depends on having a relatively large scope of understanding of the field? And maybe our larger research institutions lack that because, in attempting to emulate the architecture of other large institutions, they have to make their members either specialists or administrators?
Companies are pretty good at innovating. Not only in the cutting-edge research sense, but in the sense of finding new things people want. To give a really simple example: Uber’s competition to established, heavily regulated taxis wasn’t any triumph of science, but it did increase productivity.
Given the replication crisis in a lot of softer fields of science, I am hopeful that machine learning will give them a big boost in the near future.
Eg Facebook is collecting a lot of data and finding out things that would be in the softer fields of science if they were to be published.
I am hopeful for machine learning not so much because of any specific result, but because its practitioners are obsessed with avoiding overfitting; and are committed to techniques like hold-out sets. That’s the very opposite of p-hacking in science.
Part of the reason, I suspect, is that ML is often used to make money. So there’s an immediate feedback loop that makes people care about their results being real; instead of just whether they can publish.
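A minimal sketch of the hold-out discipline in Python (synthetic data – the target is pure noise, so any apparent “fit” is overfitting by construction):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 10))
y = rng.normal(size=200)  # pure noise: there is nothing real to learn

train, test = slice(0, 150), slice(150, 200)
coef, *_ = np.linalg.lstsq(x[train], y[train], rcond=None)

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1 - ss_res / ss_tot

print("train R^2:   ", r2(y[train], x[train] @ coef))  # slightly positive by chance
print("held-out R^2:", r2(y[test], x[test] @ coef))    # ~0 or negative: no real signal
```

The p-hacking equivalent would be reporting that in-sample R² as a finding; the hold-out set is what catches it.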
One of the GPT-2 demonstrations that had me very interested was the auto-summarization, and being able to answer questions about a text.
If the quality is good, have you any idea how valuable it would be to be able to point something like that at a few thousand research papers and ask a set of simple questions about their text?
Researchers can’t keep track of everything published in the field… but auto-curation – to, for example, spot common bad practices – plus filtering, and then being able to ask direct questions about a corpus, would save more researcher time than you could imagine.
The counter-argument is that major scientific breakthroughs come as a result of pure research, which isn’t something companies tend to do very often. Product development takes existing technologies and improves them.
The classic example is the internet and computing, which began as something fairly expensive and impractical that only a technologically minded military would have the resources and willingness to invest in, but once these things were developed and became economical, the private economy took over in innovation.
I bring this up not necessarily because I agree w/ it 100%
The other concern I hold is that research in technology is focused on better manipulating people rather than manipulating matter and systems. The classic example is using big data to target customers. Social media has the problem that there isn’t as much of a trade-off between “advertising” and “improving” the product as there is for most other physical products. Having your product more effectively grab the user’s attention is both of these things at the same time, and doing so doesn’t necessarily make people materially [or mentally] better off.
That’s an excellent example! The military did indeed play a major role in early electronic computers. But International Business Machines Corporation wasn’t too far behind, and would have invented the electronic computer just a bit later in any case – eg if the military version had been developed but kept secret.
In the counterfactual without military spending, a company like IBM would have likely invented the electronic computer even earlier, because there would have been more private spending. See also Konrad Zuse, a German civil engineer, who invented the computer (by some definition) to help with boring calculations. (I linked to the German Wikipedia because it has more background on Zuse’s motivation. Use Google Translate, if necessary.)
WW2 probably deformed a lot of that possible history. A bunch of early computer developments were just attempts to weaponize math, either by cryptography or by better aiming methods or whatever.
Maybe the counterfactual world where Hitler is admitted to the Vienna Art Academy would have seen IBM inventing the computer, but maybe it would also have seen the world sliding into an earlier Cold War, and then both sides would have had interests in computers anyway.
My impression is that companies are great at sequels to existing products, but not so good at inventing entirely new markets. Exceptions tend to have people with a very good overview of the entire thing, like Elon Musk.
Re: AI and innovation… there’s an old throwaway Pratchett joke that’s stuck with me a bit.
Throw in “The Road Not Taken”, where it turns out FTL and anti-gravity are actually simple enough that most species discover them around about the stage when they’re building wooden boats, and every human engineer who sees an FTL drive is shocked they didn’t think of it themselves because it’s so obvious and simple…
https://en.wikipedia.org/wiki/The_Road_Not_Taken_(short_story)
Occasionally I hear about AI being thrown at a problem and discovering something that had been ignored or discarded by humans. A trivial example was some opening moves in Go that had been considered bad for centuries, until suddenly people became interested in them after DeepMind showed them to be good moves.
And I think how interesting it would be if there turned out to be a number of really really simple things we’ve been missing due to quirks in human cognition.
While this is not impossible, a _lot_ of physical processes will have been discovered randomly by evolution in the millions of years before us. Specifically, we are still trying to recreate intelligence, which the brain does without much fanfare but which is proving a hard nut for us to crack; and there are other examples.
Of course, we don’t see a carnivorous animal that evolved a laser pointer in its tail to trap cats, so maybe even evolution will ignore certain roads after all. Meanwhile, evolution never tried to play Go, so it is no surprise that AIs can discover new moves in an artificial setting.
Any discussion of Malthus needs Lemin Wu’s papers. See eg If Not Malthusian, Then Why?:
>ctrl+f Romer
>0 results
You’ve more or less reinvented endogenous growth theory. Romer won a Nobel for it: “for integrating technological innovations into long-run macroeconomic analysis”. The gist of it is basically more people = more tech improvement = more growth…which may or may not feed back into more people.
You mean von Foerster reinvented it?
According to Wikipedia, the basis for endogenous growth theory began in 1962 with Kenneth Arrow. Von Foerster’s paper was published in 1960, so if anything EGT might have been inspired by von Foerster.
I’m skeptical that the problem is that we can’t produce enough researchers (or perhaps “innovators” would be better). We are certainly perfectly capable of producing highly educated people with the training to advance the state of knowledge (PhDs), and most of them go begging for lack of positions to hire them into. Over in high tech there are lots of capable engineers and business people who can start new businesses, but when they go looking for capital to grow, VCs reject pretty much all of them.
It seems to me that the problem isn’t a lack of capable people who want to do new things, and who to all appearances are capable of doing new things, but rather that we aren’t willing (or able?) to give them the funds with which to do so.
Keep in mind: lots of ideas actually suck.
There are a lot of smart people out there and lots of them have project proposals … but a lot of the projects are fairly crap.
Those people who can’t get VC capital? an uncomfortably large number of them don’t have much in the way of a plan other than “like facebook but better” for their “product idea”.
There are also worthy projects that fail to get funding but from talking to someone on a board looking at grant proposals: a depressing number are shit, poorly thought out with people just playing buzzword bingo while trying to recycle the 3 good ideas they had in their entire career.
With degree inflation, more and more postgrad students are just people trying to stand out from the crowd to keep up with forever-inflating demands for ridiculous qualifications for entry-level jobs.
Well, that’s another point where diminishing returns apply. Nowadays you won’t get much research done in garages, and seed money can be pretty high. It’s probably fixable, but the basic problem remains that at some point miniaturization will need a better understanding of physics, and that comes from megaprojects nowadays, which VCs ain’t gonna invest in, even if they somehow wanted to.
I think that is exactly right. I wanted to calculate when our technological and scientific enterprise is going to collapse, due to dysgenics, but I had to conclude that there is currently no direct relationship between human capital and scientific productivity. The bottleneck is funding.
If the bottleneck is funding, it raises the question: is the required funding growing faster than the economy? If not, then economic growth still drives innovation faster, which drives economic growth faster. But it may be that the required funding is growing at a rate faster than general economic growth.
The entire “straight line on a log scale” trend directly contradicts the concept of ‘reaching zero’.
On those graphs of ‘log doubling time’ against ‘log years before 2020’, you can’t point to where a data point would be if the doubling time in 2020 were 0.
Try changing the x-axis to ‘log years before 100,000,000’, and see if your line now predicts something about the year 100,000,000 – which it will, since doubling time HAS tended to decrease in the past, so when you force a log-log-linear fit, it will point towards (inf, -inf).
I’ve edited in a link to another graph that uses log(GDP) as the horizontal axis; hopefully that overcomes this objection.
Well, that graph certainly wouldn’t have the objection of 2020 being infinitely far off the right side… but a doubling time of zero is still infinitely far down, so even if you draw a straight line instead of one that is concave up, projecting that line doesn’t get you to infinite growth. And of course, the trend does not appear to be a linear one even if you accept that “Gross World Product in 250000BC” is something that we know to three significant figures.
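For what it’s worth, the geometry here can be put in symbols (a standard derivation, not anything specific to von Foerster’s fitted equation). If growth is hyperbolic,

$$P(t) = \frac{C}{t^\ast - t}, \qquad P(t+T) = 2P(t) \;\iff\; T = \frac{t^\ast - t}{2},$$

so the doubling time is always exactly half the time remaining: a straight line of slope 1 on log-log axes of doubling time against years-before-$t^\ast$, reaching zero only at $t^\ast$ itself – which is indeed infinitely far down and to the left on such a plot.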
Even granting all the premises, it seems to me that increased prosperity should lead to an era of at least maintained inventiveness, because less human capacity is wasted, even if there aren’t as many people.
I am very wary of the graphs in parts 1 and 2, for the following reasons:
– any demographic and economic data before around 1700-1800 has very high uncertainty,
– the graph is very noisy, and I wouldn’t be very confident in the regression line,
– you can’t just arbitrarily choose a zero on an exponential graph. Moving the zero to a different point would transform it non-linearly.
In short, the part “A medieval Burgundian? Early 21st century. A Victorian Englishman? Early 21st century” is simply wrong. In the Middle Ages you would definitely not have enough data to predict a singularity in the 21st century. It is even questionable whether you’d be able to predict it in the 19th century.
I know you reviewed it, but I’m surprised you didn’t also mention Age of Em, which purported to do exactly this (model GDP growth as if human talent were no longer the bottleneck), where Hanson also claimed that about 2000 years of subjective time (or about 2 years of objective time) would pass before the economics entered another regime.
I concur. Throughout the piece, I couldn’t help drawing similarities between this and Age of Em. What seems novel is the interpretation of the Industrial Revolution’s economic gains as a natural consequence of population growth slowing down. Though Hanson has mentioned several times that the “more money = more computers = more workers = more money” equality being broken at the moment is probably a short-lived deviation from the norm.
/s Maybe Scott is subconsciously plagiarising Hanson’s work without realising it? /s
The really relevant Hanson here is http://mason.gmu.edu/~rhanson/longgrow.html . I originally was going to include Paul’s rebuttal against it, but a lot of it was beyond my understanding and I was just blindly relaying quotes from people smarter than I was, so I decided to just avoid bringing Hanson up.
What about Universal Basic Income (and accompanying automation) as a way to achieve this? It’s a potential solution to the problem of not having enough people to be ‘researchers’ that societies could have a go at now – unlike strong AI which seems a way off to me.
As others have alluded to, it should be possible to affect the rate at which society produces ‘research’ (tech or system/process) advancements from its general population. So if we want to maintain decent productivity growth in an environment of declining population growth rates, we need to enable a higher amount of the population to go do ‘research’.
UBI would enable people to not go do all the bullshit jobs (which in turn would force employers to spend capital on automation to replace them) – and then all these newly liberated people could go tinker in sheds, start enterprises with new ideas (happily taking risks, backed up by their UBI), or just literally go do academic research. Plus there’s more money redistributed to the bottom of the pyramid (ideally), which should further stimulate growth.
Of course this sort of economic restructuring is politically difficult – and the effects of UBI aren’t so rigorously tested yet. But liberalising workplace norms at scale (perhaps through UBI) so that people can move away from clock-punching 9-5s – this itself an innovation of the Industrial Revolution – might be the next revolutionary idea that compensates for insufficient population growth, improves tech development, and gets GDP per capita growth back nearer to a hyperbolic track for a bit.
UBI is a means, not a game changer. And it’s still disputable whether it’s a good idea – maybe more people will be researchers, but maybe a lot more people will choose to do competitive e-gaming.
And – I really don’t want to start a UBI thread, but this point is relevant – UBI comes with a heavy tax burden. Which translates into lower productivity, harder entrepreneurship, higher cost of startups and failure, less resources for basic or risky research and so on.
Not just more people playing e-games, more people inventing e-games. I suppose it depends on your values for whether you want to count that as productivity.
I agree that UBI is a means – should have made that clearer and been a bit less of a UBI-evangelist. Forgetting UBI specifically, my general point is that, as an alternative to looking ahead to AI, more work can be done to empower workers to focus on thinking / creating.
Also I would suggest that the heavy tax burden of UBI is part of the point – in that society needs a way of transforming surplus capital into ‘researchers’. But sure – its up for debate whether it would be more effective to leave that capital in the hands of business to just fund their own R&D, or try some other way.
Ah. I see your point. But we’re going to be spilling a lot of ink on a very empirical question: do people on UBI become more productive in creative ways, or do they prefer to play games and browse reddit all day? Until somebody tries to actually answer it…
The only pilot study I know of, in Finland, was not particularly optimistic. Recipients of UBI were obviously a bit happier (because, well, extra money) but not more likely to find jobs.
One thread that a lot of people in favor of the UBI seem to have is the idea of “bullshit jobs”. I know of no such person whose extreme innovation is being suppressed by a 9-5. I know of people whose extreme laziness or drunkenness is being suppressed by a 9-5, but not brilliance.
Innovative people usually seem intrinsically motivated. Laws capping working hours typically can’t stop workaholic people from working (even if just at home), but seem intended to prevent excessive extrinsic motivation by employers.
Right.
I get nervous when we have elaborate explanations for why some previously-unimportant second-order term suddenly drove human progress into a different but remarkably virtuous hunk of phase space, but we don’t spend much time thinking about other possible second- or third-order terms that are lurking out there, just waiting for their chance to mess things up.
Human history seems to be remarkably free of global chaotic behavior. There are certainly systems of non-linear differential equations that have that property. But there are also a lot that don’t.
PS: There is an old saying, often, I think, attributed to John von Neumann: everything is linear on a log-log chart.
This page calls it “Mar’s Law”:
No idea who Mar is…
Since the page is “Akin’s Laws of Spacecraft Design”, by Dr. David Akin, I suspect “Mar” might be a joke like “Cole” in “Cole’s law: finely sliced cabbage.”
Google Oded Galor and/or Unified Growth Theory.
There are demand-side problems with extrapolating historical GDP (per capita) curves, as well as a supply-side one. Well, there is one if you are an economic substantivist, holding that economic activity is context-dependent, or what I call an etymological fundamentalist, holding that activity is only economic if it maintains or improves the well-being of households.
The context we have now is a winner-take-all system. Once the only companies in the Anglo world are the FAANGs (and Intel), their sole focus will be maintaining their hold on power.
From a fundamentalist point of view the distribution of advances in well-being is increasingly unbalanced (thanks to the political power imbalances), and harms are increasingly concentrated on the poorest. It could be argued on the basis of a concave-down utility curve that we are past the point where innovation is net negative on aggregate well-being.
So the demand for inventions that increase GDP per capita will decline, if it has not already.
Companies are a pretty recent invention.
What are some falsifiable predictions of your ‘winner-take-all’ assertion?
If these graphs are right, there’s a secret rule about where technology comes from.
We think technology comes from “invention” — from having enough bright people to think of cool ideas.
But if these graphs are right, technology really comes from “specialization”– from having enough wealth and variety to support new developments.
You can see that’s true, because otherwise we can’t answer two obvious riddles.
The first riddle is: why didn’t progress slow down after the Black Death?
The Black Death killed off as much as half of Europe. If progress is about the number of people around to be inventors, technology should have collapsed or slowed way down.
But if progress is about specialization, then the Black Death could have helped or been neutral, because more land per farmer allowed more investment and more specialization per farmer.
The second riddle is: how did the population in China help create steam engines in England?
If new technology is all about “inventors,” it shouldn’t matter how many people are in China, or Africa, or anywhere else in the world outside England. It’s only in England that there’s enough coal mining and machinery to justify developing steam engines.
You can have as many bright people in Shanghai or Moscow as you want, and none of them are going to be looking at deep coal mines and saying, hey, we could clear water from these mines by burning coal to make steam to drive pumps.
But if technology is about “specialization,” then the population even in China matters for inventing steam engines in England. Because the more people there are everywhere else in the world, the more people there are to trade with, the more people in England can be specialized and focused on the coal-mining business – and notice that a steam engine would be helpful.
So if the pace of invention depended on world population, the gatekeeper for invention wasn’t “how many potential inventors are there, anywhere in the world?” The gatekeeper for invention was “how specialized an economy does the world support?”
It’s not that invention breeds new specialties. It’s that specialization affords new inventions.
(This reminds me of S.M.Stirling’s novel “Island in the Sea of Time,” where Nantucket gets sent back to the Bronze Age. Stirling mentions that his authorial idea in sending a whole island’s population back, rather than just one person, is that the island’s population is large enough that they could plausibly maintain a decent technology level. Whereas one person, even if he’s Robinson Crusoe, isn’t going to be able to afford the different specialized tools that support serious technology. The economies of scale are too big for one person alone to conquer.)
I guess the counter to this argument would be that invention itself spurs specialization, in which case you have cause and effect reversed.
When somebody invents something fundamental, lots of people rush to exploit the natural extensions of the fundamental breakthrough. That creates a new specialty, as more and more people pursue increasingly limited returns on the new breakthrough. But eventually, something new and fundamental happens, and everybody rushes off to exploit it instead.
I don’t see much evidence that the fundamental breakthroughs come from specialists. Indeed, I think they’re more likely to come from interdisciplinary goofing off. And that really does sound more statistical in nature than hitching the behavior to an increase in specialization.
The counter-counter argument is that specialization drives productivity even in the absence of technology; see Adam Smith’s pin factory. Simply doing the same task over and over again makes one more productive. If one were trying to make a bunch of sandwiches (a task consisting of a number of steps), they would be much more productive if they did each step to all the sandwiches in turn, rather than the entire progression of steps to each sandwich (assuming the total number is somewhat reasonable).
Having more people (up to a certain limit) allows each person to focus on a specific task, and so become more productive at it, perhaps, but not necessarily, by inventing new technology. This increase in productivity might allow for some people to have enough free time to do nothing but think of new ways to improve productivity.
My counter-counter-counter to that is that there’s a pretty hard limit on how much additional productivity you can wring out of a fixed-paradigm system before you run into diminishing returns. But when an innovation creates a whole new field of endeavor, that point of diminishment is way, way out there.
Most hyperbolic curves are logistic curves in the end, so that’s not a surprise – something was bound to slow the growth.
Anyways, in practical terms I don’t really care about singularity. I care very much if the growth is fast enough to get us to life extension & health. After that, I’m perfectly ok with the 1.5% per year or whatever the last century number is.
A lot of people look at the demographic transition and see our future as one of stable or even declining population. But it’s not inevitable. Maybe children become a status symbol again and people scramble to have them. Maybe we learn how to prolong a woman’s ability to have children, or invent artificial wombs. Maybe, through the power of behavioral genetics, those with the genes that cause them to find children more desirable take over the world. Whatever happens, we could find ourselves with hyperbolic population growth again, even in developed countries. And you might think the Earth doesn’t have the capacity, but I doubt the ancients could have comprehended the scale of contemporary Tokyo. And when we do start having serious carrying-capacity problems, the opportunity costs of space colonization go down.
Life span and child-bearing years remained relatively constant (or didn’t change as dramatically as other variables). Once we stop dying at 80 and can have children at any age, most people will eventually have them, especially since it won’t be that much more expensive anymore. I can easily see it coupled with welfare.
I think it will have more to do with life extension. If people are living a thousand years, then population will grow over time even if it’s pretty slow as long as births (eventually) outnumber deaths.
I don’t think we’ll have truly hyperbolic population growth on the time-scale of decades again, though. We can see it in current fertility trends, which have mostly dropped worldwide despite some pretty huge differences between countries. Once you drastically reduce childhood mortality and greatly increase access to effective contraception, birth rates seem to at least drop down to either at or slightly above the replacement rate.
Am I the only one looking at these graphs and thinking, thank god there isn’t going to be a singularity soon? A world in which the lives of people remain miserable because all of the productivity growth and resources are devoted to increasing the number of humans is a depressing joke. A world in which the lives of people remain miserable because all of the productivity growth and resources are devoted to increasing the number of operating research subroutines sounds no less horrifying to me!
The big problem with this model is that every human being doesn’t have some percentage chance of creating a technological breakthrough. You yourself noted that the student body of a single Hungarian high school produced more innovation in a generation than the populations of whole continents have over the entirety of recorded history. Increasing the population of, say, Niger or Honduras tenfold isn’t going to lead to more von Neumanns springing out of the soil. If you want to reliably produce geniuses, you need an environment where genius provides a selective advantage and then maintain that for generation after generation.
Which brings us back to the 1960s.
It’s almost as if something might have happened right around that time, some sort of revolution spurred by a new technological development that might have made intelligent people substantially less likely to reproduce compared to the general population. So much so that even as the world population skyrocketed to record heights, the number of people capable of building and maintaining a technological society shrank to the point that the country which invented the hydrogen bomb can no longer produce tritium. Can’t put my finger on it, maybe something to do with water fluoridation or leaded gasoline?
It’s a shame that Jim is banned, because his technological decline hypothesis explains a lot more of the facts here than this toy model.
The US isn’t unable to produce tritium because the country has lost the knowledge. It’s because the country has lost the will.
Supposedly.
People say the same thing about skyscrapers. We could build more or maintain our existing supply if we wanted to, they just coincidentally went out of fashion right around the time that nearly every other technological trend started to slow or stop. That’s why the hideously-expensive Freedom Tower is still shorter than either tower of the World Trade Center was: as important as it was to symbolically resurrect the Twin Towers, it was much more important that we maintain the city’s Feng Shui by making sure that it has fewer habitable floors than the Empire State Building.
Our thermonuclear arsenal is the centerpiece of America’s national defense. Somehow I doubt that the bipartisan hawkish consensus in Washington abruptly decided to pursue unilateral nuclear disarmament while incessantly talking about bombing Iran or encircling Russia. If we could do it, it would be done.
And it’s not so much the construction technology that’s missing: you can eg get Chinese companies to come and build. What’s being lost is the social technology that allows skyscrapers in America.
The growth in other areas of the economy likely slowed down, because the movement of people from less to more productive parts of the country stopped. The economically most dynamic and productive parts of the US are closed access cities like New York or San Francisco, that have essentially a fixed or declining housing stock.
This is a weird chain of logic… are you claiming that WTC1 wasn’t built higher because we don’t know how? You are aware that the structural design of the Burj Khalifa was by Skidmore, Owings & Merrill, an American engineering firm, and that the chief structural engineer was Bill Baker, an American, right?
No, @The Nybbler is correct(ish) about it being a matter of “will.” I’d quibble a bit by saying that “will” isn’t quite the correct word. I’d phrase it as that we now value other things more than we want to have these things you’re talking about.
To use the tritium example, it’s not that you’re wrong about the Washington consensus being in favor of nuclear weapons, but they also have a consensus about no new highly-enriched uranium plants due to proliferation concerns, and these two consensuses (consensi?) are in direct conflict; they cannot both be satisfied. So there’s an endless debate about it, which privileges the “no new tritium” course of action, because the old reactors at Savannah River have been shut down. Since it requires positive action to start a new plant, as long as we’re debating what to do, we’re not starting construction of a new plant.
Note that there’s also sort of a weak consensus in Washington about “no new nuclear plants” that feeds into this, but that’s actually just a further epicycle of this process: many of the pro-nuclear politicians will not say “No New Nuclear Plants” if asked (and probably believe in new nuclear plants), but they’re also not willing to grant exemptions to the laws that prevent new construction (because they’d get skull-dragged in their next election).
For skyscrapers, as I understand it, there’s been a general feeling that supertall buildings have turned out to not pencil out economically. I believe the Empire State Building, for example, is only about 2/3 occupied, and WTC1 isn’t much better (this article says 69% occupation). So there is going to be a lack of enthusiasm among private developers to pony up the cash to build supertall buildings in e.g., NYC. However, in Dubai, where the government sees prestige in having the tallest building in the world, they front the cash to do it despite the fact that there may be no positive return on the investment. If you want to characterize that as “will” there’s an argument to be made there, but it’s certainly not a lack of knowledge.
“Consensi” is not a word, as I keep telling my supervisor. The Latin plural is consensus. The English plural is consensuses.
I’m not sure any of you are actually disagreeing. Lost “social technology” and “loss of will” are gesturing at roughly the same thing.
I have a response to your oddly specific concern!
Thanks for the link, I remembered reading it a few years back but my Google Fu wasn’t up to snuff yesterday.
I don’t find it very convincing, because while you attempt to adjust for the height of spires, you don’t use the height-to-occupied-floor measure used by the Council on Tall Buildings and Urban Habitat, which is a more accurate measure of how much of a building’s so-called “architectural height” is actually occupied. The engineering challenge of skyscrapers isn’t just in erecting the tallest radio tower, but in keeping things like indoor plumbing or sprinkler systems working over a thousand feet in the air.
The Freedom Tower is a 94-story building with a top floor labeled “104” and nearly a quarter (~24%) of its architectural height unoccupied. That one 1,268′ building cost $4.353 billion inflation-adjusted dollars, replacing a pair of buildings which stood at 1,355′ and 1,348′ respectively and together cost $2.322 billion inflation-adjusted dollars.
This is the consistent story when you look at 21st century skyscrapers. Low top floors, with 20-40% of the building’s height unoccupied, at a price tag several fold higher than comparable buildings from the 70s and 80s even after adjusting for inflation.
Missed the edit window but I underestimated how much of One World Trade was unoccupied. I forgot to update that percentage to use the highest occupied floor, ironically making the same mistake I was criticising.
It’s actually over a quarter unoccupied, ~28.6%.
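Checking that arithmetic against One WTC’s well-known 1,776′ architectural height and the 1,268′ top-occupied-floor figure cited above:

$$\frac{1776 - 1268}{1776} \approx 28.6\%.$$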
That Jim guy needs to come back. Very funny and intriguing.
What if we treat 1 WTC as an outlier (on the grounds that it’s really high-profile, which drives up cost in a lot of ways) and look at Central Park Tower. Top occupied floor of 1,455′, cost estimated at $3 billion. Yes, it’s undoubtedly more expensive than the original WTC was. On the other hand, design standards have undoubtedly changed since then, too, and that almost always drives up cost.
As for Jim coming back, not going to happen. I went looking for the post he got perma-banned for, and he’s nobody I would want around either.
According to the argument in this post, increasing the population of Hungary in the 18th/19th century was apparently essential to producing von Neumann and similar great thinkers. What makes you think that the population of Niger or Honduras is incapable of producing von Neumann-level geniuses?
Incapable and unlikely are different things.
It is possible for anyone to be an extreme outlier. You could have a Nigerian who is the smartest person in the world. It is, however, statistically unlikely unless there are dozens of times more Nigerians than there are Germans.
This is an old La Griffe du Lion conclusion.
As an aside, are people from Niger Nigerians? How do you distinguish them from people from Nigeria (Nigerians)?
Nigeriens.
Niger was a French colony and Nigeria a British one, so a good way to distinguish between them is probably to talk to them in English and/or French.
If I understand what you’re saying here, you’re assuming that the average IQ of the nation won’t get pulled up as its GDP expands. Given that that happened in the US and other western nations due to improvements in health and nutrition (and possibly other effects), I think that assumption is flawed.
Well, the IQ of the Gulf states wasn’t.
It’s worth pointing out that since African Americans score significantly better on IQ tests than the national IQ scores of African countries, we know that at least some portion of the international IQ gaps are due to non-genetic factors.
African Americans aren’t directly comparable to sub-Saharan Africans, genetics-wise, for a number of reasons. There’s a fair amount of European and native admixture in African-American genetics, for one thing; and there’s a huge amount of genetic diversity in Africa and only a relatively small part of it was heavily involved in the Atlantic slave trade, for another.
That said, I’d still take IQ surveys in sub-Saharan Africa with several grains of salt. Just not for this reason.
If you’re referring to the Flynn effect, my understanding is that the increase in IQ scores didn’t correspond to an increase in general intelligence.
People use the terms interchangeably but it’s important to keep in mind that the measurement is different from the quantity being measured. We only care about IQ to the extent that it reflects g, and we only care about g to the extent that it predicts outcomes.
@Eponymous,
We don’t actually know that.
African Americans have between 10-20% European admixture, and over 60% of the remaining African ancestry is Yoruba (one of the seemingly high-performing minorities of Nigeria). There’s more than just environment going on there.
A better example would be to look at recent African immigrants. That’s how people noticed that the Nigerian Igbo and Yoruba tend to do a lot better than one would expect from the national IQ score of 84.
@Nornagest:
~20% Euro admixture in African Americans if memory serves. Shouldn’t explain much. Black/white IQ gap in the US is around 15 points. So accounting for 20% Euro admixture puts you at ~19 points. But people report African countries with national IQs in the 60s.
Genetic diversity in SSA is less than you might think. There are some highly divergent populations (Mbuti, San), but they have low population sizes. Most people in West, Central, and Southern Africa are descendants of Bantu-speaking farmers with a single point of origin somewhere around Benin ~2-3kya. Nearly all African slaves in the New World were descended from such groups.
@Nabil:
And my understanding is that this is a hotly debated question. And if the Flynn effect is due to non-intelligence factors, then shouldn’t that make us skeptical of international IQ data?
I addressed Euro admixture in my response to Nornagest. I’d never heard the Yoruba claim before, and it strikes me as wildly implausible. 10.6 million slaves (more at point of origin – I think 12 million), so 60% would be over 6 million – definitely not that many Yoruba. That’s crazy.
Looking it up, Wikipedia lists the 10 most common source ethnicities, and Yoruba are #8.
My source for the Yoruba numbers was this article, which may be out of date.
Yes. Concerns about being culture-free cause people to use Raven’s Matrices for international comparisons, but it has the largest Flynn effect, probably meaning that it’s a crappy test.
———
Nabil, your source is very crude and just isn’t trying to distinguish Yoruba from anyone else in Nigeria. It’s just trying to distinguish West Africa (Yoruba, Mandinka) from Central Africa (Bantu).
Eponymous, the Bantu family is related to Yoruba, but does not subsume it.
The claim that the Flynn effect only applies to IQ and not to other psychometric measures that feed into g is certainly made by Murray in The Bell Curve, but it’s unsupported by the data. Pietschnig et al did a solid meta-analysis that shows, at least in German-speaking countries, that the Flynn effect has a significant positive impact on both crystallized and fluid measures of intelligence, two of the factors in g. There are components of g that aren’t impacted by the Flynn effect, but it reflects more than just improvements in test-taking skills, as some suggest.
@Eponymous
IQ tests in African nations probably underestimate the actual IQ. The question is by how much.
Eponymous is correct, but even the substantial increase in living standards in the US has come far from fully closing the gap, and likewise isn’t nearly enough to generate the critical mass of geniuses we assume is required.
Higher exposure to test-like problems in real life likely improves raw scores, but most of the interventions known to increase g are low-hanging fruit like lead abatement and nutrition; nothing else, AFAIK, is known to permanently and reliably increase g. And raising raw scores doesn’t by itself drive innovation: a Ramanujan or a von Neumann might have scored substantially worse than we’d predict today simply because of less cultural familiarity with those kinds of tests.
Hungarians certainly aren’t dim, but the famous “Martians” weren’t ethnic Hungarians but rather Hungarian Ashkenazim who made up ~5% of the population in 1910. Ashkenazi Jews were and are responsible for a ridiculously outsized proportion of the world’s scientific output, with von Neumann being an outlier even within that outlying population.
Niger, like the similarly-named Nigeria, is majority Hausa, but unlike Nigeria it doesn’t seem to have any relatively high-performing minorities like the Igbo or Yoruba. Honduras is likewise majority Mestizo with no notable high-performing minority groups. Even modern Hungary, with essentially none of its Jewish population remaining after the Holocaust, is much more likely to produce genius inventors than either of them.
Now that you’re probably good and mad, let’s reel this back to my original point. Increasing the population is meaningless unless it reflects an increase in those subpopulations intellectually capable of producing technological advances. If the Kingdom of Hungary had a population one tenth its size, but 50% Jewish instead of 5%, would the history of nuclear physics be any different? Almost certainly not.
Compare this good news about IQ in Africa.
You do have a point. One of the things that changed forever and won’t come back is that smart people used to reproduce at least as much as anybody else; now the number of children is inversely correlated with education. And looking at modern society, the factors most likely to encourage having multiple children are free time and welfare.
This being said, the arguments in the article work equally well all around the globe, not just in the Hungarian neighborhood.
You can also see it as extremely low-hanging fruit waiting to be picked. Even in the absolute worst scenarios, intelligence is half environment (and probably another fair part epigenetic). So we’re expecting some 3–6 billion people times 10–30 IQ points in our lifetimes.
Not with that attitude it won’t come back.
People always seem to think their current societal norms will stretch out into infinity. It’s very unlikely that anything has changed forever, especially things with a direct negative evolutionary impact. Evolution is very, very good at selecting for reproduction.
We’ve already completely bypassed the strongest insurance evolution had for reproduction – sex without having babies. We’re on backups already, and we barely began.
The insurance is that people who avail themselves of sex without babies will have fewer babies than those who don’t, or those who do so less. The exact mechanisms beyond that are just implementation details.
SMBC link appreciated, but we’re already talking about intervals a couple of orders of magnitude shorter than evolution needs to change penises, or even behavior.
We’ve been a cultural animal for about 80k years. There is no “human behavior”, no “human diet”, no “human mating” – it’s all mixed with culture. The Inuit eat fish fat, the Hadza eat honey. Some populations are monogamous; some have teenage boys drink the semen of older men for strength and wisdom. (It’s NOT pure culture, btw, the way naive postmodernists tried to say – that way leads to darkness and horror.)
Point is, the genes got left in the dust quite some time ago. We’ve already been running on a gene/culture mix. And now we started to mess with the biology directly, like with the contraceptive pill. At some point we’ll make the differences permanent by messing with the genes, but honestly that’s just superfluous at this stage – whatever modifications we put in the genes, they’ll be obsolete by the time the children get to reproduce themselves.
@Radu
On the specific question of genes and reproduction, I think you’re right. But even with your addendum about culture not being everything, I think you’re still underrating genes. Disease is a good example. During the age of exploration, Europeans easily conquered the Americas but couldn’t make any inroads into sub-Saharan Africa until the late 19th century. Why? Because any time Europeans explored Africa, they were at serious risk of catching disease, while when they went to America they usually spread it. We can see the effects of this today: most of the population of the Americas is white/mestizo/black, while sub-Saharan Africa is still overwhelmingly black. This difference is pretty much purely genes, and just from the last few hundred years.
@Wrong Species
Well, yeah, but that was absolutely brutal. It was basically selecting for immunity by killing everybody else, for a few successive generations. I really hope we won’t be seeing that kind of evolutionary pressures any time soon.
And it still took some time, while we’re on a thread discussing whether history will end in 7 years or somewhat longer 😀 I honestly believe plain old pure genetic evolution is dead. By the time it gets to do anything, we’ll already have designer babies (we do already, as proof of concept at least).
We’re in no danger of running out of people any time soon. And if the thrust of the article is correct (and I’m understanding it correctly), fewer people –> higher standard of living for those remaining. So that leaves at least two possibilities aside from voluntary human extinction. One is that the population shrinks until people find that having children at replacement level is no longer a net negative; provided it doesn’t happen too fast, there shouldn’t be any disasters here. Another is that fecundity is heritable, and the fecund make up an ever larger percentage of the population and eventually replace the non-fecund.
@Wrong Species
Another important reason Europeans didn’t explore Central Africa is that sleeping sickness (called nagana in animals) killed their packhorses (and cattle). The late 19th century brought steamboats.
My understanding is that there haven’t been IQ gains in China over the last 20 years despite massive development, which should be a source of pessimism.
But I’m pretty skeptical of international (and intertemporal) IQ comparisons in general.
The first hit on Google shows moderate gains, though I didn’t dig very deep:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3834612/
Hyperbolic growth requires that x-dot be proportional to x^2, which assumes something about network structure and the value of random pairs.
I suspect as much as anything else THIS is where it all falls apart. Communications tech allowed for more useful random pairs for a while, and this kept going a long time – letters, then printing, then phones, then the internet. But that’s likely been exhausted; brains have hit their limits. Which gets to the same conclusion as Scott regarding AI, but with a different causality for the slowdown.
Just for the record, you get hyperbolic growth (in the sense of having a singularity at finite time) for any exponent n bigger than 1. So as long as productivity has gains that are better-than-linear, the argument works.
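For the record, here’s the standard computation behind that claim (separation of variables, with $x_0$ the initial value): from $\dot{x} = kx^n$ with $n > 1$,

$$x(t) = \left(x_0^{\,1-n} - (n-1)k\,t\right)^{-\frac{1}{n-1}},$$

which diverges at the finite time $t^{*} = \dfrac{x_0^{\,1-n}}{(n-1)k}$. Only at $n = 1$ (pure exponential growth) or below does the singularity disappear.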
I can’t figure out right now how to bound the rate at which the exponent should converge to 1 in order to still have a finite time divergence. But a few numerical checks suggest that something like n=1+1/t still develops the singularity (or at least a finite time absurdly-many-orders-of-magnitude jump). The point is, even with diminishing returns on the network effects, it seems you would really need to have something else kicking growth down to “cancel the singularity”.
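A quick numerical version of that check (a forward-Euler toy with made-up constants, just to illustrate; not anyone’s careful code):

```python
# Integrate dx/dt = x**n(t) and record when x first exceeds a cutoff.
# For n > 1 the blowup time barely moves as the cutoff grows (a genuine
# finite-time singularity); for n = 1 it keeps receding.

def blowup_time(exponent, x0=2.0, t0=1.0, dt=1e-4, cutoff=1e30, t_max=1000.0):
    """Time at which x first exceeds `cutoff`, or None if it never does."""
    t, x = t0, x0
    while t < t_max and x < cutoff:
        x += dt * x ** exponent(t)
        t += dt
    return round(t, 3) if x >= cutoff else None

for cutoff in (1e6, 1e15, 1e30):
    print(cutoff,
          blowup_time(lambda t: 2.0, cutoff=cutoff),   # n = 2: stays near 1.5
          blowup_time(lambda t: 1.0, cutoff=cutoff))   # n = 1: keeps growing

# The time-varying exponent n(t) = 1 + 1/t discussed above:
print(blowup_time(lambda t: 1.0 + 1.0 / t))
```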
This is actually quite an interesting question about modelling. I’m not sure the explicitly non-autonomous DE you suggest is a good way to understand if superlinear things always blow up, but a quick counterexample shows that they don’t have to (link below). So then there’s a quite difficult question of what kind of plausible models give rise to singularities etc? Anyway it’s a good and difficult question, especially if we are unsure if the singularity means something real or if it is just the model breaking down.
https://math.stackexchange.com/a/291695
Nobody says you have a single term. Think about bacteria reproduction; you have something like:
GrowthFactor – WasteFactor
where both grow exponentially with density, but waste starts slower. Eventually they’re more or less equal, and you can’t get much more density.
The axiom is that “any hyperbolic curve is actually just the first half of a logistic curve”. Except, I guess, black holes.
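One way to write the growth-minus-waste idea in equation form: a growth term minus a density-dependent waste term is exactly the logistic equation,

$$\dot{x} = rx - \frac{r}{K}x^{2} = rx\left(1 - \frac{x}{K}\right),$$

which looks like runaway growth early on and flattens out at the carrying capacity $K$.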
Absolutely true, of course (any exponent greater than 1 leads to a singularity in finite time). But the exponent 2 is easily understood as the number of two-person links, which was the point of failure of the model I wanted to tackle, since it hadn’t been raised.
I never thought of bizarro exponents like 1+1/t! That certainly seems to give something singular for all practical purposes! BUT it goes outside the frame of the model, in that the explicit dependence on t doesn’t fit the model’s structure.
An exponent like 1.5 rather than 2, or even 1+epsilon, still gives a finite-time singularity and fits my idea of the model, but is more difficult to interpret. It’s something like: it’s not that each individual alone has a constant probability of a good idea, it’s the LINKS of a network of individuals that do; but the network is not all-to-all [which would be x^2] but something sparser – which is probably a better approximation to all the various aspects of the problem.
I don’t know enough graph theory, but I expect there’s some sort of model for creating graphs like this, though you probably need something more than JUST graph theory, some sort of notion of a metric so that you’re fairly well connected to locally close nodes, with ever sparser connection to further nodes.
Either way, I think the number of significant links/node can’t scale much beyond the Dunbar number (a few occasional boosts from communication technologies) and THAT is what slows down the pipeline of ever more innovations.
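To put arithmetic on that intuition (my numbers, nothing deeper): an all-to-all network of $x$ people has $x(x-1)/2 \sim x^2$ links, which is where $\dot{x} \propto x^2$ comes from. If each person instead sustains at most $D$ links (a Dunbar-style cap), total links are at most $Dx/2 \propto x$, so $\dot{x} \propto x$ and you get mere exponential growth with no finite-time singularity.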
I mentioned this on another comment thread a while back, but the Western, and in particular the American, university system is probably the biggest contemporary culprit. And I don’t think saying this is terribly controversial, at least on SSC, simply because all of the ‘components’ pointing to said culprit have been identified by Scott himself in some of his posts [albeit piecemeal].
You have, in essence, a system where the brightest people of each generation are put through a battery of make-work during their most fertile years (lasting anywhere from 4 to 12 of them) and saddled with tens of thousands of dollars in debt for the privilege, all well before they’ve accumulated any equity, earned any income, or acquired any career experience. Rationally, they put off childbearing until their late 20s and early 30s [if ever].
Combine that with the fact that high-paying and/or high-status job opportunities are concentrated in fertility sinks (by virtue of being high-density areas, which means higher rents and generally higher taxes).
This tends to be a bit self-reinforcing because fertility (and forgoing college) ends up becoming a marker of low social status, and well educated people might also consciously believe they’re doing the world a service by limiting their own fertility because world-wide overpopulation can be an issue.
Civilization signals itself into an early intellectual grave.
C.S. Lewis and Dorothy Sayers pointed out that at the age when people are starting their intellectual careers and should be reading widely, the PhD system forces them to specialize.
That’s a different kind of complaint. I can understand the need for someone who is trying to become knowledgeable at something to specialize, given the limited amount of information anyone can fit in their head. And up until that point they’re often not specializing at all.
I’m pointing more to the fact that in times past most bright people could begin working between the ages of 16 and 18, and do so with no debt and lower expectations of what creature comforts were needed to call oneself middle class.
Cute, but selection effects don’t work that fast; if this was being driven by a selection effect, it would look like the curve on the log graph gradually flattening out over a period of generations, not going from log-linear to flat inside ten years. It almost has to be a cultural thing. Of course, there was no shortage of cultural changes around the 1960s, either.
Well part of that is that I don’t actually trust Scott’s graph. Other commenters have mentioned Mar’s Law and GIGO already, so I won’t rehash their comments. There was a decline somewhere in the late 20th century but pinpointing it to a specific year is entirely symbolic (which is why I personally prefer 1968).
The other part is, of course, accurate. The pill enabled the sexual revolution and greatly accelerated breakdown of the family, but you wouldn’t see an effect until that generation grew up (or rather, didn’t). So somewhere in the mid 1980s at the earliest. I was being overly simplistic.
IQ testing, so that smart people joined the ranks of the upper class that already wasn’t reproducing?
The pill
the “managerial revolution”?
> Can’t put my finger on it, maybe something to do with water fluoridation or leaded gasoline?
This is an idea that I occasionally toy with as well. Some X-factor that we don’t really understand. (For the record, I don’t think it’s either of the two things you mention.) Cost disease, lower tolerance for risk, increased competitiveness without corresponding increases in output, more widespread fear and outrage, status quo bias, cultural stagnation, fewer big accomplishments like putting a man on the moon. Even declining sex and weaker grip strength. It all somehow feels connected.
Joel Spolsky is well known for saying that the most important thing to look for in employees is “smart and gets shit done.” At the end of the day as long as someone’s smart and gets shit done, most likely they’ll be able to achieve great things. I don’t think we’ve seen a secular decline in smartness. If anything because of Flynn, there’s probably been a slight increase.
But I do think on a generational level we’re much worse in terms of “gets shit done” than prior cohorts. It’s really hard to put your finger on it, but think of the difference between Elon Musk and your typical PhD. Elon’s probably not that much smarter, but goddamn does that guy get shit done. Prior generations just seemed to have a lot more Elon Musks than today.
You are missing the joke.
The answer is “The Pill” but he didn’t want to say that.
According to Isaac Newton, the end of the world would be in 2060 (for somewhat different reasons).
As some others have hinted, the issue of human intelligence and its potential growth seems to be a major consideration. It may have been slow or even absent in the time periods discussed, but that may change, perhaps soon.
Is there any adjustment for post-1960 population growth containing more non-productive people who still consume? Changes such as the increase in life expectancy may make population growth from 1,000 years ago not the same as population growth today (in terms of contributing to the innovation probability engine).
Maybe. But on the other hand we have proportionally fewer kids around these days as well. (And they are less productive than they used to be. We’d rather bore our young ones to death than let them be productive members of society.)
> We’d rather bore our young ones to death than let them be productive members of society.
But can they be productive in today’s society? Back in 1900, if they left school at fourteen they could go work on the land as ploughboys or get work in cities in factories and sweatshops (both sexes). Today, what job are you going to put a fourteen-year-old into – working at McDonalds? All the low-paid jobs that are argued should be low-paid because they’re only for teenagers who aren’t expected to support themselves? Manual labour? A fourteen-year-old will still need to serve an apprenticeship for several years before they’re employable; you can’t just hand them a wrench and say “there you go, you’re a plumber now”. And factory work is automating as fast as it can, so even the factory jobs are not there in the same numbers.
The productive work is in the ‘knowledge economy’, and while there may be fourteen year olds capable of jumping straight into a coding job on the same basis as an adult, not all of them will be that able. Nowadays to contribute, you need skills, and education/training is how you get those skills – so the “boring the young ones to death” is necessary, even if not efficient. The alternative is let them drop out at fourteen, and I think if you look at the real-world situations where that happens, they’re only ‘productive’ in countries where they can work rag-picking on dumps and the like.
Maybe the solution is let them all leave school at fourteen, then put them to work doing the farm labour that immigrant labour is needed for today! Kill two birds with one stone!
The German dual vocational system could be a clue to an answer: you apprentice at a company to learn a trade, but still get a portion of formal teaching – either one day a week, or a few weeks en bloc each year. (The former tends to be better liked by the adolescents; the latter has easier logistics.)
Yes, young people aren’t all that productive. Partially just because they are missing the maturity of more years. Though giving them some responsibility and someone who actually cares about their output (in a commercial setting!) might do more for maturity than the make-believe work of schooling.
By the way, nothing wrong with schooling if you either actually learn something useful and/or enjoy it. Sadly, for most people in most subjects it’s neither. See Caplan’s The Case Against Education.
Btw, Caplan also has some interesting things to say about why a white-collar worker these days seems to need so much education, despite a mostly irrelevant curriculum of which they learn so little and forget so much almost immediately.
Well, the income doesn’t need to be above subsistence level, given that they’re supported by some combination of government and family. Any income earned is an increase over what they make currently. What a young person gains from this beyond money is some notion of life in the real economy, a sense of the value of money, etc.
The question is whether taking them out of school necessarily harms their job prospects. In the current environment it does, but that’s partly a product of cultural expectations. I can imagine a secondary education system that was more barebones about required vs. elective academics (especially for those who express no interest in them) and where electives included job-shadow programs. The extreme case, where the workload would harm your ability to reach your maximum literacy or numeracy, is possible, but I don’t think it would occur for most people under such a system.
The key is that work/equity/family formation on the one hand and credentials/study are seen as something to be done in parallel not serial. I think this would also help alleviate the cultural stigma about someone becoming a mother/father vs. getting a doctorate.
What about the quality of the human capital? You can have population growth, but a billion people of low-IQ, non-future-oriented human capital is a sure way to keep a singularity from happening for a long time. Anyway, I suspect the historical equation only works in a Malthusian world; after that the relationship breaks down, and you have to account for the variety and quality of the population.
Interestingly, Enlightenment intellectuals didn’t necessarily believe that population growth was at all inevitable or even terribly likely, which was why Ben Franklin’s 1754 report “Observations Concerning the Increase of Mankind”, in which he reported that the American colonies were doubling in population every 20 or 25 years through natural increase alone, was a bombshell.
Before then, intellectuals tended to have a model of diminishing returns to population growth. The world could get tired or worn down, and population might well fall. The population of a fertile and reasonably well-governed great kingdom, France, might not have grown much since before the Black Death.
Franklin instead pointed to simple economic explanations for population growth: Americans have a lot of land per capita, so their wages are high and the cost of land is low. This affordable family formation leads to a higher percent getting married and at a younger age. Thus Americans enjoyed higher living standards than Europeans. (Franklin went on from there to argue for immigration restriction.)
Franklin’s essay was a big influence on both Malthus and Darwin. We have Darwin’s copy of Malthus’s book in which Darwin underlined the part citing Franklin.
Darwin’s notion of the “struggle for existence” traces back to Franklin. Before Franklin, thinkers tended to worry that coming into existence was the hard part but staying in existence wasn’t that tough. Darwin, in contrast, conceives of coming into existence as easy but outcompeting all the other life forms in existence for resources is difficult.
Interesting.
I’m not sure if he read Franklin, but Adam Smith also brings up the topic of American population increase in Wealth of Nations as evidence of the high standard of living there:
http://geolib.com/smith.adam/won1-08.html
This is maybe tangentially related, but I’ve been curious for some time about the paradox that economic historians list America’s GDP per capita as being lower than Britain’s throughout its entire history from the Thirteen Colonies until around WW1, despite American wages and standard of living being higher, sometimes much higher, than Britain’s throughout this period and beyond.
I’ve never read a good discussion of this topic and would love to find one.
That Smith quote attributes the paradox to the higher rate of growth in the U.S. driving higher wages. This seems a pretty inadequate explanation, or else China’s wages would be higher than those in the West.
Presumably a part of the issue is that GDP doesn’t have any way to reflect the low-cost acquisition of unimproved arable land. But I think it likely Britain’s higher per capita GDP was still real in some sense, and contributed to its government’s better ability to project power than the wage-rich but GDP-poor U.S in the period 1783-1914.
The basics of GDP accounting would say that Britain must in this case have had a much larger capital stock, which claimed a much larger share of GDP. That this could contribute so much to GDP makes some intuitive sense in the 19th century, with early-industrializing Britain having much larger per capita investments in factories and railroads.
But what does this massive difference in capital stock and corresponding contribution to GDP represent in the 17th and 18th centuries?
Inequality could explain that. Average wealth can be higher in a society with great inequality, where the median wealth is lower.
By wealth, I presume you mean income? The distinction is key here.
I interpret the figures to indicate that economic historians calculate Britain to have had a much larger per capita capital stock, i.e., wealth, which was invested in things that generated good returns. The alternative explanation I think you’re suggesting is that wages in the U.S. weren’t really higher in aggregate; the U.S. had a higher median wage while Britain had a higher mean.
This is possible, but then what occupations were generating such high earnings in Britain, and why? I think it would imply that some small slice of British occupations were earning many, many times more in labor income than their American equivalents, even while the vast majority of American occupations earned more than the British.
Landed gentry. They got a lot of their income from tenant farmers.
“less than a century and a half ago, all land was owned by 4.5 per cent of the population and the rest owned nothing at all.”
The big difference with the US was that American farmers far more often owned their land.
Rent from tenant farmers would be capital income, not labor.
But more importantly, GDP measurements are supposed to include the imputed cost of rent for owner-occupied real estate, so it shouldn’t affect GDP one way or the other if real estate is owner-occupied or not.
Mismeasurement could still be part of the problem though, as I alluded to in my first post. The imputed rent on rural U.S. real estate would be very low relative to its productive value, since land was cheap. So the cheap land, actually a benefit of the U.S., would reduce GDP relative to England and make the country appear poorer. But I’m inclined to think that the experts in this area have considered this problem, and there’s something more to this.
Average age of first marriage for brides in England from 1200 to 1800 remained in the 24 to 26 range, according to Gregory Clark. David Landes points out that Europeans and Chinese tended to differ in age of marriage for women, with the Chinese at around 17 or 18. China thus tended to have population explosions when times were good and then the Malthusian hammer would come down when the government didn’t manage to maintain as high of a level of competence. (The late 1950s Great Leap Forward famine was, hopefully, the last of these collapses.) Europe was more resilient than China because population wouldn’t grow as fast in good times due to later marriages.
From what I recall of Clark’s comparisons of England to China, he also found that China had a lower disease rate relative to per capita GDP than Europe, probably due to better hygiene and sanitation practices. This, combined with the cultural difference in age of marriage, led to China’s population density reaching Malthusian equilibrium at a lower standard of living, and it seems like it would also have contributed to steeper rates of population increase during prosperous times.
That sounds counter-intuitive to me, I would have guessed that increased hygiene would lead to higher productivity.
It doesn’t matter. If you’re having six kids, and are at max, four of the kids have to die.
If the kids all die of getting eaten by wolves (or in this case disease), you have an economic surplus.
If you’ve solved wolves and disease, it doesn’t matter how high productivity is, every single bit of it is going to feed the extra mouths that made it, you cap out at a higher population, everyone STILL has 6 kids, and 4 of them starve to death. And eat all your economic surplus in the process.
This ignores the fact that productivity comes from people. Isn’t the lower disease burden improving the productivity of workers – being sick in the field less often, being incapacitated less often, not having to care for a sick child? I don’t see how the two can be independent variables.
Clark was talking about the schedule of disease rates relative to per capita income, with income serving mostly as a proxy for quality of diet, not the absolute disease rate in the population. If you’re malnourished due to extreme poverty, you’re more likely to get sick and less likely to recover if you do. With better hygiene, you’re less likely to get sick at a given level of nourishment, but levels of nourishment aren’t being held constant: because Chinese peasants are healthier at a given nutrition level, their population keeps growing (and per capita production declines, since there are more peasants on the same amount of land) longer, and they reach equilibrium (where they’re poor enough that death rates balance out birth rates) at a lower level of income per person.
It may be the case that a 100-person medieval Chinese peasant village would produce more crops than 100-person medieval European peasant village working the same land with the same quality of agricultural tools and techniques, for the reasons you stated. But Clark didn’t attempt that analysis, probably in part because there are so many potential confounding variables (crop species and varieties, tools, land, techniques, incentives, etc).
@ Eric Rall
This doesn’t answer the fundamental question, to take the original quote again
Malthusian** equilibrium production is something like (productivity per person) × (number of people who can work an acre at that level) × (number of acres). That gives you your maximum production for a given land area, and how many people it will require; subtract one from the other and you have the number of non-farmers an area can support.
How do you get a lower standard of living out of this with a lower disease burden?
** really, really lazy approximation
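Written out with invented symbols (output per farmer $p$, farmers per acre $d$, acres $A$, subsistence consumption $s$ per person), the lazy approximation is:

$$P_{\max} = p\,d\,A, \qquad \text{non-farmers supported} \approx \frac{p\,d\,A}{s} - d\,A.$$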
If population were held constant at a level that optimizes the agricultural surplus, I think you’d be correct. By what mechanism do you expect population to be held constant at that level?
If there is no such mechanism, then the bound on population is the feedback of 1) people avoiding having kids they know they can’t support, and 2) people dying faster due to malnutrition and other poverty-related factors, with their follow-on effects reducing births and raising deaths as per-capita (not per-farmer, since non-farmers also need to eat) food production declines. So if population expands beyond the level that maximizes agricultural surplus and reaches the natural births=deaths point (which is what I’ve been referring to as Malthusian equilibrium; it sounds like you’re using the term differently), you don’t necessarily have a larger non-farmer population, and if you do, then both farmers and non-farmers are still short of food.
You are correct that a larger non-farmer population would allow society to produce more non-food goods. But historically, 90+% of even relatively urbanized societies were farmers, and food dominated most family budgets, to the extent that a food shortage tended to swamp other factors when assessing standard of living.
Food production below carrying capacity (Malthusian equilibrium) is basically individual productivity × total individuals. If disease lowers productivity, then a higher disease burden means every wave of disease can reduce agricultural output. Outside of diseases that only hit the very old, a higher disease incidence will drive average productivity down in the long run.
Pre-modern productivity is mostly agricultural productivity, and depends a lot on whether you’re measuring production per person (ie per peasant) or production per acre.
Increased hygiene might lead to higher productivity per person, but it also leads to more people. Once you run out of land to expand into, increasing population can’t increase production (much). That’s the Malthusian trap reexpressed in different language.
Increased productivity per person means you can support more non farmers and build a more advanced economy. You can’t wave that away by adding more people who are also more productive.
“Can”, but not necessarily “will”.
The food needs to get from the farmers to the non-farmers, usually by trade, by taxation, or by land rents. Taxes and land rents are boring here, since from the farmers’ perspective they’re not much different from lower productivity. There’s definitely a mode where farmers trade their increasing surplus production for better tools and equipment (increasing productivity further) as well as luxury goods and services: we know that mode exists because we’re living in it.
However, getting into that mode in the first place seems to be difficult, since that’s not the pattern we’ve seen for most of human history. Or at least it’s so subtle that we only see it by zooming out to a scale of centuries. Clark’s thesis is that a cultural change reached a tipping point around 1800 AD in England and the Netherlands and spread from there. There are a bunch of other theories, including Scott’s argument in this post that there’s a steady hyperbolic trend all the way back to the neolithic era with no major breaks until the 20th century, and the apparent tipping point of the Industrial Revolution is just the slope becoming steep enough to notice on a scale of decades or less.
They’re only more productive at a given population density. Until you switch into the mode where industrialization and commerce drive ever-increasing agricultural productivity, the added people either work the fields (reducing average productivity since there’s more people working the same amount of land) or they just represent a one-time increase in the size and wealth of the small non-farmer portion of the society.
You still can’t get a LOWER per capita income with these caveats.
Basic Malthusian trap: 1,000 acres can (with current technology) produce food for 2,000 people. Group A and B each start with identical populations, land conditions, technology etc. Group A has a lower disease burden and so is more productive per capita but also has a higher growth rate. In 100 years group A has hit carrying capacity, 2,000 total people but only 1,500 of them need to be farmers, the other 500 sit and literally do nothing all day and eat surplus from the farmers. Group B takes 200 years to hit carrying capacity which is still 2,000 people, but it requires 2,000 farmers to maintain that level.
Total production (and thus income per capita) is the same at carrying capacity, but income per capita on the way from 100 people to 2,000 people is higher for group A. How do you twist this to ever get a LOWER income per capita for group A?
The lower per-capita income comes from the group with worse hygiene and sanitation needing more calories per day to stay healthy enough to maintain a steady population. Differences in hygiene and sanitation might mean that at 1800 calories/day, a Chinese peasant is roughly as healthy as an English peasant eating 2500 calories/day, so both have the same death rates (assuming no significant differences in death rates for causes other than the combined effects of malnutrition and disease). If the birth rates are similar in the two populations at a given health level, then the Chinese population will reach equilibrium at a lot fewer calories per person per day than the English population. And if food dominates your family budget, then fewer calories probably means less income overall.
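A toy simulation of that equilibrium logic (my own construction with invented numbers; a sketch of the mechanism, not Clark’s actual model):

```python
# Total food output is fixed; population grows until the death rate --
# which falls with calories per head -- matches the birth rate. Better
# hygiene means staying healthy on fewer calories, so the population
# equilibrates larger and poorer per head.
import math

TOTAL_CALORIES = 2_000_000.0   # fixed daily food output of the land
BIRTH_RATE = 0.03              # per capita per year, assumed constant

def death_rate(calories_per_head, hygiene):
    # Hypothetical curve: deaths fall with nutrition; higher `hygiene`
    # means the same survival odds on fewer calories.
    return 0.06 * math.exp(-calories_per_head * hygiene / 1500.0)

def equilibrium(hygiene, pop=100.0, years=5000):
    for _ in range(years):
        pop *= 1 + BIRTH_RATE - death_rate(TOTAL_CALORIES / pop, hygiene)
    return pop

for hygiene, label in [(1.0, "better hygiene ('China')"),
                       (0.67, "worse hygiene ('Europe')")]:
    pop = equilibrium(hygiene)
    print(f"{label}: pop ~{pop:.0f}, calories/head ~{TOTAL_CALORIES / pop:.0f}")
```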
I think the idea is that China’s population stayed close to the Malthusian limit a higher fraction of the time, so there was less opportunity to leverage surpluses into productivity improvements; the plagues repeatedly knocking England back from the resource constraint provided enough breathing room for the technological flywheel to get going.
One element you are not discussing is communication. Technological change in Eurasia had no effect on the New World prior to 1492. Technological change in China had a very limited effect on Europe, and vice versa, until transport and communication were good enough to spread information. Gunpowder, the compass, and the traction trebuchet eventually reached Europe, but how long did it take?
So technological growth ought, on the model, to depend on world population somehow adjusted by the degree to which different parts of the world were in communication with each other. On that version, growth should have been faster in the past few centuries than a model fit to earlier periods would predict.
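One way to write that adjustment (a sketch, with $x_i$ the populations of mutually isolated regions): the network-links model gives

$$\dot{X} \propto \sum_i x_i^2 \quad \text{(isolated)} \qquad \text{vs.} \qquad \dot{X} \propto \Big(\sum_i x_i\Big)^{2} \quad \text{(connected)},$$

and since the cross terms make $(\sum_i x_i)^2 > \sum_i x_i^2$ whenever there’s more than one region, connecting the world should produce exactly the faster-than-fitted growth described above.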
I’d be interested in seeing an estimate of the trend in the number of books in Europe before and after the printing press. Books were expensive to copy by hand and they were lost due to fire, floods, and looting. So, it was possible for knowledge to be lost. E.g., most of the works of Plato weren’t available to anybody in Western Europe to read until c. 1400.
But after the 1450s, the number of printing presses grew tremendously. It’s wonderful to read about their spread. So virtually no important knowledge has been lost since c. 1500.
AI Impacts is working on precisely that, as it happens. Stay tuned.
Wasn’t the bottleneck for Plato interest? People didn’t even bother to translate him until then.
What was in this explosion of books?
James Hannam goes further and claims that the printing press caused knowledge to regress.
Sour grapes. It must not have been important if people didn’t bother to save it.
The works of Plato are also pretty useless for progress in science in technology.
For example, practical manuals are useful. Or the works of Archimedes.
A century before Petrarch commissioned the translations of Plato, Aquinas sent William of Moerbeke to Greece to translate Aristotle. Why? I don’t know.
Aquinas already had translations of Aristotle, passed through Arabic, but he just wanted more accuracy. This seems to me like a poor use of resources; better to get translations of new documents, like Plato. Maybe he thought that Aristotle, being the successor to Plato, was better. Touching on your comment, maybe he thought that Aristotle’s practical interests made him better, but I doubt that’s what he thought.
[My best defense of Aquinas was if he only had a few volumes of Aristotle, but commentary on the rest.]
Anyhow, William happened to translate some Archimedes while he was there. Why? I don’t know. Does this represent general demand for Archimedes over Plato? Thomas or William’s personal taste? Because his name started with Ar-?
And yet things like this are treasure troves of information today:
https://www.theguardian.com/books/2019/apr/10/extraordinary-500-year-old-library-catalogue-reveals-books-lost-to-time-libro-de-los-epitomes
Here you go. I wrote (partly) about that here.
This is neither here nor there, but when I learned about this idea in an Econ class, they used technological differences between regions as evidence for the theory. The claim went that Europe+Africa+Asia had the most people and the most tech; the New World had the second most and the second best tech; and Australia had the third most and the worst tech of the three.
Yeah, this meshes pretty well with Jared Diamond’s theory that Eurasia was one big intermingling system, and everywhere else was small pockets of development that had harder times advancing as quickly.
This makes more sense in the context of older technology, which was more region-dependent, eg technology for growing wheat (if your region happened to have wheat) or for hunting aurochs (if your region happened to have aurochs) or for building things out of bamboo (if your region happened to have bamboo).
The New World in 1491 was pretty advanced in some aspects of technology and social organization and way behind in others. This erratic profile is what you’d expect when you have to make it all up on your own.
The most interesting “behind” technology for me was iron-working. It must be really path-dependent on bronze-working capability first, because AFAIK none of the New World civilizations developed large-scale iron- and steel-making (although they seemed to be developing bronze).
Working iron requires advanced furnace technology, and developing advanced furnace technology requires a far greater investment of specialized labor and resources than any pre- or early-agrarian society can afford, when the expected payback is “let’s see what happens when we put different sorts of rocks into a really hot fire; maybe it will be different and better than the boring nothing that happens to rocks in an ordinary fire”.
Working copper requires noticing that native copper exists and is better than rock for some sorts of toolmaking. If you do that and you have ordinary campfires, you’ll eventually note that fire makes copper easier to work. And eventually that fire makes some of the funny rocks you found next to the native copper, turn into more copper.
When you’ve got everybody doing that, someone will notice that reworking copper in a kiln or fire pit that happens to be made from a different sort of funny rock, will make something even better than copper.
Now, you have a civilization which has a motive to build lots of decent furnaces, and both increasing the furnace temperature another couple hundred degrees and throwing still other sorts of funny rocks into furnaces to see what happens become practical experiments.
Some parts of the Americas had gotten through most of that list, though. They didn’t make anything close to the kind of use the later Eurasian Bronze Age did of it, but cultures in the Andes, Central America and Western Mexico smelted bronze and used it for ornaments, tokens of exchange (“axe-monies”), and a few tools. Some tribes around the Great Lakes also worked copper, which you can find in native form around that area, but they don’t seem to have alloyed it.
Interestingly, some of the PacNW cultures had iron arrowheads — which they forged cold, out of nails from (mostly) Japanese shipwrecks.
Very interesting – in which book does Diamond say that? I’ve been looking for a reference on that for a while…
“Guns, Germs and Steel”
The basic contention is (roughly) that the effect of latitude on crops means that the basic agricultural package developed in the mid-east was able to spread across Europe, and similarly in Asia.
Agriculture allows population density, which allows specialization, and you get steel; the same mechanisms that enabled the agricultural package to spread also produce the technology sharing that gives you “guns”. Population density, especially combined with close habitation with farm birds and mammals, gives you the plague diseases: the germs.
Thus, first contact with Eurasians went badly for everyone not on Eurasia. First contact with Eurasia might even just be “dying of measles”.
I’m not gonna effortpost it out, but just go to reddit to see the extensive effortposts by people more or less completely debunking the whole GG&S story. It’s a perfect example of a “just so” explanation (that also just so happens to align with his political slants) which falls apart when fisked.
Potentially relevant.
It’s been a while since I had much contact with Diamond, so I’m not posting this so much to defend him specifically as to point out that doing this kind of thing is hard, and that the presence of an internet fisking, particularly on a topic with political overtones, doesn’t actually prove much.
Eh. Academic history has its own problems, and this is a heuristic rather than a sure thing, but when you’ve got a book in the popular press pushing a nice neat theory that just happens to wrap up in 400 pages and not include anything messy or politically inconvenient, and then you’ve got the rest of the field saying, essentially, “it’s more complicated than that”, my sympathies are with the field.
Everything I’ve read of historians attacking Diamond amounts to claiming that someone else deserves the credit.
Anthropologists generally complain that he’s virtue signalling, that he doesn’t believe it. Which is pretty analogous.
@bean
I’m not one to go so far as to say he’s definitely wrong, just that his book is basically the weaving of a narrative from anecdata. It is indeed a plausible theory, but there are other plausible theories as well; his just always seems to be the one that gets shunted into position #1 in pop culture, when it doesn’t deserve that level of prominence.
“It’s more complicated than that” and “It’s as close as you can come to accurately describing that in one book, which is good enough to be useful for people who aren’t going to read ten books” can both be simultaneously true. And my sympathies are rarely with people who say that outsiders shouldn’t express an opinion on their field unless they’ve read ten books.
Diamond may be wrong, or there may be someone who is more right in the same volume, but that requires something stronger than “it’s more complicated than that”. It’s always more complicated than that, and we’re usually right not to care.
“It’s more complicated than that” implies that the single book you just finished reading is true so far as it goes. And I simply refuse to believe that any useful field is so complicated that someone can’t manage to fit a good if simplified version of its core theories into a 400-page book. Yes, there will be people standing around nitpicking, and to a large extent, we learn more from people nitpicking at established work. But it’s very different from “the original work is wrong”.
Maybe this is compensated for by the time spent reading the previous literature to see whether whatever you thought of hasn’t already been done. Or maybe all those conferences that supposedly go really well don’t actually go that well.
Shouldn’t that crazy-huge population in the first section be an even number? (Excel says it should be 18,446,744,073,709,600,000, but that’s wrong too because it shouldn’t be divisible by 10.)
Good catch. I copied it off Wikipedia, which seems to have been off by one.
The Wiki article starts from 1, doubles 63 times, then adds them all up to get the total number of grains on the chessboard. You just looked at doubling 64 times. Of course, 1 + 2 + 4 + … + 2^63 = 2^64 – 1, so it ends up being off by 1.
Also, when typing that, I had to stop myself from wrapping it in $ signs. #JustScientistThings?
Thanks.
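For anyone who wants to verify the off-by-one, it’s a one-liner:

```python
# 2**64 (2 people doubled 63 more times, or 1 grain doubled 64 times)
# versus the chessboard total 1 + 2 + ... + 2**63 = 2**64 - 1.
print(2 ** 64)                          # 18446744073709551616 (even)
print(sum(2 ** k for k in range(64)))   # 18446744073709551615 (odd, off by one)
```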
AIs are more likely to increase the productivity of researchers long before they actually perform research themselves. I know several people trying to apply machine learning techniques to my field (energy storage), at least.
I think this is true, but I’m not sure it’s enough for hyperbolic growth. The key factor for hyperbolic growth is that you can increase the speed of research arbitrarily by throwing money at it. I get the impression that this is more like within a certain technological regime, you can use AI to increase the productivity of researchers X amount and no more. If one AI doubles a researcher’s progress, you can’t give them two AIs and triple it.
Recent AI research has shown that “throw more resources at the problem” does work to some extent. It is sublinear along individual dimensions, but you can combine parallelism, more training time, and more data to get significant progress. AlphaStar, OpenAI Five, and GPT-2 are all results that to a large extent consist of making things more scalable so that deep-pocketed orgs can just scale up the resources they throw at the problem.
Until recently every increase in scale also had to be accompanied by new techniques. Now we’re in some transition region where you can scale in principle but the factors involved still make it unpalatable for most orgs.
I think https://openai.com/blog/sparse-transformer/ exemplifies this quite well. With O(N²) scaling you slam into limits after, let’s say, 1–3 orders of magnitude, even when you throw a lot of money at the problem. O(N√N) doesn’t put you in the linear regime yet, but if you throw industrial amounts of money at it, and maybe combine it with some more modest improvements in other factors, you get several more orders of magnitude.
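Rough arithmetic on what that buys (my back-of-envelope, not numbers from the linked post): with compute budget $C$, the feasible problem size scales as

$$N_{\max} \propto C^{1/2}\ \text{for}\ O(N^2), \qquad N_{\max} \propto C^{2/3}\ \text{for}\ O(N\sqrt{N}),$$

so a $10^6\times$ budget buys roughly $10^3\times$ larger problems under the old scaling but $10^4\times$ under the new one.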
We’re getting these kinds of improvements every few months now. The exponents are falling.
I think the distinction between those things doesn’t actually matter, for the same reason that it doesn’t matter whether tractors “increased the productivity of farmers” or “acted as non-human farmers increasing productivity like a human farmer would”. As long as the number of humans needed per cumulative AI+human research output falls in a way that we can accelerate by throwing more research at the problem, research can become like farming where there are technically still humans required to farm, but the amount of food we produce is mostly unrelated to the per-peasant food production rates of pre-industrial-revolution Europe.
The potential applications for machine learning in my field are pretty impressive, though. Some of the people I know are talking about short-circuiting the time required for iterations of research by as much as 75%.
Could GPT-2 be used as a research assistant to write review papers (with a fair bit of post-editing probably)? @Murphy mentions something similar in a sub thread below.
It’s very possible that current AI will speed your work by, say, 100%, but then soon you’ll hit the limits of what the AI can help you discover and be back to square one, except that now your field requires a couple of courses on using AI and programming in Python to do anything remotely useful.
Dunno if better AI is guaranteed to be there for you once you reach that point.
Couldn’t that be said of any technological improvement in productivity in any field, not just research? It was predicted decades ago that information technology would increase the productivity of researchers simply through making papers easier to access; that improvement was predictable at the time, but the role of AI in directly facilitating research wasn’t.
Sure. That’s how you end up with graphs of Moore’s law. All the advances amount to a small percentage reduction in transistor size; each advance becomes the new baseline.
You probably won’t notice on a Moore’s law graph when Intel’s technicians started accessing papers more freely, though; chances are Intel already paid for that, and if it somehow became cheaper, they pocketed the difference.
I kind of wonder about using absolute time as the metric in so many models. If the overall story is that things that were formerly serial become (massively) parallel – including, potentially, extending back to prehumans, with evolution as a serial search strategy versus the human brain as a parallel one – then we’re in a regime where we think about how long it takes for an idea to spread (and thus be built upon) in terms of absolute time, and we’d want to account for a slowdown due to the sheer size of the world, the inferential distances between vastly separated fields, etc.
It would be interesting to take the perspectives and tools of media studies – i.e., that cultures are shaped in large part by the form of media consumed – and apply them to means of transportation or the dissemination of information. Do we divide the world into the foot, horse, sail, and cable ages? Are those meaningful distinctions? I have found it interesting how quickly the Native North American Great Plains tribes adopted cultural trappings that in the Old World we would associate with steppe peoples like the Mongols – e.g., supreme skill in horseback archery – despite such a short timespan in which to adopt the use of horses. Is there something inherent in a means of transportation that commends itself to other cultural manifestations?